The significance of LRPPRC overexpression in gastric cancer

LRPPRC is a multifunctional protein involved in mitochondrial gene expression and function, cell cycle progression, and tumorigenesis. We analyzed LRPPRC gene expression in 253 paired cases of gastric cancer and noncancerous regions and in six gastric cancer cell lines to demonstrate the importance of LRPPRC expression for predicting the prognosis of gastric cancer. Our results showed that LRPPRC expression in gastric cancer tissues is significantly higher than that in paired control tissue (P < 0.001). Patients with higher LRPPRC expression showed a poorer overall survival rate than those with lower LRPPRC expression (P < 0.001). Multivariate analysis demonstrated that lymph node metastasis (N), distant metastasis (M), TNM stage, and LRPPRC expression were independent prognostic factors for gastric cancer (P = 0.004, 0.002, 0.017, and 0.004, respectively). Moreover, Western blotting showed that LRPPRC expression was increased in SGC7901, BGC823, MKN45, and XGC9811 cells. The in vitro proliferation assay showed that LRPPRC expression is associated with gastric cancer cell growth. Our results indicate that LRPPRC could be used as a predictive marker for patient prognosis in gastric cancer and may be a novel therapeutic target for gastric cancer in the future.

Introduction

Gastric cancer is a disease with one of the poorest prognoses, being the second leading cause of tumor-related mortality in the world. Five-year overall survival is 25 % or less, notably in the USA, Europe, and China [1,2]. Every year, 1 million new cases of gastric cancer are diagnosed and 700,000 people die of this disease worldwide [3,4]. Most patients with gastric cancer are diagnosed with advanced disease, and the overall survival rate remains poor. To provide new insights into the pathology of the disease and to permit earlier diagnosis, there is a need for new prognostic tumor markers that are more sensitive than those currently available, such as CEA and CA19-9 [5].

In 2000, Small and Peeters [6] described a set of proteins with 35-amino-acid repeat sequences that were dubbed 'pentatricopeptide repeat cassette proteins'. Members of the pentatricopeptide repeat (PPR) protein family play important roles in mitochondrial RNA metabolism in metazoans, plants, and yeast [7]. LRPPRC protein (also known as LRP130) [8,9], a member of the PPR protein family, regulates the stability and handling of mature mitochondrial mRNA and participates in the formation of the transcriptional activator PGC-1, which is involved in liver glucose homeostasis, energy metabolism, and nuclear receptor activation [10]. Previous reports have shown that LRPPRC is highly expressed in most cancers, such as hepatoma, lung adenocarcinoma, esophageal squamous cell carcinoma, colon cancer, and lymphoma, and is significantly associated with tumorigenesis and invasion [11]. However, LRPPRC expression in gastric cancer and its correlation with clinicopathological characteristics remain unclear.

In this study, we first examined LRPPRC expression in 253 paired gastric cancer and noncancerous tissues and in six gastric cancer cell lines to investigate the relevance of LRPPRC expression and its relation to clinicopathological characteristics. In addition, an in vitro study was performed to observe the effect of LRPPRC on gastric cancer cell proliferation. In conclusion, our results suggest that LRPPRC is a novel independent prognostic marker with functional relevance in gastric cancer.
Clinical tissue samples

Our study included 253 patients (153 males, 100 females; mean age 65.5 years; range 34-83 years) who underwent surgery at Xijing Hospital, Fourth Military Medical University (Xi'an, China), and were recruited between May 2003 and August 2005 after providing written informed consent. Cancer tissues, along with normal tissues that were at least 5 cm away from the cancer, were obtained from the patients. Follow-up was conducted by telephone and mail; the median follow-up period for survivors was 47 months (range 0-128 months). The study items included age, gender, location of the tumor, tumor stage, depth of invasion, lymph node metastasis, distant metastasis, and tumor-node-metastasis (TNM) stage. Patient characteristics are summarized in Table 1. All of the patients were staged using the 7th edition of the International Union Against Cancer TNM staging system. Of the 253 patients, 66 (26 %) had T1-stage, 60 (24 %) had T2-stage, 84 (33 %) had T3-stage, and 43 (17 %) had T4-stage gastric cancer. Tissues were fixed in 10 % formaldehyde, embedded in paraffin, cut into 4-μm sections, and mounted on slides.

Immunohistochemical staining

The cancer and noncancerous tissues from the 253 patients were embedded in paraffin and cut into sections for immunohistochemical analysis. Slides were baked at 60 °C for 2 h, deparaffinized with xylene, and rehydrated. After three washes in PBS (phosphate-buffered saline), antigen retrieval was performed in a pressure cooker with 10 mM citrate buffer (pH 6.0) for 5 min. After rinsing with PBS, sections were treated with 3 % hydrogen peroxide in methanol for 12 min to quench endogenous peroxidase activity, followed by incubation with 1 % bovine serum albumin for 1 h to block nonspecific binding. The antigen-antibody reaction was carried out overnight at 4 °C with the anti-LRPPRC antibody diluted 1:500 (Santa Cruz Biotechnology Inc., Santa Cruz, CA). Sections were rinsed three times in PBS and incubated with a horseradish-peroxidase-conjugated anti-IgG antibody (1:3,000; Santa Cruz) for 1 h. Finally, the sections were developed with 3,3′-diaminobenzidine solution for 2 min, washed briefly in running water, counterstained with hematoxylin, dehydrated through a graded alcohol series to xylene, and mounted with Permount under coverslips. Images were obtained under a light microscope (Olympus BX51; Olympus, Japan) equipped with a DP70 digital camera. As negative controls, tissue sections were processed under the same experimental conditions described above, except that they were incubated overnight at 4 °C in blocking solution without the anti-LRPPRC antibody.

Immunohistochemical analysis

Staining of LRPPRC was detected mainly in the cytoplasm of tumor cells. The degree of immunostaining was reviewed and scored independently by two pathologists who were blinded to the clinical features and survival status of the patients and who viewed the stained tissue slides separately. The average of the two independent scores was used in the present study [12-14]. Values are presented as mean ± standard deviation (SD) from independent experiments conducted in triplicate.
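As a minimal illustration of the scoring arithmetic described above, the sketch below averages two observers' semiquantitative scores per section and reports a mean ± SD summary. The 0-3 scale and the example values are assumptions made for illustration; the paper's actual scoring criteria are those cited in [12-14].

```python
# Minimal sketch of combining two pathologists' semiquantitative IHC scores.
# The 0-3 intensity scale and the sample values are illustrative assumptions;
# the paper cites its scoring criteria [12-14] without restating them here.
import numpy as np

# Hypothetical independent scores for five tumor sections (0 = negative ... 3 = strong).
observer_a = np.array([2, 3, 1, 0, 3])
observer_b = np.array([2, 2, 1, 1, 3])

# Per-section score is the average of the two independent reads, as described.
per_section = (observer_a + observer_b) / 2.0

# Summary statistic in the style reported by the paper: mean +/- SD.
print(f"LRPPRC IHC score: {per_section.mean():.2f} +/- {per_section.std(ddof=1):.2f}")
```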
Western blot

Cells were washed twice with cold PBS and lysed on ice in RIPA buffer with protease inhibitors, and protein concentrations were quantified by the BCA method. Protein lysates (50 μg) were resolved on an 8 % SDS-polyacrylamide gel, electrotransferred to polyvinylidene fluoride membranes (Millipore, Bedford, MA), and blocked in 5 % nonfat dry milk in Tris-buffered saline (pH 7.5). Membranes were immunoblotted overnight at 4 °C with the anti-LRPPRC polyclonal antibody described above for IHC, followed by the respective secondary antibodies. Signals were detected by enhanced chemiluminescence (Pierce, Rockford, IL). For immunofluorescence, binding of the primary antibody was visualized with an anti-rabbit IgG antibody, and the slides were examined under a confocal laser scanning microscope.

Proliferation assays

For gastric cancer cell lines transfected with siRNA, 1 × 10^5 cells were seeded in 12-well dishes and cultured for 96 h to determine proliferation. Viable cells were counted every day by reading the absorbance at 490 nm using a BP800 96-well plate reader (Dynex Technologies, Chantilly, VA, USA). Each experiment was performed in triplicate.

Statistical analysis

All statistical analyses were performed using the SPSS version 16.0 software package (SPSS Inc., Chicago, IL, USA). A paired-samples t test was used to analyze the differences between the gastric cancer samples and the paired adjacent noncancerous tissue samples. Associations between LRPPRC expression and clinicopathological characteristics were analyzed by the Mann-Whitney test and the Kruskal-Wallis test. Survival curves were estimated using the Kaplan-Meier method, and the log-rank test was used to calculate differences between the curves. Prognostic factors were examined by univariate and multivariate analyses (Cox proportional hazards model). A probability level of 0.05 was chosen for statistical significance.
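The survival workflow described above (Kaplan-Meier curves, log-rank test, Cox proportional hazards model) can be sketched with the Python lifelines package. The data frame below is a toy stand-in; the column names and values are assumptions, not study data, and the published analyses were run in SPSS.

```python
# Sketch of the survival analyses described above: Kaplan-Meier estimation,
# a log-rank comparison by LRPPRC expression, and a multivariate Cox model.
# Column names and toy values are illustrative assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":    [12, 47, 60, 9, 30, 80, 15, 55, 24, 70],  # follow-up time
    "death":     [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],           # 1 = event observed
    "lrpprc_hi": [1, 0, 1, 1, 1, 0, 0, 0, 1, 0],           # 1 = moderate/high expression
    "tnm_stage": [3, 1, 2, 4, 2, 1, 3, 2, 4, 1],
})

# Kaplan-Meier curves by LRPPRC expression level.
km = KaplanMeierFitter()
for level, grp in df.groupby("lrpprc_hi"):
    km.fit(grp["months"], grp["death"], label=f"LRPPRC high={level}")

# Log-rank test between the two expression groups.
hi, lo = df[df.lrpprc_hi == 1], df[df.lrpprc_hi == 0]
lr = logrank_test(hi["months"], lo["months"], hi["death"], lo["death"])
print("log-rank p =", lr.p_value)

# Multivariate Cox proportional hazards model for independent prognostic factors.
cox = CoxPHFitter()
cox.fit(df, duration_col="months", event_col="death")
cox.print_summary()
```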
LRPPRC expression in clinical tissue specimens

LRPPRC expression was investigated by immunohistochemistry in 253 gastric cancer tissues and paired noncancerous tissues. We found that positive LRPPRC expression in gastric cancer tissues (219/253, 86.6 %) was significantly higher than that in paired noncancerous tissues (132/253, 52.2 %). The difference in LRPPRC staining between gastric cancer tissues and paired noncancerous tissues was statistically significant (P < 0.001) (Fig. 1).

Relationship between LRPPRC expression and prognosis

Overall survival analysis using the Kaplan-Meier method revealed that gastric cancer patients whose tumors had high or moderate LRPPRC expression showed significantly shorter survival than those with no or weak LRPPRC expression (P < 0.001; Fig. 2). Table 2 provides the univariate and multivariate analyses of factors related to patient prognosis. Univariate analysis showed that the following factors were significantly related to postoperative survival: depth of invasion (P = 0.004), lymph node metastasis (P = 0.007), distant metastasis (P < 0.001), TNM stage (P = 0.004), and LRPPRC expression (P < 0.001). Furthermore, multivariate analysis indicated that lymph node metastasis (N, P = 0.004), distant metastasis (M, P = 0.002), TNM stage (P = 0.017), and LRPPRC expression (P = 0.004) were independent prognostic factors of overall survival for patients with gastric cancer.

In vitro assessment of LRPPRC expression knockdown

Because LRPPRC expression was higher in gastric cancer tissues than in paired noncancerous tissues, six gastric cancer cell lines were chosen for the proliferation study. We first examined LRPPRC expression in the six gastric cancer cell lines by Western blot. Our results showed that the LRPPRC protein level was higher in the gastric cancer cell lines SGC7901, BGC823, MKN45, and XGC9811 (Fig. 3a) than in the other gastric cancer cell lines. After cell transfection, the expression of LRPPRC in transfected cells was determined by Western blotting. LRPPRC expression was significantly reduced in SGC7901, BGC823, MKN45, and XGC9811 cells (Fig. 3b). In the proliferation assay, there were differences in cell numbers of SGC7901 between the NC and LRPPRC siRNA groups (P < 0.05; Fig. 4). There was no statistically significant difference in cell numbers between the NC and LRPPRC siRNA groups in the other cell lines.
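As a hedged sketch of how the NC-versus-siRNA endpoint comparison might be computed, the snippet below applies Welch's t-test to triplicate counts. The counts are invented for illustration; the paper reports only that P < 0.05 for SGC7901.

```python
# Sketch of the NC-vs-siRNA proliferation comparison at the 96 h endpoint.
# Triplicate counts are illustrative assumptions, not study data.
from scipy import stats

nc_counts = [4.1e5, 3.9e5, 4.3e5]     # negative-control SGC7901, triplicate
sirna_counts = [2.6e5, 2.9e5, 2.7e5]  # LRPPRC siRNA SGC7901, triplicate

# Welch's t-test (no equal-variance assumption) between the two conditions.
t, p = stats.ttest_ind(nc_counts, sirna_counts, equal_var=False)
print(f"t = {t:.2f}, P = {p:.4f}")
```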
Discussion

Gastric cancer is usually a disease of the aged, with the mean patient age ranging between 50 and 70 years. It is thought that gastric cancer results from a combination of environmental factors and an accumulation of generalized and specific genetic alterations [16]. The treatment for gastric cancer includes a combination of surgery, chemotherapy, and radiation therapy. There have been some studies on the prognostic impact of tumor markers in gastric cancer, but previous studies have not evaluated the relationship between LRPPRC expression and tumor prognosis. The assessment of biological prognostic factors is of clinical importance, especially for a disease with a poor outcome such as gastric cancer. The primary aim of this study was to determine LRPPRC expression and its correlation with the clinicopathological characteristics and prognosis of patients with gastric cancer.

To investigate the potential oncogenic role of LRPPRC in gastric cancer, we first examined the expression level of LRPPRC in a series of paired gastric cancer tissues and adjacent nonneoplastic tissues. The expression of LRPPRC was verified by immunohistochemistry in gastric cancer and corresponding normal tissues. Results showed that LRPPRC expression levels were significantly increased in tumor tissue samples compared with the adjacent nontumor tissue samples, as illustrated in Fig. 1. Similar results were obtained in other studies showing that LRPPRC protein is indeed relatively upregulated in gastric cancer and other carcinoma tissues [16]. In addition, in the analysis of the correlation between the expression level of LRPPRC and patients' characteristics, one of the interesting findings was that the expression level of LRPPRC was significantly associated with the depth of tumor invasion, lymph node metastasis (N stage), and distant metastasis (M stage) (Table 1). Tian et al. [11] reported high invasive ability in the lung adenocarcinoma cell line A549 in association with LRPPRC. The present in vitro study showed that LRPPRC expression is associated with tumor growth, and that inhibition of LRPPRC may lead to a reduction in gastric cancer proliferation. These results suggest that LRPPRC may play an important role in the tumorigenesis of gastric cancer (Fig. 4).

One of the most important findings in this study was that high expression of LRPPRC in gastric cancer patients was significantly associated with poor prognosis and low overall survival, as shown in Fig. 2, indicating that a high LRPPRC protein level is a marker of poor prognosis for patients with gastric cancer. Multivariate analyses in Table 2 further revealed that lymph node metastasis, distant metastasis, TNM stage, and LRPPRC expression were the only independent prognostic factors in patients with gastric cancer. In summary, our results show that high LRPPRC protein expression is correlated with the depth of tumor infiltration and is an unfavorable independent prognostic factor for surgically resected gastric cancer. The metastasis mechanism of LRPPRC in gastric cancer may involve a multitude of epigenetic pathways and needs to be addressed in the future. These findings suggest that LRPPRC can serve as a predictive marker of patient outcome in gastric cancer. Since the number of patients in this study was small, further study of a larger patient population is necessary to confirm its clinical significance in gastric cancer.

Conflict of interest

The authors declare that they have no competing interests.

Open Access

This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Fig. 4 (caption): There were significant differences between NC and LRPPRC siRNA. In the other three cell lines, there was no significant difference between NC and LRPPRC siRNA (a SGC7901; b BGC823; c MKN45; d XGC9811). Values are mean ± SD from three independent experiments. NC, negative control.
Circulating Biomarkers in Bladder Cancer

Bladder cancer is a molecularly heterogeneous disease characterized by multiple unmet needs in the realms of diagnosis, clinical staging, monitoring, and therapy. There is an urgent need to develop precision medicine for advanced urothelial carcinoma. Given the difficulty of serial analyses of metastatic tumor tissue to identify resistance and new therapeutic targets, the development of non-invasive monitoring using circulating molecular biomarkers is critically important. Although the development of circulating biomarkers for the management of bladder cancer is in its infancy and may currently suffer from lower sensitivity of detection, these biomarkers have inherent advantages owing to their non-invasiveness. Additionally, circulating molecular alterations may capture tumor heterogeneity without the sampling bias of tissue biopsy. This review describes the accumulating data supporting further development of circulating biomarkers, including circulating tumor cells, cell-free circulating tumor (ct)-DNA, RNA, micro-RNA, and proteomics, to improve the management of bladder cancer.

INTRODUCTION

Bladder cancer is the sixth most common cancer in the United States, and an estimated 75,000 new cases of urinary bladder cancer will be diagnosed in 2016 [1]. The median age at diagnosis is 73 years, with two-thirds of cases occurring in men, making medical comorbidities a significant factor influencing patient management [2]. It lags behind other solid organ cancers with one of the lowest increases in 5-year survival in recent decades. A majority of new cases are non-muscle-invasive bladder cancer (NMIBC), with tumors largely confined to the mucosa (Ta, 70%) or, less often, the submucosa (T1, 20-25%) or flat high-grade lesions/carcinoma in situ (CIS, 5-10%) [3]. The natural history of NMIBC is characterized by a tendency to recur locally as a function of stage, grade, size, and multiplicity. Thus, NMIBC remains one of the few examples of a malignancy where serial tumor profiling separated by time is feasible, owing to repeated cystoscopic biopsies and resections [4].

For the treatment of muscle-invasive bladder cancer (MIBC), neoadjuvant cisplatin-based combination chemotherapy followed by radical cystectomy (RC) is proven to prolong survival [5,6]. Adjuvant cisplatin-based combination therapy is considered reasonable in those with extra-vesical or node-positive disease following RC who have not received neoadjuvant therapy [7]. Maximal cystoscopic resection followed by concurrent chemoradiotherapy allows bladder preservation in selected patients based on tumor and patient characteristics [8]. However, long-term survival and cure remain elusive for most patients with extra-vesical or node-positive disease at the time of RC. Moreover, clinical staging is suboptimal, with approximately 30-50% of cT2N0 patients upstaged at the time of RC [9]. The overall survival for advanced disease remains abysmal even with optimal first-line cisplatin-based combination chemotherapy: the median overall survival (OS) is approximately 15 months, with a 5-year OS rate of approximately 5% [10]. Moreover, the majority of patients do not receive cisplatin-based chemotherapy, partly due to renal dysfunction, poor performance status (PS ≥ 2), or comorbidities (cardiac dysfunction, neuropathy, hearing loss) [11,12]. Second-line and salvage systemic chemotherapy with taxanes or vinflunine yields even more dismal outcomes, with a median OS of 6 to 8 months [13].
Recently, atezolizumab, a programmed death-ligand 1 (PD-L1) inhibitor, has been added to the therapeutic armamentarium for post-platinum therapy [14]. However, the median OS with atezolizumab remains around 8 months and the overall response rate (ORR) is approximately 15%, although the excellent tolerability and duration of response represent a major advance for these patients. All of the regimens above were developed in unselected patients. There is an urgent need for newer agents that provide large improvements in outcomes in appropriately selected patients, i.e., precision medicine. Given the difficulty of serial analyses of metastatic tumor tissue to study evolving mechanisms of resistance and new therapeutic targets, the development of non-invasively obtained circulating biomarkers for molecular tumor profiling is a priority. Moreover, early detection and clinical staging are suboptimal with currently employed modalities. Hence, there is an urgent need to identify superior non-invasive molecular tools for bladder cancer detection, staging, surveillance, and therapy. This review provides an overview of circulating biomarkers to enhance therapy for UC (Fig. 1).

CURRENTLY EMPLOYED CLINICAL AND LABORATORY PROGNOSTIC FACTORS

Risk stratification of patients based on traditional laboratory and clinical markers has been reported in several studies. The importance of stage, grade, multiplicity, and prior recurrences is recognized in NMIBC [15]. Similarly, in the MIBC setting, pathologic stage confers a major prognostic impact [16]. In the first-line cisplatin-based chemotherapy setting, PS and visceral metastasis were identified to confer poor prognosis almost two decades ago [17]. Thereafter, other first-line models further refined prognostication by the addition of variables such as albumin, hemoglobin, and leukocytosis [18,19]. In the salvage therapy context, PS, hemoglobin, and liver metastasis were described as major adverse prognostic factors. This model has been enhanced by the addition of treatment-free interval and albumin as additional prognostic factors [20,21].

CURRENTLY APPROVED AND EMERGING NON-INVASIVE URINARY BIOMARKERS

While there are no commercially approved circulating molecular biomarkers for use in the clinic to manage bladder cancer, multiple urinary biomarkers have been commercially approved for NMIBC detection and surveillance, using mostly protein-based assays and one DNA-based assay [22]: BTA Stat (Polymedco), BTA Trak (Polymedco), NMP22 Bladder Cancer Test (Matritech), NMP22 BladderChek (Matritech), ImmunoCyt/uCyt+ (DiagnoCure), and UroVysion (Abbott Molecular). The BTA (bladder tumor antigen) tests detect human complement factor H-related protein produced by bladder cancer cells, with BTA TRAK being a quantitative assay [23] and BTA STAT a qualitative test [24]. The NMPs (nuclear matrix proteins) play an important role in the structural framework of the nucleus and are involved in DNA replication and the regulation of gene expression [25]. The NMP22 Bladder Cancer test kit is quantitative, while the NMP22 BladderChek test is a qualitative assay [26,27]. The ImmunoCyt/uCyt+ test detects exfoliated bladder cancer cells in urine using fluorescent monoclonal antibodies against a high-molecular-weight form of carcinoembryonic antigen and two bladder tumor cell-associated mucins [28].
The UroVysion test uses a multi-targeted FISH assay to identify UC-related chromosomal alterations: aneuploidy of chromosomes 3, 7, and 17 and loss of the 9p21 locus of the p16 tumor suppressor gene [29,30]. Unfortunately, although many of these urinary markers are quite sensitive compared with cytology, they suffer from low specificity and cannot replace cystoscopy. Other novel urinary assays with preliminary promise include DNA alteration assays in frequently mutated genes and protein assays (e.g., telomerase) [31,32]. In one study, urinary cell-free (cf)-DNA and cellular DNA were analyzed and compared with matched formalin-fixed paraffin-embedded (FFPE) tumor DNA in 23 patients. Urinary DNA was highly representative of DNA derived from tumors. cfDNA from urine was more representative of the tumor genome and allowed greater detection (90%) of key somatic mutations (in BRAF, KRAS, EGFR, IDH1, IDH2, PTEN, PIK3CA, NRAS, and TP53) than cellular DNA from urine [32]. Additionally, mutations of the telomerase reverse transcriptase (TERT) promoter have been detected in urinary cfDNA, mirroring tumor tissue [31,33-35].

TUMOR TISSUE ALTERATIONS TO GUIDE THERAPY FOR BLADDER CANCER

The Cancer Genome Atlas (TCGA) network identified recurrent tumor tissue mutations in genes involved in cell-cycle regulation, chromatin regulation, and kinase signaling pathways, including potential therapeutic targets in the PI3K/AKT/MTOR, receptor tyrosine kinase, and MAPK pathways [36]. Indeed, bladder cancer appears to carry one of the highest somatic tumor mutation burdens [37]. TCGA and other groups have also identified multiple intrinsic subtypes based on gene expression profiling [38-41]. Emerging data indicate that the basal subtype may be more chemo-sensitive, the luminal subtype may be responsive to FGFR, HER2, and HER3 inhibitors, and the mesenchymal/claudin-low subtype may be responsive to T-cell checkpoint inhibitors [42]. Retrospective analysis of tumor tissue from a large phase II trial evaluating atezolizumab suggested that PD-L1 expression, mutation burden, and intrinsic subtype may be complementary and assist in the selection of patients for PD-1/PD-L1 inhibitors [43,44]. Cisplatin-based neoadjuvant chemotherapy appears more active in patients with MIBC harboring somatic alterations of DNA repair genes (ATM, RB1, FANCC, ERBB2, and ERCC2), while the p53-like intrinsic subtype may be platinum-resistant [45-48]. Emerging data indicate that targeted agents may demonstrate substantial activity in selected populations with appropriate somatic alterations [49-52].

Circulating tumor cells: Enumeration and profiling

Circulating tumor cells (CTCs) have long been targets for 'liquid biopsies' and may provide insights into tumor tissue alterations. Conversely, CTCs are subject to the bias of selecting viable tumor cells capable of circulating in blood, and may not be representative of the entire malignancy. A major challenge has been to improve the sensitivity of CTC detection methods. As CTCs are present at very low concentrations in peripheral blood, enrichment is required prior to detection to improve sensitivity. Enrichment is based on physical or biological properties that can discriminate CTCs from normal cells in circulation. Density-based separation relies on differential migration due to differences in density, while negative enrichment uses antibody-mediated removal of hematopoietic cells and other non-cancer cells in circulation.
Magnetic-activated cell sorting uses magnetically labeled antibodies to capture CTCs, with EpCAM commonly used for positive enrichment; this approach is currently FDA-approved. Cell size has been used to guide separation via a filtration-based approach to isolate epithelial cells, but this is limited in application by size variations within single populations. Separation based on magnetophoretic mobility can also be used for enrichment but is limited by the targets chosen.

Following enrichment, multiple techniques exist for detection. Fluorescence-activated cell sorting (FACS) uses a flow-cytometry-based assay, yielding high purity and the ability to accommodate high flow rates of up to 50,000 cells per second. Only a limited number of cells can be analyzed due to its throughput design, and cell viability can also be reduced. Fiberoptic array scanning technology cytometry allows analysis of larger volumes of blood, eliminating the need for enrichment and improving cell viability. Microfluidic devices that utilize electric field gradients for cell sorting, or dielectrophoresis, can allow single-cell isolation, which has significant advantages over other cell separation methods. ISET (isolation by size of epithelial tumor cells) performs filtration-based enrichment based on the larger size of tumor cells, followed by immunohistochemical or cytological evaluation. The AdnaTest utilizes reverse transcription-polymerase chain reaction (RT-PCR) to detect CTCs bearing tumor-associated transcripts, but false positives due to contaminating nucleic acid and loss of cell viability are limitations. The microfluidic platform utilized in the CTC-Chip uses EpCAM-coated microposts to isolate target CTCs and is currently limited to CTCs that express EpCAM. CellSearch, which is FDA-approved for CTC detection in peripheral blood, uses magnetic-activated cell sorting with anti-EpCAM antibodies for enrichment, followed by detection using positive selection by anti-cytokeratin and negative selection by anti-CD45 (leukocyte common antigen) fluorescent dyes. Since CellSearch uses epithelial markers, non-epithelial phenotypes or cancer cells that have undergone epithelial-mesenchymal transition (EMT) are not detectable by CellSearch and other available CTC detection methods [53]. CTCs that have undergone EMT may potentially be identified by markers such as vimentin, N-cadherin, and Twist [54].
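The CellSearch decision rule described above amounts to a simple marker filter. The sketch below encodes it over hypothetical event records; the DAPI/intact-nucleus check is an assumption of typical practice rather than something stated in this review.

```python
# Sketch of the CellSearch-style decision rule described above: after EpCAM
# enrichment, an event counts as a CTC if it is cytokeratin-positive and
# CD45-negative. The DAPI (intact-nucleus) criterion is included here as an
# assumption of common practice; event records are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    cytokeratin: bool  # epithelial marker, positive selection
    cd45: bool         # leukocyte common antigen, negative selection
    dapi: bool         # intact nucleus (assumed criterion)

def is_ctc(e: Event) -> bool:
    return e.cytokeratin and not e.cd45 and e.dapi

events = [Event(True, False, True), Event(True, True, True),
          Event(False, False, True), Event(True, False, True)]
print("CTC count:", sum(is_ctc(e) for e in events))
```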
CTCs were detected in circulating blood in 44% of 33 patients with metastatic UC using the CellSearch platform, with higher numbers of CTCs seen in patients with a greater number of metastatic sites [55]. These findings were replicated in other studies demonstrating CTCs in approximately 50% of patients with metastatic UC [56,57]. CTCs may be more prevalent in pre-treated patients, as suggested by a study of 70 patients with platinum-refractory UC, in whom CTCs by CellSearch were detectable in 66%, with HER2-positive CTCs in 3 patients [58]. However, CTCs by CellSearch were detected in only 21% of 43 patients prior to undergoing RC and did not correlate with more advanced pathological extra-vesical or node-positive disease [59]. Another, larger study of 100 consecutive patients undergoing RC identified preoperative CTCs by CellSearch in 23% of patients; the presence of CTCs was associated with higher risks of recurrence and of cancer-specific and overall mortality [60]. This study also reported high concordance between the HER2 immunohistochemistry (IHC) status of CTCs and the fluorescence in situ hybridization (FISH) status of corresponding primary tumors (14 of 22 patients, 64%) and lymph node metastases (100%). Thirteen of 16 patients (81%) with HER2-negative primary tumors also had HER2-negative CTCs, but only 1 of 4 patients with HER2-positive primary tumors also had HER2-positive CTCs. In addition, heterogeneous HER2 expression was occasionally observed among the identified CTCs.

As many as 20% of 102 patients with T1G3 NMIBC exhibited CTCs by CellSearch, which predicted decreased time to first recurrence and time to progression to muscle-invasive or metastatic disease [61]. Thus, CTCs may be useful even in NMIBC to identify those with a high risk of recurrence who could potentially benefit from early systemic therapy. CTCs were found in 24 of 54 patients (44%) with T1G3 NMIBC in another study, and 92% of CTCs expressed survivin [62]. Similarly, CTC-positive patients exhibited shorter disease-free survival. Preliminarily, CTC profiling of bladder-cancer-specific cells appears feasible for MUC7, epidermal growth factor receptor (EGFR), cytokeratin-20, survivin, folate receptor α ligand, and uroplakin II by RT-PCR [62-68]. Another study using a novel IsoFlux™ System microfluidic collection device appeared to demonstrate increased sensitivity to capture CTCs compared with CellSearch, and next-generation sequencing could be performed for a selected panel of genes [69]. Newer methodologies to isolate larger numbers of CTCs continue to be studied, which may enhance the ability to perform molecular profiling comprehensively in most patients [70-72]. One study simultaneously assessed two platforms - the ScreenCell Cyto platform, which uses a size-selective enrichment method followed by central pathological review, and the immunomagnetic AdnaTest Select kit, which assesses gene expression levels of EPCAM and MUC1 by RT-PCR - in three cohorts of patients undergoing neoadjuvant therapy for MIBC, first-line MVAC for metastatic disease, and second-line anti-TGFβ therapy. The combined strategy to identify CTCs showed promise in the ability to detect relapse, with concordant results across both platforms [73]. A meta-analysis including studies evaluating all stages of disease demonstrated an overall low sensitivity of 35.1% for the detection of CTCs using mostly PCR and CellSearch assays, although higher stages more commonly displayed CTCs [74]. The technology to improve the detection and yield of true CTCs is still in its infancy and will continue to undergo refinement. Overall, while CTC-positive patients appear more likely to have advanced disease, the low sensitivity of CTCs across multiple studies and across stages appears to limit their utility as a screening or diagnostic tool. Potentially, the further development of platforms designed to identify CTCs that have undergone EMT may improve the utility of this circulating biomarker.

Circulating tumor DNA profiling

Cell-free DNA consists of fragments of 120 to 200 base pairs, is present in the circulation, and originates from both healthy cells and tumor cells. Circulating tumor (ct)-DNA constitutes a trace fraction of overall circulating DNA. Somatic mutations in ctDNA are widely representative of the underlying tumor genome and may provide a better understanding of tumor heterogeneity, with less 'sampling error' than tissue biopsies, and could be more representative of disseminated disease.
ctDNA is detectable in >75% of patients with advanced solid malignancies, being identified in all patients with CTCs [75]. A recent retrospective study demonstrated that ctDNA exhibiting somatic DNA variants was detectable in the plasma and urine of 12 patients using droplet digital PCR, even in patients with non-invasive bladder cancer, with higher levels preceding disease progression [76]. Furthermore, ctDNA variants disappeared in disease-free patients. Massively parallel next-generation sequencing (NGS) was employed to evaluate a panel of 70 potentially therapeutically actionable genes (Guardant360, Guardant Health, Palo Alto, CA) for mutations, amplifications, fusions, and indels in a recent study of patients with advanced urothelial carcinoma of the bladder [77]. Among 143 patients with advanced bladder cancer, 127 (89%) had detectable ctDNA for profiling. These data suggest that bladder cancer may exhibit one of the highest prevalences of ctDNA. Alterations were frequently observed in p53, DNA repair genes (BRCA1/2), cell-cycle-controlling genes (FGFR2/3, APC, CDKN2A), epigenetic modifying genes (ARID1A), and kinase genes (EGFR, HER2, PIK3CA, KRAS, RAF), resembling alterations previously reported from tumor tissue by the TCGA. Serial profiling was also presented for a subset of patients, demonstrating the extinction of pre-existing alterations and the emergence of new alterations, which may be hypothesized to confer resistance and to represent new therapeutic targets. While the lack of concordant tissue testing is a limitation of this study, further studies are needed to confirm that ctDNA mirrors the aggregate of genomic alterations in contemporaneously obtained tumor tissue from various locations, which would provide a method of assessing tumor response and early identification of drug resistance, especially in the era of targeted therapies.
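Droplet digital PCR, as used in the study above [76], quantifies template molecules with a Poisson correction on droplet counts. A minimal sketch of that arithmetic follows; the droplet counts are hypothetical, and only the standard calculation, not the study's actual pipeline, is shown.

```python
# Sketch of droplet digital PCR quantification for a ctDNA variant.
# The Poisson correction lambda = -ln(1 - k/n) is the standard ddPCR
# copies-per-droplet estimate; the counts below are invented for illustration.
import math

def copies_per_droplet(positive: int, total: int) -> float:
    """Poisson-corrected mean template copies per droplet."""
    return -math.log(1.0 - positive / total)

mut = copies_per_droplet(positive=38, total=15000)    # mutant-probe channel
wt = copies_per_droplet(positive=9600, total=15000)   # wild-type-probe channel

maf = mut / (mut + wt)  # mutant allele fraction in circulating DNA
print(f"mutant allele fraction ~ {maf:.4f}")
```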
Methylation of circulating cell-free DNA

Epigenetic modifications have been identified as consistent major somatic molecular alterations in bladder cancer, with DNA methylation occurring frequently. Thus, methylated DNA has been studied as an epigenetic tumor marker, with some hypermethylation signatures being tumor-specific [78]. Circulating methylated genes were identified in the plasma of 27 bladder cancer patients in one study, 21 of whom had NMIBC with the remainder having muscle-invasive disease [79]. p14ARF promoter hypermethylation was associated with multicentric foci, larger tumors, and relapse. Seventeen cases demonstrated accord between plasma and tumor DNA alterations, while six cases exhibited a new alteration in plasma. Although the concordance rate was 63%, perfect concordance is not expected due to various factors, including the sampling bias of tumor biopsies and, presumably, the presence in circulation of DNA from all tumor sites as well as germline DNA. Analysis of cell-free DNA in other small studies of patients with invasive or non-invasive bladder cancer revealed hypermethylation of cell-free serum DNA, especially of the promoters of specific genes such as APC, GSTP1, TIG1, DAPK, p16, and cadherin genes, and some of these aberrations were associated with aggressive clinicopathologic features and conferred poor prognosis [80-84]. These data accord with and resemble data derived from analysis of NMIBC tumor tissue [82,85].

RNA and Micro (mi)-RNA profiling

Tumor tissue gene expression profiling of MIBC has led to the proposal of major subtypes of bladder cancer, including basal, luminal, and p53-like subtypes. Potentially, these classifications could be assigned based on circulating mRNA for more advanced stages of disease [39,40]. Peripheral blood mononuclear cells (PBMCs) have been subjected to gene expression (RNA) profiling in multiple conditions, although bladder cancer has not been evaluated by this assay. Recently, robust platforms such as NanoString, which utilize minute quantities of RNA, have been employed to assess gene expression from circulating blood in other settings and may be worth exploring in bladder cancer patients as well [86].

MicroRNAs (miRNAs) are a class of small, conserved, non-protein-coding RNAs that regulate gene expression by controlling the translation of target mRNAs. Indeed, the potential relevance of miRNAs in driving bladder cancer progression has also been demonstrated in studies of tumor tissue. For example, one study of tumor tissue demonstrated that miR-21, which downregulates the p53 pathway, was increased in MIBC. Conversely, NMIBC exhibited miRNA expression profiles that led to increased FGFR3 expression [87]. miRNA expression profiles were prognostic and may assist in developing precision medicine. For example, high tumor levels of miR-21 and miR-372 were associated with poor outcomes, and high levels of miR-203 with better outcomes, in separate studies in the context of cisplatin-based chemotherapy for patients with advanced UC or MIBC, respectively [88,89]. Owing to their smaller size, miRNAs are better able to circulate in blood without significant degradation and to withstand processing. Their expression can be measured by real-time quantitative PCR or miRNA microarray platforms [90]. Studies of serum miRNA expression in bladder cancer have identified miRNA profiles that may distinguish NMIBC from MIBC or confer prognostic impact. In a recent large cohort, genome-wide miRNA analysis by sequencing, followed by RT-PCR, was performed on serum from 207 patients with MIBC, 285 with NMIBC, and 193 controls [91]. This study reported that miR-486-3p and miR-103a-3p were associated with overall survival of patients with MIBC. In another report, the same authors identified an association of miR-152 with tumor recurrence in NMIBC, and the potential value of a 6-miR panel to diagnose bladder cancer [92]. Plasma miR-205 was upregulated in 89 bladder cancer patients compared with 56 healthy control subjects, and in MIBC compared with NMIBC patients [93]. Plasma miR-497 and miR-663b were identified as potentially promising circulating diagnostic biomarkers in another study, which included mostly NMIBC (70%) and evaluated 56 bladder cancer and 60 control samples in the training phase and 109 bladder cancer and 115 controls in the validation phase [94].
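The training/validation design used for circulating miRNA panels such as [94] can be sketched as follows: fit a classifier on a training cohort and report discrimination on an independent validation cohort. The expression values are simulated, and the two-feature logistic model is an assumption for illustration, not the published classifier.

```python
# Sketch of a train/validate miRNA diagnostic panel evaluation, mirroring the
# cohort sizes of the study above. Expression data are simulated; the model
# and feature shift are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def cohort(n_cancer, n_control):
    # Two miRNA features; cancer cases shifted upward (purely illustrative).
    x = np.vstack([rng.normal(1.0, 1.0, (n_cancer, 2)),
                   rng.normal(0.0, 1.0, (n_control, 2))])
    y = np.r_[np.ones(n_cancer), np.zeros(n_control)]
    return x, y

x_train, y_train = cohort(56, 60)    # training-phase sizes from the study
x_valid, y_valid = cohort(109, 115)  # validation-phase sizes from the study

clf = LogisticRegression().fit(x_train, y_train)
auc = roc_auc_score(y_valid, clf.predict_proba(x_valid)[:, 1])
print(f"validation ROC AUC = {auc:.2f}")
```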
Proteomics

Proteomic studies have been conducted on various blood-derived samples for a longer time, but are plagued by the non-specific nature of protein molecules in blood. Serum levels of soluble E-cadherin, MMP2, MMP7, endostatin, TGF-β, and uPA have all been reported to be associated with higher pathologic stage or prognostic for poorer clinical outcomes in different settings [95-99]. High circulating interleukin (IL)-8 levels at baseline were associated with poor outcomes in the setting of sunitinib or pazopanib [100,101]. More sophisticated serum protein analyses using matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometry (MS) have been evaluated more recently. In one study of serum samples from 105 patients with bladder cancer, 98 healthy controls, and 45 prostate cancer patients, MALDI-TOF-MS demonstrated potential diagnostic utility in identifying patients with bladder cancer [102]. Furthermore, nuclear magnetic resonance (NMR)-based metabolomics has identified differences in serum metabolic profiles between bladder cancer and control subjects [103]. Interestingly, serum samples from bladder cancer patients exhibited decreased levels of isoleucine, leucine, tyrosine, lactate, glycine, and citrate, coupled with increased levels of lipids and glucose.

Current discovery-driven proteomic studies are still unable to detect uncommon proteins at low concentrations of 1 ng/ml or less, which are more likely to be biomarkers. The large dynamic concentration range of plasma proteins, up to 12 orders of magnitude, can mask targeted plasma proteins present in lower quantities. Despite advancements in techniques, including sample fractionation and protein depletion, sensitivity remains low. Moreover, the standardization and reproducibility of results are hindered by technological challenges and by variations due to time of collection, age, sex, and genetic variation [104]. Rapid enzymatic degradation of some proteins, requiring immediate sample processing, and significant variation in some proteins between fasting and non-fasting states demand attention to uniform timing of sample collection. Therefore, the development of circulating proteomic biomarkers faces intrinsic challenges and needs to proceed with significant thought.

CONCLUSION

The development of molecular biomarkers, particularly circulating biomarkers, for the management of bladder cancer is in its infancy. Circulating biomarkers have inherent logistical advantages owing to their non-invasiveness, but suffer from lower sensitivity of detection and moderate correlation with tumor tissue alterations. Conversely, circulating molecular alterations may capture tumor heterogeneity without the sampling bias of tissue biopsy. Moreover, serial assessment of circulating biomarkers is readily accomplished, which can facilitate cancer screening, knowledge of tumor biology, identification of minimal residual or occult disease, early detection of recurrence, tracking of resistance mechanisms, and informed therapeutic intervention. Indeed, one recently initiated basket trial, TAPUR (Targeted Agent and Profiling Utilization Registry), allows the enrollment of patients to one of multiple arms of targeted therapy based on molecular alterations detected by ctDNA profiling using a Clinical Laboratory Improvement Amendments (CLIA)-certified, College of American Pathologists (CAP)-accredited laboratory that has registered the assay with the National Institutes of Health (NIH) Genetic Test Registry. For example, given the preliminary success of ctDNA profiling in identifying alterations in kinase genes, epigenetic modulating genes, and p53, UC patients could be enrolled in trials such as TAPUR based on accredited ctDNA profiling. Notably, one phase III trial could not detect a favorable impact of adjuvant cisplatin-based chemotherapy following cystectomy in patients with tumor tissue p53 alteration by IHC, and this paradigm could be employed in metastatic disease using ctDNA profiling [105]. Circulating panels could be developed in conjunction with tumor-tissue-based profiling using the neoadjuvant therapy paradigm, such as the randomized phase II neoadjuvant trial that compares GC vs.
MVAC (NCT02177695) and evaluates COXEN (Coexpression Extrapolation) molecular tumor profiling, which uses an algorithm based on resistance patterns from a panel of 60 diverse cancer cell lines to predict sensitivity to chemotherapy [106]. Potentially, the capture of data for multiple circulating biomarkers (CTCs, DNA including methylation status, RNA, miRNA, proteins) on multiplex platforms may be complementary and could optimize a comprehensive circulating biomarker panel to advance the ultimate goals of early diagnosis, optimal clinical staging, monitoring, and precision medicine. Moreover, the integration of molecular panels from interrogating another non-invasively obtained fluid, urine, may also complement circulating biomarker panels. Emerging imaging technologies using novel radiotracers are also worthy of attention in this context, given their non-invasiveness. The task of studying and validating such comprehensive platforms in large prospective studies is undeniably challenging and will require a multidisciplinary and international effort in partnership with regulatory agencies.
Socioeconomic inequalities in the incidence of alcohol-related liver disease: A nationwide Danish study

Summary

Background
There is socioeconomic inequality in total alcohol-related harm, but knowledge of inequality in the incidence of specific alcohol-related diseases would be beneficial for prevention. Registry-based studies with nationwide coverage may reveal the full burden of socioeconomic inequality compared with what can be captured in questionnaire-based studies. We examined the incidence of alcohol-related liver disease (ALD) according to socioeconomic status and age.

Methods
We used national registries to identify patients with an incident diagnosis of ALD and their socioeconomic status in 2009-2018 in Denmark. We computed ALD incidence rates by socioeconomic status (education and employment status) and age group (30-39, 40-49, 50-59, 60-69 years) and quantified the inequalities as the absolute and relative difference in incidence rates between low and high socioeconomic status.

Findings
Of 17,473 patients with newly diagnosed ALD, 78% of whom had cirrhosis, 86% had a low or medium-low educational level and only 20% were employed. ALD patients were less likely to be employed in the 10 years prior to diagnosis than controls. The incidence rate of ALD correlated inversely with educational level, from 181 (95% CI, 167-197) to 910 (95% CI, 764-1086) per million person-years from the highest to the lowest educational level. By employment status, the incidence rate per million person-years was 211 (95% CI, 189-236) for the employed and 3449 (95% CI, 2785-4271) for the unemployed. Incidence rates increased gradually with age, leading to larger inequalities in absolute numbers for older age groups. Although ALD was rare in the younger age groups, the relative differences in incidence rates between high and low socioeconomic status were large at these ages. The pattern of socioeconomic inequality in ALD incidence was similar for men and women.

Interpretation
This study showed substantial socioeconomic inequalities in ALD incidence for people aged 30-69 years.

Funding
The study was supported by grants from the Novo Nordisk Foundation (NNF18OC0054612) and the Research Fund of Bispebjerg Hospital.

Research in context

Evidence before this study
We initially searched MEDLINE in April 2020 using keywords including "liver", "cirrhosis", "alcohol", "socioeconomic status", "socioeconomic position", "deprivation", and "inequalities". Previous studies found a socioeconomic gradient in total alcohol-related morbidity and mortality. A systematic review published in 2015 on the relation between socioeconomic status and alcohol-attributable harms concluded that few studies had investigated the socioeconomic pattern of single alcohol-related diseases and that such knowledge would be beneficial for prevention purposes. For instance, prevention programs for liver disease that develops after chronic heavy drinking would differ from prevention programs for alcohol-related accidents resulting from acute alcohol poisoning. Moreover, the application of both an absolute and a relative measure of inequality in disease is recommended by the World Health Organization.

Added value of this study
This nationwide study, based on a population with access to universal healthcare, social security benefits, and free education, showed substantial inequalities in the incidence of alcohol-related liver disease in ages 30-69 years.
This is the first study of socioeconomic inequality in alcohol-related liver disease incidence applying both an absolute and a relative measure of inequality. Application of the absolute measure of inequality showed a huge burden of alcohol-related liver disease incidence for people of low socioeconomic status after the age of 40 years, following the increasing incidence of alcohol-related liver disease with age until 60-70 years. Application of the relative measure of inequality revealed that the inequality was already present in the age group of 30-39 years. Moreover, the study showed that the difference in employment status between alcohol-related liver disease patients and controls was evident several years before the ALD diagnosis, pointing to a window of opportunity for prevention.

Implications of all the available evidence
The huge socioeconomic inequality in alcohol-related disease should lead governments and healthcare institutions to consider alcohol control policies such as minimum unit pricing, which has greater impact among groups of lower socioeconomic status. On the individual level, research is needed to investigate the effect of liver-specific prevention programs. For example, noninvasive screening for liver disease, followed by treatment of the underlying cause, could be offered at social security offices to people who are unemployed or receiving disability pension.

Introduction

Reducing health inequalities is a key strategic objective of the World Health Organization and individual governments [1-3]. Alcohol-related causes are important contributors to inequality in mortality in several European countries [4-6]. Groups of low socioeconomic status are much more likely to die from alcohol-related causes than groups of high socioeconomic status [7]. The socioeconomic pattern in the incidence of specific alcohol-related diseases is less studied and may provide disease-specific targets for prevention [4,8]. Alcohol-related liver disease (ALD) results from chronic heavy alcohol drinking, usually over several years [9]. Worldwide, ALD is responsible for more than 20 million disability-adjusted life years (DALYs), accounting for 25% of all DALYs lost due to alcohol [10].

Socioeconomic status is related to numerous exposures, resources, and susceptibilities that may affect health. No single indicator of socioeconomic status captures its full effect on health over the whole course of life, so the assessment of several indicators may help to identify vulnerable groups [11]. Education, used as one marker of socioeconomic status, is obtained in early adulthood and is usually fixed thereafter [11]. In Danish health surveys, heavy drinking is reported twice as frequently in men of low education until the age of 65 years [12]. Employment status may change during a lifetime and can reflect current socioeconomic status. In Danish and international health surveys, individuals who are unemployed or outside the labor force are more likely to be heavy drinkers than employed individuals [13,14]. For public health purposes, identifying socioeconomic groups with a high risk of developing ALD could enable targeted preventive interventions for liver disease and alcohol use disorders. Registry-based studies with complete coverage may reveal the full burden of socioeconomic inequality in alcohol-related disease, compared with that which can be captured in questionnaire-based studies.
Heavy drinkers of low socioeconomic status are less likely to participate in questionnaire-based studies than are heavy drinkers of high socioeconomic status [15-17]. Therefore, we carried out a nationwide, registry-based study aiming to describe the inequality in ALD incidence by education and employment status.

Methods

All 5.8 million Danish citizens have access to universal, tax-financed healthcare and social security benefits, regardless of labor market history. We used healthcare and socioeconomic registries to identify newly diagnosed patients with ALD and their socioeconomic status. Registries were linked by a personal identification number: a unique identifier assigned to all Danish residents since 1968 [18]. We obtained aggregated data on the socioeconomic status of the general population to calculate incidence rates of ALD in Denmark in 2008-2019 by socioeconomic status and age.

Alcohol-related liver disease

We identified patients in the National Patient Registry and the Cause of Death Registry with an incident diagnosis of ALD between 2009 and 2018. Only patients at least 30 years old were included, since final educational attainment was assumed to be acquired by this age. Only patients up to age 70 were included, since population data for comparison above 70 years were not available. The National Patient Registry was established in 1977 and contains data on all somatic admissions, with emergency and outpatient contacts added in 1995 [19]. The Cause of Death Registry has recorded causes of death among all Danish citizens since 1970 [20]. In both registries, diagnoses are selected by physicians and coded according to the 8th and, since 1994, the 10th edition of the International Classification of Diseases (ICD).

We defined ALD in the National Patient Registry by 1) a diagnostic code specifying ALD, or 2) the combination of a diagnostic code for liver disease of unknown etiology and a diagnostic code indicating alcohol use disorder, where these codes were recorded within one year of each other in the National Patient Registry. The year when the liver diagnosis of unknown etiology was registered counted as the year of ALD diagnosis. We defined ALD in the Cause of Death Registry by 1) a diagnostic code specifying ALD, or 2) the combination of a diagnostic code for liver disease of unknown etiology and a diagnostic code indicating alcohol use disorder among the registered causes of death. Patients with the combination of codes for liver disease of unknown etiology and alcohol use disorder accounted for 7% of the total cohort (Supplemental Table S1). See Supplemental Table S2 for the diagnostic codes used in this study and Supplemental Figure S1 for the flowchart of cohort selection. We excluded patients with a diagnostic code indicating liver disease from 1977 to 2009 to exclude prevalent cases of ALD. The severity of ALD was defined according to the incident diagnosis as either cirrhosis or non-cirrhotic liver disease (all other codes that defined the liver disease). ALD was also classified as cirrhosis if a procedure or diagnostic code indicated variceal bleeding or ascites up to and including the day of diagnosis.
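The two-part case definition above is essentially an algorithm over coded registry records; a minimal sketch follows. The code sets and record layout are placeholders invented for illustration (the actual ICD codes are listed in Supplemental Table S2).

```python
# Sketch of the two-part ALD case definition described above: (1) a code
# specifying ALD, or (2) a liver-disease-of-unknown-etiology code plus an
# alcohol-use-disorder code recorded within one year of each other. The code
# sets below are placeholders, not the study's actual ICD code lists.
from datetime import date

ALD_CODES = {"K70"}        # placeholder: codes specifying ALD
UNKNOWN_LIVER = {"K74.6"}  # placeholder: liver disease of unknown etiology
ALCOHOL_USE = {"F10"}      # placeholder: alcohol use disorder

def is_incident_ald(records: list[tuple[date, str]]) -> bool:
    """records: (date, diagnosis code) pairs for one patient."""
    if any(code in ALD_CODES for _, code in records):
        return True
    for d1, c1 in records:
        if c1 in UNKNOWN_LIVER:
            for d2, c2 in records:
                if c2 in ALCOHOL_USE and abs((d2 - d1).days) <= 365:
                    return True
    return False

print(is_incident_ald([(date(2015, 3, 1), "K74.6"), (date(2015, 9, 9), "F10")]))
```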
Socio-economic status

The indicators of socioeconomic status used in this study were educational level and employment status. We chose not to use income as part of the definition of socioeconomic status, since low income could represent both unemployment with receipt of social benefits and low-paid occupations, the difference in payments between which is small in Denmark. Highest educational attainment was obtained from the Population Education Registry [21]. About 3% of the population have unknown educational status, either because they are immigrants to Denmark or because their education is not acknowledged by the Danish authorities. We grouped educational level according to the International Standard Classification of Education (ISCED), noting that Denmark has no educational program corresponding to ISCED level 4, post-secondary non-tertiary education [22]. See Supplementary Figure S2 for an overview of the Danish education system. The following four educational levels are used in this study: 1) 'low': unknown education, early childhood education, primary education, and lower secondary education (ISCED levels 0-2 and 9); 2) 'medium-low': high school programs, vocational training, and education preparing for a career in a specific trade or industry (ISCED level 3); 3) 'medium-high': short-cycle tertiary education, bachelor's degree or equivalent (ISCED levels 5-6); and 4) 'high': long second-cycle programs, master's degree or equivalent, and doctoral or Ph.D. programs or equivalent (ISCED levels 7-8).

Employment status was obtained from the Registry-based Labor Force Statistics for the year before the ALD diagnosis. The register holds information on the type of labor market attachment at the end of November every year, with the population divided into three main groups according to the International Labor Organization: employed, unemployed, and persons outside the labor force [23]. In this study, individuals outside the labor force were split into those receiving health benefits, implying a temporary situation of sick or maternity leave, etc., and those receiving disability pension or retirement, implying being permanently outside the labor force. Employed individuals were divided according to their specific occupation, using the following hierarchy with the lowest rank first: 'self-employed', 'other workers', 'skilled workers', 'intermediates', and 'professionals' (professionals and managers were collapsed into one group in this study). See Supplemental Figure S3 for a detailed presentation of the classification of employment status.

Aggregated general population data

In calculations of ALD incidence, we used publicly available data on the demographics of the Danish population provided by Statistics Denmark. Data on education and employment status were aggregated by sex, five-year age groups, and individual calendar years. Supplementary Table S3 presents the number of individuals aged 30-69 years in Denmark by socioeconomic status.

Main analysis: ALD incidence according to socioeconomic status

The incidence rate of ALD was calculated for each calendar year between 2009 and 2018, for five-year age groups, sex, and indicators of socioeconomic status (educational level and employment status). For example, the incidence rate for the low educational level in 2009 was calculated as the number of newly diagnosed ALD patients of low educational level in 2009 divided by the total number of person-years observed among people of low educational level in 2009. Socioeconomic inequality in disease incidence can be defined as the difference in disease incidence between low and high socioeconomic status [24]. We followed the recommendation of the World Health Organization and calculated both absolute and relative quantifications of socioeconomic inequality in disease [25,26]. Educational level and employment status were analyzed separately. The absolute measure of socioeconomic inequality was the absolute rate difference in ALD incidence between low and high educational levels [26]. The relative measure of socioeconomic inequality was the incidence rate ratio (IRR) of low compared with high educational level. Incidences were estimated with a negative binomial model, and the absolute rate differences and IRRs were adjusted for calendar year and sex. We stratified analyses of IRRs of ALD by 10-year age groups (30-39, 40-49, 50-59, and 60-69 years) to investigate the influence of educational level in each age group. We tested for interaction between the effects of age and educational level on ALD incidence by including an interaction term in the IRR model, and used a nested log-likelihood test to assess whether this interaction term improved the model fit. Finally, we estimated population attributable fractions of educational level on ALD incidence [27]. The population attributable fraction is the proportional reduction in ALD in the hypothetical situation where everyone in the population had the same risk of ALD as those of the high educational level. It is calculated from the difference between the incidence of ALD in the population and the incidence of ALD in individuals of the highest category of educational level. All analyses were repeated with employment status replacing educational level; unemployment was considered the lowest socioeconomic status, and the highest rank of employment (professionals) was considered the highest.
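As a crude numerical illustration of these measures, the sketch below computes incidence rates per million person-years, the absolute rate difference, the IRR with a Wald confidence interval, and the population attributable fraction. The counts are invented; the study's published estimates come from negative binomial models adjusted for calendar year and sex, which this simple calculation does not reproduce.

```python
# Sketch of the inequality measures described above: incidence rates per
# million person-years, absolute rate difference, IRR with a Wald CI, and
# the population attributable fraction. All counts are illustrative only.
import math

def rate(cases: int, person_years: float) -> float:
    return cases / person_years * 1e6  # per million person-years

low_cases, low_py = 8000, 8.8e6    # low educational level (illustrative)
high_cases, high_py = 550, 3.0e6   # high educational level (illustrative)

r_low, r_high = rate(low_cases, low_py), rate(high_cases, high_py)
rd = r_low - r_high    # absolute measure of inequality
irr = r_low / r_high   # relative measure of inequality
se = math.sqrt(1 / low_cases + 1 / high_cases)  # SE of log(IRR)
ci = (irr * math.exp(-1.96 * se), irr * math.exp(1.96 * se))

# Population attributable fraction: proportional reduction in ALD if everyone
# had the incidence observed in the highest educational level.
pop_cases, pop_py = 17473, 30e6  # whole population (illustrative person-years)
r_pop = rate(pop_cases, pop_py)
paf = (r_pop - r_high) / r_pop

print(f"RD = {rd:.0f}/million PY, IRR = {irr:.1f} "
      f"(95% CI {ci[0]:.1f}-{ci[1]:.1f}), PAF = {paf:.0%}")
```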
The absolute measure of socioeconomic inequality was the absolute rate difference in ALD incidence between low and high educational levels [26]. The relative measure of socioeconomic inequality was the incidence rate ratio (IRR) of low compared to high educational level. Incidences were estimated with a negative binomial model, and the absolute rate differences and IRRs were adjusted for calendar year and sex. We stratified analyses of IRRs of ALD by 10-year age groups (30–39, 40–49, 50–59, and 60–69) to investigate the influence of educational level in each age group. We tested for interaction between the effects of age and educational level on ALD incidence by including an interaction term in the IRR model and used a log-likelihood test of the nested models to assess whether the interaction term improved model fit. Finally, we estimated population attributable fractions of educational level on ALD incidence [27]. The population attributable fraction is the proportional reduction in ALD in the hypothetical situation where everyone in the population had the same risk of ALD as the high educational level. It is calculated as the difference between the incidence of ALD in the population and the incidence of ALD in individuals of the highest category of educational level, expressed as a proportion of the incidence in the population. All analyses were repeated with employment status replacing educational level. Unemployment was considered the lowest socioeconomic status, and the highest rank of employment (professionals) was considered the highest.

Employment status in the 10 years before ALD diagnosis

To provide context for our findings, we performed a case-control study of educational level and employment status in the years before the ALD diagnosis for patients with ALD and population controls. For each included patient with ALD, Statistics Denmark randomly identified four or five population controls without ALD, matched on sex, age, and birth year according to the date of ALD diagnosis. Sociodemographic characteristics of controls are found in Supplemental Table S4. We examined employment status in each of the 10 years prior to the diagnosis of ALD, excluding 331 (2%) patients with ALD and 3,469 (5%) population controls without complete information for all 10 years.

Sex-stratified analysis

ALD develops about twice as frequently in men as in women [28]. We ran all analyses stratified by sex to assess whether the pattern of ALD incidence according to socioeconomic status differed between men and women.

Role of funding source

GA and PJ were supported by a grant from the Novo Nordisk Foundation (NNF18OC0054612). GA was supported by a grant from the Research Fund of Bispebjerg Hospital. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Patient and public involvement

No patients were involved in the design or conduct of the study.

Results

In all, 17,473 patients had a first-time diagnosis of ALD in Denmark in 2009–2018, of whom 12,092 (69%) were men (Table 1). The median age was 58 years (IQR: 51–64), and 80% of patients were 50–69 years old. Overall, 78% of ALD patients had cirrhosis, and this proportion was roughly similar across educational levels (Supplemental Table S5). A low level of education was the most common (46%), followed by medium-low (40%), medium-high (11%), and high educational level (3%). Among controls, 30% had a low, 41% a medium-low, 21% a medium-high, and 8% a high educational level (Supplemental Table S4).
For employment status, 59% of ALD patients were outside the labor force on November 30th of the year before their diagnosis, 21% were unemployed, and 20% were employed. Among controls, 25% were outside the labor force, 8% were unemployed, and 67% were employed.

ALD incidence by educational level

ALD incidence rates correlated inversely with education, ranging from 181 (95% CI, 167–197) per million person-years for high educational level to 910 (95% CI, 764–1,086) per million person-years for low educational level (Table 1). The inverse correlation of the incidence rate with educational level was observed in all age groups (Fig. 1). The absolute rate difference in incidence between low and high educational level gradually increased with age, from 165 (95% CI, 142–188) per million person-years for ages 30–39 to 1,149 (95% CI, 1,083–1,217) per million person-years for ages 50–59, and then decreased to 590 (95% CI, 320–859) per million person-years for ages 60–69. The relative difference in incidence rates between high and low educational level was larger in younger than in older age groups (p for interaction < 0.0001) (Table 2). For example, the IRR for low compared to high educational level was 9.8 (95% CI, 6.2–15) for ages 30–39 and 2.0 (95% CI, 1.8–2.3) for ages 60–69.

Incidence by employment status

For employment status, the ALD incidence rate per million person-years was 211 (95% CI, 189–236) in the employed, 3,449 (95% CI, 2,785–4,271) in the unemployed, and 1,706 (95% CI, 1,494–1,947) in individuals outside the labor force (Table 1). Among the employed, incidence rates correlated inversely with employment rank: the incidence rate was 101 (95% CI, 78–132) per million person-years in the highest employment rank (professionals) and increased gradually to 308 (95% CI, 248–383) per million person-years in the lowest employment rank (self-employed). Among individuals outside the labor force, those receiving disability pension had the highest incidence rate, 2,516 (95% CI, 2,118–2,987) per million person-years. The incidence rate was 1,081 (95% CI, 886–1,318) per million person-years for individuals receiving health benefits and 992 (95% CI, 807–1,220) per million person-years for the retired. The pattern of incidence rates according to employment status was similar in all age groups (Table 3). The absolute rate difference in incidence between the unemployed and the highest employment rank (professionals) gradually increased with age, from 468 (95% CI, 408–529) per million person-years for ages 30–39 to 6,988 (95% CI, 6,324–7,652) for ages 60–69. The relative difference in incidence rates tended to be higher in younger than in older age groups, although this trend was not as pronounced as it was for educational level (p for interaction < 0.0001) (Supplemental Figure S4).

Employment status in the 10 years before ALD diagnosis

Patients with ALD were less likely to be employed than controls in the 10 years prior to ALD diagnosis (Fig. 2). For instance, only 59% of patients with ALD were employed on November 30th of the year that was 10 years before their ALD diagnosis, compared with 87% of controls. At five years before the ALD diagnosis, 40% of ALD patients were employed compared with 78% of controls.

Sex-stratified analysis

The sex-stratified analyses showed a similar pattern of socioeconomic inequality in ALD incidence for men and women (Supplementary Figures S5–S9).
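The absolute and relative measures defined in the Methods can be made concrete with the education rates quoted above from Table 1. As a crude, back-of-the-envelope illustration only (the published estimates are model-based and adjusted for calendar year and sex, so they will not coincide exactly with these figures):

$$\Delta = 910 - 181 = 729 \text{ per million person-years}, \qquad \mathrm{IRR} \approx \frac{910}{181} \approx 5.0.$$

For completeness, a standard formulation of the population attributable fraction consistent with the verbal definition in the Methods is $\mathrm{PAF} = (I_{\mathrm{population}} - I_{\mathrm{high}})/I_{\mathrm{population}}$.

The adjusted rate differences and IRRs themselves come from a negative binomial rate model. A minimal sketch of such a model is given below; the data frame, variable names, and counts are hypothetical, and this illustrates the general technique rather than the authors' actual code (which we have not seen):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated data: one row per education level x sex x calendar
# year, with ALD case counts and observed person-years (all values made up).
df = pd.DataFrame({
    "cases":        [12, 150, 9, 130, 14, 160, 10, 140],
    "person_years": [66000, 165000, 64000, 160000, 67000, 167000, 65000, 162000],
    "education":    ["high", "low"] * 4,
    "sex":          ["men", "men", "women", "women"] * 2,
    "year":         [2009, 2009, 2009, 2009, 2010, 2010, 2010, 2010],
})

# Negative binomial rate model: person-years enter as exposure (log offset),
# so the exponentiated education coefficient is the IRR of low vs. high
# educational level, adjusted for sex and calendar year (linear, for brevity).
fit = smf.glm(
    "cases ~ C(education, Treatment('high')) + C(sex) + year",
    data=df,
    family=sm.families.NegativeBinomial(),
    exposure=df["person_years"],
).fit()
print(np.exp(fit.params))  # incidence rate ratios relative to the reference levels
```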
Discussion

This nationwide study, based on a population with access to universal healthcare and social security benefits, showed huge inequalities in the incidence of ALD by educational level and employment status at ages 30–69 years. ALD incidence rates increased with age and with decreasing educational level and employment rank, and were very high in people who were unemployed or receiving disability pension. With respect to absolute differences in incidence rates, the socioeconomic gradient was higher in people aged 40–69 years than in people aged 30–39 years. With respect to relative differences in incidence rates, the socioeconomic gradient was higher in younger people. The difference in employment status between ALD patients and controls was evident several years before the ALD diagnosis.

Coverage was nearly complete for data on hospital care and socioeconomic status [19,21,23]. The validity of the ALD diagnosis is high: diagnostic codes for non-specified liver disease and alcoholic cirrhosis in the National Patient Registry had a positive predictive value of 80–100% when compared with discharge summaries and medical records [29–31]. The accuracy of educational level and employment status in Danish registries is also high [21,23], although there may be some misclassification of employment status. For instance, unemployed individuals who are not receiving social benefits are wrongly classified as employed [32]. This misclassification is most likely independent of ALD occurrence and thus less likely to influence our findings. In conclusion, this study is likely to represent valid population-based estimates of ALD incidence by socioeconomic status in Denmark.

A socioeconomic gradient of total alcohol-related disease and mortality is observed in many countries [6]. The socioeconomic pattern in the incidence of specific alcohol-related diseases such as ALD is less studied and may provide disease-specific targets for prevention [4,8]. A nationwide UK study found a nearly three-fold increase in the rate of variceal bleeding in the most deprived quintile compared with the least deprived [33]. A Hungarian case-control study found an increasing likelihood of chronic liver disease with decreasing educational level [34]. A Chinese case-control study made the same observation for education but, contrary to our results, found ALD positively associated with employment compared to unemployment, which the authors suggested was due to social drinking after work [35].

Why is there a socioeconomic gradient in ALD incidence? First, biases in coding could produce the socioeconomic gradient in ALD. It is a limitation of our study that we do not have data on coding practice.
There is, however, nothing in our clinical experience to suggest that clinicians are more likely to assign a diagnosis code of alcohol-related liver disease to a person of low socioeconomic status when the etiology of the liver disease is in fact uncertain. The available data suggest that low socioeconomic status may also be an independent risk factor in non-alcoholic liver disease [36]. Thus, we believe that bias in coding is an unlikely contributor to our findings.

Second, we believe that differences in the prevalence of hazardous alcohol consumption between socioeconomic groups contribute to the inequality in ALD incidence observed in this study. It is a limitation of our study that we could not clarify the causal mechanisms, since we lacked data on alcohol consumption and other lifestyle factors. Heavy drinking was reported more frequently in men of low socioeconomic status than in men of high socioeconomic status up to the age of 65 years in the Danish National Health Survey 2017 [12]. After the age of 65 years, the picture was the opposite, with men of high socioeconomic status being more likely to be heavy drinkers than those of low socioeconomic status. This change may partly be explained by a high mortality among heavy drinkers of low socioeconomic status [7]. For women, heavy drinking according to socioeconomic status in the Danish National Health Surveys was similar up to the age of 65 years, but after 65 years women of high socioeconomic status were more likely to be heavy drinkers than those of low socioeconomic status [12]. The true proportion of heavy drinkers in groups of lower socioeconomic status may be even higher than reported in health surveys, because heavy drinkers of low socioeconomic status are less likely to participate in questionnaire-based studies than heavy drinkers of high socioeconomic status. For example, for both men and women, alcohol-related mortality was three times higher among health survey non-participants of low educational level than among non-participants of high educational level [17]. A study from the UK indicates that individuals of lower socioeconomic status were more likely to be extreme drinkers (>24 units per day) than those of high socioeconomic status [37].

Whether alcohol drinking patterns contribute to the observed socioeconomic inequality in ALD incidence needs further investigation. A recent systematic review suggested that heavy episodic drinking explained more of the socioeconomic inequality than alcohol use in general [38]. However, for ALD, prior studies suggest that daily rather than episodic drinking increases the risk [39,40]. Future studies should evaluate whether inequality in ALD incidence differs according to specific alcohol use disorders.

Early socioeconomic disadvantage leads to an increased likelihood of alcohol use disorders in adolescence [41]. Heavy drinking in adolescence diminishes educational attainment and is associated with higher unemployment risk [42]. Low educational attainment is in general associated with higher unemployment risk [43]. Unemployment may lead to an increase in drinking, with chronic heavy drinking common among the unemployed and individuals outside the labor force, and chronic heavy drinking is also known to reduce the likelihood of transition back to employment [13,14]. On the other hand, heavy drinking decreases employment performance and increases the risk of job loss, sick leave, and ultimately a permanent exit from the labor market with receipt of disability pension [13].
In line with this, we found that patients with ALD, compared with controls, were less likely to be employed and more likely to be either unemployed or outside the labor market in the 10 years before the diagnosis of ALD, presumably reflecting the influence of heavy drinking on employment performance prior to the ALD diagnosis. The lower ALD incidence for persons outside the labor force than for the unemployed could be due to the fact that individuals who drink heavily may be unable to work but will not receive disability pension until they develop organ disease (such as ALD), whereas patients with more obvious conditions, such as severe neurologic or psychiatric diseases, who do not drink heavily will more easily be granted a disability pension [44]. It is therefore plausible that heavy alcohol drinking leads to unemployment for several years before it leads to manifest organ disease and access to disability pension. Alcohol is also a risk factor for several other diseases associated with a high mortality, such as chronic pancreatitis, cancer, and heart disease, which we were not able to address in this study.

Third, obesity, smoking, inactivity, and poor nutrition may contribute to the socioeconomic inequality in ALD incidence observed in this study. Multiple additional risky health behaviors cluster in heavy drinkers of low socioeconomic status, whereas heavy drinkers of high socioeconomic status seem to lead a less unhealthy lifestyle apart from drinking heavily [45]. Obesity and smoking are both risk factors for chronic liver disease and are unevenly distributed across socioeconomic strata [46,47]. For instance, obesity is three times as common in individuals of low compared to high education [12]. We regard the influence of viral hepatitis as negligible in this study, since the prevalence of hepatitis B and C in Denmark is below 0.5% [48,49].

Fourth, unknown factors may contribute to the socioeconomic gradient in ALD. For example, prospective cohort studies of total alcohol-related harm found that alcohol and other lifestyle factors had only a minor role in mediating the socioeconomic inequality of alcohol-related harm [4,8]. Similarly, in the Hungarian case-control study of chronic liver disease, socioeconomic inequality in chronic liver disease persisted after adjustment for alcohol and other lifestyle factors [34]. These unknown factors may include peri- and prenatal factors such as maternal smoking, infections, psychosocial stressors due to poor material circumstances, and diet, each of which could increase vulnerability to ALD in people of low socioeconomic status [50].

This is the first study of socioeconomic inequality in ALD incidence applying both an absolute and a relative measure of inequality [24–26]. Application of the absolute measure of inequality showed the huge burden of ALD incidence for people of low compared to high socioeconomic status after the age of 40 years. This follows the previously observed increase in the incidence of ALD with age until 60–70 years [51]. Application of the relative measure of inequality contributed the finding that inequality in ALD incidence was present already in the young age group of 30–39 years. Similarly, a Finnish nationwide study of total alcohol-related mortality found a higher relative inequality according to educational level and employment status for younger than for older ages [16].
The stronger influence of employment status compared with educational level on ALD incidence is in line with employment status reflecting current socioeconomic status, whereas educational level is fixed after early adulthood [16]. This downward social mobility due to heavy drinking is termed social drift [16]. Heavy drinking in the twenties could negatively impact the ability to attain an education, but if heavy drinking begins later in life, the impact is mainly observed for employment status, that is, as high incidence rates of ALD among the unemployed.

Implications

In 2021 we are facing a substantial economic downturn, with high unemployment rates already seen in the US and Latin American countries [52–54]. As a consequence, a rise in heavy drinking and alcohol-related disease may be expected [14]. For instance, alcohol-related cirrhosis mortality increased remarkably in the US after the financial crisis in 2008 [55]. Governments and healthcare institutions should act now and consider alcohol control policies such as minimum unit pricing, which has a greater impact among groups of lower socioeconomic status [56,57]. Limiting the availability of alcohol by restricting licenses to sell alcohol in deprived areas may be another promising approach [58]. On the individual level, we hope that our results will motivate research on, and implementation of, liver-specific prevention programs. The finding from this study that patients with ALD were more likely to be unemployed several years before the diagnosis indicates a window of opportunity for such preventive interventions. For example, non-invasive screening for liver disease followed by treatment of the underlying cause may be offered to people who are unemployed when they attend social security offices [59]. Systematic liver screening programs may also be delivered to patients hospitalized with alcohol problems or seeking alcohol abuse treatment, who are more likely to have a low educational level than the background population [60,61].

In conclusion, this study showed substantial inequality in ALD incidence by educational level and employment status in Denmark at ages 30–69 years. Further research is needed to understand the contribution of heavy drinking, drinking patterns, and other lifestyle factors to the socioeconomic inequality in ALD incidence. Alcohol prevention programs should target groups of low socioeconomic status at all ages and may be combined with liver-specific prevention programs.

Data availability

Data used in this study are not publicly available but can be applied for at the Danish Health Data Authorities.
Lost in Transition: Health Care Experiences of Adults Born Very Preterm—A Qualitative Approach

Introduction: Adults Born Very Preterm (ABP) are an underperceived but steadily increasing patient population. It has been shown that they face multiple physical, mental and emotional health problems as they age. Very little is known about their specific health care needs beyond childhood and adolescence. This article focuses on their personal perspectives: it explores how they feel embedded in established health care structures and points to health care-related barriers they face. Methods: We conducted 20 individual in-depth interviews with adults born preterm aged 20–54 years, with a gestational age (GA) below 33 weeks at birth and birth weights ranging from 870 to 1,950 g. Qualitative content analysis of the narrative interview data was conducted to identify themes related to self-perceived health, health care satisfaction, and social well-being. Results: The majority (85%) of the study participants reported that their former prematurity is still of concern in their everyday lives as adults. The prevalence of self-reported physical (65%) and mental (45%) long-term sequelae of prematurity was high. Most participants expressed dissatisfaction with health care services regarding their former prematurity. Lack of consideration for their prematurity status by adult health care providers and the invisibility of the often subtle impairments they face were named as the main barriers to receiving adequate health care. Age and burden of disease were important factors influencing participants' perception of their own health and their health care satisfaction. All participants expressed great interest in the provision of specialized, custom-tailored health care services taking the individual history of prematurity into account. Discussion: Adults born preterm are a patient population underperceived by the health care system. Long-term effects of very preterm birth, affecting various domains of life, may become a substantial burden of disease in a subgroup of formerly preterm individuals and should therefore be taken into consideration by adult health care providers.

INTRODUCTION

Nowadays 5-13% of all infants are born preterm, i.e., at a gestational age (GA) of <37 weeks. Very small preterm infants with a GA of <32 weeks and/or a birthweight <1,500 g comprise about 1.5% of all live births per year (1,2). Due to medical and technical advances in neonatology over the last decades, survival rates of very preterm infants have substantially increased. Since the 1980s, when the causal treatment of lung immaturity by intratracheal surfactant application became widely available, the majority of premature infants, down to the very immature babies born at the threshold of viability, survive (3,4). As a consequence, an increasing number of formerly very preterm infants is growing up, leaving infancy, childhood and adolescence, and thus the professional competency of the pediatric health care provider, behind. Not only do most very low birthweight (VLBW, birthweight <1,500 g) and extremely low birthweight (ELBW, birthweight <1,000 g) preterm infants born since the 1980s reach adulthood; this fairly new and constantly increasing population of adults born preterm (ABP) is also facing a challenge that rarely existed before the era of intratracheal surfactant application: they age. Severe neurologic impairments such as cerebral palsy and hydrocephalus still affect about 2-9% of all very small preterms today (2).
More subtle neuro-cognitive, behavioral and socio-emotional dysfunctions have been a major focus of research over the last decades, also affecting 1/4-1/3 of all very small preterm survivors (19-22). Another important aspect when looking at the long-term trajectories of very preterm birth is that ex-preterms seem to have a higher risk for age-associated diseases, such as cardiovascular disease. This implies that age-associated changes and morbidities may occur earlier and more frequently in ABP compared to adults born at term (11,23). In this context, one discussed mechanism is the "fetal origin hypothesis," which states that increased risk is programmed during fetal life. Adverse effects during pregnancy, such as intrauterine growth retardation, followed by several weeks of unphysiologic extrauterine maturation of VLBW and ELBW infants, lead to metabolic reprogramming which may cause diseases in later adulthood (24). Subjects born preterm, who have a low birthweight and more stress during fetal and early life, could be programmed toward a different health outcome in later life (25). When considering the available data, it becomes clear that prematurity is a burden of potentially lifelong character. To date, very little is known about ABP's health care needs beyond childhood and adolescence (26). After they reach adulthood and transition from pediatric into adult health care, there are no more custom-tailored health care services to address their needs. We hypothesized that their status as "formerly born very preterm" is largely underperceived among adult health care providers and may therefore not be integrated into counseling, prevention, or treatment concepts. In this pilot study, a qualitative approach is applied to address the following research questions: (1) How do ABP perceive their health status as adults? (2) What are ABP's experiences with health care; in particular, how do ABP feel perceived by established health care structures and services, and what barriers to health care are they facing?

MATERIALS AND METHODS

In this qualitative study, semi-structured in-depth interviews with adults formerly born very preterm were conducted. The rationale for choosing a qualitative research approach was the aim to assess participants' personal perspectives toward health and healthcare access, in contrast to quantitative approaches that focus on objectively measurable health outcomes but may provide a less detailed insight into the situation of those formerly born preterm. To date there is a paucity of information on the personal perspectives of adults formerly born preterm with regard to their own quality of life, their fears, and their hopes as they age (27). Qualitative research is particularly suitable for exploratory work where the researcher may not be aware of all the relevant aspects of the study subject in advance. Qualitative methods of data collection allow an open approach to the investigated phenomena and enable the researcher to obtain a more concrete, more vivid picture from the perspective of those affected (28). Data collection lasted 3 months, from November 2018 to February 2019. Participants were mainly recruited via the websites and contacts of two patient and parent advocacy groups, one of them active on a regional level (city of Hamburg and surroundings), the other on a national level. Eight participants were recruited at an event for ABP organized by the national advocacy group.
The remaining 12 participants became aware of the study through independent internet research or volunteered after hearing of the study by word of mouth. Eligible participants were >20 years of age and born at <33 weeks of gestation and/or with a birthweight of <1,500 g. None of the patients had attended any kind of systematic long-term follow-up program regarding their prematurity. Inclusion and exclusion criteria can be seen in Table 1. Participants were sampled purposefully on an entirely voluntary basis. Written consent, in which anonymity and confidentiality were granted, was obtained from all study participants. Ethical approval was granted by the ethical committee of the Hamburg Medical Association. Before the personal interview, each participant filled out a short written questionnaire. The short questionnaire included demographic information, perinatal facts, present diagnoses, level of education, questions about the present-day relevance of former prematurity, and questions on current health care. All participants completed an audiotaped, 30-60-min semi-structured interview. The interviews were conducted either in person or by video call by a female medical student with a Bachelor's degree, who was trained by two of the co-authors to conduct the interviews. Interview participants knew the interviewer's name and the purpose of the study. After 20 conducted interviews, thematic saturation was reached, as no new themes emerged from the data (29). According to simulation studies, for a sample size like ours (n = 20) the probability of encountering new topics or codes is lower than 15% (30). Interviews were transcribed verbatim, following published transcription rules (31). They were repeatedly read by the two first authors and compared to the audio recordings. Each interview was read and coded by one author and then re-evaluated by another author. In this way, an interpretative consensus was reached. This method was preferred to parallel independent coding using statistical measures of interrater reliability, such as kappa statistics, because the small number of interviews allowed for a more comprehensive double evaluation. The semi-structured interview guide was developed by review of the literature and expert opinion and was iteratively refined during the data collection period. The final guide is available upon request. We performed a qualitative content analysis on the transcribed interviews (32). In a content analysis approach, the researcher seeks to gain a deep understanding of the concepts involved in the subject, and words and sentences are categorized into main categories and subcategories that make the phenomenon easier to understand (33). The authors systematically coded participants' responses, first individually and then as a group, to classify codes into common themes. The coded texts were labeled with both deductive and inductive codes. The deductive codes were derived from the interview guide, and the inductive codes were developed iteratively by reading and re-reading the transcribed interviews. The final code system contained 11 deductive codes that represent the interviews' main categories: role of prematurity in today's everyday life, physical health status, mental health status, health care, education and professional life, family, romantic relationships/partnerships, and leisure time activities.
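To illustrate the kind of contrasting comparison described just below (cross tables combining subcodes with biometric data), a minimal sketch in Python with pandas is shown here. The data frame, code names, and values are entirely hypothetical and only mimic the structure of a coded-interview export; the study itself used MAXQDA for this step.

```python
import pandas as pd

# Hypothetical coded-interview data: one row per participant, with binary
# indicators for two example subcodes and a simple biometric attribute.
coded = pd.DataFrame({
    "participant": list(range(1, 21)),
    "age_group":   ["<30"] * 11 + [">=30"] * 9,
    "physical_sequelae": [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "mental_sequelae":   [0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0],
})

# Cross table contrasting a subcode with an attribute: row-normalized
# proportions show how often the code occurs in each age group.
print(pd.crosstab(coded["age_group"], coded["physical_sequelae"], normalize="index"))

# The same idea extends to combining two subcodes with each other.
print(pd.crosstab(coded["physical_sequelae"], coded["mental_sequelae"]))
```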
The applied subcodes allowed a more detailed exploration of the interview content and enabled the recognition of parallels and connections within the individual stories and perspectives of the participants. Cross tables were used to widen the analysis in order to understand differences in self-perceived health and health care seeking behavior related to former prematurity. For this purpose, different subcodes were combined with each other and with biometric data and then contrasted. In this way, contrasting comparisons between different groups within the sample of ABP were made. The software MAXQDA was used to facilitate analysis of the coded transcripts. Representative quotations were selected to illustrate key themes.

RESULTS

A total of 20 adults formerly born preterm (ABP) were recruited and interviewed (Table 2). More than half of the participants were female (60%) and all of them were Caucasian (100%). Their age ranged from 20 to 54 years, with the majority of participants in the third decade of life, between 20 and 29 years of age (55%). GA at birth was between 26 and 33 weeks, with a median of 29 weeks. Birthweight was between 870 g and 1,950 g, with a median of 1,140 g. The sample did not include participants born small for gestational age (SGA), i.e., with a birthweight below the 10th growth percentile. There was an overrepresentation of high educational levels, with the majority of participants holding a high school diploma corresponding to university entrance level, e.g., the German Abitur or Fachabitur (70%), compared to the general German population (34,35). The study sample comprised ABP with and without physical disabilities. None of the participants had attended any kind of systematic long-term follow-up program regarding their prematurity. Codes and subcodes were established according to the data analysis description in the methods section. A visualization of the final code system is presented in the Supplementary Material: Analytic code-system.

Role of Prematurity in Today's Everyday Life

Most participants (85%) reported that their former prematurity was still of concern in their everyday life as adults. These concerns included consequences of physical disability, mental health sequelae, difficulties in social/attachment behavior, and barriers in achieving their educational and professional goals.

Physical Health Status

More than half (65%) of the participants reported chronic clinical diagnoses which they had received from their health care providers (Table 3). In addition to clinical diagnoses, the following themes concerning physical health and fitness in everyday life were repeatedly named (Table 4): low basal energy level, n = 6 (30%); high susceptibility to disease, n = 5 (25%); sleep disorder, n = 3 (15%); and lack of body awareness, n = 2 (10%). Low physical performance, fast exhaustibility, and increased need for rest were also described. Participants referred to a "weak immune system," by which they meant high disease susceptibility and long recovery periods after acute illness. Current age seems to influence the perception of long-term effects of former prematurity. Physical health sequelae attributed to former prematurity were more often perceived among adults born preterm <30 years of age (67%) and less frequently among adults born preterm >30 years of age (33%). In our sample, the prevalence of physical impairments was higher in the group of younger ABP.
All ABP with cerebral palsy (n = 5) were under the age of 34.

"I always feel as if I'm constantly at my limits. And others, they are fit, they go out in the evenings, they do this and do that. I have to constantly balance my energy, constantly." (Participant 5, age 53)

"And I'm constantly sick. Really, like a weakling, who always has something. And I never feel really fit, I always catch something." (Participant 16, age 26)

Mental Health Status

Almost half of the participants (45%) reported clinical diagnoses which they had received from their health care providers (Table 5). 40% of participants reported that they were currently undergoing psychotherapy, 15% said that they had undergone psychotherapy in the past, and 10% said they were currently contemplating whether they should undergo psychotherapy. Only 5 participants (25%) reported no experience with psychotherapy. In addition to clinical mental health diagnoses, the following difficulties in everyday life were repeatedly named:

a) overstimulation/hypersensitivity, i.e., difficulty in filtering out stimuli from the surroundings (25%)
b) anxiety, especially concerning separation/loss in social life and fear of failure in private and professional life (25%)
c) awareness of one's own cognitive constraints (25%)
d) high sensitivity to stress (25%)

a) "I have extreme problems in filtering out auditive stimuli from the surroundings. For example going to the supermarket is hell for me, it is only possible with my earphones on, otherwise it's not possible." (Participant 16, age 26)
b) "... I had, for example at school age, especially in the first years, extreme loss and separation fears. So, I could not stay at school by myself, they (the parents) had to stay with me for some time before they could leave. (...) this fear of loss or this separation fear has been present for me throughout the years." (Participant 2, age 31)
c) "So, I wouldn't say that I'm 28 years old by my developmental status. I also notice that my age is very often not correctly estimated; mostly people judge me as being 18, 19 or at most 20 years old." (Participant 3, age 28)
d) "Well, stress is an unbelievable burden to me. Much more, I think, than it is for other people. I somehow also need my own rhythm. (...) and too many things assailing me, that really exhausts me. More than (it exhausts) others, I believe." (Participant 5, age 53)

Overall, half of all interviewed ABP stated that they were certain that their premature birth had an influence on their mental health in adult life, and another 25% stated that they assumed an influence of their prematurity on their mental health but were not sure about causalities. Post-natal separation from the mother was named as one assumed cause affecting mental health in adult life.

"I do see a correlation with this having been born too early, with this not being together with or inside my mother for the regular amount of time. I somehow see a correlation (with my mental health today), for me personally, yes." (Participant 2, age 31)

"I do believe that a certain amount of hypersensitivity could be a possible aftereffect (of my prematurity). Um, because yes, I know that now only intellectually, when one was exposed to the environment as a preterm, the light and the sounds, and one was just not protected anymore, that is why today I just have a stronger sensitivity." (Participant 19, age 43)
"I need very much physical contact and I know for sure that this is also related to my premature birth. (...) I was in the incubator for three months and at that time there was no possibility to have much physical contact with my mother, and so I definitely know that this comes from my preterm birth." (Participant 4, age 31)

Among ABP <30 years of age, mental health sequelae attributed to prematurity were less frequently perceived than among ABP >30 years of age. Among ABP >40 years of age, all participants reported mental health sequelae which they attributed to their former prematurity, whereas in the age group of 20-29 years mental health sequelae were reported by only 36%.

"(...) I sometimes have something like an emotional backflash and I feel things of which I rationally think that they have nothing to do with the here and now, but that these are feelings of me as a child, as a baby, not me as an adult... And what I have experienced over and over again is such a deep loneliness, which I cannot explain rationally, because I do have a large social network, I am well integrated. But nevertheless this loneliness still creeps up on me from time to time. (...) It's like I am catapulted back into another state. Into an earlier one." (Participant 1, age 31)

"Yes, definitively. I have three sisters, for example. I mean, everybody can be different of course, but I am... the other three are so similar and I somehow am so different. Yes, this is why I believe it must come from being born a preterm." (Participant 5, age 53)

Health Care

In the short questionnaire preceding the actual interview, 70% of the participants said that overall, they were satisfied with their current health care provision. One third (30%) of the participants indicated that they were seeking professional health care regularly; 70% said that they only visit a health care provider on demand or in case of acute illness. For all participants, the health care provider primarily involved was the general practitioner or family doctor. Depending on their health care status, care from other specialists was sought. When asked how satisfied they were with regard to their status as former preterms, only 55% of the participants indicated that they were satisfied with their current health care. The majority of participants (80%) stated that their prematurity is never mentioned by their current health care providers and that their status as adults born preterm does not influence current treatment or counseling options by health care professionals. As a result, only 40% of the participants feel well informed and advised about the potential risks associated with preterm birth.

"(...) the topic of prematurity played a role until I was maybe 15, 16 years of age (...) meaning that doctors actively asked about being born preterm or something, and actually now, in adult life, it is never mentioned again." (Participant 14, age 27)

Half of the participants who reported regularly seeking medical treatment said that they were unsatisfied with their current health care. In comparison, of those participants who reported seeking medical treatment only on demand or in case of acute illness, only 14% said that they were unsatisfied with health care services. Perceived reasons for not being satisfied with health care were: a) lack of consideration for their prematurity status by current health care providers, and b) an unmet need for specialized health care services with regard to their former prematurity. Some ABP reported difficulties in receiving adequate treatment targeted to their complex medical conditions.
Perceived barriers to receiving adequate treatment were, again, lack of consideration and lack of knowledge about the long-term effects of prematurity among current health care providers, as well as the small time windows health care providers have for treating the individual patient, which impede them from carefully listening to their patients' complaints and from advising them appropriately. Another problem described was rejected reimbursements by health insurance companies, an example being the rejected continuation of physiotherapy for an ABP with cerebral palsy. The perception that current health care providers do not sufficiently consider prematurity status was independent of the participants' education level: the proportion of participants stating that prematurity is considered by their current health care provider was equal between those holding a high school diploma corresponding to university entrance level and those holding lower school diplomas (50%, respectively). The older a patient gets, the less likely it is that her or his current health care provider informs or counsels her/him with regard to potential risks of former prematurity. More than half of ABP >30 years indicated that their health care provider rarely or never speaks with them about potential long-term risks of former prematurity, whereas in the age group of ABP <30 years the majority of participants indicated that their health care provider does inform them about potential risks of former prematurity. Consequently, older ABP expressed more worries or fears concerning potential long-term sequelae of their former prematurity than younger ABP (33 vs. 18%), and dissatisfaction with health care services regarding their former prematurity was greater among older ABP >30 years of age.

"The regular doctors, no (they do not consider my former prematurity), not really. But it definitively is a topic with my healer and the osteopath, obviously. So it is more of a topic in alternative medicine." (Participant 10, age 39)

"I once went to see a pulmonologist, but he was like: "Nope, everything is fine." He did not take me seriously, he acted as if I was a bit of a hypochondriac who is unnecessarily still sitting in his waiting room. This may be a personal opinion, but I feel that there is not much knowledge (on the topic of former prematurity) around." (Participant 1, age 31)

"Well yes, I would have wished, at least concerning the cerebral palsy, that someone would have told me about how it develops with age. Because nobody is talking about this. (...) I practically had to google all the information myself." (Participant 16, age 26)

Prematurity status is more often considered by current health care providers in participants with impaired physical health compared to participants who rarely or never suffer from physical health impairments.

"(...) the topic of prematurity played a role until I was maybe 15, 16 years of age (...) This may also be due to the fact that fortunately you can't see anything from the outside and that I don't have any obvious impairments." (Participant 14, age 27)

Of the participants who often or always suffer from mental health sequelae, only one third felt that their prematurity status is adequately considered by their current health care providers, whereas the majority stated that their prematurity status rarely or never plays a role in their provider's treatment or counseling approach.
"(...) I have had rather negative experiences. It (my prematurity) has been mentioned once in psychotherapy, but it was never discussed in depth or taken seriously. And my GP said I was welcome to show her relevant scientific studies if I was interested in the topic, but did not address my concerns at all." (Participant 6, age 54)

Social/Attachment Behavior

A substantial proportion of ABP expressed the perception that their former prematurity negatively influences their social life and their relationships as adults. There is a difference in the perceived influence of prematurity on social life and attachment behavior between the age groups of ABP <30 years of age and ABP >30 years of age: in the older ABP group over half of the participants (56%) perceived that former prematurity negatively influences their social life, whereas in the younger age group this was of concern for about one third of the participants (36%). 75% of participants who often or always suffer from physical health sequelae attributed to their former prematurity reported a negative influence of former prematurity on their social/attachment behavior. Physical limitations resulting from preterm birth sequelae were perceived as barriers to the maintenance of a satisfactory social life. Especially ABP suffering from cerebral palsy described that their medical conditions are likely to hold them back from participating in social activities, often causing a feeling of loneliness.

"No, (I can) definitely not (participate in activities with friends without limitations). And often, people have to change plans on behalf of me and I feel very uncomfortable about that." (Participant 16, age 26)

"Well, most of the time it is possible for me, but as soon as it goes into the direction of sports activities then it becomes more difficult. For example, if they go skiing at the weekend in the winter, that is not possible. If they go to the high wire garden, that is not possible." (Participant 15, age 30)

The perception that former prematurity negatively influences their social/attachment behavior was far less prevalent in participants who reported rare or no physical health sequelae (27%). Participants who suffer from mental health sequelae attributed to their former prematurity were most likely to report a negative influence of former prematurity on their social/attachment behavior (78%). Here, introversion, difficulties in approaching people, and difficulties in making friends were named as main barriers. At the time of our interview, only 3 (15%) of the ABP were in a stable relationship; 12 (60%) participants stated that they had been in a relationship before, 2 (10%) were divorced, 2 (10%) had never been in a relationship, and one participant declined to talk about the issue. Romantic relationships of most ABP were described as emotionally difficult and demanding. Difficulties in permitting closeness as well as a strong sense of dependency were described. Two participants described a lack of enjoyment in physical contact and sexual activity as problematic for their relationships. None of the interviewed participants had children, and the subject of parenthood was burdened with fears. Participants expressed concerns about the possibility that their own child could also become a preterm baby and possibly inherit unfavorable mental predispositions.
"I totally like children, but to be honest, I don't think I want children just because I'm afraid that my child -it is said that the physical impairments won't get passed on because they are due to oxygen deficiency, so there I cannot transmit much -but I'm afraid that my psychological self will be passed on. And I don't want that." (Participant 11, age 33). Education and Professional Life When asked if they had (yet) reached their professional goals, 55% of all participants answered that yes, they had achieved their professional goals. Of the participants who reported to suffer from physical health sequelae attributed to their former prematurity half said that they had reached their professional goals. Reported barriers were physical limitations due to long-term sequelae of pre-mature birth. One participant with cerebral palsy for example, cannot exercise her dream profession of being a kindergarten teacher, as this would be physically too demanding for her. Another participant is in early retirement because she was not able to cope with the burden of regular working schedules. She is very grateful for not having to exceed her limits, but expressed her wish to be more resilient. Of the participants who reported rare or no physical health sequelae, 64% said that they had reached their professional goals. Barriers to pursuing their careers and to reaching their professional goals were difficulties in finding an adequate training position, unemployment and uncertainty about having chosen the right occupation. Of participants who reported to suffer from mental health sequelae attributed to their former prematurity, substantially less, only 33%, said that they had reached their professional goals. Reported barriers to achieving their professional goals were the perception of a too high workload and the uncertainty about having chosen the right occupation. During the interview all participants were asked: "Given with what you know now, if there was a specialized customtailored health care service for adults born preterm, would you be utilizing this service?" All of the participants (100%) said yes, that they would utilize a specialized health care service for adults born preterm. Expected contents and benefits expressed for such a service are listed in Table 6. Participants also mentioned the importance of accessibility for affected individuals impaired to travel who would still be interested in gathering information and provider's guidance from a respective health care service tailored to the needs of adults born preterm. This could potentially involve a website where questions can be posted and discussed, email and/or phone contact with the provider. DISCUSSION A large body of literature exists on the medical and mental health outcomes of very pre-mature infants in early adulthood. Most studies suggest that a significant proportion of former preterm infants have made a reasonably successful transition into adulthood, even though some differences exist when compared to adults born at term (36)(37)(38)(39). However, to date, hardly any studies report on the personal perspectives of former pre-mature infants and none, to our knowledge, address the question of how former preterms feel embedded in established health care structures or if they face any health care-related barriers (26). 
This qualitative study focused on a sample of adults born very preterm who had not been in any kind of systematic follow-up program focusing on the trajectories of their prematurity. We aimed to understand how these ABP themselves evaluate their actual health status and to learn about their health care experiences as adults. One overarching theme emerged during our data analysis: the participants' perception of the invisibility of, and the lack of awareness for, their condition of being a former preterm. This theme strongly affects various domains of life, including not only personal relationships and professional development but also participants' attitudes toward health and health care. The majority of ABP in our sample perceived that adult health care providers are largely unaware of their patients' prematurity status and of potential long-term sequelae of prematurity. They felt that many, especially more subtle, health impairments which they attributed to their former prematurity remain unperceived: low basal energy level, low stress tolerance, hypersensitivity to external stimuli, and a higher effort required to reach and maintain the same level of performance as peers. The core statement is that former prematurity is not considered (a problem) in current health care. The reported diagnoses, including ICD-10-codeable diseases (40), as well as the self-perceived health impairments concerning participants' fitness and performance capability, are in line with the results of former longitudinal and health-related quality-of-life studies which describe impairments in ABP's physical and mental health state (39,41-47). The prevalence of physical and mental health sequelae in our sample was relatively high, which may reflect a recruitment bias, as the majority of participants were recruited with the help of patient and parent advocacy groups and the sample may therefore comprise more individuals with a high subjective or objective burden of disease. Most participants expressed anxiety about founding their own family or becoming parents themselves. As reasons they named fear of their children also being born premature and fear of passing on unfavorable mental health attributes to their offspring. Of note, large Scandinavian register studies have shown that individuals born preterm or with low birthweight were less likely to ever be in a registered partnership or to become parents (39,48). Other authors have also found that ABP, with or without impairments, were more likely to be single compared to term-born peers (27) and that adults born preterm or with low birthweight were less likely to experience a romantic partnership, sexual intercourse, or parenthood than their peers born full-term (49).
Multiple participants found it challenging to pursue their educational goals and felt that they were unable to perform professionally as well as others in their age group. This is important, as various studies have shown that ABP are less likely to achieve higher education qualifications and to be employed as adults than their term-born peers, leading to adverse impacts on their socio-economic outcomes (46). Our data suggest that the self-perception of ABP changes as they age. Older ABP perceive more mental health sequelae which they attribute to their former prematurity than younger ABP. On the socio-emotional level, older participants reported experiencing more difficulties in their social/attachment behavior than younger ABP. This may be due to the fact that, overall, socialization and maintenance of friendships become harder as individuals age. Also, all but one participant of the older ABP (age group >30) in our sample were single at the time of interview, which may augment the risk for subjective and objective mental health impairments. These findings are in line with other studies which have found that in young adulthood ABP express a relatively high level of self-perceived health-related quality of life (27,50,51), which can substantially differ from their environment's perceptions (52). As they reach their fourth, fifth, or sixth decade of life, this positive self-perception may change as new health-related and social challenges are faced and formerly unconsidered concerns appear. Interestingly, a recent Canadian cohort study found that older ABP (i.e., ABP in their 4th decade of life) were less likely to be married or to have children of their own in comparison with term-born peers, whereas no differences in these domains were found between younger ABP and those born at term (27). Also, dissatisfaction with health care services regarding former prematurity status and potential long-term sequelae is more pronounced in older ABP. In contrast to ABP's perspectives, it seems that the further in the past birth and perinatal circumstances lie, the less consideration they get from adult health care providers. These are conflicting findings that could be problematic, as ABP are known to be more susceptible to age-associated diseases, may exhibit risk factors for certain pathologic conditions such as cardiovascular disease earlier, and should therefore be monitored more intensively as they get older (12,23). Participants perceive knowledge gaps, on the patients' as well as on the providers' side, and express a strong desire for information and counseling on the health topics relevant for adults born preterm. Importantly, all participants of our sample claimed that they were interested in utilizing a special, custom-tailored and individualized health care service, putting today's health issues into the larger frame of their perinatal history. Our interview data show that there is at least a subgroup of ABP "out there" who suffer from the underperception of being a former preterm with specific needs and elevated health risks. Considering the expected long-term sequelae of prematurity, these individuals point toward a blind spot in the German health care system: adequate health services for ABP after their transition from childhood into adulthood. The availability of custom-tailored ABP health care services may provide an institutionalized opportunity for ABP to access health information, health care prevention, and counseling with a focus on their individual risk profiles.
This, on the providers' side, should imply a more "holistic" medical approach which treats the ABP as an individual with an elevated but variable risk profile rather than focusing on specific symptoms or diagnoses. From a public health perspective, the pooling of ABP at an institutionalized service would help to collect valuable prospective health data and thus increase our knowledge on the long-term effects of former prematurity. Furthermore, adequate health prevention may help to detect diseases of later adult life in time, while preventive measures are still applicable. This may eventually help to reduce health care related costs for this steadily increasing and aging patient population. Our pilot study has several limitations. As participants were mainly recruited via the websites and contacts of two patient and parent advocacy groups, recruitment bias is a major limiting factor. In this context, one has to be very careful when interpreting causalities. Prematurity may be interpreted by participants as the cause of symptoms that are in fact unrelated to preterm birth or of multifactorial origin. Future research on the topic should aim to perform random recruitment in order to achieve higher representativeness of study results. Furthermore, a desirability bias, implying that the study participants anticipated what the interviewer expected to hear from them, cannot be ruled out. Another selection bias is possibly caused by the fact that the majority of our study sample comprised participants with a high educational level. As neurocognitive and behavioral dysfunctions are more prevalent in very small preterm infants (45, 53), the overall high educational level of our sample is surprising and not representative of the very low birthweight patient population in general (46). With 70% of study participants holding a high-school diploma, this proportion is higher than in the general German population (34, 35). Our study comprises individuals embedded in the German health care system only. The perceptions of ABP with access to other national health care systems may differ from those of our German participants. Furthermore, there may be differences between ABP living in the countryside and ABP living in urban areas. Finally, our study results only reflect participants' personal perspectives on health care providers' practice and do not capture the providers' point of view. Future research should aim to assess to what extent the status of former prematurity is considered by adult health care providers in their everyday practice.

CONCLUSION

Adults born preterm are a steadily increasing patient population. There seems to be at least a subgroup of ABP who suffer from health impairments related to their former prematurity and who perceive that they have no designated health care structures to turn to. The German health care system has yet to recognize ABP as a specific at-risk population who may require special services in order to enable effective prevention and to meet ABP's individual health-associated needs. In order to fully understand the impact of very preterm birth and the respective health care needs in adulthood, we should continue to include the perspectives of individuals born preterm in the process of research and in the development of appropriate guidelines.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethical Committee of the Hamburg Medical Association. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

AP and LT contributed equally to the design of the study, the development of the interview guide, the transcription and analysis of the data, and the preparation of the manuscript. DL contributed to the development of the interview guide, gave methodological advice, supervised data collection, and contributed to the data analyses. CE edited the scientific English and critically reviewed and revised the manuscript. OK contributed to the conception and design of the study idea, provided professional guidance to the raters throughout the analytic process, and revised the manuscript critically for important intellectual content. DS supervised the whole study group, gave advice on the conception and design of the study as well as on the analyses, critically reviewed and revised the manuscript, and approved the final manuscript as submitted. All authors contributed to the article and approved the submitted version.
The effect of marine aggregate parameterisations on nutrients and oxygen minimum zones in a global biogeochemical model

Particle aggregation determines the particle flux length scale and affects the marine oxygen concentration and thus the volume of oxygen minimum zones (OMZs), which are of special relevance for ocean nutrient cycles and marine ecosystems, and which have been found to expand faster than can be explained by current state-of-the-art models. To investigate the impact of particle aggregation on global model performance, we carried out a sensitivity study with different parameterisations of marine aggregates and two different model resolutions. Model performance was investigated with respect to global nutrient and oxygen concentrations, as well as the extent and location of OMZs. Results show that including an aggregation model improves the representation of OMZs. Moreover, we found that besides a fine spatial resolution of the model grid, the consideration of porous particles, an intermediate to high particle sinking speed and a moderate to high stickiness improve the model fit to both global distributions of dissolved inorganic tracers and regional patterns of OMZs, compared to a model without aggregation. Our model results therefore suggest that improvements not only in the model physics, but also in the description of particle aggregation processes, can play a substantial role in improving the representation of dissolved inorganic tracers and OMZs on a global scale. However, dissolved inorganic tracers are apparently not sufficient for a global model calibration, which could necessitate global model calibration against a global observational dataset of marine organic particles.

One potential process affecting distributions of dissolved oxygen, and thereby the volume and location of OMZs, is the biological carbon pump (Volk and Hoffert, 1985). Global ocean model studies show that the biological pump is important for the distribution of dissolved inorganic tracers in the ocean (Kwon and Primeau, 2006, 2008) as well as atmospheric pCO2 (Kwon et al., 2009; Roth et al., 2014). It further affects the feeding of deep sea organisms (Kiko et al., 2017) as well as the volume of OMZs (Kriest and Oschlies, 2015). The biological carbon pump can be subdivided into three components: production of organic matter and biominerals in the euphotic surface layer, particle export into the ocean interior and finally decomposition in the water column and on the sea floor (Le Moigne et al., 2013). Estimates of the export of organic carbon out of the surface layer range from 5 to 20 Gt C yr-1, with the large uncertainty illustrating the gap in our understanding of this process (Henson et al., 2011; Honjo et al., 2008; Keller et al., 2012; Laws et al., 2000; Oschlies, 2001). Further uncertainties are associated with the exact shape of the particle flux profile (e.g. exponential function vs. power law; Banse, 1990; Berelson, 2002; Boyd and Trull, 2007; Buesseler et al., 2007; Lutz et al., 2002; Martin et al., 1987) and its possible variations in space and time. Recent studies suggest conflicting evidence with regard to the spatial variation of the particle flux length scale (Guidi et al., 2015; Marsay et al., 2015), which may again be influenced by the methodology of estimating the particle flux profile and thus the potential sensitivity to the considered depth (Marsay et al., 2015).
Also, the underlying mechanisms for a potential spatio-temporal variation remain unclear: some studies attribute it to variations in temperature and the associated temperature-dependent variation in remineralisation (Marsay et al., 2015), while other studies derive it from variations in particle size distributions (Guidi et al., 2015). One mechanism that leads to a variation in the particle size distribution is the formation of marine aggregates, which exhibit variable sinking speeds. For example, Alldredge and Gotschalk (1988) and Nowald et al. (2009) found sinking rates for aggregates ranging between 10 and 386 m d-1. Particle sinking speed, and thus the particle flux profile, depends on mineral ballast (Armstrong et al., 2002; Ploug et al., 2008), porosity and particle size (Alldredge and Gotschalk, 1988; Kriest, 2002; Smayda, 1970). Large particles are associated with high sinking speed and fast passage through the water column, resulting in low remineralisation and thus a small OMZ volume, and vice versa. It can therefore be expected that particle aggregation favouring fast sinking speeds can alter the volume of OMZs compared to small particles with low sinking speeds (Kriest and Oschlies, 2015). However, there are still some gaps in our understanding of the parameters that control the aggregation rate as well as the particles' sinking behaviour. For example, in-situ measurements show almost no dependency between diameter and sinking speed (Alldredge and Gotschalk, 1988), whereas aggregates produced on a roller table show a noticeable relationship (Engel and Schartau, 1999). Furthermore, values for stickiness, which defines the probability that two particles stick together after collision, vary over a wide range. Stickiness depends on the chemistry of the particle's surface (Metcalfe et al., 2006) and the particle type (e.g. Hansen and Kiørboe, 1997) and ranges between almost zero and one (e.g. Alldredge and McGillivary, 1991; Kiørboe et al., 1990). Thus, aggregation, as one process that induces variations in particle size and thus sinking speed, is only loosely constrained through its parameters. To explore these relationships further, and to examine whether a spatially variable sinking speed improves the fit of a global biogeochemical model to global distributions of dissolved inorganic tracers and regional patterns of OMZs, this study uses the three-dimensional Model of Oceanic Pelagic Stoichiometry (Kriest and Oschlies, 2015), coupled with a module for particle aggregation and size-dependent sinking (Kriest, 2002). Given the large uncertainty associated with parameterisations of marine aggregates, we carried out 36 sensitivity experiments, in which we varied parameters relevant for particle aggregation and sinking. As in previous studies, the model's fitness is evaluated by the Root Mean Square Error (RMSE) against observational data of dissolved inorganic tracers, namely PO4, NO3 and O2. This study additionally determines the model fitness with respect to the extent and location of OMZs, following the approach of Cabré et al. (2015). To examine the above-mentioned questions, and to explore the effects and uncertainties of a model that simulates particle dynamics on a global scale for a seasonally cycling stationary ocean circulation, our main questions are as follows:

1. Does a model that includes explicit particle dynamics improve the representation of observed PO4, NO3 and O2?
2. Does a model that includes explicit particle dynamics improve the representation of observed OMZs, and do the 'best' parameters with respect to this metric agree with those constrained by dissolved inorganic tracers?

3. What are the effects of uncertainties in the parameterisation of organic aggregates on model results?

4. Can the assumptions inherent in the model confirm either of the spatial particle flux length scale maps proposed by Marsay et al. (2015) or Henson et al. (2015) and Guidi et al. (2015)?

This paper is organised as follows: we first describe the model and its assessment with regard to dissolved inorganic tracers and OMZs, including the sensitivity experiments carried out with the model. We then present the outcome of the sensitivity experiments, with special focus on the metrics defined above. We finally examine and discuss derived maps of particle flux length scales against the background of maps derived from observed quantities (Marsay et al., 2015; Guidi et al., 2015).

Oceanic transport

In this study, we used the 'Transport Matrix Method' (TMM) (Khatiwala et al., 2005) as an efficient offline method to simulate biogeochemical tracer transport with monthly mean transport matrices (TMs). Additional fields of monthly mean wind, temperature and salinity extracted from the underlying circulation model are used to simulate air-sea gas exchange of oxygen, and to parameterise temperature-dependent growth of phytoplankton. For our experiments, we used two different types of TMs and forcing fields: one set derived from a coarser resolution (hereafter called MIT2.8), and one from a finer resolution version, based on a data-assimilated circulation (ECCO1.0) (Stammer et al., 2004). The MIT2.8 forcing and transport represent a resolution of 2.8° x 2.8° and 15 depth layers with a thickness ranging between 50 m and 690 m. ECCO1.0 TMs and forcing are based on a resolution of 1° x 1° and 23 depth layers, with a thickness ranging between 10 m and 500 m. Further details about the two setups can be found in Kriest and Oschlies (2013). In general, we used a time step length of 1/2 day for physical transport, and a time step length of 1/16 day for biogeochemical interactions in the coarse resolution, MIT2.8. Because some parameter configurations allow a very large particle sinking speed, which may exceed more than one box per time step, in MIT2.8 we used a biogeochemical time step length of 1/70 day for all simulations with η = 1.17 (see Table 1). In the finer resolution, ECCO1.0, we used a time step of 1/80 day in all experiments (see Table 1), with the exception of three experiments where we used a length of 1/160 day (these are the experiments with a strong increase of sinking speed with particle size, given by parameter η = 1.17; see Table 1). Each model was integrated for 3,000 years until tracers approached steady state. The last year is used for analysis as well as for misfit calculations.
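The general idea of one offline transport step can be illustrated with a short sketch. This is a minimal illustration of the TMM concept under stated assumptions, not the TMM software's actual interface: the matrix and function names are ours, and the random sparse matrices merely stand in for real, precomputed transport matrices.

```python
# Minimal sketch of one offline TMM-style step: tracers are advanced with
# precomputed (monthly mean) explicit/implicit transport matrices, followed
# by biogeochemical source/sink terms. All names are illustrative.
import numpy as np
import scipy.sparse as sp

def tmm_step(c, A_explicit, A_implicit, bgc_source, dt):
    """Advance tracer vector c (one entry per wet grid box) by one step."""
    q = bgc_source(c)                      # biogeochemical tendencies
    return A_implicit @ (A_explicit @ c + dt * q)

# Toy example: random sparse matrices stand in for real transport matrices.
n = 500
A_e = sp.identity(n, format="csr") + 1e-3 * sp.random(n, n, density=0.01, format="csr")
A_i = sp.identity(n, format="csr")
c = np.full(n, 2.17)                       # e.g. a PO4 field (mmol m-3)
c = tmm_step(c, A_e, A_i, lambda x: -0.01 * x, dt=0.5)   # dt = 1/2 day
```

In an actual application, the biogeochemical source term would be evaluated on the shorter biogeochemical time step (e.g. 1/16 to 1/160 day, as described above) between applications of the transport matrices.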
Model of Oceanic Pelagic Stoichiometry

The Model of Oceanic Pelagic Stoichiometry, called MOPS (Kriest and Oschlies, 2015), is based on phosphorus, and simulates phosphate, phytoplankton, zooplankton, dissolved organic phosphorus (DOP) and detritus. The unit of each tracer is given in mmol P m-3. In addition, MOPS simulates oxygen and nitrate. The P-cycle is coupled to oxygen by using a fixed stoichiometry of R_-O2:P = 171.739, and to nitrogen by R_P:N = 16. The stoichiometry of anaerobic and aerobic remineralisation is parameterised following Paulmier et al. (2009). Remineralisation of detritus and DOM is fixed to a constant nominal remineralisation rate r and is dependent on oxygen but independent of temperature. If oxygen concentrations decrease, denitrification replaces aerobic respiration, consuming nitrate. If neither oxygen nor nitrate is sufficiently available, remineralisation stops, as the model does not account for other electron acceptors such as sulphate. As both forms of remineralisation follow a saturation curve (Monod-type), the realised remineralisation rate may diverge from the constant nominal remineralisation rate. On long timescales, the loss of fixed nitrogen through denitrification is balanced by temperature-dependent nitrogen fixation. Therefore, it should be noted that while phosphorus is conserved, the inventory of fixed nitrogen as well as oxygen is variable, and depends on ocean circulation and biogeochemistry (Kriest and Oschlies, 2015). In the basic model without aggregation, the sinking speed of detritus increases linearly with depth. With constant remineralisation rate r, the particle flux can thus be described by F(z) ∝ z^-b, with b = r/a, where a is the rate at which sinking speed increases with depth (Kriest and Oschlies, 2008); it is therefore (for constant r, e.g. in a fully oxic water column) comparable to the common power-law description of observed particle fluxes (Martin et al., 1987). The fraction of detritus reaching the seafloor follows two pathways: one fraction is re-suspended back into the deepest box of the water column, and the other is buried in the sediment and is therefore responsible for P-removal. However, the P-budget remains annually unchanged, as buried P is resupplied via river runoff.

Model for particle aggregation and size-dependent sinking

Different approaches have been applied to simulate particle aggregation in the marine environment. A detailed representation of the particle size spectrum can be accomplished by explicitly simulating many different size classes, which interact with each other via collision-based aggregation, particle sinking, remineralisation and breakup (Burd, 2013; Jackson, 1990). This flexible approach captures the details of the size spectrum and its spatio-temporal variation in a very detailed way. However, it is computationally expensive, and thus prohibitive to apply on large spatial and long temporal scales. The aggregation module applied in MOPS parameterises a continuous log-log-linear size distribution of particles via the spectral slope ε, calculated from the number and mass of particles (Kriest and Evans, 2000). The particle size distribution is influenced by size-dependent particle aggregation and sinking (Kriest, 2002; Kriest and Evans, 2000). Because aggregation reduces particle numbers (but not mass), and sinking preferentially removes large particles, number and mass change independently. By assuming a log-log-linear size spectrum, the slope ε of this spectrum can, at each time step and grid point, be computed from the particle number and the total particle mass. The model requires parameters for the power-law relationships between particle diameter, d, and mass, m (m = C d^ζ), and between particle diameter and sinking speed, w (w = B d^η), to be specified. In our model experiments, we assign fixed values for the minimum diameter and mass of a primary particle of d_1 = 0.002 cm and m_1 = 0.00075 nmol P.
The exponent for the relationship between size and mass is set to ζ = 1.62, as proposed for marine aggregates in Kriest (2002), which is in line with more recent findings (Burd and Jackson, 2009; Jouandet et al., 2014). For the relationship between size and sinking speed we test two alternative values for the exponent, namely η = 0.62 and η = 1.17, and values of w_1 between 0.7 and 2.8 m d-1 for the minimum sinking speed (see below). Assuming a constant degradation rate, the average sinking speed of all particles combined would increase with depth, due to the higher sinking speed of large particles and their higher proportion in the deeper ocean interior. To prevent instabilities at very large sinking speeds (very flat size distributions), as in Kriest and Evans (2000) and Kriest (2002), we restrict the size dependency of sinking and aggregation to a maximum diameter D_L. Beyond D_L, these processes do not vary with particle size any more. In our model experiments, we let this parameter vary between 1, 2 and 4 cm. Changes in the number of marine particles are governed by particle aggregation, described by the collision rate and the probability α that two particles stick together after collision. In our model experiments we vary α between 0.2 and 0.8. The collision rate depends on turbulent shear and differential sinking and is parameterised as in Kriest (2002). We assume that the turbulent shear is high in the euphotic layers and zero in the deeper ocean layers. To avoid complications and non-linear feedbacks, in the experiments presented here we assume that plankton mortality and zooplankton egestion, as well as quadratic zooplankton mortality, produce new detritus particles, but do not change the size spectrum. With this setup, the module is similar to parameterisations of particle size applied in other large-scale or global models (Gehlen et al., 2006; Oschlies and Kähler, 2004; Schwinger et al., 2016).
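For concreteness, the size-mass and size-sinking relations and the D_L cap described above can be written down directly. The following sketch uses the parameter values stated in the text; the variable names are ours, not the model code's, and the cap on mass-independent sinking beyond D_L is a simplified rendering of the module's assumption.

```python
# Sketch of the aggregation module's power-law relations: mass m = C d^zeta
# and sinking speed w = B d^eta, with the size dependence of sinking capped
# at D_L. Parameter values follow the text; names are illustrative.
import numpy as np

d1, m1 = 0.002, 0.00075   # primary particle diameter (cm) and mass (nmol P)
zeta = 1.62               # size-mass exponent
eta, w1 = 0.62, 2.8       # size-sinking exponent (porous) and w_1 (m d-1)
D_L = 4.0                 # upper size limit for size dependence (cm)

C = m1 / d1**zeta         # from m(d_1) = m_1
B = w1 / d1**eta          # from w(d_1) = w_1

def mass(d):
    return C * np.asarray(d)**zeta

def sinking_speed(d):
    # beyond D_L, sinking no longer varies with particle size
    return B * np.minimum(np.asarray(d), D_L)**eta

print(sinking_speed([d1, 0.1, 1.0, D_L, 10.0]))
```

The cap implies a maximum sinking speed of w(D_L) = B * D_L^eta, here on the order of a few hundred m d-1 for the largest w_1 and D_L; smaller w_1 or D_L, as tested in several of the experiments below, reduce this maximum accordingly.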
The two parameters that affect aggregation and particle sinking remained at moderate values of α = 0.42 and = 0.72, i.e. close to those applied in earlier model experiments with aggregation (e.g. Kriest, 2002). The residual cost function J RMSE of this pre-calibrated model 20 with aggregation was 0.472, i.e. lower than noAgg MIT2.8 (J RMSE = 0.529), but somewhat higher than achieved with a model version optimised against nutrient and oxygen concentrations , that resulted in a misfit of J RMSE = 0.439. In the sensitivity experiment described below we will examine, whether this remaining misfit can be reduced even further, and evaluate the model sensitivity to changes in the parameters of this highly complex module. Sensitivity experiments at coarse resolution (MIT2.8) 25 In the coarser model configuration of MOPS, MIT2.8, a first sensitivity study of 36 model simulations with different aggregation parameters was performed (see Table 1). We varied the values of four aggregation parameters, which control the rate of aggregation and the sinking behaviour of particles. The first parameter is the stickiness a, i.e. the probability that after collision two particles stick together, which was set to values of 0.2, 0.5 and 0.8, respectively. The second parameter is the maximum particle diameter for size dependent aggregation and sinking, D L , set to values of 1, 2 and 4 cm. A small value of speed of a primary particle with values of 0.7, 1.4 and 2.8 m d -1 . One effect of a small value of w 1 is that it reduces the loss of organic matter from surface layers, and thus has a direct effect on the recycling of nutrients at the surface. At the same time, it also affects the maximum possible sinking speed of the entire detritus pool. Finally, the exponent that relates particle sinking to diameter, , is set to values of either 0.62 and 1.17. A high represents dense particles, and a fast increase of particle sinking speed with size, a low value stands for more porous particles, which show only a weak relationship between 5 size and sinking speed (Kriest, 2002). Sensitivity experiments at fine resolution (ECCO1.0) The occurrence of aggregates, and their transport to the ocean interior, can furthermore depend on physical dynamics (e.g. Kiko et al., 2017). Therefore, in a second step, we repeated some of the experiments presented above in the finer resolution version ECCO1.0 to investigate possible improvements at higher resolution. In particular, we repeated all MIT2.8-10 simulations with = 0.62 in this finer resolution configuration. Additionally, we carried out three more simulations with = 1.17 but with the smallest D L = 1 cm to prevent particles from sinking through more than one box per time step (see Table 1). All simulations together lead to 30 model runs in the finer resolution configuration. To compare the ECCO1.0 simulations directly with results from MIT2.8, we re-gridded the result from ECCO1.0 simulations onto the coarser MIT2.8 grid. 15 Model Assessment and Diagnostics Because observational data of particle flux are either limited with regard to space and time (e.g. Gehlen et al., 2006) or are combined with assumptions, that yield no clear patterns (Gehlen et al., 2006;Henson et al., 2012;McDonnell and Buesseler, 2010), this study restricts the model assessment to observations of nutrients and oxygen, in combination with the model fit to volume and location of oxygen minimum zones. 
Root Mean Squared Error of Tracers

After a spin-up of 3,000 years into a seasonally cycling equilibrium state, the model results are evaluated in terms of annual means of oxygen, phosphate and nitrate. As in previous studies (e.g. Kriest et al., 2017), the misfit is calculated from the deviation between simulated results, m, and observed properties, o, taken from the World Ocean Atlas (WOA) (Garcia et al., 2006). The deviations are weighted by the volume of each grid box V_i, expressed as the fraction of the total ocean volume V_T. The sum of the weighted deviations is normalised by the observed global mean concentration of each tracer:

J_RMSE = Σ_{j=1}^{3} [ 1/(3 ō_j) · sqrt( Σ_{i=1}^{N} (V_i / V_T) (m_{i,j} − o_{i,j})² ) ]   (1)

In this equation, j = 1, 2, 3 denotes the respective tracer (i.e. PO4, NO3 and O2), N is the total number of model grid boxes and ō_j is the global average observed concentration of each tracer. Thus, a low misfit value represents a good agreement between model and observations (J_RMSE = 0 would be a perfect fit), which allows a statement about the model accuracy with regard to these tracers. The model runs with the lowest J_RMSE in the coarse and the fine resolution are called RMSE_MIT2.8* and RMSE_ECCO1.0* hereafter, respectively.

Fit to oxygen minimum zones

To evaluate the extent and location of OMZs, we follow the approach of Cabré et al. (2015) by calculating the overlap between modelled and observed (Garcia et al., 2006; hereafter referred to as "WOA") OMZs. As several marine processes are oxygen-dependent but have heterogeneous criteria for their minimum oxygen threshold, in this study the OMZs are calculated for different oxygen threshold concentrations c. Therefore, low-oxygen waters are characterised as O2 < c, with c ranging from 0 to 100 mmol O2 m-3. To calculate the overlap between simulated and observed OMZs, we use the following equation (Sauerland et al., 2019):

overlap(c) = V(model ∩ observations) / V(model ∪ observations)   (2)

In this equation, the numerator is the volume of overlap of suboxic waters between model and observations with regard to the defined oxygen threshold concentration c. This overlap is divided by the union (the total volume of low-oxygen waters occupied in the model or in the observations) and results in a value between 0, equal to zero overlap between model and observations, and 1, which represents an optimal overlap. To adjust the scale to J_RMSE, we calculated:

J_OMZ = 1 − overlap(c)   (3)

J_OMZ varies between zero and one. Consequently, the scale of J_OMZ is equivalent to the scale of J_RMSE, which implies that a low misfit corresponds to a good agreement between model and observational data and vice versa. The model simulations with the lowest J_OMZ are called OMZ_MIT2.8* and OMZ_ECCO1.0* hereafter. In calculating the overlap, we distinguish between the global ocean and the Pacific as well as the Atlantic Ocean.

Estimation of particle flux length scale b

To investigate if, and how, the model reproduced observed maps of the particle flux length scale b, which relates particle flux and depth via F(z) ∝ z^-b and was derived from data by Marsay et al. (2015) and Guidi et al. (2015), we log-transformed F(z), the simulated annual average flux of particulate organic matter as a function of depth, and carried out a linear regression of these values. High b values correspond to a short particle flux length scale, i.e. many small particles, and thus a low sinking speed, shallow remineralisation and high oxygen consumption in shallow waters. For the reference models without aggregation, these global maps should, in areas with shallow mixed layers, show spatially uniform values, as imposed by the model's prerequisites. Deviations from uniform values can either be ascribed to oxidant limitation of remineralisation (see the model description above), or to physical processes such as mixing or upwelling, which can result in an additional vertical transport of particles. The parameterisation of the aggregation model assumes a constant sinking speed above an upper size limit D_L (see above), and therefore the average particle sinking speed will remain constant below some depth. Also, the assumption of a particle size spectrum, size-dependent sinking and constant remineralisation will result in particle flux profiles that do not fully agree with those predicted by a power law (see Kriest and Oschlies, 2008). Thus, because the aggregation model's prerequisites do not fully agree with a continuous increase of sinking speed with depth, we confine the regression of the log-transformed particle flux to a vertical range between 100 and 1,000 m, where the aggregation model still shows an increase of average sinking speed with depth (see also Kriest and Oschlies, 2008).
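As an illustration, the three diagnostics just defined (Eqs. 1-3 and the b regression) could be computed as in the following sketch. The array layout and names are assumptions for illustration, not the original analysis scripts.

```python
# Sketch of the model assessment diagnostics: volume-weighted, normalised
# tracer misfit J_RMSE (Eq. 1), OMZ overlap misfit J_OMZ (Eqs. 2-3), and
# the particle flux length scale b fitted over 100-1,000 m.
# Inputs are flat arrays over all wet grid boxes.
import numpy as np

def j_rmse(model, obs, vol, tracers=("PO4", "NO3", "O2")):
    w = vol / vol.sum()                                # V_i / V_T
    j = 0.0
    for t in tracers:
        rmse = np.sqrt(np.sum(w * (model[t] - obs[t])**2))
        j += rmse / (3.0 * np.sum(w * obs[t]))         # normalise by global mean
    return j

def j_omz(o2_model, o2_obs, vol, c=50.0):
    """1 - overlap/union of low-oxygen (O2 < c) volumes."""
    mod, ob = o2_model < c, o2_obs < c
    union = vol[mod | ob].sum()
    return 1.0 - vol[mod & ob].sum() / union if union > 0 else 0.0

def martin_b(flux, depth, zmin=100.0, zmax=1000.0):
    """Fit F(z) ~ z**(-b) by linear regression of log F against log z."""
    sel = (depth >= zmin) & (depth <= zmax) & (flux > 0)
    slope, _ = np.polyfit(np.log(depth[sel]), np.log(flux[sel]), 1)
    return -slope
```

Applied per water column, the last function yields the global b maps discussed in the next section.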
Global patterns of particle flux profiles

As could be expected, noAgg_ECCO1.0 shows almost no spatial pattern of b, with values around the prescribed nominal value of b = 0.858 (global mean: 0.64; Fig. 1a; please note the different scaling in (a) and (d)), indicating long particle flux length scales and deep remineralisation. Regions with particularly low diagnosed b values (< 0.2) result either from decreased remineralisation in OMZs (e.g. the eastern tropical Pacific OMZ) or are found in areas of deep mixing (in the model mainly high latitudes or western boundary currents), where vertical mixing increases the inferred particle flux length scales. However, for the best simulation of the aggregation model with regard to the sum of J_RMSE and J_OMZ (called ECCO1.0* hereafter) we find the highest b values, corresponding to short particle flux length scales, or shallow remineralisation, in the oligotrophic subtropical gyres. In contrast, b is smallest in the equatorial upwelling and in the shelf regions (Fig. 1d and g). This pattern is in accordance with the observed spatial pattern derived by Marsay et al. (2015). In our model, this very deep flux penetration (b close to zero) in the equatorial upwelling can be explained by low oxygen concentrations, which reduce the remineralisation rate. In contrast, when deriving the particle flux length scale from a similar model but with oxygen-independent remineralisation (Kriest and Oschlies, 2013), we find a b close to the prescribed value of 0.858 (Fig. S1). In the subtropical and the equatorial region, the spatial variance (marked transparent red; Fig. 1g) of model-derived b values is quite high, which is caused by spatial variations in the physical environment, i.e. permanently stratified subtropical gyres and upwelling regions with low oxygen and reduced remineralisation. However, besides ECCO1.0*, the four best model simulations with respect to the sum of J_RMSE and J_OMZ (simulations #14, #17, #28 and #29; Table 1) show essentially the same pattern of b (Fig. S2), although these four simulations use quite different parameterisations (see Table 1). Regions with high b values are characterised by a high spectral slope of the size distribution and therefore a high abundance of small particles, leading to slow sinking speeds (Fig. 7) and low export rates in ECCO1.0* (Fig. 1f).
ECCO1.0* simulates the highest export rates at high latitudes and in the upwelling regions, and the lowest export rates in the subtropical gyres (Fig. 1f and i). Although the spatial pattern of export rates is similar for both model simulations with and without aggregation, ECCO1.0* shows a 1.6-fold higher global mean export rate (10.1 mmol P m-2 a-1) than noAgg_ECCO1.0 (6.1 mmol P m-2 a-1). In ECCO1.0*, export rates show a higher regional variability than in noAgg_ECCO1.0 (Fig. 1c, 1f and 1i), which is due to blooms in the high latitudes during the summer season accelerating the size-dependent aggregation and thus the export signal. The oxygen concentration at a depth of 100 m shows the same global pattern in both simulations, with high oxygen concentrations at high latitudes and decreasing concentrations towards the equator (Fig. 1b and 1e). However, the oxygen concentration at high latitudes is slightly higher in noAgg_ECCO1.0 than in ECCO1.0* (Fig. 1h). Moreover, the global suboxic volume (for a criterion c = 50 mmol m-3) in ECCO1.0* (7.3x10^16 m3) is larger than in noAgg_ECCO1.0 (3.7x10^16 m3). Comparing our model results with the dataset of Garcia et al. (2006), which yields a volume of 5.6x10^16 m3, we find an underestimation of the suboxic volume by 34% for noAgg_ECCO1.0 and an overestimation by 30% for ECCO1.0*.

Representation of oxygen minimum zones

The finer resolution and data-assimilated circulation of ECCO1.0 in general improve the representation of OMZs in comparison to MIT2.8 with regard to the overlap of OMZs for a criterion of 50 mmol m-3 (Fig. 2). Both simulations without explicit particle dynamics, namely noAgg_MIT2.8 and noAgg_ECCO1.0, clearly underestimate the extent of the OMZ at depths of 500 m and 1,000 m for an OMZ criterion of 50 mmol m-3 in the Pacific basin (Fig. 2). The simulations including particle dynamics that are best with respect to the OMZ metric, OMZ_MIT2.8* and OMZ_ECCO1.0*, exhibit a larger OMZ area for both resolutions (Fig. 2). Despite the improved representation of OMZs, all models including the particle aggregation module still tend to merge the OMZs of the Northern Hemisphere (NH) and the Southern Hemisphere (SH) at a depth of 500 m, which does not agree with the well-separated northern and southern OMZs shown by the observations (Fig. 2 and Fig. S3). As reflected in a plot that shows the extent of the OMZ in the northern and southern hemisphere, similar to Fig. 1a and 1b of Cabré et al. (2015), all models fail to represent the double structure of OMZs north and south of the equator. However, in our model the northern Pacific OMZ is fitted quite well (Fig. 2 and Fig. S3). Aggregation improves the representation of OMZs with respect to a criterion of c = 50 mmol m-3 compared to the simulations without aggregation for both resolutions in the NH, but not in the SH (Fig. 3). In noAgg_ECCO1.0 the OMZ simulated in the NH is too small and too shallow (Fig. 3a). Even though OMZ_ECCO1.0* tends to underestimate the suboxic area between ~700 m and 1,300 m, it shows a considerably higher overlap of model results and observations compared to noAgg_ECCO1.0 (Fig. 3b). However, in the SH noAgg_ECCO1.0 represents the OMZs better than OMZ_ECCO1.0*, which tends to overestimate the suboxic area in this hemisphere.
In addition to differences caused by particle dynamics, circulation affects the performance in the two hemispheres: OMZ_ECCO1.0* shows the highest overlap between ~100 and 500 m depth in the SH, but is surpassed by OMZ_MIT2.8* between 500 and 900 m depth. In the NH, OMZ_ECCO1.0* outcompetes OMZ_MIT2.8* between 300 and 900 m depth as far as overlap is concerned (Fig. 3b). However, the improvement of the representation of OMZs in the simulations with aggregation depends on the criterion for OMZs. As could be expected, a higher oxygen threshold for the OMZ criterion enhances the overlap between model simulations and observational data (Fig. 4). As for the fixed criterion of 50 mmol m-3, globally and in the Pacific the better circulation and finer resolution of ECCO1.0 improve the overlap for varying OMZ criteria in comparison to MIT2.8 (Fig. 4a and c). While the OMZ_ECCO1.0* simulation reaches a global maximum overlap of 65.9% (for c = 100 mmol m-3), OMZ_MIT2.8* culminates in a maximum of only 58.7% for the same criterion. In the Pacific basin OMZ_ECCO1.0* reaches an agreement with observations of 19.9% overlap for a criterion of 20 mmol m-3 (Fig. 4c). The overlap then increases strongly up to the 100 mmol m-3 criterion (68.2%). It is noteworthy that globally and in the Pacific area noAgg_ECCO1.0 outperforms all models for a criterion of 20 mmol m-3, where it shows an agreement of almost 31%. The Atlantic basin shows an inverse trend (Fig. 4b): here, OMZ_MIT2.8* represents the OMZ better than OMZ_ECCO1.0* (26% and 12.2%, respectively, for a criterion of 70 mmol m-3). Further, in this region the ECCO1.0 model that performs best with respect to RMSE (RMSE_ECCO1.0*) outperforms OMZ_ECCO1.0* over the full range of criteria (Fig. 4b). Thus, there are large regional differences in the model's response to different circulations and particle dynamics. Because the dataset of observations used for comparison does not contain any concentrations below 30 mmol m-3 in the Atlantic, all models show no overlap at all in this basin for lower criteria. In summary, the improvement of model fit with regard to J_OMZ depends not only on particle dynamics, but also on the definition of OMZs (i.e. the OMZ criterion c), the model resolution, as well as the region considered (Fig. 2, Fig. 3, Fig. 4).

Table 3 shows that in six out of nine cases (MIT2.8), a model that represents porous particles (η = 0.62) outperforms the corresponding model with a sinking speed that describes rather dense, cell-like particles (η = 1.17). The same applies for the higher resolution (ECCO1.0), where in two out of three cases a porous parameterisation improves the fit with regard to J_RMSE (see Table 1). Also, both J_RMSE and J_OMZ of the "dense" parameterisations are never among the best five models with respect to either metric (see Table 1). Thus, in the following we focus on model simulations with η = 0.62.

Sensitivity of nutrient and oxygen distributions to aggregation parameters

Among the sensitivity experiments performed, the best model with respect to J_RMSE (hereafter referred to as RMSE_MIT2.8*) is characterised by an intermediate stickiness α of 0.5, the largest diameter for size-dependent aggregation and sinking, D_L, of 4 cm and a minimum particle sinking speed w_1 of 2.8 m d-1, representing a rather fast organic matter transport to the ocean interior. However, many other models with medium stickiness perform about equally well (Fig. 5, upper mid panel).
Models with lower stickiness perform best with a slow minimum sinking speed w_1 and a large maximum size D_L = 4 cm for size-dependent sinking and aggregation (Fig. 5, upper left panel). In contrast, a large stickiness (which facilitates the formation of aggregates in surface layers) requires either a small w_1 or a small D_L, which reduces the export of particles out of the euphotic zone and into the ocean interior. Oxygen concentrations contribute most to the global J_RMSE. The influence of oxygen on the global tracer misfit is dominated by the deep concentrations (Fig. S4), and thus to a large extent by the large-scale circulation. The OMZs, because of their small regional extent, contribute less to the global misfit. This is confirmed by Fig. S4 (d, e, f), showing that in the eastern tropical Pacific region mesopelagic (>300 m) and deep oxygen concentrations scatter strongly among the different models (Fig. S4 a), despite their good global match in shallow waters. Likewise, although global mean profiles of nutrients are quite similar among the different circulations, and agree quite well with observations, their concentrations scatter strongly in the eastern tropical Pacific. Most of the simulations tend to underestimate the oxygen and nitrate concentrations in this region (Fig. S4 a and c). Too low oxygen concentrations lead to too high denitrification and thus widespread nitrate depletion in the eastern tropical Pacific region, which explains the simultaneous underestimate of oxidants in this region. To sum up, a moderate stickiness enhances the chance of a good model fit to nutrients and oxygen (J_RMSE), but there is no unique trend for the parameters or combinations of parameters, with the exception of the exponent that relates particle sinking speed to particle size: here, we find an advantage of a parameterisation characteristic of porous marine aggregates. In the optimal scenario, the misfit is less than that of a model without aggregates, when the latter is simulated with fixed reference parameters (noAgg_MIT2.8). Because of the small spatial extent of OMZs, the model fit to nutrient and oxygen concentrations is mainly determined by the large-scale tracer distribution, even if some models show a considerable mismatch to these tracers in OMZs. The pattern for J_RMSE does not change very much when applying a different, higher resolved and data-assimilated circulation (see Table 1 and Fig. 6). Now, the optimal model (RMSE_ECCO1.0*) is improved with respect to J_RMSE by about 13%, but many other, almost equally good solutions can be found with moderate to high stickiness. Introducing aggregates in this coupled model system does not improve the model fit to nutrient and tracer concentrations, as evident from the comparison of RMSE_ECCO1.0* (J_RMSE = 0.431) against a model without aggregate dynamics (J_RMSE = 0.426; Table 1). The lack of improvement can likely be explained by the fact that the biogeochemical parameters of MOPS with particle dynamics were adjusted in the circulation of MIT2.8, and thus are not optimal for the model when simulated with the physical dynamics of ECCO1.0. The sensitivity to the metric for OMZs differs from that to the metric for nutrients and oxygen. For the fit to oxygen minimum zones (J_OMZ), a large stickiness α, in combination with a D_L of 2 cm and a slow to moderate minimum sinking speed w_1, is advantageous (Fig. 5 and Fig. 6).
Thus, a high rate of aggregation and a maximum sinking speed of about 50-100 m d-1 improve the model with respect to OMZs. This is also evident from the comparison of the optimal models (OMZ_MIT2.8* and OMZ_ECCO1.0*) to the models without aggregate dynamics (noAgg_MIT2.8 and noAgg_ECCO1.0), shown in Fig. 3 and Fig. 4 and subsection 3.2. Nevertheless, even the models that perform best with respect to J_OMZ underestimate mesopelagic oxygen when averaged over the eastern tropical Pacific (Fig. S4 a). The sensitivity patterns with regard to J_OMZ diverge considerably between the two configurations MIT2.8 and ECCO1.0, which is in contrast to the patterns for J_RMSE noted above (compare Fig. 5 with Fig. 6). Thus, model performance with respect to J_OMZ seems to depend much more on circulation and physical details than the large-scale dynamics reflected in J_RMSE.

Discussion

Our model setup differs from comparable studies in the simulated aggregates, which are composed of phytoplankton and detritus, in the parameterisation, which is based on dense particles (dSAM; Kriest, 2002), and in the biogeochemical model, which is different. We found high values for the spectral slope of the size distribution (i.e. a high abundance of small particles) and thus a low particle sinking speed in the subtropical gyres (Fig. 7), which corresponds with the findings by Oschlies and Kähler (2004) and Dutay et al. (2015). This, in turn, leads to the highest b values in the oligotrophic subtropical gyres and the lowest ones in the high latitudes and the upwelling regions, and agrees with the pattern shown in Marsay et al. (2015). These findings imply that such a b pattern cannot only result from temperature-dependent remineralisation, as suggested by Marsay et al. (2015), but also from particle dynamics and temperature-independent remineralisation. However, if temperature-dependent remineralisation, as suggested by Marsay et al. (2015) or Iversen and Ploug (2013), were also included in our model, this would likely enhance horizontal variations in the particle flux profile, with even deeper flux penetration in the cold waters of the high latitudes and upwelling areas. Besides particle dynamics, the low b values in upwelling regions found in our study (Fig. 1d) are also caused by the suboxic conditions, which suppress remineralisation in subsurface waters. Such a tight link between suboxia and deep flux penetration is supported by the observations reported by Devol and Hartnett (2001) and Van Mooy et al. (2002). Therefore, two different processes, particle aggregation and/or temperature-dependent remineralisation, suggest low b values and deep flux penetration in the very productive areas of high latitudes. A third process, oxygen-dependent remineralisation, is superimposed on these in OMZs, causing the steepest particle flux profiles in these areas. However, it should be noted that although the maximum sinking speed of our best simulations (101 m d-1 in #17 and 51 m d-1 in #26, see Table 1) agrees with observations (Alldredge and Gotschalk, 1988; Nowald et al., 2009; Jouandet et al., 2011), the range of b values in our model is almost twice as large as suggested by most empirical studies (Berelson, 2001; Buesseler et al., 2007; Martin et al., 1987; Van Mooy et al., 2002). However, as there is no common depth range to determine the particle flux length scale b, and the chosen depth range varies widely among studies, comparability is impeded (Marsay et al., 2015), which might explain some of the divergence between observations and model results.
In particular, our model simulates a too large fraction of small particles, and therefore a too steep particle size spectrum, in the subtropical gyres, which causes too high b values in these areas. Other processes that modify the size spectrum, like grazing by zooplankton and the subsequent egestion of large fecal pellets, might also play a role in these regions. Additionally, the model tends to underestimate the number of large particles (size range 0.14 to 16.88 mm) at the surface of the tropical Atlantic Ocean (23°W), compared to observations (Kiko et al., 2017; Fig. S6). On the other hand, a first, direct comparison to the UVP5 dataset (Kiko et al., 2017, their Fig. 1) exhibits a correct order of magnitude regarding the number of particles within this size range (0.14 to 16.88 mm) in our model (Fig. S5) along the 151°W section. One possible explanation for the mismatch at 23°W could be an insufficiently resolved equatorial current system, which will also be discussed below. Also, additional biological processes such as the downward transport of organic matter through vertically migrating zooplankton (Kiko et al., 2017), or the breakup of aged, fragile particles at depth (e.g. Biddanda et al., 1988), could improve the model. However, introducing this additional complexity is beyond the scope of this paper. In future studies, consideration of these processes, in conjunction with a comprehensive model calibration against observed particle abundances and size spectra, could help to disentangle the contributions of individual processes such as aggregation, vertical migration and temperature-dependent remineralisation, and
However, as physical processes at smaller scales affect the simulated shallow to mesopelagic oxygen and nutrient concentrations for the eastern tropical Pacific (Getzlaff and Dietze, 2013), the finer (1°x1°) resolution of ECCO1.0 is not sufficient to resolve the details of the equatorial current system (Duteil et al., 2014). This can explain the 20 still high residual misfit of these simulations, and the missing double structure of OMZs in the Eastern Tropical Pacific. We therefore suggest, that the difference in improving the representation of OMZs between northern and southern hemisphere is more affected by physics than by biology. Furthermore, results of our sensitivity study confirm that dense particles do not constitute a realistic representation of particles, as indicated by Karakaş et al. (2009) and Kriest (2002). Porous particles seem to constitute a more appropriate 25 parameterisation for good model fit with regard J RMSE and J OMZ (Table 1). Although the observed stickiness ranges between almost zero and one (e.g. Alldredge and McGillivary, 1991;Kiørboe et al., 1990), in our study a moderate stickiness, a, between 0.5 and 0.8 leads the model towards a good fit to observed nutrients, oxygen and OMZs. In summary, our study supports the results of Schwinger et al. (2016), who found an improved representation of nutrient distribution and OMZs when switching from constant particle sinking to either a power law or particle dynamics, similar to 30 those presented here. However, the difference between the two latter schemes in that study were only small. A more extensive search of the parameter space within a given circulation may further improve the model. Additionally, we optimised noAgg MIT2.8 against the same misfit function as MOPS oD of Kriest et al. 2017 and found that even though including an aggregation module improves our model, utilising an appropriate parameter optimisation would further enhance our model fit. Thus, without a comprehensive calibration of biogeochemical and aggregation parameters there only seems to be a slight advantage when using this more complex model of particle dynamics. Finally, we found a steep particle size spectrum in the subtropical oligotrophic region (Fig. 1d), which does not agree with observational data. Potentially, there are processes taking place, which are not considered in our model i.e. particle repackaging and active transport by zooplankton (vertical migration) (Kiko et al. 2017) based on a modified food web. Thus, 5 particle aggregation alone so far seems not to be sufficient for a correct representation of the particle size spectrum. Najjar et al. (2007) applied different model circulations to the same biogeochemical model, and found that that physical processes are an important factor for modelling marine biogeochemistry. Our study furthermore showed that also biogeochemical parameterisations -in particular, those related to particle flux -can have an important impact on the 10 representation of dissolved inorganic tracers, in line with earlier studies (e.g. Kriest et al., 2012;Primeau, 2006, 2008). These earlier studies applied and varied a globally uniform particle flux length scale, whereas it has been suggested that this parameter should vary in space and time (e.g. Guidi et al., 2015;Marsay et al., 2015). 
The sensitivity study presented here constitutes a first approach to systematically estimate the impact of marine particle aggregation -and thus a spatially and temporally variable flux length scale -on the location and extent of OMZs as well as the representation of 15 phosphate, nitrate and oxygen under steady-state conditions in a global three-dimensional biogeochemical ocean model. Conclusion and Outlook We have shown that the assumptions inherent in the model confirm the general pattern of the spatial map of b values proposed by Marsay et al. (2015) (Fig. 1a and d). This, in turn, shows that the pattern of Martin's b cannot only be depicted by a POC flux dependent on temperature but also by simulating explicit particle dynamics. We furthermore found that even though there are still a lot of gaps in understanding several processes e.g. the variation of 20 export rates, particle stickiness and particle flux profile over space and time, as well as the link between particle diameter and sinking speed, the comparisons against observational data show a trend towards a model improvement by integrating particle dynamics (Table 1). While the parameterisation of aggregation leads the model towards an improved fit to OMZs for both model resolutions, this increase in model fit with regard to phosphate, nitrate and oxygen is only detectable in the coarse resolution MIT2.8, but not in the finer resolution and data-assimilated circulation of ECCO1.0. Moreover, model 25 simulations show that besides effects of grid resolution, the model fit with regard to J RMSE and J OMZ is mainly driven by the particles' porosity. Our results indicate that a best fit to both, tracers as well as OMZs (50 mmol O 2 m -3 criterion), is achieved by parameterising porous particles in combination with an intermediate to large maximum particle diameter for size dependent aggregation and sinking, a moderate to high stickiness ranging between 0.5 and 0.8 and an intermediate to high initial sinking speed ranging between 1.4 and 2.8 m d -1 (Fig. 5). The strong sensitivity of the model fit to aggregation 30 parameters may point towards the importance of a spatially and temporally varying flux length scale; however, they also show, that the dynamics of the model depend strongly on the assumptions we make with respect to particle properties and processes. Finally, we have shown that uncertainties in the parameterisation of particle aggregation remain, leading to the inference that dissolved inorganic tracers offer only insufficient observational constraints for global particle parameterisation. Therefore, for an accurate representation it will be necessary to calibrate the model not only against observed phosphate, nitrate, oxygen 5 distributions and volume and location of OMZs (Sauerland et al., 2019) but also against number and size of particles, using comprehensive datasets of observations (as in Guidi et al., 2015). Code and data availability The source code of MOPS including the aggregation module coupled to TMM as well as the model output are available at: https://data.geomar.de/thredds/catalog/open_access/niemeyer_et_al_2019_bg/catalog.html. 10 Author contributions D. Niemeyer, I. Kriest and A. Oschlies conceived the study. D. Niemeyer performed and analysed the simulations. All authors discussed and wrote the manuscript. Table 1: Model runs of sensitivity study, their parameter combinations and the calculated misfit of tracers (J RMSE ) and OMZs (J OMZ ) for MIT2.8 and the ECCO1.0 configurations. 
The 25% best simulations with regard to J_RMSE and J_OMZ are highlighted in yellow and the worst 25% in red (relative to RMSE_ECCO1.0* and OMZ_ECCO1.0*). The simulations in between are coloured in two
An Energy-based Model for Word-level AutoCompletion in Computer-aided Translation

Word-level AutoCompletion (WLAC) is a rewarding yet challenging task in Computer-aided Translation. Existing work addresses this task through a classification model based on a neural network that maps the hidden vector of the input context into its corresponding label (i.e., the candidate target word is treated as a label). Since the context hidden vector itself does not take the label into account and is projected to the label through a linear classifier, the model cannot sufficiently leverage valuable information from the source sentence, as verified in our experiments, which eventually hinders its overall performance. To alleviate this issue, this work proposes an energy-based model for WLAC, which enables the context hidden vector to capture crucial information from the source sentence. Unfortunately, training and inference suffer from efficiency and effectiveness challenges, so we employ three simple yet effective strategies to put our model into practice. Experiments on four standard benchmarks demonstrate that our reranking-based approach achieves substantial improvements (about 6.07%) over the previous state-of-the-art model. Further analyses show that each strategy of our approach contributes to the final performance.

1 Introduction

Computer-aided Translation (CAT) (Barrachina et al., 2009; Santy et al., 2019; Huang et al., 2021), which enables the leveraging of machine translation systems (Bahdanau et al., 2015; Vaswani et al., 2017) to improve the efficiency of the human translation process, has seen increasing interest in recent years. In this work, we study a crucial yet challenging task in CAT: Word-Level AutoCompletion (WLAC) (Li et al., 2021), which aims at yielding word-level suggestions based on context pieces provided by a human (Figure 1(a)). Previous research includes statistical methods (Huang et al., 2015) and neural methods (Santy et al., 2019; Li et al., 2021). With the help of word alignment toolkits (Och and Ney, 2003; Dyer et al., 2013), statistical approaches build a translation table and use it to predict the target word. More recently, Li et al. (2021) use a Transformer-based classification model, which first encodes the input context into a hidden vector and then maps the hidden vector to the candidate target word through a linear classifier. This strong baseline method achieves the state-of-the-art (SOTA) performance.

* Work done during an internship at Tencent AI Lab. † Corresponding authors.
1 Our codes are available at https://github.com/yc1999/energy_wlac.
In the aforementioned classification paradigm, the hidden vector of the input context inherently does not take the candidate target word into consideration. As a result, it may not effectively leverage the valuable information carried by the candidate target word when it occurs in the input context, as shown in Figure 1(b). Specifically, given the input context and the human typed characters ''d'', the user may tend to type ''disease'' (''Krankheit'' in German). However, visualizing the attention weights shows that the baseline method captures more information from ''gemeinsame'' and ''verzweifelten'' than from the most informative word ''Krankheit'' on the source side, which may underestimate the model score of the ground-truth word ''disease'' and thereby lead to an incorrect prediction.

To alleviate the above issue, we formalize the WLAC task with an energy-based model (Ranzato et al., 2006; LeCun et al., 2006) based on the Transformer, where the hidden vector is defined on top of both the candidate target word and the input context through a deep energy function. Furthermore, with the help of deep neural networks, the energy function is expected to capture sufficient information for each candidate target word through the attention mechanism. In this way, the energy function is able to capture the informative context (i.e., ''Krankheit'') to evaluate the target word (i.e., ''disease''), and thereby the score from the energy-based model is more reliable, as shown in Figure 1(c).

Unfortunately, training and inference with the energy-based model suffer from efficiency and effectiveness challenges due to the normalization term in the model. To alleviate these barriers, we systematically incorporate three simple yet effective strategies inspired by previous studies: (1) a negative sampling method for efficient training (Ma and Collins, 2018; Li et al., 2019a; Xu et al., 2022), (2) a reranking paradigm as an approximate proxy for efficient inference (Shen et al., 2004; Nogueira and Cho, 2019; Bhattacharyya et al., 2021), and (3) a pre-training method for effective training (Lee et al., 2021a). Experiments on four standard benchmarks demonstrate that the energy-based model is indeed better at capturing informative signals for the prediction of a candidate target word and thereby yields substantial improvements over strong baselines.

To sum up, our contribution is three-fold:

1. We point out that the previous SOTA model for the WLAC task suffers from an issue: it cannot sufficiently leverage the valuable information from the source sentence for word prediction.

2. We propose an energy-based model to alleviate this issue and employ three simple yet effective strategies to put it into practice.

3. We comprehensively evaluate our approach on four benchmarks, and it achieves substantial improvements (about 6.07%) over the previous SOTA model.
Preliminary

In this section, we review the setting of the WLAC task and introduce the state-of-the-art baseline method, which will be reused in Section 3.

WLAC Task

Notations. Let x = (x_1, x_2, ..., x_T) be a source sentence, s = (s_1, s_2, ..., s_k) be a sequence of human typed characters, and c = (c_l, c_r) be the translation context, where c_l = (c_{l,1}, c_{l,2}, ..., c_{l,m}) and c_r = (c_{r,1}, c_{r,2}, ..., c_{r,n}). c_l and c_r are on the left- and right-hand side of s, respectively. Figure 1(a) illustrates examples for x, c_l, c_r, and s.

Task Definition. Given the input tuple (x, c, s), the WLAC task aims at predicting the target word w, which starts with s and is the most appropriate to be placed between c_l and c_r (Li et al., 2021). In the partial translation consisting of c_l, w, and c_r, w is not necessarily consecutive to c_{l,m} and c_{r,1}. Figure 1(a) gives an illustrative example. To be more general in real-world scenarios, the WLAC task further assumes that c_l and c_r can be empty, which leads to the following four translation context types:

• Zero-context: both c_l and c_r are empty;
• Prefix: c_r is empty;
• Suffix: c_l is empty;
• Bi-context: both c_l and c_r are not empty.

It is noteworthy that the context types described above are general and encompass the context of several conventional translation scenarios, such as prefix-decoding for left-to-right interactive machine translation (IMT) (Knowles and Koehn, 2016) and post-editing (Lee et al., 2021b; Yang et al., 2022). To elaborate, in prefix-decoding the context falls into the special case of prefix, where c_r is empty and c_l is consecutive to w. In post-editing the context corresponds to the special case of bi-context, where both c_l and c_r are consecutive to w.

Baseline Method

Li et al. (2021) cast WLAC as a word prediction task. Generally, they decompose the WLAC task into two steps: (1) model the distribution of the target word w given x and c via a Word Prediction Model (WPM); (2) predict the most appropriate word ŵ which starts with s according to the conditional distribution. Their method achieves state-of-the-art performance.

A baseline WPM is defined by the Transformer architecture (Vaswani et al., 2017) for NMT. Specifically, it first uses a placeholder [MASK] to represent the position of the target word w and puts it between c_l and c_r. It then uses the representation of [MASK] defined through the Transformer to predict the target word. Figure 2(a) shows the model architecture of the baseline WPM. Formally, the conditional probability distribution of the target word w is:

P_b(w | x, c) = softmax(M h_[MASK])[w],    (1)

where h_[MASK] is the dense representation of [MASK], M represents the learnable embedding matrix, and [w] denotes taking the component with respect to the index w. In the following sections, we use P_b to denote the baseline WPM.

During the inference stage, P_b picks the best w according to the following equation:

ŵ = arg max_{w ∈ V(s)} M[w] h_[MASK],    (2)

where V(s) denotes the set of candidate words that start with s, and M[w] is the word embedding vector of w. Note that h_[MASK] is independent of w, and M h_[MASK] can be efficiently computed on a GPU in parallel. Therefore, the arg max in Equation (2) can be computed exactly.
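The baseline prediction rule in Equations (1)-(2) is straightforward to express in code. The sketch below, in PyTorch-style Python, is our own illustration of that rule: `h_mask`, the embedding matrix `M`, and the vocabulary list are hypothetical inputs, and the prefix filtering stands in for the constraint w ∈ V(s).

```python
import torch

def baseline_predict(h_mask, M, vocab, prefix):
    """Baseline WPM inference (Eq. 2): score every word with one matrix-vector
    product, then restrict the argmax to words starting with the typed prefix."""
    logits = M @ h_mask                       # shape [|V|], computable in parallel
    candidate_ids = [i for i, w in enumerate(vocab) if w.startswith(prefix)]
    if not candidate_ids:
        return None
    ids = torch.tensor(candidate_ids)
    best = ids[int(torch.argmax(logits[ids]))]
    return vocab[best]
```

Because h_mask does not depend on w, a single forward pass scores the whole vocabulary; this is exactly the efficiency property that the energy-based model below gives up.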
3 Energy-based Model

Motivation

As shown in Equation (2) in Section 2.2, the baseline WPM essentially maps the hidden vector of the input context (i.e., h_[MASK]) to the candidate target word in order to predict the most appropriate target word for [MASK]. Furthermore, according to the model architecture of the baseline WPM, the context hidden vector h_[MASK] does not take the candidate target word into consideration (Liu et al., 2016; Li et al., 2018). Therefore, it might be difficult for h_[MASK] to make full use of sufficient information from the source side to accurately predict the ground-truth target word. Intuitively, the above issue for the baseline WPM in Equation (1) can be demonstrated by the example in Figure 1(b), where we use attention weights to visualize the source words that are mostly used in h_[MASK].2 From this figure, we see that h_[MASK] uses more information from ''gemeinsame'' and ''verzweifelten'' than from ''Krankheit''. Therefore, such a model may underestimate the score for the ground-truth word ''disease'', which aligns to ''Krankheit'' on the source side. Consequently, the baseline WPM may not successfully predict the ground-truth word, leading to sub-optimal performance. In response to this issue, this paper proposes an energy-based model which enables defining the hidden vector on top of both the candidate target word and the input context through an energy function. Our intuition is that, with the help of deep neural networks (e.g., attention networks), the energy function is expected to capture more valuable information from the source sentence, which makes the model score more reliable for evaluating w.

2 In our preliminary experiments, we also employed other methods to attribute the source words that are mostly used (e.g., the prediction difference method [Li et al., 2019b]). The conclusions drawn from these alternative methods align closely with those obtained using attention weights. This suggests that, in the context of the WLAC task, the model's utilization of source-side information can be consistently reflected through various effective attribution methods. In this paper, we opt to utilize attention weights for easier description.

Model Definition

Formally, given x and c, we employ an energy-based model to define the word prediction model as follows:

P(w | x, c) = exp(S(w, x, c)) / Z(x, c),  with  Z(x, c) = Σ_{w' ∈ V} exp(S(w', x, c)),    (3)

where S(w, x, c) is an energy function taking a real value and Z(x, c) is the normalization term.

The energy-based model in Equation (3) is very general, because the energy function S(w, x, c) can be any function. For example, as a special case, if we set S(w, x, c) = log P_b(w | x, c), the energy-based model reduces to Equation (1) because the normalization term is 1. Since this paper aims to alleviate the insufficient usage of source sentence information in P_b, it seeks another definition of the energy function, one that defines the hidden vector on top of both the candidate target word w and the input context (x, c).

Theoretically, there are many ways to define the energy function S(w, x, c). In practice, we adopt a definition of S(w, x, c) that is very similar to P_b in model architecture, with minimal modifications and almost the same number of parameters as P_b. As a result, any potential improvement derived from the energy-based model is not significantly attributable to a more complex architecture for S(w, x, c), but rather to defining the hidden vector on top of both the candidate target word w and the input context (x, c).
Specifically, the energy function S adopts a Transformer architecture similar to P_b. S differs from P_b in only two aspects. First, we replace the embedding matrix with a binary classifier. The binary classifier is defined by a parameterized weight vector and introduces only a small number of parameters. Second, the candidate target word w is fed into the Transformer and then used as the query in the attention mechanism over (x, c). With the help of deep neural networks, S is expected to capture sufficient information for w through the attention mechanism. Formally, the energy function is defined as follows:

S(w, x, c) = θ^⊤ h_[w],

where h_[w] is the dense representation vector of w accompanied by x and c, and θ is a learnable weight vector. The architecture of the energy function is illustrated in Figure 2(b).

We believe that the energy function S can adequately exploit contextual information from (x, c). This belief is exemplified in Figure 1(c). In this figure, after visualizing the attention weights to source words, the energy function S is able to capture more information from ''Krankheit'' to evaluate the target word ''disease''. Therefore, S(disease, x, c) is more reliable than the baseline score P_b(disease | x, c), which makes inadequate use of the signal from ''Krankheit'', as shown in Figure 1(b).

Challenges

However, it is far from trivial to make the energy-based model achieve the effect shown in Figure 1(c) and deliver excellent performance on the WLAC task, due to the following efficiency and effectiveness challenges.

Efficiency. The first challenge is efficiency in both training and inference. During training, maximizing the log-likelihood for Equation (3) requires calculating the value of the normalization term. During inference, it requires enumerating all candidate words from the vocabulary V. Unfortunately, the energy function S sacrifices the parallel computation over all w ∈ V: one has to feed each candidate target word through the network architecture independently. Since V is too large, such exhaustive computation is infeasible in practice. Consequently, both training and inference are challenging for the energy-based model.

Effectiveness. Second, in our preliminary experiments, optimizing the energy-based model from scratch does not work well, and its final performance is significantly worse than the baseline P_b. One possible reason is that the energy-based model is more difficult to train. Training it involves an approximate method that shrinks the subset used for the normalization term, which induces a risk that informative negative examples are excluded from the shrunk subset (Ma and Collins, 2018; Xu et al., 2022). Therefore, it is easy to get trapped in a local optimum when training the energy-based model from scratch.

Training and Inference

To relieve the aforementioned challenges, we systematically employ three simple yet effective methods inspired by previous studies. First, we employ negative sampling to address the normalization computation during training (Ma and Collins, 2018; Li et al., 2019a; Xu et al., 2022); similarly, during inference, we adopt a reranking paradigm, where the energy-based model is used as a reranker over a small subset of candidates (Shen et al., 2004; Nogueira and Cho, 2019; Bhattacharyya et al., 2021). Moreover, we harness a conditional masked bilingual language modeling pre-training strategy for parameter initialization (Lee et al., 2021a).
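Before turning to the training details, here is a minimal PyTorch-style sketch of how the energy score S(w, x, c) = θ^⊤ h_[w] defined above could be computed for a batch of candidate words. The module names (`target_encoder`, `embed`) and tensor shapes are our own illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class EnergyScorer(nn.Module):
    """Illustrative energy function: encode the candidate word together with the
    context and project its hidden state with a single weight vector theta."""
    def __init__(self, target_encoder, embed, d_model=512):
        super().__init__()
        self.target_encoder = target_encoder  # Transformer block with cross-attention
        self.embed = embed                    # shared target-side embeddings
        self.theta = nn.Linear(d_model, 1, bias=False)  # the binary-classifier head

    def forward(self, cand_ids, ctx_ids, src_memory):
        # Append the candidate word to the context sequence (standing in for the
        # [MASK] slot), so its hidden state can attend to the source sentence x.
        tgt = self.embed(torch.cat([ctx_ids, cand_ids.unsqueeze(1)], dim=1))
        h = self.target_encoder(tgt, memory=src_memory)   # [B, L+1, d_model]
        h_w = h[:, -1]                                    # representation of w
        return self.theta(h_w).squeeze(-1)                # S(w, x, c), shape [B]
```

Unlike the baseline, each candidate word requires its own forward pass here, which is precisely the efficiency problem the next subsection addresses.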
Efficient Training and Inference

Efficient Training via Negative Sampling. As described in Section 3.3, it is infeasible to calculate the normalization term exactly. To optimize the parameters Θ of the energy-based model in Equation (3), we instead use negative sampling to approximate the normalization term Z(x, c; Θ), and maximize the following objective function:

L(Θ) = S(w, x, c) − log [ exp(S(w, x, c)) + Σ_{i=1}^{K} exp(S(w_i, x, c)) ],  w_i ∼ P,    (4)

where w is the ground-truth target word, P is a predefined and parameter-free distribution over the vocabulary V, and w_i ∼ P denotes sampling from the distribution P. Note that if we consider all w_i ∈ V, the above objective function is equivalent to the likelihood function of the energy-based model in Equation (3).

In this paper, we try different settings for P. In the first setting, P is the uniform distribution over V. Although sampling from this distribution is efficient and introduces no extra computation, it cannot ensure that hard negatives are sampled with high probability; hence it is not promising for speeding up convergence in our experiments. In the second setting, P is instantiated by the baseline model P_b. Furthermore, according to our empirical results, better performance is achieved by replacing the sampling operation in Equation (4) with the top-K operation over the distribution P_b(w | x, c).

Efficient Inference via Reranking. As described before, due to the definition of the energy function S(w, x, c), it is too costly to evaluate S(w, x, c) for all w. Thus, it is infeasible to exactly predict the best w such that S(w, x, c) is maximal. Similar to the top-K operation in the training stage, we adopt it in the inference stage as an approximation. Specifically, inference with the energy-based model includes the following two steps:

• Obtain the top-K subset, denoted Ω(s, K), according to P_b(w | x, c), where each element also satisfies the constraint of starting with s;

• Output the target word ŵ in terms of the energy function:

ŵ = arg max_{w ∈ Ω(s, K)} S(w, x, c).

Weight Initialization via Pre-training

Recently, pre-trained language models have achieved exceptional success in numerous natural language processing tasks (Devlin et al., 2019; Lewis et al., 2020; Ouyang et al., 2022). One of their advantages is that they learn general and contextual representations that boost downstream tasks (Li et al., 2022, 2023a; Shi et al., 2023). Inspired by this, we propose to use our limited supervised bilingual data to conduct a small-scale pre-training of the energy-based model to yield better weight initialization. Specifically, following practices of Non-Autoregressive Translation (Ghazvininejad et al., 2019; Li et al., 2022), we adopt Conditional Masked Bilingual Language Modeling (CMBLM) as our pre-training task. A CMBLM pre-trained model is expected to better capture bidirectional contextual information. Given a sentence pair (x, y), similar to masked language models (Devlin et al., 2019), we train the model to predict a set of masked target tokens y_m given the source sentence x and the observable target words y_o = y \ y_m; that is, we model the prediction probability P(y_i | x, y_o) for each masked target word y_i ∈ y_m. As for the model architecture, we adopt the same architecture as P_b. During the pre-training stage, we randomly mask 15% of the tokens in y to obtain y_m. After pre-training, we use the CMBLM pre-trained parameters to initialize our energy-based model.
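The two approximations above are easy to sketch in code. Below is a hedged PyTorch-style illustration of the top-K variant of the training objective in Equation (4) and of the two-step reranking inference; `energy` is assumed to be a scorer like the one sketched earlier, and `baseline_topk` is a hypothetical helper returning the K best prefix-constrained candidates under P_b.

```python
import torch
import torch.nn.functional as F

def sampled_objective_loss(energy, gold_id, neg_ids, ctx):
    """Sampled approximation of Eq. (4): the gold word competes only against
    the top-K negatives proposed by the baseline model P_b."""
    cand_ids = torch.cat([gold_id.view(1), neg_ids])      # gold at index 0
    scores = energy(cand_ids, *ctx)                       # S(w, x, c) per candidate
    return F.cross_entropy(scores.unsqueeze(0), torch.zeros(1, dtype=torch.long))

def rerank_predict(energy, baseline_topk, prefix, ctx, k=8):
    """Two-step inference: P_b proposes Omega(s, K); the energy model reranks."""
    cand_ids = baseline_topk(prefix, ctx, k)              # Omega(s, K)
    scores = energy(cand_ids, *ctx)
    return cand_ids[int(torch.argmax(scores))]
```

Minimizing this cross-entropy is the same as maximizing Equation (4) restricted to the sampled subset, and K = 8 matches the value the paper later selects for inference.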
Experiments

In this section, we first describe the experimental setup. Then we report the main results and analyze the proposed approach.

Experimental Setup

Datasets. We experiment on four language pairs: Zh⇒En, En⇒Zh, De⇒En and En⇒De. Statistics of the average length of target words and human typed characters on the validation sets are shown in Table 1. As we can see, target words are in general long while human typed characters are short, which poses a challenge for the WLAC task. In addition, we conduct a frequency analysis: we count the frequency of each word in the training set across the four language pairs; following this, words are categorized into ten intervals based on their frequency. Finally, we calculate the proportion of target words in the validation sets corresponding to each frequency interval. The result is presented in Figure 3, which indicates a non-uniform distribution of target words across the frequency intervals. This data composition basically reflects the demands encountered in real-world scenarios, where non-high-frequency words are more challenging for WLAC models.

Baselines. We compare our model with the following baseline models:

• TRANSTABLE: A statistical method inspired by Huang et al. (2015). It creates a word-level translation table with a word alignment toolkit.7 During the inference stage, it uses the translation table to obtain translations of all source words and filters out invalid candidate words using the human typed characters. Ultimately, it picks the candidate word with the highest frequency as the prediction.

• TRANS-PE: A Transformer-based baseline inspired by Langlais et al. (2000) and Santy et al. (2019). A vanilla Transformer is first trained on the training set. At test time, only the left translation context is fed to the Transformer decoder, and a next-word prediction task is conducted with the human typed characters as hard constraints to obtain the predicted word.

• TRANS-NPE: The only difference from TRANS-PE is that there is no position encoding layer in the decoder of TRANS-NPE. Average pooling is applied to the representations of all translation context words, and the pooled representation is used to predict the target word.

• TRANS-BPE: A subword-level Transformer baseline. This model defines the hidden vector on top of previously generated subwords and the input context to predict the next subword.

• P_b: The word prediction model defined in Equation (1), which is the state-of-the-art model for the WLAC task.

7 https://github.com/clab/fast_align.

Implementation Details. We implement our energy-based model on top of the Transformer-Base architecture (Vaswani et al., 2017) as implemented in the Fairseq toolkit (Ott et al., 2019). The source encoder is a stack of 6 Transformer encoder blocks. The target encoder is also composed of 6 blocks, each of which is a Transformer encoder block with an additional cross-attention layer between the multi-head self-attention layer and the feed-forward layer. The vocabulary size is 60K for Chinese, 50K for German, and 50K for English. For the implementation of TRANS-BPE, we also adopt the Transformer-Base architecture. For the above models, we set d_model = 512, d_hidden = 2048, n_head = 8 and p_dropout = 0.1. The learning rate is set to 0.0005, and the warmup period is 4,000 steps. All models are trained with 4,096 tokens per batch for a maximum of 50,000 steps with the Adam optimizer (Kingma and Ba, 2015) on 8 NVIDIA V100 GPUs. We update the model parameters after accumulating 2 gradients for TRANS-BPE and 1 gradient for P_b and OURS. Models are selected by the best accuracy on the validation set. We repeat the main experiment 5 times using different random seeds.
Main Results

Evaluation on Word Prediction by ACC. Table 2 lists the main results on the four language pairs. From the table, we can make three observations. First, the statistical and intuitive Transformer-based methods (#1-3) perform poorly on all language pairs. We speculate that this is because these approaches cannot make full use of the information from the input context (e.g., the source sentence). Second, TRANS-BPE outperforms P_b on average accuracy. The reason behind this could be attributed to the effectiveness of TRANS-BPE at leveraging more valuable source sentence information than P_b, which we elaborate on in Section 5.4. Third, our energy-based model (#7) improves over the previous SOTA performance by an average of 6.07 accuracy points across all language pairs, which demonstrates its effectiveness. Furthermore, in Table 3 and Table 4 we report the detailed results of different systems for the four translation context types on the Zh⇔En and De⇔En validation sets: our energy-based model achieves a performance improvement on almost every translation context type, except for the De⇒En prefix context, which results in the overall performance reported in Table 2.

Human Evaluation. It is also crucial to assess the actual improvement in effectiveness of our approach via human evaluation. However, performing comprehensive human evaluations can be resource-intensive in terms of labor. As a compromise, we randomly sample 400 examples from the original Zh⇒En and En⇒Zh NIST05 test sets, with 100 instances for each translation context type. We then collect predictions from three models: P_b, TRANS-BPE, and OURS. Subsequently, we enlist two professional evaluators to assess the appropriateness of the predictions of these models.

The human evaluators are presented with the input context, the human typed characters, and each prediction. The predictions, originating from different models, are anonymized to the evaluators. The human evaluators are asked to assign a binary score to each prediction, where '1' indicates appropriateness and '0' signifies inappropriateness. Results of the human evaluation are presented in Table 5. Cohen's kappa between the two evaluators is 0.92, which is a relatively high agreement. Table 5 demonstrates that our energy-based model retains an advantage over previous methods under human evaluation. What's more, one detail worth noting is that, compared with the results in Table 2, all models exhibit an improvement in performance when evaluated manually. This can be attributed to the fact that the accuracy metric only considers the top-1 prediction, while other predictions may also be valid. To ensure consistency with prior research, we utilize accuracy as the evaluation metric in the following sections.

Ablation Studies

Negative Sampling for Training. As we state in Section 3, negative sampling in the training stage can affect the performance of the energy-based model. We consider two sampling distributions (the uniform distribution and the distribution of P_b) and three negative sampling strategies, and report the results in Table 7. We observe that the random sampling strategy from the uniform distribution is not as effective as the other three sampling configurations based on P_b. We conjecture that negative samples drawn by random sampling from the uniform distribution could be too trivial to serve as hard negatives, which may hinder the performance of the energy-based model, whereas sampling according to P_b (i.e., the other three strategies) can supply hard negatives and facilitate the training of the energy-based model.

K-best Size in Inference. We further analyze the impact of the size K of the candidate set Ω(s, K) during inference with the energy-based model. Figure 4 shows that, as K increases, the accuracy improvement rises rapidly from K = 1 to K = 4 and starts to saturate after K = 4. The recall of the ground-truth word shares the same trend as accuracy: it first improves sharply, then increases slowly and reaches a relatively high value. As an efficiency and effectiveness trade-off, we choose K = 8 as our candidate word set size in all experiments during inference.
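The two curves in Figure 4 are simple to compute. The sketch below is our own formulation of top-1 accuracy and of recall of the ground-truth word within the K-best candidate set:

```python
def accuracy_and_recall(predictions, candidate_lists, golds):
    """Top-1 accuracy of the reranked output, and recall@K: how often the
    ground-truth word appears anywhere in the K-best candidate set."""
    acc = sum(p == g for p, g in zip(predictions, golds)) / len(golds)
    recall = sum(g in cands for cands, g in zip(candidate_lists, golds)) / len(golds)
    return acc, recall
```

Recall@K upper-bounds the reranker's accuracy: if the gold word is not among the K candidates proposed by P_b, no reranking can recover it.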
Weight Initialization. Our energy-based model is pre-trained with the CMBLM pre-training strategy. Therefore, its improvements might come from two aspects: 1) the energy-based model itself, and 2) better initialization weights and representations learned from the CMBLM pre-training task. Hence, we perform further studies to quantify the contribution of each component of our approach.

To this end, we conduct two experiments: we replace the CMBLM pre-training by initializing the weights from the baseline WPM P_b; and we apply the CMBLM pre-training on top of P_b and compare it with the energy-based model with CMBLM pre-training. We evaluate all these methods on the Zh⇒En and De⇒En datasets and present the results in Table 6.

The results in Table 6 illustrate two points. First, initializing the weights of the energy-based model with P_b is not as effective as initializing with the CMBLM pre-training strategy. Second, although both P_b and our energy-based model benefit from the CMBLM pre-training strategy, the gain for the energy-based model is much larger. These observations demonstrate that a simple pre-training method cannot activate the potential of the energy-based model, whereas the CMBLM pre-training strategy succeeds.

Evaluation on Prefix-decoding and Post-editing. We further evaluate performance on two common translation scenarios, prefix-decoding, which is widely used in left-to-right interactive machine translation, and post-editing, as stated in Section 2.1. To this end, we implement P_b, TRANS-BPE and OURS in these two scenarios with the same parameter configuration as in Section 5.1. For the construction of validation and test sets, we adopt the same simulation method as Li et al. (2021), except that the target word must be consecutive to the target context. Table 8 shows the results of P_b, TRANS-BPE, and OURS in the prefix-decoding and post-editing scenarios. As we can see, OURS further improves the average accuracy across all language pairs by 3.22 points on prefix-decoding and by 2.68 points on post-editing, demonstrating the effectiveness of our energy-based model.

Evaluation on Usage of Informative Context. As we have claimed in Section 3, our motivation is that the energy-based model is capable of capturing more informative context for word prediction, which thereby leads to better performance. In addition to the intuitive example in Figure 1(c), we design an automatic metric to verify this motivation. The metric is inspired by the word alignment error rate for the cross-attention in the Transformer (Li et al., 2019b; Garg et al., 2019). Specifically, as shown in Figure 1(c), the metric (alignment recall@n) is defined as the recall rate of the informative source word (''Krankheit'') among the top-n source words according to the attention scores of the Transformer architecture. For each ground-truth target word, e.g., ''disease'' in Figure 1(c), the informative source word is defined by the manually annotated word alignment. We use the human-annotated alignment data on the Zh⇔En NIST05 dataset and conduct experiments in the bi-context scenario.

Figure 7: Attention weights from the predicted word to source words for the three cases in Figure 6. Boxed text denotes source words aligned with the ground-truth target word.
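The alignment recall@n metric can be written down compactly. The sketch below is our own rendering of the definition: for each test instance, we check whether any gold-aligned source position appears among the top-n source positions by attention weight; the data layout (one attention vector and one set of gold positions per instance) is an assumption.

```python
import numpy as np

def alignment_recall_at_n(attn_rows, gold_positions, n):
    """attn_rows: list of 1-D arrays, attention from the candidate target word
    to the source words; gold_positions: list of sets of aligned source indices
    from human annotation. Returns alignment recall@n over all instances."""
    hits = 0
    for attn, gold in zip(attn_rows, gold_positions):
        top_n = set(np.argsort(attn)[::-1][:n])   # n largest attention weights
        hits += bool(top_n & gold)                # any gold index recovered?
    return hits / len(attn_rows)
```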
We compare the alignment recall@n between P_b, TRANS-BPE and OURS in Figure 5. As we can see, the alignment recall@1 of OURS is higher than that of P_b by 60 points, and for small n it consistently maintains this advantage. What's more, TRANS-BPE also achieves better alignment recall@n than P_b. This may serve as quantitative evidence that introducing subwords or the entire candidate target word into the modeling of hidden vectors together with the input context, as implemented in TRANS-BPE and OURS, makes more use of the informative context than P_b (De Cao et al., 2021). The results illustrated in Figure 5 also reveal that our energy-based model might be more effective at leveraging informative context than TRANS-BPE.

Error Analysis

After conducting the human evaluation in Section 5.2, we proceed to inspect incorrect instances of P_b and OURS on the Zh⇒En test examples. We summarize the incorrect instances into three distinct categories: (1) Semantic discrepancy error (Type-I): the model erroneously suggests irrelevant words; these words lack semantic relevance to the source sentence other than starting with the same human typed characters. (2) Repetition error (Type-II): the model suggests words that convey semantics of the source sentence, but these words already appear within the target context. (3) Morphological error (Type-III): the model suggests incorrect cognates of the target words. In the forthcoming Case Study section, we present illustrative examples of each of these three error categories.

In Table 9, we present quantitative results of error occurrences for P_b and OURS. In terms of the total error quantity, OURS exhibits a lower number of errors. Notably, for both methods, the most common error type is the semantic discrepancy error. Comparatively, OURS demonstrates a notable ability to rectify 25 instances (31.65%) of Type-I errors, 20 instances (68.97%) of Type-II errors, and 14 instances (70.00%) of Type-III errors that are present in P_b. Furthermore, OURS exhibits significantly fewer instances of repetition and morphological errors. However, it is essential to acknowledge that OURS also introduces new incorrect instances of each type that are not originally observed in P_b.
Case Study

We provide this case study to better illustrate the advantages of OURS over P_b in utilizing contextual information, which leads to better semantic grounding for word-level autocompletion. Figure 6 presents cases where P_b yields errors while OURS predicts correctly, and Figure 7 illustrates their attention weights, which depict the connection between the predicted word and the source words. In case 1 (Type-I), P_b tends to suggest ''suffice'', which is not consistent with the semantics expressed by the source sentence beyond starting with the human typed characters ''suf''. In contrast, OURS succeeds in completing ''suf'' to ''suffer''. By visualizing the attention weights in Figure 7, we find that OURS may have the merit of leveraging more information from the valuable source context (e.g., the aligned source word). In case 2 (Type-II), P_b completes ''so'' to ''social'', which has already been translated in the target context. By leveraging the interactions between candidate target words and the input context, OURS successfully suggests ''services''. In case 3 (Type-III), P_b suggests a cognate of the target word (i.e., ''problematic''), whereas, according to the information captured by the energy-based model, OURS succeeds in suggesting the noun ''problems'', which is more appropriate. Although our model substantially alleviates the aforementioned cases, it is not flawless. One such limitation is that, during the inference stage, the effectiveness of OURS is bounded by the recall rate of the baseline candidates.

Latency. Table 10 summarizes the training and inference latency of P_b, TRANS-BPE, and OURS on the Zh⇒En validation dataset. The results indicate that the training and inference latency of OURS is higher than that of P_b (approximately 2.0 times and 1.5 times, respectively). This discrepancy can be attributed to the inherent necessity for OURS to obtain candidate words from P_b and subsequently rerank them, which demands additional computational time. In comparison to the more potent autoregressive model, TRANS-BPE, OURS exhibits a lower inference latency while concurrently delivering better performance. As a result, our approach achieves a desirable balance between performance and processing speed.

Applying WLAC to Human-Computer Interactive Translation

Setup and Evaluation. As stated in the previous sections, one advantage of WLAC is that it can increase the efficiency of human input in interactive machine translation. To exemplify the usefulness of WLAC, we apply the WLAC models to IMT. Specifically, we first implement a practical IMT model following Huang et al. (2021), which is based on lexically constrained decoding (Hokamp and Liu, 2017) and thus enables flexible input from users. Then, we apply three WLAC models (P_b, TRANS-BPE, and OURS) to the IMT model, leading to three IMT systems named IMT-P_b, IMT-TRANS-BPE, and IMT-OURS. As a direct baseline, the IMT system without WLAC is denoted IMT-RAW.
For the efficiency evaluation in IMT, the standard metric, the number of keystrokes from a human translator (Nepveu et al., 2004; Bender et al., 2005), is used for all IMT systems. To ensure a fair comparison in efficiency, we enforce all human-inputted words to be the same for all IMT systems, and thus all these IMT systems yield the same translation outputs. We randomly select a subset consisting of 200 source sentences from Zh⇒En NIST05 as x, due to the intensive human effort required in IMT experiments. On this subset, the standard NMT obtains 50.13 BLEU points and all IMT systems achieve 56.02 BLEU points thanks to human interactions.

Table 11 presents the total and average number of keystrokes across the different IMT systems. Notably, the employment of WLAC systems significantly reduces the number of keystrokes in comparison to the IMT-RAW baseline without WLAC. Furthermore, our proposed IMT-OURS system attains a minimal number of keystrokes relative to the other systems. This observation is reinforced by Figure 8, which depicts the distribution of the number of keystrokes across different systems. We can see that most of the keystrokes of OURS are fewer than 3 (constituting approximately 84.5% of cases), leading to a reduction in the number of keystrokes and offering input convenience for users.

Related Work

Computer-aided Translation. Computer-aided Translation (CAT) (Langlais et al., 2000; Barrachina et al., 2009; Green et al., 2014; Knowles and Koehn, 2016; Santy et al., 2019; Lee et al., 2021b) has the merit of leveraging the advantages of machine translation systems to facilitate the human translation process. Word-level AutoCompletion (WLAC) is an important feature of interactive CAT (Casacuberta et al., 2022) and plays an important role in CAT. Huang et al. (2015) leverage useful source-side knowledge to complete the target word. Li et al. (2021) propose a strong word prediction model (WPM) and try to leverage both source-side and target-side information. However, as stated in Section 1, these methods may still inadequately leverage the valuable information from the source sentence. To fill this gap, we introduce an energy-based model that enables the hidden vector to capture more valuable information.

In machine translation, with the purpose of alleviating the mismatch between maximum likelihood estimation and the desired metric (e.g., BLEU), Bhattacharyya et al. (2021) and Lee et al. (2021a) propose to train an energy-based model to rerank candidate translations generated by NMT models. In this work, we are in line with the prior finding that reranking is a conceptually simple yet empirically powerful framework. However, we pay more attention to leveraging valuable source sentence information in the WLAC task and to the corresponding training and inference challenges of the energy-based model for reranking.

Input Method. In recent years, with the advance of neural networks, input methods have shown significant progress in effectiveness (Huang et al., 2018; Zhang et al., 2019; Tan et al., 2022). However, most current research has concentrated on monolingual scenarios, without sufficient consideration of how to utilize source-side information in bilingual settings (Li, 2012; Huang et al., 2015). Our work, which centers on the word-level autocompletion task to reduce keystrokes, is a new exploration of bilingual input methods. We believe that combining our approach with other input method technologies could significantly enhance the productivity of human translators. We leave this as a potential direction for future research.
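As a concrete rendering of the keystroke metric used in the IMT evaluation above, the following sketch (our own simplification) counts the characters a user must type for each target word when a WLAC model is available: typing stops as soon as the model's suggestion for the typed prefix matches the intended word, with one extra keystroke to accept the suggestion. The acceptance-key convention and the word-by-word simulation are assumptions, not the paper's exact protocol.

```python
def keystrokes_for_word(word, suggest):
    """Number of keystrokes to produce `word` given a WLAC suggester
    suggest(prefix) -> predicted word. Assumes one keystroke accepts a
    correct suggestion; falls back to typing the whole word."""
    for k in range(1, len(word) + 1):
        if suggest(word[:k]) == word:
            return k + 1            # k typed characters + 1 acceptance key
    return len(word)                # no useful suggestion: type it all

def total_keystrokes(reference_words, suggest):
    return sum(keystrokes_for_word(w, suggest) for w in reference_words)
```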
Conclusion

Word-level AutoCompletion is a critical yet challenging task in Computer-aided Translation. Existing work casts this task as a classification problem. However, it cannot make full use of the contextual information from the input context for its prediction. To alleviate this issue, we introduce a reranking perspective via an energy-based model, which directly defines the energy function on top of the input context and the candidate target word. Extensive experiments and analyses demonstrate the effectiveness of our proposed approach on four standard benchmarks: it achieves about 6.07% improvement over the strongest baseline.

Figure 1: (a) Illustration of the WLAC task in De⇒En. Suppose that a user has input a source sentence x, partial translations (c_l, c_r) and is now typing some characters (s). A well-trained WLAC model is expected to suggest ''disease'' to complete s. The expected translation for x is ''And disease is the common enemy of these desperate people.'' (b) Attention weights from ''[MASK]'' to words in x for the baseline method. (c) Attention weights from ''disease'' to words in x for our energy-based model. (Color intensity reflects the strength of attention weights.)

Figure 2: The comparison between the network architectures of the baseline WPM (a) and the energy-based model (b). In the baseline model, h_[MASK] does not capture the information from ''disease'', whereas h_[disease] does in the energy-based model. Note that the ''Target Encoder'' is a variant of the Transformer decoder which can capture bidirectional information on the target side.

Figure 3: The proportion of different frequency intervals on the Zh⇔En and De⇔En validation datasets. Interval 1 and Interval 10 denote the most frequent and the most infrequent interval, respectively.

Figure 4: Accuracy of our energy-based model and recall of the ground-truth word for different K on the Zh⇒En NIST02 dataset (a) and the De⇒En NT13 dataset (b). Experiments are conducted in the bi-context scenario.

Figure 5: Alignment recall@n on the Zh⇔En NIST05 dataset with n ranging from 1 to 8. Experiments are conducted in the bi-context scenario.

Figure 6: Three cases of P_b and OURS in the Zh⇒En test set. Human typed characters are in underlined fonts.

Figure 8: Proportion of the number of keystrokes in different IMT systems with and without WLAC models.

Table 1: Statistics of the average length of target words and human typed characters on the Zh⇔En and De⇔En validation sets. T.W. and H.T.C. are short for target words and human typed characters, respectively.

Table 2: The main results of different systems on the Zh⇔En and De⇔En datasets. The results in this table are the average accuracy across four translation context types (i.e., zero-context, prefix, suffix and bi-context). '†': results reported in previous work. '*': results implemented by ourselves, averaged over 5 runs with different random seeds. The best and second-best results are in bold and underlined fonts, respectively.

Table 3: The detailed results for each translation context type of different systems on the Zh⇔En validation set.
Table 4: The detailed results for each translation context type of different systems on the De⇔En validation set.

Table 5: The detailed results of different systems under the Zh⇒En and En⇒Zh human evaluation setting. The results in the table represent the average rating scores from the two evaluators.

Table 6: Performance of weight initialization on the Zh⇒En and De⇒En datasets. The results in this table are the average accuracy across four translation context types.

Table 7: The results of different negative sampling strategies on Zh⇒En. The results in this table are the average accuracy across four translation context types.

Table 8: The main results of different systems on the Zh⇔En and De⇔En datasets under the prefix-decoding and post-editing settings.

Table 11: Efficiency of IMT systems with and without WLAC in terms of total and average number of keystrokes. IMT-RAW denotes the IMT system without the WLAC function; the other systems denote IMT systems with the corresponding WLAC models.
Discrete Family Symmetry from F-Theory GUTs

We consider realistic F-theory GUT models based on the discrete family symmetries $A_4$ and $S_3$, combined with an $SU(5)$ GUT, comparing our results to existing field theory models based on these groups. We provide an explicit calculation to support the emergence of the family symmetry from the discrete monodromies arising in F-theory. We work within the spectral cover picture, where in the present context the discrete symmetries are associated to monodromies among the roots of a fifth-degree polynomial and hence constitute a subgroup of the $S_5$ permutation symmetry. We focus on the cases of the $A_4$ and $S_3$ subgroups, motivated by successful phenomenological models interpreting the fermion mass hierarchy and in particular the neutrino data. More precisely, we study the implications for the effective field theories by analysing the relevant discriminants and the topological properties of the polynomial coefficients, and we propose a discrete version of the doublet-triplet splitting mechanism.

Introduction

F-theory is defined on an elliptically fibered Calabi-Yau four-fold over a threefold base [1]. In the elliptic fibration the singularities of the internal manifold are associated to the gauge symmetry. The basic objects in these constructions are the D7-branes, which are located at the "points" where the fibre degenerates, while matter fields appear at their intersections. The interesting fact in this picture is that the topological properties of the internal space are converted into constraints on the effective field theory model in a direct manner. Moreover, in these constructions it is possible to implement a flux mechanism which breaks the symmetry and generates chirality in the spectrum.

F-theory Grand Unified Theories (F-GUTs) [2,3,4,5,6,7,8] represent a promising framework for addressing the flavour problem of quarks and leptons (for reviews see [9,10,11,12,13,14]). F-GUTs are associated with D7-branes wrapping a complex surface S in an elliptically fibered eight-dimensional internal space. The precise gauge group is determined by the specific structure of the singular fibres over the compact surface S, which is strongly constrained by the Kodaira conditions. The so-called "semi-local" approach imposes constraints from requiring that S is embedded into a local Calabi-Yau four-fold, which in practice leads to the presence of a local E_8 singularity [15], the highest non-Abelian symmetry allowed by the elliptic fibration. In the convenient Higgs bundle picture, and in particular the spectral cover approach, one may work locally by picking a subgroup of E_8 as the gauge group of the four-dimensional effective model, while its commutant with respect to E_8 is associated to the geometrical properties of the vicinity. Monodromy actions, which are always present in F-theory constructions, may reduce the rank of the latter, leaving intact only a subgroup of it. The remaining symmetries could be U(1) factors in the Cartan subalgebra or some discrete symmetry. Therefore, in these constructions GUTs are always accompanied by additional symmetries which play an important role in low energy phenomenology through the restrictions they impose on superpotential couplings. In the above approach, all Yukawa couplings originate from this single point of E_8 enhancement.
As such, we can learn about the matter and couplings of the semi-local theory by decomposing the adjoint of E_8 in terms of representations of the GUT group and the perpendicular gauge group. In terms of the local picture considered so far, matter is localised on curves where the GUT brane intersects other 7-branes with extra U(1) symmetries associated to them, with this matter transforming in bi-fundamental representations of the GUT group and the U(1). Yukawa couplings are then induced at points where three matter curves intersect, corresponding to a further enhancement of the gauge group.

In this paper we extend the analysis in [35] in order to construct realistic models based on the cases A_4 and S_3, combined with an SU(5) GUT, comparing our results to existing field theory models based on these groups. We provide an explicit calculation to support the emergence of the family symmetry from the discrete monodromies. In section 2 we start with a short description of the basic ingredients of F-theory model building and present the splitting of the spectral cover into the components associated to the S_4 and S_3 discrete group factors. In section 3 we discuss the conditions for the transition from S_4 to an A_4 discrete family symmetry "escorting" the SU(5) GUT and propose a discrete version of the doublet-triplet splitting mechanism for A_4, before constructing a realistic model which is analysed in detail. In section 4 we analyse in detail an S_3 model which was not considered at all in [35], and in section 5 we present our conclusions. Additional computational details are left for the Appendices.

General Principles

F-theory is a non-perturbative formulation of type IIB superstring theory, emerging from compactifications on a Calabi-Yau fourfold which is an elliptically fibered space over a base B_3 of three complex dimensions. Our GUT symmetry in the present work is SU(5), which is associated to a holomorphic divisor residing inside the threefold base B_3. If we designate by z the 'normal' direction to this GUT surface, the divisor can be thought of as the zero limit of the holomorphic section z in B_3, i.e. z → 0. The fibration is described by the Weierstrass equation

y^2 = x^3 + f(z) x + g(z),

where f(z), g(z) are eighth- and twelfth-degree polynomials respectively. The singularities of the fiber are determined by the zeroes of the discriminant ∆ = 4f^3 + 27g^2 and are associated to non-Abelian gauge groups. For a smooth Weierstrass model they have been classified by Kodaira, and in the case of F-theory these have been used to describe the non-Abelian gauge group. Under these conditions, the highest symmetry in the elliptic fibration is E_8, and since the GUT symmetry in the present work is chosen to be SU(5), its commutant is SU(5)_⊥. The physics of the latter is nicely captured by the spectral cover, described by the fifth-degree polynomial

C_5 : b_0 s^5 + b_1 s^4 + b_2 s^3 + b_3 s^2 + b_4 s + b_5 = 0,    (1)

where the b_k are holomorphic sections and s is an affine parameter. Under the action of certain fluxes and possible monodromies, the polynomial could in principle be factorised into a number of irreducible components, C_5 → C_{a_1} × · · · × C_{a_n} with a_1 + · · · + a_n = 5, provided that the new coefficients preserve holomorphicity. Given the rank of the associated group (SU(5)_⊥), the simplest possibility is the decomposition into four U(1) factors, but this is one among many possibilities. As a matter of fact, in an F-theory context, the roots of the spectral cover equation are related by non-trivial monodromies.
For the SU(5)_⊥ case at hand, under specific circumstances (related mainly to the properties of the internal manifold and the flux data) these monodromies can be described by any possible subgroup of the Weyl group S_5. This has tremendous implications for the effective field theory model, particularly for the superpotential couplings. The spectral cover equation (1) has roots t_i, which correspond to the weights of SU(5)_⊥, i.e.

b_0 ∏_{i=1}^{5} (s − t_i) = 0.

The equation describes the matter curves of a particular theory, with roots being related by monodromies depending on the factorisation of this equation. Thus, we may choose to assume that the spectral cover can be factorised, with new coefficients a_j that lie within the same field F as the b_i. Depending on how we factorise, we will see different monodromy groups. Motivated by the peculiar properties of the neutrino sector, here we will attempt to explore the low energy implications of the following factorisations of the spectral cover equation:

i) C_5 → C_4 × C_1,  ii) C_5 → C_3 × C_2,  iii) C_5 → C_3 × C_1 × C_1.

Case i) involves the transitive group S_4 and its subgroups A_4 and D_4, while cases ii) and iii) incorporate S_3, which is isomorphic to D_3. For later convenience these cases are depicted in figure 1. In case i), for example, the polynomial in equation (1) should be separable into the following two factors:

C_4 × C_1 : (a_1 + a_2 s + a_3 s^2 + a_4 s^3 + a_5 s^4)(a_6 + a_7 s) = 0,    (3)

which implies the 'breaking' of SU(5)_⊥ to the monodromy group S_4 (or one of its subgroups, such as A_4), described by the fourth-degree polynomial, and a U(1) associated with the linear part. New and old polynomial coefficients satisfy simple relations b_k = b_k(a_i), which can be easily extracted by comparing the same powers of s in (1) and (3). Table 1 summarizes the relations between the coefficients of the unfactorised spectral cover and the a_j coefficients for the cases under consideration in the present work.

b_i | a_j coefficients for 4+1 | a_j coefficients for 3+2 | a_j coefficients for 3+1+1
b_0 | a_5 a_7 | a_4 a_7 | a_4 a_6 a_8
b_1 | a_5 a_6 + a_4 a_7 | a_4 a_6 + a_3 a_7 | a_4 a_6 a_7 + a_4 a_5 a_8 + a_3 a_6 a_8
b_2 | a_4 a_6 + a_3 a_7 | a_4 a_5 + a_3 a_6 + a_2 a_7 | a_4 a_5 a_7 + a_3 a_5 a_8 + a_3 a_6 a_7 + a_2 a_6 a_8
b_3 | a_3 a_6 + a_2 a_7 | a_3 a_5 + a_2 a_6 + a_1 a_7 | a_3 a_5 a_7 + a_2 a_5 a_8 + a_2 a_6 a_7 + a_1 a_6 a_8
b_4 | a_2 a_6 + a_1 a_7 | a_2 a_5 + a_1 a_6 | a_2 a_5 a_7 + a_1 a_6 a_7 + a_1 a_5 a_8
b_5 | a_1 a_6 | a_1 a_5 | a_1 a_5 a_7

The homologies of the coefficients b_i are given in terms of the first Chern class of the tangent bundle (c_1) and of the normal bundle (−t) by [b_i] = η − i c_1, with η = 6c_1 − t, allowing us to rearrange for the required homologies. Note that since we have in general more a_j coefficients than fully determined b_i coefficients, the homologies of the new coefficients cannot be fully determined. For example, if we factorise in a 3+1+1 arrangement, we must have 3 unknown parameters, which we call χ_{k=1,2,3}. In the following sections we will examine in detail the predictions of the A_4 and S_3 models.

A_4 models in F-theory

We assume that the spectral cover equation factorises into a quartic polynomial and a linear part, as shown in (3). The homologies of the new coefficients may be derived from those of the original b_i coefficients. Referring to Table 1, we see that the homologies for this factorisation are easily calculable, up to some arbitrariness in one of the coefficients: we have seven a_j and only six b_i. We choose [a_6] = χ in order to make this tractable.
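Before computing the homologies, note that the b_k = b_k(a_i) relations in Table 1 are pure polynomial bookkeeping and can be checked mechanically. A small sympy sketch (our own verification aid) that expands the 4+1 factorisation of equation (3) and reads off the coefficients of s:

```python
import sympy as sp

s = sp.symbols('s')
a = sp.symbols('a1:8')  # a1..a7

# 4+1 factorisation: (a1 + a2 s + a3 s^2 + a4 s^3 + a5 s^4)(a6 + a7 s)
C4 = a[0] + a[1]*s + a[2]*s**2 + a[3]*s**3 + a[4]*s**4
C1 = a[5] + a[6]*s
poly = sp.Poly(sp.expand(C4 * C1), s)

# Coefficients ordered as b0 s^5 + b1 s^4 + ... + b5, matching Table 1
for k, b in enumerate(poly.all_coeffs()):
    print(f"b{k} =", b)
# b0 = a5*a7, b1 = a4*a7 + a5*a6, ..., b5 = a1*a6
```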
It can then be shown that the homologies obey:

[a_1] = η − 5c_1 − χ,  [a_2] = η − 4c_1 − χ,  [a_3] = η − 3c_1 − χ,  [a_4] = η − 2c_1 − χ,  [a_5] = η − c_1 − χ,  [a_6] = χ,  [a_7] = c_1 + χ.

This amounts to asserting that the five of SU(5)_⊥ 'breaks' to a discrete symmetry between four of its weights (S_4 or one of its subgroups) and a U(1)_⊥. The roots of the spectral cover equation must obey

b_0 ∏_{i=1}^{5} (s − t_i) = b_0 s^5 + b_1 s^4 + b_2 s^3 + b_3 s^2 + b_4 s + b_5,

where the t_i are the weights of the five representation of SU(5)_⊥. When s = 0, this defines the tenplet matter curves of the SU(5) GUT [36], with the number of curves being determined by how the result factorises. In the case under consideration, when s = 0, b_5 = 0. Referring to Table 1, we see that this implies P_10 = a_1 a_6 = 0. Therefore there are two tenplet matter curves, whose homologies are given by those of a_1 and a_6. We shall assume at this point that these are the only two distinct curves, though a_1 appears to be associated with S_4 (or a subgroup) and hence should be reducible to a triplet and a singlet.

Table 2: Matter curves, their homologies, charges and multiplicities (columns: Curve, Equation, Homology, Hyperflux N, Multiplicity).

Similarly, for the fiveplets we have ∏_{i<j} (t_i + t_j) = 0, which can be shown to give the defining condition for the fiveplets (once the tracelessness condition below is imposed):

P_5 ≡ b_3^2 b_4 − b_2 b_3 b_5 + b_0 b_5^2 = 0.

Using Table 1, we can write this in terms of the a_j coefficients. Using the condition that SU(5)_⊥ must be traceless, and hence b_1 = 0, we have a_4 a_7 + a_5 a_6 = 0. An Ansatz solution of this condition is a_4 = ±a_0 a_6 and a_5 = ∓a_0 a_7, where a_0 is some appropriate scaling with homology [a_0] = η − 2(c_1 + χ), which is trivially derived from the homologies of a_4 and a_6 (or indeed a_5 and a_7) [35]. If we introduce this, then P_5 splits into two matter curves:

P_5 = (a_2^2 a_7 + a_2 a_3 a_6 ∓ a_0 a_1 a_6^2)(a_3 a_6^2 + (a_2 a_6 + a_1 a_7) a_7) = 0.

The homologies of these curves are calculated from those of the b_i coefficients and are presented in Table 2. We may also impose flux restrictions if we define N to be the number of units of hypercharge flux F_Y piercing a given matter curve, where N ∈ Z.

Considering the product of the roots, we see that b_5/b_0 = t_1 t_2 t_3 t_4 t_5, so there are at most five ten-curves, one for each of the weights. Under S_4 and its subgroups, four of these are identified, which corroborates the two tenplet matter curves found above. As such we identify t_{i=1,2,3,4} with this monodromy group and the coefficient a_1, and leave t_5 to be associated to a_6. Similarly, the fiveplet condition shows that we have at most ten five-curves when s = 0, given in the form t_i + t_j with i ≠ j. Examining the equations for the two five-curves that are manifest in this model after application of our monodromy, the quadruplet involving t_i + t_5 forms the curve labeled 5_d, while the remaining sextet, t_i + t_j with i, j ≠ 5, sits on the 5_c curve.

The discriminant

The above considerations apply equally to both the S_4 and A_4 discrete groups. From the effective model point of view, all the useful information is encoded in the properties of the polynomial coefficients a_k, and if we wish to distinguish these two models, further assumptions about the latter coefficients have to be made. Indeed, if we assume that in the above polynomial the coefficients belong to a certain field, a_k ∈ F, without imposing any additional restrictions on the a_k, the roots exhibit an S_4 symmetry. If, as desired, the symmetry acting on the roots is the subgroup A_4, the coefficients a_k must respect certain conditions. Such constraints emerge from the study of partially symmetric functions of the roots. In the present case in particular, we recall that the A_4 discrete symmetry is associated only to even permutations of the four roots t_i.
Further, we note now that the partially symmetric function

δ = ∏_{i<j} (t_i − t_j)

is invariant only under the even permutations of the roots. The quantity δ is the square root of the discriminant,

∆ = δ^2    (12)

and as such δ should be written as a function of the polynomial coefficients a_k ∈ F so that δ ∈ F too. The discriminant is computed by standard formulae and is found to be

∆(a_k) = 256 a_1^3 a_5^3 − (27 a_2^4 − 144 a_1 a_3 a_2^2 + 192 a_1^2 a_4 a_2 + 128 a_1^2 a_3^2) a_5^2
         − [ 4 (a_2^2 − 4 a_1 a_3) a_3^3 − 2 (9 a_2^2 − 40 a_1 a_3) a_2 a_4 a_3 + 6 (a_2^2 − 24 a_1 a_3) a_1 a_4^2 ] a_5
         − a_4^2 [ 4 a_4 a_2^3 − a_3^2 a_2^2 − 18 a_1 a_3 a_4 a_2 + 4 a_1 a_3^3 + 27 a_1^2 a_4^2 ]    (13)

In order to examine the implications of (12) we write the discriminant as a polynomial in the coefficient a_3 [35],

∆ ≡ g(a_3) = Σ_{n=0}^{4} c_n a_3^n    (14)

where the c_n are functions of the remaining coefficients a_k, k ≠ 3, and can easily be computed by comparison with (13). We may equivalently demand that g(a_3) is the square of a second degree polynomial,

g(a_3) = (κ a_3^2 + λ a_3 + μ)^2 .    (15)

A necessary condition for the polynomial g(a_3) to be a square is that its own discriminant ∆_g be zero. One finds that ∆_g factorises into two pieces, D_1 and D_2. We observe that there are two ways to eliminate the discriminant of the polynomial, either by putting D_1 = 0 or by demanding D_2 = 0 [35]. In the first case, we can achieve ∆ = δ^2 if we solve the constraint D_1 = 0 as in (16); substituting the solutions (16) into the discriminant we obtain (17). The above constitute the necessary conditions to obtain the reduction of the symmetry [35] down to the Klein group V ≅ Z_2 × Z_2. On the other hand, the second condition, D_2 = 0, implies a non-trivial relation among the coefficients. Plugging in the b_1 = 0 solution, this constraint takes the form

(a_2^2 a_7 + a_0 a_1 a_6^2)^2 = a_0 ((a_2 a_6 + 16 a_1 a_7)/3)^3

which is just the condition on the polynomial coefficients to obtain the transition S_4 → A_4.

Towards an SU(5) × A_4 model

Using the previous analysis, in this section we will present a specific example based on the SU(5) × A_4 × U(1) symmetry. We will make specific choices of the flux parameters and derive the spectrum and its superpotential, focusing in particular on the neutrino sector. It can be shown that if we assume an A_4 monodromy any quadruplet is reducible to a triplet and a singlet representation, while the sextet of the fives reduces to two triplets (details can be found in the appendix).

Singlet-Triplet Splitting Mechanism

It is known from group theory and a physical understanding of the group that the four roots forming the basis under A_4 may be reduced to a singlet and a triplet. As such we might suppose intuitively that the quartic curve of A_4 decomposes into two curves: a singlet and a triplet of A_4. As a mechanism for this we consider an analogy to the breaking of the SU(5)_GUT group by U(1)_Y. We then postulate a mechanism to facilitate singlet-triplet splitting in a similar vein. Switching on a flux in some direction of the perpendicular group, we propose that the singlet and triplet of A_4 will split to form two curves. This flux should be proportional to one of the generators of A_4, so that the broken group commutes with it. If we choose to switch on U(1)_s flux in the direction of the singlet of A_4, then the discrete symmetry will remain unbroken by this choice. Continuing our previous analogy, this would split the curve accordingly. The homologies of the new curves are not immediately known. However, they can be constrained by the previously known homologies given in Table 2.
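Returning briefly to the discriminant manipulations, (13) and (14) are standard computer-algebra exercises and are easy to reproduce; a minimal sympy sketch (our symbol names) is:

```python
import sympy as sp

s, a1, a2, a3, a4, a5 = sp.symbols('s a1 a2 a3 a4 a5')
quartic = a1 + a2*s + a3*s**2 + a4*s**3 + a5*s**4

Delta = sp.discriminant(quartic, s)      # equation (13)
g = sp.Poly(Delta, a3)                   # equation (14): Delta as a polynomial in a3
print(g.degree())                        # -> 4
c = g.all_coeffs()                       # the coefficients c4, ..., c0
```

Demanding that g(a_3) be the square (15) then reduces to conditions on the c_n, which is where the factors D_1 and D_2 originate.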
Returning to the curve splitting, the coefficient describing the curve should be expressed as the product of two coefficients, one describing each of the new curves, a_i = c_1 c_2. As such, the homologies of the new curves will be determined by [a_i] = [c_1] + [c_2]. If we assign the U(1) flux parameters by hand, we can set the constraints on the homologies of our new curves. For example, the curve given in Table 2 as 10_a would decompose into two curves, 10_1 and 10_2 say. Assigning the flux parameter, N, to the 10_2 curve, we constrain the homologies of the two new curves accordingly. Similar constraints may also be placed on the five-curves after decomposition. Using our procedure, we can postulate that the charge N will be associated to the singlet curve by the mechanism of a flux in the singlet direction. This protects the overall charge of N in the theory. With the fiveplet curves it is not immediately clear how to apply this, since the sextet of A_4 can be shown to factorise into two triplets. Closer examination points to the necessity of cancelling anomalies. As such, the curves carrying H_u and H_d must both have the same charge under N. This will ensure that they cancel anomalies correctly. These motivating ideas have been applied in Table 3.

GUT-group doublet-triplet splitting

Initially massless states residing on the matter curves comprise complete vector multiplets. Chirality is generated by switching on appropriate fluxes. At the SU(5) level, we assume the existence of M_5 fiveplets and M_10 tenplets, where it is assumed that the reducible representations of the monodromy group may split the matter curves. The multiplicities are not entirely independent, since we require anomaly cancellation, which amounts to the requirement that Σ_i M_{5_i} + Σ_j M_{10_j} = 0. Next, turning on the hypercharge flux, under the SU(5) symmetry breaking the 10 and 5, 5̄ representations split into different numbers of Standard Model multiplets [55]. Assuming N units of hyperflux piercing a given matter curve, the fiveplets split accordingly. Similarly, the M_10 tenplets decompose under the influence of N hyperflux units into SM representations satisfying, in particular, n(3, 2)_{+1/6} − n(3̄, 2)_{−1/6} = M_10 (a bookkeeping sketch of these splitting rules is given below). Using the relations for the multiplicities of our matter states, we can construct a model with the spectrum parametrised in terms of a few integers in the manner presented in Table 3. In order to curtail the number of possible couplings and suppress operators surplus to requirement, we also call on the services of an R-symmetry. This is commonly found in supersymmetric models, and requires that all couplings have a total R-symmetry of 2. Curves carrying SM-like fermions are taken to have R = 1, with all other curves R = 0.

A simple model: N = 0

Any realistic model based on this table must contain at least 3 generations of quark matter (10_{M_i}), 3 generations of leptonic matter (5̄_{M_i}), and one each of 5_{H_u} and 5̄_{H_d}. We shall attempt to construct a model with these properties using simple choices for our free variables. In order to build a simple model, let us first choose the simple case where N = 0 and make the corresponding assignments. Note that it does not immediately appear possible to select a matter arrangement that provides a renormalisable top-coupling, since we will be required to use our GUT-singlets to cancel residual t_5 charges in our couplings, at the cost of renormalisability.
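A small bookkeeping function makes the hyperflux splitting concrete. The sketch below encodes the standard multiplicity relations; the relation for (3, 2)_{+1/6} is the one quoted above, while the remaining entries follow the usual pattern and should be treated as assumptions here:

```python
def sm_chiralities(M5, M10, N):
    """Net chiralities on a 5-curve and a 10-curve pierced by N hyperflux units."""
    five = {"(3,1)_-1/3": M5,            # colour triplets: flux-blind
            "(1,2)_+1/2": M5 + N}        # weak doublets: shifted by N
    ten = {"(3,2)_+1/6":  M10,           # the relation quoted in the text
           "(3b,1)_-2/3": M10 - N,
           "(1,1)_+1":    M10 + N}
    return five, ten

# N = 0, the case adopted below: complete GUT multiplets survive on each curve
print(sm_chiralities(M5=1, M10=1, N=0))
```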
Basis

The bases of the triplets are such that the triplet products, 3_a × 3_b = 1 + 1′ + 1″ + 3_1 + 3_2, behave as demonstrated in Appendix A, where we also show that the quadruplet of weights decomposes to a singlet and a triplet in this basis.

Table 5: the full couplings for each generation of each fermion type.

Note that all couplings must of course produce singlets of A_4 by use of these triplet products where appropriate.

Top-type quarks

The Top-type quarks admit a total of six mass terms, as shown in Table 5. The third generation has only one valid Yukawa coupling, T_3 · T_3 · H_u · θ_a, which we evaluate using the above algebra. With the choice of vacuum expectation values (VEVs) ⟨H_u⟩ = (v, 0, 0)^T and ⟨θ_a⟩ = (a, 0, 0)^T, this will give the Top quark its mass, m_t = yva. The choice is partly motivated by the A_4 algebra, as the VEV will preserve the S-generators. This choice of VEVs will also kill off the operators T · T_3 · H_u · (θ_a)^2 and T · T · H_u · (θ_a)^2 · θ_b, which can be seen by applying the algebra above. The full algebra of the contributions from the remaining operators is included in Appendix B. Under the already assigned VEVs, the remaining operators contribute to give the overall mass matrix for the Top-type quarks. This matrix is clearly hierarchical, with the third generation dominating the hierarchy, since the couplings should be suppressed by the higher order nature of the operators involved. Due to the rank theorem [37], the two lighter generations can only have one massive eigenvalue. However, corrections due to instantons and non-commutative fluxes are known mechanisms to recover a light mass for the first generation [37][38].

Charged Leptons

The Charged Lepton and Bottom-type quark masses come from the same GUT operators. Unlike the Top-type quarks, these masses will involve SM-fermionic matter that lives on curves that are triplets under A_4. It will be possible to avoid unwanted relations between these generations using the ten-curves, which are strictly singlets of the monodromy group. The operators, as per Table 5, are computed in full in Appendix B. Since we wish to have a reasonably hierarchical structure, we shall require that the dominating terms be in the third generation. This is best served by selecting the VEV ⟨H_d⟩ = (0, 0, v)^T. Taking the lowest order of operator to dominate each element, since we have non-renormalisable operators, we obtain the corresponding mass matrix. We should again be able to use the rank theorem to argue that while the first generation should not get a mass by this mechanism, the mass may be generated by other effects [37][38]. We also expect there might be small corrections due to the higher order contributions, though we shall not consider these here. The bottom-type quarks in SU(5) have the same masses as the charged leptons, with the exact relation between the Yukawa matrices being a transpose. However, this fact is known to be inconsistent with experiment. In general, when renormalisation group running effects are taken into account, the problem can be evaded only for the third generation. Indeed, the mass relation m_b = m_τ at M_GUT can be made consistent with the low energy measured ratio m_b/m_τ for suitable values of tan β. In field theory SU(5) GUTs the successful Georgi-Jarlskog GUT relation m_s/m_μ = 1/3 can be obtained from a term involving the representations 5̄ · 10 · 45, but in the F-theory context this is not possible due to the absence of the 45 representation.
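The rank-theorem statement invoked twice above has a one-line numerical illustration: a Yukawa block generated at a single point is rank one, so exactly one of its singular values is non-zero (the numbers below are arbitrary):

```python
import numpy as np

Y_light = np.outer([1.0, 0.3], [1.0, 0.5])        # rank-one 2x2 texture
print(np.linalg.svd(Y_light, compute_uv=False))   # one O(1) value, one exact zero
```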
Nevertheless, the order one Yukawa coefficients may be different because the intersection points need not be at the same enhanced symmetry point. The final structure of the mass matrices is revealed when flux and other threshold effects are taken into account. These issues will not be discussed further here; a more detailed exposition may be found in [49], with other useful discussion to be found in [58].

Neutrino sector

Neutrinos are unique in the realms of currently known matter in that they may have both Dirac and Majorana mass terms. The couplings for these must involve an SU(5) singlet to account for the required right-handed neutrinos, which we might suppose is θ_c = (1, 3)_0. It is evident from Table 5 that the Dirac mass is formed from a handful of couplings at different orders in operators. We also have a Majorana operator for the right-handed neutrinos, which will be subject to corrections due to the θ_d singlet, to which we assign the most general VEV. If we now analyse the operators for the neutrino sector in brief, the two leading order contributions are from the θ_c · F · H_u · θ_a and θ_c · F · H_u · θ_b operators. With the VEV alignments ⟨θ_a⟩ = (a, 0, 0)^T and ⟨H_u⟩ = (v, 0, 0)^T, we have a total matrix for these contributions that displays strong mixing between the second and third generations, with y_0 = y_1 + y_2 + y_3. The higher order operators, θ_c · F · H_u · θ_a · θ_d and θ_c · F · H_u · θ_b · θ_d, will serve to add corrections to this matrix, which may be necessary to generate mixing outside the already evident large 2-3 mixing from the lowest order operators. We use z_i coefficients to denote the suppression expected to affect these couplings due to renormalisability requirements. We need only concern ourselves with the combinations that add contributions to the off-diagonal elements where the lower order operators have not given a contribution, as these lower orders should dominate the corrections. Hence, the remaining allowed combinations will not be considered for the sake of simplicity. If we do this we are left with a matrix of a correspondingly corrected form. The right-handed neutrinos admit Majorana operators of the type θ_c · θ_c · (θ_d)^n, with n ∈ {0, 1, ...}. The n = 0 operator will fill out the diagonal of the mass matrix, while the n = 1 operator fills the off-diagonal. Higher order operators can again be taken as dominated by these first two, lower order operators. The Majorana mass matrix can then be used along with the Dirac mass matrix in order to generate light effective neutrino masses via a see-saw mechanism. The Dirac mass matrix can be summarised as in equation (29). This matrix is rank 3, with a clear large mixing between two generations that we expect to generate a large θ_23. In order to reduce the parameters involved in the effective mass matrix, we will simplify the problem by searching only for solutions where z_1 = z_3 and z_2 = z_4, which significantly narrows the parameter space. We will then define some dimensionless parameters that simplify the matrix; implementing these definitions yields the corresponding form of the Dirac mass matrix. The right-handed neutrino Majorana mass matrix can be approximated if we take only the θ_c · θ_c operator, since this should give a large mass scale to the right-handed neutrinos and dominate the matrix. This will leave the Weinberg operator for effective neutrino mass.

Table 6: Summary of neutrino parameters (e.g. R = 32.0, with range 31.1 → 33.0), using best fit values as found at nu-fit.org, the work of which relies upon [45].
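The see-saw step itself is mechanical. A minimal numerical sketch, with a placeholder Dirac matrix of the 2-3-mixed form described above and a single heavy Majorana scale (all numbers hypothetical):

```python
import numpy as np

def seesaw(mD, MR):
    """Type-I see-saw: light effective neutrino mass matrix."""
    return -mD @ np.linalg.inv(MR) @ mD.T

mD = 1e2 * np.array([[1.0, 0.0, 0.0],     # GeV; strong 2-3 block, as in the text
                     [0.0, 1.0, 1.0],
                     [0.0, 1.0, 1.0]])
MR = 1e14 * np.eye(3)                      # GeV; dominant theta_c.theta_c term
print(seesaw(mD, MR))                      # entries of order 1e-10 GeV ~ 0.1 eV
```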
Here we have also defined a mass parameter m_0. We then proceed to diagonalise this matrix computationally in terms of three mixing angles, as is the standard procedure [42], before attempting to fit the result to experimental inputs.

Analysis

We shall focus on the ratio of the mass squared differences,

R = ∆m^2_32 / ∆m^2_21 ,

which is known due to the well measured mass differences ∆m^2_32 and ∆m^2_21 [45]. These give us a value of R ≈ 32, which we may solve for numerically in our model using Mathematica or another suitable maths package. If we then fit the optimised values to the mass scales measured by experiment, we may predict absolute neutrino masses and further compare them with cosmological constraints. The fit depends on a total of six coefficients, as can be seen from examining the undiagonalised effective mass matrix. Optimising R, we should also attempt to find mixing angles in line with those known to parameterise the neutrino sector, i.e. large θ_23 and θ_12, with a comparatively small (but non-zero) θ_13. This is necessary to obtain results compatible with neutrino oscillation experiments. Table 6 summarises the neutrino parameters the model must be in keeping with in order to be acceptable. We should note that the parameter m_0 will be trivially matched up with the mass differences shown in Table 6. If we take some choice values of three of our five free parameters, we can construct a contour plot for curves of constant R using the other two. Figure 2 shows this for a series of fixed parameters. Each of the lines is for R = 32, so we can see that there is a good deal of flexibility in the parameter space for finding allowed values of the ratio. In order to further determine which parts of the broad parameter space are most suitable for returning phenomenologically acceptable neutrino parameters, we can plot the value of sin^2(θ_12) or sin^2(θ_23) in the same parameter space as Figure 2, namely (Y_1, Y_2). The first plot in Figure 3 shows that the θ_12 constraints are best satisfied at lower values of Y_1, while each line spans a large part of the Y_2 space. The second plot of Figure 3 suggests a preference for comparatively small values of Y_2 based on the constraints on θ_23. As such, we might expect that for this corner of the parameter space there will be some solutions that satisfy all the constraints. Figure 4 also shows a plot of contours of best fitting values of R, with the free variables chosen as Y_3 and Z_1. As before, this shows that for a range of the other parameters, we can usually find suitable values of (Y_3, Z_1) that satisfy the constraints on R. This being the case, we expect that it should be possible to find benchmark points that allow the other constraints to be satisfied as well. This flexibility in the parameter space translates to the other experimental parameters, such that the points that allow experimentally acceptable solutions are abundant enough that we can fit all the parameters quite well. Table 7 shows a collection of so-called benchmark points, which are points in the parameter space where all constraints are satisfied within current experimental errors. Planck data [57] put the sum of neutrino masses at Σm_ν ≤ 0.23 eV, with which the benchmark points are also consistent.
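The fitting loop described above reduces to diagonalising the effective mass matrix at each parameter point and reading off R and the mixing angles; a compact helper (standard PMNS-style extraction for a real symmetric matrix) is:

```python
import numpy as np

def observables(m_eff):
    """Return R = dm32^2/dm21^2 and (theta12, theta23, theta13) in degrees."""
    vals, U = np.linalg.eigh(m_eff)
    order = np.argsort(np.abs(vals))
    m, U = np.abs(vals[order]), U[:, order]
    R = (m[2]**2 - m[1]**2) / (m[1]**2 - m[0]**2)
    s13 = abs(U[0, 2])
    s12 = abs(U[0, 1]) / np.sqrt(1.0 - s13**2)
    s23 = abs(U[1, 2]) / np.sqrt(1.0 - s13**2)
    return R, np.degrees(np.arcsin([s12, s23, s13]))
```

Scanning such a function over (Y_1, Y_2) and contouring R = 32 reproduces plots of the kind shown in Figure 2.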
Proton decay

Proton decay is a recurring problem in many SU(5) GUT models, owing to the "dangerous" dimension six operators of the effective form QQQL/Λ^2. Since there are strong bounds on the proton lifetime (τ_p ≥ 10^33 yr), these operators should be highly suppressed or not allowed in any GUT model. Within the context of SU(5) × A_4 × U(1) in F-theory, these operators arise from effective operators of the type 10 · 10 · 10 · 5̄, where the 5̄ contains the SU(2) lepton doublet and the d^c, while the quark doublet, u^c and e^c arise from the 10 of SU(5). The interaction will be mediated by the H_u and H_d doublets. In the model under consideration, two matter curves are in the 10 representation of the GUT group: T_3 containing the third generation, and T containing the lighter two generations. In general the dangerous operators can be expressed as products of T and T_3 with the 5̄ matter curves. Here, the role of R-symmetry in the model becomes important, since due to the assignment of this symmetry, these operators are all disallowed. Furthermore, the operators which have i = 0 will have net charge due to the U(1)_⊥, requiring them to have flavons to balance the charge. This would offer further suppression in the event that R-symmetry were not enforced. There are also proton decay operators mediated by D-Higgs triplets and their anti-particles, which arise from the same operators; in a similar way, these will be disallowed by R-symmetry, thus preventing proton decay via dimension six operators. The dimension four operators, which are mediated by superpartners of the Standard Model, will also be prevented by R-symmetry. However, even in the absence of this symmetry, the need to balance the charge of the U(1)_⊥ would lead to the presence of additional GUT group singlets in the operators, leading to further, strong suppression.

Unification

The spectrum in Table 4 is equivalent to three families of quarks and leptons plus three families of 5 + 5̄ representations which include the two Higgs doublets that get VEVs. Such a spectrum does not by itself lead to gauge coupling unification at the field theory level, and the splittings which may be present in F-theory cannot be sufficiently large to allow for unification, as discussed in [25]. However, as also discussed in [25], where the low energy spectrum is identical to that of this model (although achieved in a different way), there may be additional bulk exotics capable of restoring gauge coupling unification, and so unification is certainly possible in this model. We refer the reader to the literature for a full discussion.

S_3 models

Motivated by phenomenological explorations of the neutrino properties under S_3, in this section we are interested in SU(5) accompanied by an S_3 discrete symmetry or its subgroup Z_3. More specifically, we analyse monodromies which induce the breaking of SU(5)_⊥ to group factors containing the aforementioned non-abelian discrete group. Indeed, in this section we encounter two such symmetry breaking chains, namely cases ii) and iii) of (2). From the present point of view, novel features are found for case iii). In what follows we present case ii) in brief and then analyse case iii) in detail.

4.1 The C_3 × C_2 spectral cover split

As in the A_4 case, because these discrete groups originate from the SU(5)_⊥, we need to work out the conditions on the associated coefficients a_i.
For C 3 × C 2 split the spectral cover equation is The equations connecting b k 's with a i 's are of the form b k ∼ n a n a 9−n−k , the sum referring to appropriate values of n which can be read off from (42) or from Table 1. We recall that the b k coefficients are characterised by homologies [b k ] = η − k c 1 . Using this fact as well as the corresponding equations b k (a i ) given in the last column of Table 1, we can determine the corresponding homologies of the a i 's in terms of only one arbitrary parameter which we may take to be the homology [a 6 ] = χ. Furthermore the constraint b 1 = a 2 a 6 + a 3 a 5 = 0 is solved by introducing a suitable section λ such that a 3 = −λ a 6 and a 2 = λ a 5 . Apart from the constraint b 1 = 0, there are no other restrictions on the coefficients a i in the case of the S 3 symmetry. If, however, we wish to reduce the S 3 symmetry to A 3 (which from the point of view of low energy phenomenology is essentially Z 3 ), additional conditions should be imposed. In this case the model has an SU (5) × Z 3 × U (1) symmetry. As in the case of A 4 discussed previously, in order to derive the constraints on a k 's for the symmetry reduction S 3 → Z 3 we compute the discriminant, which turns out to be and demand ∆ = δ 2 . In analogy with the method followed in A 4 we re-organise the terms in powers of the x ≡ a 1 : First, we observe that in order to write the above expression as a square, the product a 1 a 3 must be positive definite sign(a 1 a 3 ) = +. Provided this condition is fulfilled, then we require the vanishing of the discriminant ∆ f of the cubic polynomial f (x), namely: This can occur if the non-trivial relation a 3 2 = 27a 0 a 2 3 holds. Substituting back to (43) we find that the condition is fulfilled for a 2 2 ∝ a 1 a 3 . The two constraints can be combined to give the simpler ones a 0 a 3 + a 1 a 2 = 0, a 2 2 + 27a 1 a 3 = 0 The details concerning the spectrum, homologies and flux restrictions of this model can be found in [19,22]. Identifying t 1,2,3 = t a and t 4,5 = t b ( due to monodromies) we distribute the matter and Higgs fields over the curves as follows We have already pointed out that the monodromies organise the SU (5) GU T singlets θ ij obtained from the 24 ∈ SU (5) ⊥ into two categories. One class carries U (1) i -charges and they denoted with θ ab , θ ba while the second class θ aa , θ bb has no t i -'charges'. The KK excitations of the latter could be identified with the right-handed neutrinos. Notice that in the present model the left handed states of the three families reside on the same matter curve. To generate flavour and in particular neutrino mixing in this model, one may appeal for example to the mechanism discussed in [44]. Detailed phenomenological implications for Z 3 models have been discussed elsewhere and will not be presented here. Within the present point of view, novel interesting features are found in 3 + 1 + 1 splitting which will be discussed in the next sections. SU (5) spectrum for the (3, 1, 1) factorisation In this case the relevant spectral cover polynomial splits into three factors according to 5 k=0 b k s 5−k = a 4 s 3 + a 3 s 2 + a 2 s + a 1 (a 5 + sa 6 ) (a 7 + sa 8 ) We can easily extract the equations determining the coefficients b k (a i ), while the corresponding one for the homologies reads [b k ] = η − kc 1 = [a l ] + [a m ] + [a n ], k = 0, 1, . . . 
, 5, k + l + m + n = 18, l, m, n ≤ 8 (46) As in the previous case, in order to embed the symmetry in SU (5) ⊥ , the condition b 1 = 0 has to be implemented. The non-trivial representations are found as follows: The tenplets are determined by b 5 = a 1 a 5 a 7 = 0 As before, the equation for fiveplets is given by Table 1 homology Table 8: Matter curves with their defining equations, homologies, and multiplicities in the case of (3,1,1) factorisation. These, together with the tenplets, are given in Table 8. S 3 and Z 3 models for (3, 1, 1) factorisation In the following we present one characteristic example of F-theory derived effective models when we quotient the theory with a S 3 monodromy. As already stated, if no other conditions are imposed on a k this model is considered as an S 3 variant of the 3 + 1 + 1 example given in [19,22]. In this case the 10 t i , i = 1, 2, 3 residing on a curve -characterised by a common defining equation a 1 = 0 -are organised in two irreducible S 3 representations 2 + 1. The same reasoning applies to the remaining representations. In Table 9 we present the spectrum of a model with N χ = −1 and N ψ = 0. Because singlets play a vital role, here, in addition we include the singlet field spectrum. Notice that the multiplicities of θ i4 , θ 4i are not determined by the U (1) fluxes assumed here, hence they are treated as free parameters. (1) Table 9: Matter content for an SU (5) GU T × S 3 × U (1). S 3 monodromy organises 10 a ,5 a ,5 b and 5 c representations in doublets and singlets. The Yukawa matrices in S 3 Models To construct the mass matrices in the case of S 3 models we first recall a few useful properties. There are six elements of the group in three classes, and their irreducible representations are 1, 1 and 2. The tensor product of two doublets, in the real representation, contains two singlets and a doublet: Thus, if (x 1 , x 2 ) and (y 1 , y 2 ) represent the components of the doublets, the above product gives 1 : (x 1 y 1 + x 2 y 2 ), 1 : (x 1 y 2 − x 2 y 1 ), 2 : The singlets are muliplied according to the rules: 1 ⊗ 1 = 1 and 1 ⊗ 1 = 1. Note that 1 is not an S 3 invariant. With these simple rules in mind, we proceed with the construction of the fermion mass matrices, starting from the quark sector. Quark sector We start our analysis of the Top-type quarks. We see from table 9 that we have two types of operators contribute to the Top-type quark matrix. 1) A tree level coupling: g10 (2) a · 10 (2) 2) Dimension 4 operators: λ 1 10 (1) a and λ 2 10 (2) a · 10 In order to generate a hierarchical mass spectrum we accommodate the charm and top quarks in the 10 (2) a curve and the first generation on the 10 (1) a curve. In this case, only the first (tree level) coupling contributes to the Top quark terms. Using the S 3 algebra above while choosing 5 1 a = H u = υ u and θ 1 a = θ 0 , θ 2 a = (θ 1 , 0) T we obtain the following mass matrix for the Top-quarks Because two generations live on the same matter curve (10 a curve) we implement the Rank theorem. For this reason we have suppressed the element-22 in the matrix above with a small scale parameter . The quark eigenmasses are obtained from For reasonable values of the parameters this matrix leads to mass eigenvalues with the required mass hierarchy and a Cabbibo mixing angle. The smaller mixing angles are expected to be generated from the down quark mass matrix. 
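The S_3 product rules used above can be verified directly in the real two-dimensional representation, where the doublet transforms under a rotation by 2π/3 and a reflection: the combination x_1 y_1 + x_2 y_2 is a true invariant, while x_1 y_2 − x_2 y_1 picks up the determinant of the group element (the 1′ behaviour):

```python
import numpy as np

c, s = np.cos(2*np.pi/3), np.sin(2*np.pi/3)
gens = [np.array([[c, -s], [s, c]]),           # order-3 rotation
        np.array([[1.0, 0.0], [0.0, -1.0]])]   # reflection

def wedge(u, v):                               # x1*y2 - x2*y1
    return u[0]*v[1] - u[1]*v[0]

x, y = np.random.rand(2), np.random.rand(2)
for g in gens:
    gx, gy = g @ x, g @ y
    assert np.isclose(gx @ gy, x @ y)                               # 1 : invariant
    assert np.isclose(wedge(gx, gy), np.linalg.det(g)*wedge(x, y))  # 1': sign flips
print("S3 singlet and pseudo-singlet behave as claimed")
```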
Indeed, the following Yukawa couplings emerge for the Bottom-type quarks:

1) First generation: g_1 10
2) Second and third generations: g_2 10
3) First-second and first-third generations: g_3 10_a and g_4 10
4) Second-third generations: g_5 10

We assume that the doublet H_d ∈ 5̄_b^(1) and the GUT singlet θ_a^(2) (an S_3 doublet) develop VEVs designated as ⟨H_d⟩ = υ_d and ⟨θ_a^(2)⟩ = (θ_1, θ_2)^T. Then, applying the S_3 algebra, the Yukawa couplings above induce the corresponding mass matrix (52) for the Bottom-type quarks. For appropriate singlet VEVs the structure of the Bottom quark mass matrix is capable of reproducing the hierarchical mass spectrum and the required CKM mixing.

Leptons

The charged leptons will have the same couplings as the Bottom-type quarks. To simplify the analysis, let us start with a simple case where the singlet VEVs exhibit the hierarchy θ_2 < θ_1 < θ_0. Furthermore, taking the limit θ_2 → 0 and switching off the Yukawa coefficients g_3, g_4 in (52), we achieve a block diagonal form of the charged lepton matrix with eigenvalues

m_e = g_1 θ_0 ,   m_μ = g_2 θ_0 − g_5 θ_1 ,   m_τ = g_2 θ_0 + g_5 θ_1    (54)

and maximal mixing between the second and third generations. We now turn our attention to the couplings of the neutrinos. We identify the right-handed neutrinos with the SU(5) singlets θ_c = 1_ij. Under the S_3 symmetry, θ_c splits into a singlet, named θ_c^(1), and a doublet, θ_c^(2). As in the case of the quarks and the charged leptons, we distribute the right-handed neutrino species accordingly. The Dirac neutrino mass matrix arises from the corresponding couplings and takes the form (55) (for θ_2 → 0). Although the Dirac mass matrix has the same form as the charged lepton matrix (52), in general they carry different Yukawa coefficients. Thus, substantial mixing effects may occur even in the case of a diagonal heavy Majorana mass matrix. In the following we construct effective neutrino mass matrices compatible with the well known neutrino data in two different ways. In the first approach we take the simplest scenario of a diagonal heavy Majorana mass matrix and generate the TB-mixing by combining charged lepton and neutrino block-diagonal textures. In the second case we consider the most general form of the Majorana matrix and try to generate TB-mixing from the neutrino sector alone.

Block diagonal case

We start with the attempt to generate the TB-mixing combining charged lepton and neutrino block-diagonal textures. The Majorana matrix will simply be the identity matrix scaled by a RH-neutrino mass M. The effective neutrino mass matrix M_eff = M_D M_M^{-1} M_D^T now reads

M_eff^ν ∝ [ y_1^2 θ_0^2 + y_3^2 θ_1^2         (y_2 y_3 + y_1 y_4) θ_0 θ_1          y_3 y_5 θ_1^2
            (y_2 y_3 + y_1 y_4) θ_0 θ_1       y_2^2 θ_0^2 + (y_4^2 + y_5^2) θ_1^2    2 y_2 y_5 θ_0 θ_1
            y_3 y_5 θ_1^2                     2 y_2 y_5 θ_0 θ_1                      y_2^2 θ_0^2 + y_5^2 θ_1^2 ]

where we used the Dirac mass matrix as given in (55). First of all we observe that we can reduce the number of parameters by suitable redefinitions; M_eff^ν is then written in terms of these. In the limit of a small y_5 Yukawa (or c → 0) we achieve a block diagonal form, which can be diagonalised by a unitary matrix. Now, we may appeal to the block diagonal form of the charged lepton matrix (53), which introduces a maximal θ_23 angle, to obtain the final mixing. Moreover, diagonalisation of the neutrino mass matrix yields tan(2θ_12) = 2(xα + yb). The TB-mixing matrix now arises for tan(2θ_12) ≈ 2.828.
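The quoted value 2.828 is just 2√2, the value of tan(2θ_12) at the tri-bimaximal solar angle sin^2 θ_12 = 1/3; a one-line numerical check:

```python
import numpy as np

theta12 = np.arcsin(np.sqrt(1/3))         # TB solar angle
print(np.tan(2*theta12), 2*np.sqrt(2))    # both ~ 2.8284
```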
In figure (5) we plot contours for the above relation in the plane (α, x) for various values of the pairs (b, y).As can be observed, tan (2θ 12 ) takes the desired value for reasonable range of the parameters α, b, x, y. For example We conclude that the simplified (block-diagonal) forms of the charged lepton and neutrino mass matrices are compatible with the TB-mixing. It is easy now to obtain the known deviations of the TB-mixing allowing small values for the parameters c, θ 2 in (53) and (58) respectively. However, we also need to reconcile the ratio of the mass square differences R = ∆m 2 32 /∆m 2 21 with the experimental data R ≈ 32. To this end, we first compute the mass eigenvalues of the effective neutrino mass matrix Notice that ∆ is a positive quantity and as a result m 3 > m 2 . We can find easily solutions for a wide range of the parameters consistent with the experimental data. Note that for the same values as in (61) we achieve a reasonable value of R ≈ 28.16. In figure(6) we plot contours of the ratio in the plane (α, b) for various values of the pair (x, y). We have stressed above that we could generate the θ 13 angle by assuming small values of the Yukawas y 5 . However, this case turns out to be too restrictive since the structure of (58) results to maximal (1 − 2) mixing in contradiction with the experiment. The issue could be remedied by a fine-tuning of the charged lepton mixing, however we would like to look up for a natural solution. Therefore, we proceed with other options. TB mixing from neutrino sector. In the previous analysis we considered the simplest scenario for the Majorana matrix. The general form of the Majorana mass matrix arises by taking into account all the possible flavon terms contributions and has the following form To reduce the number of parameters we consider that f i = f for i = 1, 2, 3 and y 3 , y 4 → 0 in the Dirac matrix. In this case the elements of the effective neutrino mass matrix are with an overall factor ∼ υ 2 u θ 2 1 M 2 (2f 3 −2f 2 m−f 2 M +m 2 M ) and the parameters are defined as a = m/M , b = f /M , c = y 5 , x = y 2 , y = y 1 and θ 0 = θ 1 . The matrix assumes the general structure: Maximal atmospheric neutrino mixing and θ 13 = 0 immediately follow from this structure. The solar mixing angle θ 12 is not predicted, but it is expected to be large. Next we try to generate TB -mixing only from the neutrino sector (assuming that the charged lepton mixing is negligible so that it can be used to lift θ 13 = 0). Then, it is enough to compare the entries of the effective mass matrix with the most general mass matrix form which complies with TB-mixing A quick comparison results to the following simple relations while the (23) element is subject to the constraint: which results to a quadratic equation of b with solutions being functions of the remaining parameters b = B ± (a, c, x, y). We choose one of the roots, b = B − , and substitute it back to the equations (66) to express the parameters u, v and w as functions of (a, c, x, y). The requirement that all the large mixing effects emerge from the neutrino sector imposes severe restrictions on the parameter space. Hence we need to check their compatibility with the mass square differences ratio R. We can express the latter as a function of the parameters R = R(a, c, x, y) by noting that the mass eigenvalues are given by Direct substitution gives the desired expression R(a, c, x, y) which is plotted in figure 7. 
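One convenient explicit choice for "the most general mass matrix form which complies with TB-mixing" is the texture below, written with our u, v, w labels and with the (2,3) entry fixed to u + v − w so that all row sums are equal; it is diagonalised exactly by the tri-bimaximal matrix (this parameterisation, though standard, is an assumption here):

```python
import numpy as np

u, v, w = 0.7, 0.3, 0.5                        # arbitrary test values
M = np.array([[u, v, v],
              [v, w, u + v - w],
              [v, u + v - w, w]])

U_TB = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0.0],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

print(np.round(U_TB.T @ M @ U_TB, 12))         # diag(u - v, u + 2v, 2w - u - v)
```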
It is straightforward to notice that there is a wide range of parameters consistent with the experimental data. In the first graph of the figure we plot contours for the ratio in the plane (x, y) for various values of a and constant value c = 0.5. In the second graph we plot the ratio in the (a, c) plane with constant x = 0.33. Note that in both cases, the a, c, x, y parameters take values < 1. Having checked that the parameters a, c, x, y are in the perturbative range, while consistent with the TB-mixing and the mass data, we also should require that b = f /M remains in the perturbative regime, i.e. b < 1. In figure 8 we plot the bounds put by this constraint. In particular we plot the mass square ratio in the (x, y) plane for R = 30 and R = 34 and we notice that there exists an overlapping region for values of b between 0.5 and 0.6. In this region Conclusions In this work we considered the phenomenological implications of F-theory SU (5) models with non-abelian discrete family symmetries. We discussed the physics of these constructions in the context of the spectral cover, which, in the elliptical fibration and under the specific choice of SU (5) GUT, implies that the discrete family symmetry must be a subgroup of the permutation symmetry S 5 . Furthermore, we exploited the topological properties of the associated 5-degree polynomial coefficients (inherited from the internal manifold) to derive constraints on the effective field theory models. Since we dealt with discrete gauge groups, we also proposed a discrete version of the flux mechanism for the splitting of representations. We started our analysis splitting appropriately with the spectral cover in order to implement the A 4 discrete symmetry as a subgroup of S 4 . Hence, using Galois Theory techniques, we studied the necessary conditions on the discriminant in order to reduce the symmetry from S 4 to A 4 . Moreover, we derived the properties of the matter curves accommodating the massless spectrum and the constraints on the Yukawa sector of the effective models. Then, we first made a choice of our flux parameters and picked up a suitable combination of trivial and non-trivial A 4 representations to accommodate the three generations so that a hierarchical mass spectrum for the charged fermion sector is guaranteed. Next, we focused on the implications of the neutrino sector. Because of the rich structure of the effective theory emerging from the covering E 8 group, we found a considerable number of Yukawa operators contributing to the neutrino mass matrices. Despite their complexity, it is remarkable that the F-theory constraints and the induced discrete symmetry organise them in a systematic manner so that they accommodate naturally the observed large mixing effects and the smaller θ 13 angle of the neutrino mixing matrix. In the second part of the present article, using the appropriate factorisation of the spectral cover we derive the S 3 group as a family symmetry which accompanies the SU (5) GUT. Because now the family symmetry is smaller than before, the resulting fermion mass structures turn out to be less constrained. In this respect, the A 4 symmetry appears to be more predictive. Nevertheless, to start with, we choose to focus on a particular region of the parameter space assuming some of the Yukawa matrix elements are zero and imposing a diagonal heavy Majorana mass matrix. 
In such cases, we can easily derive block diagonal lepton mass matrices which incorporate large neutrino mixing effects as required by the experimental data. Next, in a more involved example, we allow for a general Majorana mass matrix and initially determine stable regions of the parameter space which are consistent with TB-mixing. The tiny θ 13 angle can easily arise from small deviations of these values or by charged lepton mixing effects. Both models derived here satisfy the neutrino mass squared difference ratio predicted by neutrino oscillation experiments. In conclusion, F-theory SU (5) models with non-abelian discrete family symmetries provide a promising theoretical framework within which the flavour problem may be addressed. The present paper presents the first such realistic examples based on A 4 and S 3 , which are amongst the most popular discrete symmetries used in the field theory literature in order to account for neutrino masses and mixing angles. By formulating such models in the framework of F-theory SU (5), a deeper understanding of the origin of these discrete symmetries is obtained, and theoretical issues such as doublet-triplet splitting may be elegantly addressed. A.1 Four dimensional case From considering the symmetry properties of a regular tetrahedron, we can see quite easily that it can be parameterised by four coordinates and its transformations can be decomposed into a mere two generators. If we write these coordinates as a basis for A 4 , which is the symmetry group of the tetrahedron, it would be of the form (t 1 , t 2 , t 3 , t 4 ) T . The two generators can then be written in matrix form explicitly as: However, it is well known that A 4 has an irreducible representation in the form of a singlet and triplet under these generators. If we consider the tetrahedron again, this can be physically interpreted by observing that under any rotation through one of the vertices of the tetrahedron the vertex chosen remains unmoved under the transformation. 8 In order to find the irreducible representation, we must note some conditions that this decomposition will satisfy. In order to obtain the correct basis, we must find a unitary transformation V that block diagonalises the generators of the group. As such, we have the following conditions: as well as the usual conditions that must be satisfied by the generators: S 2 = T 3 = (ST ) 3 = I. It will also be useful to observe three extra conditions, which will expedite finding the solution. Namely that the block diagonal of one of the two generators must have zeros on the diagonal to insure the triplet changes within itself. If we write an explicit form for V, we can extract a set of quadratic equations and attempt to solve for the elements of the matrix. Note that we have assumed as a starting point that v ij ∈ R∀i, j. The complete list is included in the appendix. The problem is quite simple, but at the same time would be awkward to solve numerically, so we shall attempt to simplify the problem analytically first. If we start be using: we can trivially see two quadratics, Since we assume that all our elements or V are real numbers, it must be true then that: We may now substitute this result into a number of equations. However, we chose to focus on the following two: Taking the difference of these two equations, we can easily see there is a solution where v 11 = v 12 , and as such by the previous result: We are free to choose whichever sign for these four elements we please, provided they all have the same sign. 
This outcome reduces the number of useful equations to twelve, as nine of them can be summarised as Let us consider the first of these three derived conditions, along with the conditions: Squaring the condition i v 2i = 0 and using these relations, we can derive easily that v 21 = ± 1 2 . Likewise we can derive the same for v 31 and v 41 . As before, we might chose either sign for each of these elements, with each possibility yielding a different outcome for the basis, though our choices will constrain the signs of the remaining elements in V. Let us make a choice for the signs of our known coefficients in the matrix and choose them all to be positive for simplicity. We are now left with a much smaller set of conditions: After a few choice rearrangements, these coefficients can be calculated numerically in Mathematica. This yields a unitary matrix, up to exchanges of the bottom three rows, which arises due to the fact the triplet arising in this representation may be ordered arbitrarily. There is also a degree of choice involved regarding the sign of the rows. However, this is again largely unimportant as the result would be equivalent. If we apply this transformation to our original basis t i , we find that we have a singlet and a triplet in the new basis, and that our generators become block-diagonal: B Yukawa coupling algebra Table 5 specifies all the allowed operators for the N = 0 SU (5) × A 4 × U (1) model discussed in the main text. Here we include the full algebra for calculation of the Yukawa matrices given in the text. All couplings must have zero t 5 charge, respect R-symmetry and be A 4 singlets. In the basis derived in Appendix A, we have the triplet product: B.1 Top-type quarks The top-type quarks have four non-vanishing couplings, while the T · T 3 · H u · θ a · θ a and T ·T ·H u ·θ a ·θ a ·θ b couplings vanishings due to the chosen vacuum expectations: H u = (v, 0, 0) T and θ a = (a, 0, 0) T . The contribution to the heaviest generation self-interaction is due to the T 3 · T 3 · H u · θ a operator: We note that this is the lowest order operator in the top-type quarks, so should dominate the hierarchy. The interaction between the third generation and the lighter two generations is determined by the T · T 3 · H u · θ a · θ b operator: The remaining, first-second generation operators give contributions, in brief: These will be subject to Rank Theorem arguments, so that only one of the generations directly gets a mass from the Yukawa interaction. However the remaining generation will gain a mass due to instantons and non-commutative fluxes, as in [37] [38]. B.2 Charged Leptons The charged Leptons and Bottom-type quarks come from the same operators in the GUT group, though in this exposition we shall work in terms of the Charged Leptons. The complication for Charged leptons is that the Left-handed doublet is an A 4 triplet, while the right-handed singlets of the weak interaction are singlets of the monodromy group. There are a total of six contributions to the Yukawa matrix, with the third generation right-handed types being generated by two operators. 
The operators giving mass to the interactions of the right-handed third generation are dominated by the tree level operator F · H d · T 3 , which gives a contribution as: Clearly this should dominated the next order operator, however when we choose a vacuum expectation for the H d field, we will have contributions from F · H d · T 3 · θ d : The generation of Yukawas for the lighter two generations comes, at leading order, from the operators F · H d · T · θ b and F · H d · T · θ a : where the vacuum expectations for θ a and θ b are as before. The next order of operator take the same form, but with corrections due to the flavon triplet, θ d . B.3 Neutrinos The neutrino sector admits masses of both Dirac and Majorana types. In the A 4 model, the right-handed neutrino is assigned to a matter curve constituting a singlet of the GUT group. However it is a triplet of the A 4 family symmetry, which along with the SU (2) doublet will generate complicated structures under the group algebra. B.3.1 Dirac Mass Terms The Dirac mass terms coupling left and right-handed neutrinos comes from a maximum of four operators. The leading order operators are θ c · F · H u · θ b and θ c · F · H u · θ a , where as we have already seen the GUT singlet flavons θ a and θ b are used to cancel t 5 charges. The right-handed neutrino is presumed to live on the GUT singlet θ d . The first of the operators, θ c · F · H u · θ b , contributes via two channels: With the VEV alignments θ a = (a, 0, 0) T and H u = (v, 0, 0) T , we have a total matrix for the operator: The second leading order operator, θ c · F · H u · θ a , is more cimplicated due to the presence of four A 4 triplet fields. The simpelst contribution to the operator is: which only contributes to the diagonal. This is accompanied by two similar operators in the way of: The remaining contribtuions are the complicated four-triplet products. However, upon retaining to our previous vacuum expectation values, these will all vanish, leaving an overall matrix of: Where y 0 = y 1 + y 2 + y 3 as before. These contributions will produce a large mixing between the second and third generations, however they do not allow for mixing with the first generation. Corrections from the next order operators will give a weaker mixing with the first generation. These correcting terms are θ c · F · H u · θ d · θ b and θ c · F · H u · θ d · θ a , though we choose to only consider the first of these two operators, since the flavon θ a will generate a very complicated structure, hindering computations with little obvious benefit in terms of model building. The θ c · F · H u · θ d · θ b operator has of diagonal contributions as: This is mirrored by similar combinations from the other 3 triplet-triplet combinations allowed by the algebra. Overall, this gives: Due to the choice of Higgs vacuum expectation, the diagonal contributions will only correct the first generation mass, giving a contribution to it ∼ vd 1 b. B.3.2 Majorana operators The right-handed neutrinos are also given a mass by Majorana terms. These are as it transpires relatively simple. The leading order term θ c · θ c , gives a diagonal contribtuion: There may also be corrections to the off diagonal, due to operators such as θ c · θ c · θ d . These yield: Higher orders of the flavon θ d are also permitted, but should be suppressed by the coupling. C Flux mechanism For completeness, we discribe here in a simple manner the flux mechanism introduced to break symmetries and generate chirality. 
• We start with the U (1) Y -flux inside of SU (5) GU T . The 5's and 10's reside on matter curves Σ 5 i , Σ 10 j while are characterised by their defining equations. From the latter, we can deduce the corresponding homologies χ i following the standard procedure. If we turn on a U (1) Y -flux F Y , we can determine the flux restrictions on them which are expressed in terms of integers through the "dot product" The flux is responsible for the SU (5) breaking down to the Standard Model and this can happen in such a way that the U (1) Y gauge boson remains massless [3,2]. On the other hand, flux affects the multiplicities of the SM-representations carrying non-zero U (1) Y -charge. Thus, on a certain Σ 5 i matter curve for example, we have where N Y i = F Y · χ i as above. We can arrange for example M 5 + N Y i = 0 to eliminate the doublets or M 5 = 0 to eliminate the triplet. • Let's turn now to the SU (5) × S 3 . The S 3 factor is associated to the three roots t 1,2,3 which can split to a singlet and a doublet 1 S 3 = t s = t 1 + t 2 + t 3 , 2 S 3 = {t 1 − t 2 , t 1 + t 2 − 2t 3 } T It is convenient to introduce the two new linear combinations t a = t 1 − t 3 , t b = t 2 − t 3 and rewrite the doublet as follows Under the whole symmetry the SU (5) GU T 10 t i , i = 1, 2, 3 representations transform (10, 1 S 3 ) + (10, 2 S 3 ) Our intention is to turn on fluxes along certain directions. We can think of the following two different choices: 1) We can turn on a flux N a along t a 9 . The singlet (10, 1 S 3 ) does not transform under t a , hence this flux will split the multiplicities as follows This choice will also break the S 3 symmetry to Z 3 . 2) Turning on a flux along the singlet direction t s will preserve S 3 symmetry. The multiplicities now read To get rid of the doublets we choose M = 0 while because flux restricts non-trivially on the matter curve, the number of singlets can differ by just choosing N s = 0. 9 In the old basis we would require Nt 1 = 2 3 Na and Nt 2 = Nt 3 = − 1 3 Na. D The b 1 = 0 constraint To solve the b 1 = 0 constraint we have repeatidly introduced a new section a 0 and assumed factorisation of the involved a i coefficients. To check the validity of this assumption, we take as an example the S 3 × Z 2 case, where b 1 = a 2 a 6 + a 3 a 5 = 0. We note first that the coefficients b k are holomorphic functions of z, and as such they can be expressed as power series of the form b k = b k,0 + b k,1 z + · · · where b k,m do not depend on z. Hence, the coefficients a k have a z-independent part a k = m=0 a k,m z m while the product of two of them can be cast to the form a l a k = p=0 β p z p , with β p = p n=0 a ln a k,p−n Clearly the condition b 1 = a 2 a 6 + a 3 a 5 = 0 has to be satisfied term-by-term. To this end, at the next to zeroth order we define λ = a 3,1 a 5,0 + a 2,1 a 6,0 a 5,1 a 6,0 − a 5,0 a 6,1 The requirement a 5,1 a 6,0 = a 5,0 a 6,1 ensures finiteness of λ, while at the same time excludes a relation of the form a 5 ∝ κa 6 where κ would be a new section. We can write the expansions for a 2 , a 3 as follows i.e., satified up to second order in z. Hence, locally we can set z = 0 and simply write a 2 = λ a 5 , a 3 = −λ a 6
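The order-by-order statement is easy to confirm symbolically. The sketch below expands a_2 and a_3 with the zeroth-order pieces fixed by λ and checks that b_1 = a_2 a_6 + a_3 a_5 vanishes through first order in z:

```python
import sympy as sp

z = sp.symbols('z')
a21, a31, a50, a51, a60, a61 = sp.symbols('a21 a31 a50 a51 a60 a61')

lam = (a31*a50 + a21*a60) / (a51*a60 - a50*a61)   # the section defined above
a5, a6 = a50 + a51*z, a60 + a61*z
a2 = lam*a50 + a21*z                               # a_{2,0} = lambda * a_{5,0}
a3 = -lam*a60 + a31*z                              # a_{3,0} = -lambda * a_{6,0}

b1 = sp.expand(a2*a6 + a3*a5)
print(sp.simplify(b1.coeff(z, 0)), sp.simplify(b1.coeff(z, 1)))   # -> 0 0
```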
Exploration of Shared Gene Signatures and Molecular Mechanisms Between Periodontitis and Nonalcoholic Fatty Liver Disease Background: Periodontitis is associated with periodontal tissue damage and teeth loss. Nonalcoholic fatty liver disease (NAFLD) has an intimate relationship with periodontitis. Nevertheless, interacted mechanisms between them have not been clear. This study was intended for the exploration of shared gene signatures and latent therapeutic targets in periodontitis and NAFLD. Methods: Microarray datasets of periodontitis and NAFLD were obtained from the Gene Expression Omnibus (GEO) database. The weighted gene co-expression network analysis (WGCNA) was utilized for the acquisition of modules bound up with NAFLD and periodontitis. We used ClueGO to carry out biological analysis on shared genes to search their latent effects in NAFLD and periodontitis. Another cohort composed of differential gene analysis verified the results. The common microRNAs (miRNAs) in NAFLD and periodontitis were acquired in the light of the Human microRNA Disease Database (HMDD). According to miRTarbase, miRDB, and Targetscan databases, latent target genes of miRNAs were forecasted. Finally, the miRNAs–mRNAs network was designed. Results: Significant modules with periodontitis and NAFLD were obtained via WGCNA. GO enrichment analysis with GlueGo indicated that damaged migration of dendritic cells (DCs) might be a common pathophysiologic feature of NAFLD and periodontitis. In addition, we revealed common genes in NAFLD and periodontitis, including IGK, IGLJ3, IGHM, MME, SELL, ENPP2, VCAN, LCP1, IGHD, FCGR2C, ALOX5AP, IGJ, MMP9, FABP4, IL32, HBB, FMO1, ALPK2, PLA2G7, MNDA, HLA-DRA, and SLC16A7. The results of differential analysis in another cohort were highly accordant with the findings of WGCNA. We established a comorbidity model to explain the underlying mechanism of NAFLD secondary to periodontitis. Finally, the analysis of miRNA pointed out that hsa-mir-125b-5p, hsa-mir-17-5p, and hsa-mir-21-5p might provide potential therapeutic targets. Conclusion: Our study initially established a comorbidity model to explain the underlying mechanism of NAFLD secondary to periodontitis, found that damaged migration of DCs might be a common pathophysiological feature of NAFLD and periodontitis, and provided potential therapeutic targets. INTRODUCTION Nonalcoholic fatty liver disease (NAFLD), accompanied by varying levels of hepatic fat accumulation, can gradually progress to nonalcoholic steatohepatitis, cirrhosis, and hepatocellular carcinoma, which has fatal consequences (Wesolowski et al., 2017). It was reported that the prevalence of NAFLD accounted roughly 25%, with the prospect of further increase according to expanding populations with metabolic syndrome (Younossi et al., 2016;Estes et al., 2018). According to the pathophysiology of NAFLD, some kinds of medical treatments with respective effects are being assessed in clinical trials. It is regrettable that these drug candidates have been found bringing unpalatable side effects or are limited by efficacy (Alkhouri et al., 2020;Younossi et al., 2021;Barritt et al., 2022). Currently, there has been a lively investigation over the participation of periodontitis in the occurrence and development of NAFLD. Some scholars even believed that there was a comorbidity effect between the two diseases (Rosato et al., 2019). 
Suffering from oral microbial imbalance brought about by anaerobic Gram-negative bacteria chiefly, periodontitis is associated with periodontal tissue damage and teeth loss (Kuraji et al., 2021). It was reported that there were 1.1 billion people with severe periodontitis worldwide in 2019 (Chen et al., 2021). Actually, mechanical debridement is hard to absolutely clear periodontitis infection and prolonged antibiotic exposure is effective but unsafe (Rotundo et al., 2010;Rams et al., 2020). From the beginning, periodontitis has contributed to the development of NAFLD owing to systemic inflammation and oxidative stress on the basis of vitro study (Tomofuji et al., 2007). Then, Porphyromonas gingivalis, the main pathogenic bacteria of periodontitis, resulted in the development of NAFLD, above which academic discussion had continued ever since (Furusho et al., 2013;Nagasaki et al., 2021;Yamazaki et al., 2021). Epidemiological investigation reported that NAFLD incidence was increasing with the combination of periodontitis, which could increase the risk of progression to liver fibrosis as well (Akinkugbe et al., 2017a;Akinkugbe et al., 2017b;Iwasaki et al., 2018;Suominen et al., 2019;Kuroe et al., 2021). The potential associations between periodontitis and NAFLD has been discussed from in vitro, in vivo, and epidemiologic perspectives, but the genetic and biological mechanisms of connection between periodontitis and NAFLD is unknown. Although most studies suggest that periodontitis can affect NAFLD outcomes, the effect of genetic and biological mechanisms might be bidirectional and extremely valuable. In order to have insights into the mechanisms of diseases, gene microarray technology is developed, which can generate thousands of gene expression data in various diseases. Despite periodontitis and NAFLD being two relatively independent pathological process, periodontitis feels more like a trigger, once it is lit, it will quicken NAFLD aggravation. To explain the trigger, the weighted gene coexpression network analysis (WGCNA) was applied to seek the clusters of shared genes in periodontitis and NAFLD. This method has been utilized to explain genetic mechanism related to various disease phenotypes effectively (Zhu et al., 2020;Yao et al., 2021). Through the deep analysis of the Gene Expression Omnibus (GEO) database, we found that genes related to "dendritic cell migration" were presented in modules hugely relevant to periodontitis and NAFLD, which meant that biological pathway "dendritic cell migration" might play a significant role in periodontitis and NAFLD. In addition, the unique gene signatures in periodontitis and NAFLD were also identified and microRNAs (miRNAs) might play a regulatory role. So far as we know, this is the first study to utilize the bioinformation technique to explain the gene signatures between periodontitis and NAFLD, which is expected to provide new diagnostic and therapeutic windows for these two diseases. Download and Preprocessing of the Gene Expression Omnibus Dataset We used the key words "Nonalcoholic Fatty Liver Disease" or "periodontitis" to search NAFLD and periodontitis gene expression profiles in which the data at original or processed state could be for the return to analysis in the GEO database (Barrett et al., 2013). Finally, the GEO dataset numbered GSE16134 was accepted, which contained a total of 241 periodontitis samples and 69 healthy samples. 
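Loading a GEO series matrix of this kind takes only a few lines; a minimal sketch (the file name is the standard GEO download, and the sample count follows the numbers quoted above):

```python
import pandas as pd

# series matrix as downloaded from GEO; '!'-prefixed metadata lines are skipped
expr = pd.read_csv("GSE16134_series_matrix.txt.gz", sep="\t",
                   comment="!", index_col=0)
print(expr.shape)   # probes x 310 samples (241 periodontitis, 69 healthy)
```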
The GSE48452 and GSE63067 microarray datasets, which contain raw transcriptomic data from human liver tissue, were used for NAFLD. In the GSE48452 dataset, 73 human liver samples were grouped, following the original references, into C (control, n = 14), H (healthy obesity, n = 27), S (steatosis, n = 14), and N (NASH, n = 18). In the GSE63067 dataset, two steatosis samples and nine nonalcoholic steatohepatitis (NASH) samples, together with their respective controls, were analyzed as in the original references. The raw data were processed with background correction, normalization, and relative expression calculation. Gene expression profiles were log2-transformed, and probes were matched to gene symbols using the annotation files of the relevant platforms. Ultimately, we obtained an expression matrix whose rows and columns correspond to sample names and gene symbols, respectively, for the subsequent analyses.

Weighted Gene Co-Expression Network Analysis WGCNA is a popular algorithm for identifying biologically meaningful gene co-expression modules and for relating gene networks to disease (Langfelder and Horvath, 2008). Consequently, WGCNA was used to identify modules associated with NAFLD and periodontitis. All differentially expressed genes (DEGs) between healthy and disease samples satisfying p < 0.05 were collected for the WGCNA analysis (Supplementary Data S1). The samples clustered well, with a cut height of 30. Soft thresholds ranging from 1 to 20 were evaluated for network topology, and the optimal soft threshold was identified as 6. Using this soft threshold, the correlation matrix was converted to an adjacency matrix and then into a topological overlap matrix (TOM). Genes were then grouped by average-linkage hierarchical clustering, and modules were defined from the TOM, each containing at least 50 genes. Modules were cut at a height of 0.7, and similar modules were merged. Gene significance (GS) and module membership (MM) were then calculated for each module to produce scatter plots. Finally, we applied Pearson correlation analysis to estimate the association between disease status and the merged modules.

Identification of Shared and Unique Gene Signatures The modules most highly correlated with NAFLD and periodontitis were selected, and the genes shared between the modules positively related to each disease were intersected using a Venn diagram tool (Bardou et al., 2014). ClueGO, a Cytoscape plug-in, classifies non-redundant GO terms and arranges them visually into functionally grouped networks (Bindea et al., 2009). We therefore used ClueGO to carry out biological analysis of the shared genes and to explore their potential roles in NAFLD and periodontitis, with emphasis on the biological process (BP) category of the GO analysis. The gene signatures unique to NAFLD and to periodontitis were distinguished through protein-protein interaction (PPI) network and cluster analysis, the latter computed with the "MCODE" algorithm using default parameters in Cytoscape (version 3.7.2).
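As a concrete illustration of the network construction just described (soft-threshold adjacency with β = 6, TOM, average-linkage clustering, minimum module size of 50, cut height 0.7), here is a simplified Python sketch. It assumes `expr` is a samples × genes matrix; real WGCNA additionally uses dynamic tree cutting and merges modules by eigengene similarity, both omitted here.

```python
# Simplified WGCNA core: unsigned adjacency -> TOM -> hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def wgcna_modules(expr, beta=6, cut_height=0.7, min_size=50):
    """expr: samples x genes array. Returns a module label per gene (0 = none)."""
    adj = np.abs(np.corrcoef(expr, rowvar=False)) ** beta  # soft-threshold adjacency
    np.fill_diagonal(adj, 1.0)
    k = adj.sum(axis=0) - 1.0                              # connectivity per gene
    shared = adj @ adj - 2.0 * adj                         # shared-neighbour weight
    tom = (shared + adj) / (np.minimum.outer(k, k) + 1.0 - adj)
    np.fill_diagonal(tom, 1.0)
    dissim = 1.0 - tom                                     # TOM dissimilarity
    np.fill_diagonal(dissim, 0.0)
    tree = linkage(squareform(dissim, checks=False), method="average")
    labels = fcluster(tree, t=cut_height, criterion="distance")
    counts = np.bincount(labels)
    labels[counts[labels] < min_size] = 0                  # drop tiny modules
    return labels
```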
The Co-Expression Modules in Periodontitis and Nonalcoholic Fatty Liver Disease With the application of WGCNA, four modules were recognized in GSE48452 and GSE63067, each labeled with a different color. To assess the relevance of each module to disease, a heatmap was plotted on the basis of the Spearman correlation coefficient, in which the "green" module showed the highest relevance to NAFLD (Figures 1A,C). This core module (r = 0.77) was positively related to NAFLD and contained 920 genes. Four modules were likewise recognized in GSE16134, among which the "cyan" module was the most strongly and positively related to periodontitis (r = 0.3), containing 522 genes (Figures 1B,D).

The Common Gene Signatures in Periodontitis and Nonalcoholic Fatty Liver Disease Seventy-nine genes overlapped between the disease-relevant core modules of NAFLD and periodontitis; these were designated gene set 1 (GS1). According to current evidence, periodontitis could be an important risk factor for the development of NAFLD. ClueGO was used to examine the potential functions of GS1 through GO enrichment analysis. The top three markedly enriched GO terms for BP were "dendritic cell migration," "regulation of alpha-beta T cell activation," and "cytokine receptor activity" (Figure 2A). Dendritic cell migration accounted for 44.68% of all the GO terms (Figure 2B), suggesting that this pathway might be vital to both NAFLD and periodontitis.

The Unique Gene Signatures in Periodontitis and Nonalcoholic Fatty Liver Disease A PPI network was subsequently established at the protein level for the green module of NAFLD, and MCODE analysis was applied to obtain clusters. Cluster 1 contained 34 nodes and 274 edges (score = 16.606) (Figure 3A), cluster 2 contained 13 nodes and 78 edges (score = 13.000) (Figure 3B), and cluster 3 contained 43 nodes and 209 edges (score = 9.952) (Figure 3C). Functional enrichment analysis showed that cluster 3 was primarily related to dendritic cell migration (Figure 4A); it was therefore inferred that cluster 3 belongs to the gene set common to NAFLD and periodontitis, while the other two clusters were recognized as gene signatures unique to NAFLD. A PPI network was likewise established for the cyan module of periodontitis, and MCODE analysis was again applied. Cluster 1 contained 33 nodes and 387 edges (score = 24.188) (Figure 3D), cluster 2 contained 13 nodes and 71 edges (score = 11.833) (Figure 3E), and cluster 3 contained 13 nodes and 34 edges (score = 5.667) (Figure 3F). Coincidentally, this cluster 3 was also primarily related to dendritic cell migration according to the functional enrichment analysis (Figure 4B). The other two clusters were recognized as gene signatures unique to periodontitis.
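A side note on the MCODE scores quoted above: they are consistent with the usual definition of an MCODE cluster score as density × node count, which for an undirected cluster of n nodes and E edges equals 2E/(n − 1). A quick check against all six reported clusters:

```python
# Reproduce the reported MCODE scores as 2E / (n - 1).
clusters = {
    "NAFLD cluster 1":         (34, 274),  # reported 16.606
    "NAFLD cluster 2":         (13, 78),   # reported 13.000
    "NAFLD cluster 3":         (43, 209),  # reported 9.952
    "periodontitis cluster 1": (33, 387),  # reported 24.188
    "periodontitis cluster 2": (13, 71),   # reported 11.833
    "periodontitis cluster 3": (13, 34),   # reported 5.667
}
for name, (n, e) in clusters.items():
    print(f"{name}: score = {2 * e / (n - 1):.3f}")
```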
Identification and Analysis of Common miRNAs in Periodontitis and Nonalcoholic Fatty Liver Disease Based on the Human microRNA Disease Database (HMDD) (Huang et al., 2019), 43 miRNAs were found to be related to NAFLD and 33 miRNAs to periodontitis (Supplementary Data S2). Five miRNAs overlapped between NAFLD and periodontitis (hsa-mir-125b-5p, hsa-mir-155-5p, hsa-mir-17-5p, hsa-mir-200b-5p, and hsa-mir-21-5p). Enrichment analysis of these five miRNAs revealed the variety of biological functions in which they are involved. Notably, "dendritic cell migration" again appeared among these biological processes in the heatmap, indicating that miRNAs associated with the pathogenesis of NAFLD and periodontitis could also regulate dendritic cell migration (Figure 6A). Hence, our findings were corroborated once more. Using the miRTarBase (Chou et al., 2018), miRDB (Chen and Wang, 2020), and TargetScan (Morovat et al., 2022) databases, potential target genes of the five miRNAs were predicted (Figure 6B). Unfortunately, hsa-mir-155-5p could not be retrieved in the databases, and hsa-mir-200b-5p had no overlapping target genes. Finally, the miRNA-mRNA network was constructed (Figure 6C).

DISCUSSION As noted earlier, NAFLD has a high prevalence among patients with periodontitis, indicating that some susceptibility factors in periodontitis may trigger the initiation and progression of NAFLD. Although it is not yet clear how hazardous factors are delivered from the periodontium to the liver, two routes are widely accepted. The first is blood-borne transmission of bacteria, endotoxin, and inflammatory mediators from the periodontal tissues; the second is delivery of oral bacteria via the digestive tract, which disturbs the balance of the intestinal flora (Kuraji et al., 2021). Whether the harmful medium is periodontal bacteria, lipopolysaccharide and proinflammatory mediators, or intestinal dysbacteriosis, its precise role in the effect of periodontitis on NAFLD requires further study. To date, no studies have examined the susceptibility to NAFLD conferred by periodontitis at the genetic level. With the support of WGCNA, we are the first to examine the common mechanisms of periodontitis and NAFLD. Differentially expressed genes in common were found in the intersection of GS1 and GS2, such as VCAN, LCP1, and ENPP2. The associated functional enrichment included dendritic cell migration, regulation of alpha-beta T cell activation, cytokine receptor activity, dendritic cell chemotaxis, and the neutral lipid catabolic process. Finally, the miRNA-mRNA network was constructed. More importantly, genes related to "dendritic cell migration" were present in the modules most relevant to periodontitis and NAFLD and withstood repeated verification. In addition, miRNAs might play a regulatory role in periodontitis and NAFLD. Playing a major role in innate immunity, dendritic cells (DCs) capture and present antigens and also form the bridge to adaptive immunity (Steinman, 2001). Research shows that transmission of bacteria from periodontal tissues to distant sites via the systemic circulation may occur through highly migratory DCs (Carrion et al., 2012). Porphyromonas gingivalis, a major pathogen in periodontitis, can attack DCs, reduce the expression of proapoptotic proteins, and prolong DC survival. This bacterium not only damages the immune homeostasis of DCs but also disrupts DC homing to secondary lymphoid organs, the latter of which allows the inflammation to spread to the vascular circulation (Miles et al., 2014). It can, moreover, evade intracellular killing in DCs by targeting the dendritic cell-specific intercellular adhesion molecule-3-grabbing nonintegrin (El-Awady et al., 2015). Given the diversity of the oral microbiota, however, Porphyromonas gingivalis does not act alone. In a previous study, a consortium of three oral microorganisms, Streptococcus gordonii, Fusobacterium nucleatum, and Porphyromonas gingivalis, promoted bacterial growth, invasion, and persistence within DCs and suppressed DC maturation through coordinated effects, thereby enabling microbial transmission and the spread of inflammation (El-Awady et al., 2019).
Even setting aside the effects of the bacteria themselves, the lipopolysaccharide and proinflammatory cytokines that originate from periodontitis and produce a low-grade systemic inflammatory state are closely related to DCs (Kanaya et al., 2004; Jardine et al., 2019; Psarras et al., 2021). With receptors distributed extensively throughout the human body, even enzymatically inactive gingipains, critical virulence factors of Porphyromonas gingivalis, elicit a proinflammatory response in DCs (Ciaston et al., 2022). All in all, DCs not only play a central role in initiating and exacerbating periodontitis but may also contribute to the development of systemic diseases related to periodontitis, one of which is NAFLD. It is generally known that intestinal microbial imbalance is intimately connected to NAFLD. First, abnormal changes in the abundance of bacterial phyla affect the severity of NAFLD (Boursier et al., 2016). Second, metabolites of intestinal bacteria cause fatty degeneration of liver cells, insulin resistance, and hepatic fibrosis (Ji et al., 2019). Third, endotoxemia attributable to increased intestinal permeability is related to the pathogenesis of NAFLD (Wang et al., 2022). However, the mechanism by which intestinal flora imbalance induced by oral bacteria contributes to NAFLD has remained unclear. Studies have indicated that Porphyromonas gingivalis plays a major role in this process by interfering with metabolic and immune profiles (Wang et al., 2022). Whether DCs also affect the transmission of pathogenic bacteria and their toxic metabolites to the liver through the portal vein is not clear; what is established is that the physiological activity of DCs can be affected by intestinal microbes (Yang et al., 2021). On the other hand, numerous studies have shown that migratory DCs can drive the induction of enteric regulatory T cells to manage commensal bacteria or to establish oral tolerance to dietary antigens (Esterházy et al., 2016; Esterházy et al., 2019; Russler-Germain et al., 2021). Although the mechanism of action of DCs in NAFLD is not completely clear, existing studies have confirmed their important role. DCs play a proinflammatory role in animal models of nonalcoholic steatohepatitis (NASH): upon depletion of CD11c+ DCs or CD103+ DCs, decreased expression of proinflammatory cytokines and chemokines can prevent liver fibrosis (Nati et al., 2016; Schuster et al., 2018). Recent research has also shown that depletion of type 1 conventional DCs attenuates liver pathology in NASH mouse models (Deczkowska et al., 2021). As noted previously, NAFLD has a high morbidity in periodontitis, indicating that predisposing factors in periodontitis could trigger NAFLD. In our modeling, both the discovery cohort and the validation cohort led to the conclusion that dendritic cell migration features prominently in the gene function enrichment analysis, and previous studies support this view. Consequently, impaired migration of DCs might be a common pathophysiological feature of NAFLD and periodontitis, meaning that dendritic cell migration plays a key role and provides a critical therapeutic target in the comorbidity model. miRNAs, as endogenous noncoding regulatory RNAs, play major roles in posttranscriptional gene regulation. We constructed the miRNA-mRNA network with the help of the HMDD, miRTarBase, miRDB, and TargetScan databases.
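A sketch of how such a network table can be assembled is shown below. It assumes a consensus rule in which a target is kept only when all three prediction databases agree; this matches the paper's mention of overlapping target genes but is not spelled out there, and the gene names are placeholders, not actual database contents.

```python
# Hedged sketch: build miRNA-mRNA edges from the consensus of three databases.
predicted = {
    "hsa-mir-125b-5p": {
        "miRTarBase": {"GENE_A", "GENE_B", "GENE_C"},  # placeholder targets
        "miRDB":      {"GENE_A", "GENE_C", "GENE_D"},
        "TargetScan": {"GENE_A", "GENE_C"},
    },
    # hsa-mir-17-5p and hsa-mir-21-5p would be handled identically.
}

edges = []
for mirna, sources in predicted.items():
    consensus = set.intersection(*sources.values())    # agreed by all databases
    edges.extend((mirna, gene) for gene in sorted(consensus))

print(edges)  # [('hsa-mir-125b-5p', 'GENE_A'), ('hsa-mir-125b-5p', 'GENE_C')]
```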
Interestingly, although the target genes of the common miRNAs had no intersection with GS1 and GS2, they were still enriched in "dendritic cell migration," which might reflect indirect interactions among genes. Among these miRNAs, epigenetic silencing of miR-125b-5p has been shown to result in liver fibrosis in NAFLD (Cai et al., 2020), and differential expression of miR-125b-5p influences the functions of DCs (Hu et al., 2017). Mast cells are closely associated with periodontitis and overexpress miR-125b-5p in their exosomes (Ekström et al., 2012; Tetè et al., 2021); we therefore speculate that periodontitis might be affected by miR-125b-5p. Similarly, miR-17-5p and miR-21-5p have been reported to play a part in periodontitis and in the migration of DCs, and they are predicted to be involved in NAFLD (Du et al., 2016; Kim et al., 2017; Reis et al., 2018; Cui et al., 2019; Zhang et al., 2020; Lin et al., 2022). Although these miRNAs have not been verified in the microenvironment of periodontitis-NAFLD comorbidity, they nonetheless provide important therapeutic targets. In practice, experimental validation is not currently possible because clinical specimens of NAFLD are extremely difficult to obtain; this is a limitation of our study, and we will gradually collect samples for in vitro assays. All in all, our study has established a comorbidity model to explain the underlying mechanism of NAFLD secondary to periodontitis, found that impaired migration of DCs might be a common pathophysiological feature of NAFLD and periodontitis, and provided potential therapeutic targets.

DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

AUTHOR CONTRIBUTIONS WX and ZZ designed and conducted the whole study and wrote the majority of the manuscript. LY, BX, and HX contributed to data collation. XW and SS revised and finalized the manuscript. All authors contributed to the article and approved the submitted version. WX and ZZ have contributed equally to this work.
2022-06-30T15:20:07.479Z
2022-06-28T00:00:00.000
{ "year": 2022, "sha1": "81ae6521d0af89a6a0d0918ea2efe3af34db7943", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2022.939751/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "d1a083b6805fe4efb2dca4fd1cf63e1fbcdd6a4e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
3880934
pes2o/s2orc
v3-fos-license
A Vicious Cycle: A Cross-Sectional Study of Canine Tail-Chasing and Human Responses to It, Using a Free Video-Sharing Website

Tail-chasing is widely celebrated as normal canine behaviour in cultural references. However, all previous scientific studies of tail-chasing or 'spinning' have comprised small clinical populations of dogs with neurological, compulsive or other pathological conditions; most were ultimately euthanased. Thus, there is great disparity between scientific and public information on tail-chasing. I gathered data on the first large (n = 400), non-clinical tail-chasing population, made possible through a vast, free, online video repository, YouTube™. The demographics of this online population are described and discussed. Approximately one third of tail-chasing dogs showed clinical signs, including habitual (daily or 'all the time') or perseverative (difficult to distract) performance of the behaviour. These signs were observed across diverse breeds. Clinical signs appeared virtually unrecognised by the video owners and commenting viewers; laughter was recorded in 55% of videos, encouragement in 43%, and the commonest viewer descriptors were that the behaviour was 'funny' (46%) or 'cute' (42%). Habitual tail-chasers had 6.5 ± 2.3 times the odds of being described as 'Stupid' compared with other dogs, and perseverative dogs were 6.8 ± 2.1 times more frequently described as 'Funny' than distractible ones were. Compared with breed- and age-matched control videos, tail-chasing videos were significantly more often indoors and with a computer/television screen switched on. These findings highlight that tail-chasing is sometimes pathological, but can remain untreated, or even be encouraged, because of an assumption that it is 'normal' dog behaviour. The enormous viewing figures that YouTube™ attracts (mean ± s.e. = 863 ± 197 viewings per tail-chasing video) suggest that this perception will be further reinforced, without effective intervention.

Introduction Tail-chasing in dogs is widely celebrated in cultural references, such as its depiction in the cheerful, repetitive phrases of Chopin's Minute Waltz [1], and as performed by Sirius Black's animagus dog, Padfoot, in the Harry Potter series, when it is accompanied by a 'joyful bark' [2]. However, the scientific literature refers to tail-chasing (or 'spinning', when the behaviour is not necessarily focussed towards the tail) exclusively in clinical contexts, because it can indicate welfare problems of varying severity, e.g. [3,4,5]. The most commonly reported diagnosis is canine compulsive disorder [6,7], but other conditions, such as dermatitis or anal sacculitis [8], are also reported. Even in otherwise healthy dogs, the behaviour could indicate externally triggered welfare problems including lack of stimulation ('boredom'), insufficient exercise, or various stressful situations [4,7,9]. Nevertheless, tail-chasing can simply comprise play or exercise in many dogs, and these 'normal' tail-chasers have never yet been included in scientific publications, partly because the sporadic nature of the behaviour makes it difficult to study. Clinical texts, e.g. [3,4,10,11], often propose that compulsive tail-chasing develops from repeated exposure to triggering events or situations, but that the behaviour gradually becomes dissociated from the original trigger, occurring ever more frequently in increasingly diverse contexts. In other words, the behaviour might develop through a vicious cycle.
Like many stereotypic behaviours, tail-chasing can sometimes be temporarily eliminated by the opioid blocker, naloxone [12]. Attempted treatments for compulsive tail-chasing include behavioural therapy alongside drugs, including the tricyclic antidepressant, clomipramine, the selective serotonin reuptake inhibitor, fluoxetine [6,9], and the NMDA receptor blocker, memantine [7]. Tail amputation has no reported success, and the problem can be so intractable, and distressing for the owners, that dogs are euthanased [7,12]. Indeed, all 32 dogs in Blackshaw et al.'s [12] study, the largest study to date, were euthanased due to the persistence of their condition. Several breeds are prone to compulsive tail-chasing, including Bull Terriers [12], German Shepherds [6] and Anatolian sheepdogs [9]. However, the sample sizes of clinical studies to date have been too small to rule out high propensities in other breeds too, such as Jack Russells and West Highland White Terriers [12]. Breed differences could arise from environmental (e.g. opportunities to exercise) and/or genetic factors. If the latter, the behaviour could have been artificially selected for, even indirectly if tail-chasing is linked with a desirable characteristic, as with many inherited defects [13]. Despite the general renown of the behaviour and its potential severity in clinical cases, little is known about tail-chasing in home contexts or when no clinical causes have been diagnosed. Yet, a search for "dog chasing tail" on the most popular video-sharing website [14], YouTube™, returned almost 3500 hits in 2010. These videos provide a new opportunity for a hitherto untapped insight into tail-chasing in non-clinical contexts, and will include many 'normal' dogs (those with no relevant clinical diagnosis). For the first time, a large sample size is rapidly available and economically feasible. Furthermore, the videos reveal environments and contexts in which tail-chasing occurs, often together with audible and written responses of human observers (Figure 1). Despite the increasing accessibility of broadband and video cameras/phones to a wide demographic, the dogs and humans on YouTube™ will not represent all dogs and humans; indeed, truly representative sampling eludes most population studies. Dogs that tail-chase very rarely are likely to be under-represented, as videographers would have to catch the behaviour at exactly the right place and time. Conversely, dogs with clinical diagnoses may also be under-represented if owners are embarrassed (but not if they wish to raise awareness). Thus, the tail-chasing dogs on YouTube™ should approximately represent the centre of the normal distribution of dogs that chase their tails at some point in their lives. As with other survey methods, the use of video-sharing websites requires similar caution in generalizing conclusions beyond the sample population, because the populations are usually non-random and self-selecting to some extent. However, data from video-sharing websites reflect directly observed behaviour (rather than relying on respondents' descriptions), and the data are unprompted by the researcher, so they are less likely to be biased towards the study purposes. To date, video-sharing websites, such as YouTube™, have been studied regarding their potential for disseminating information to the public, in contexts including tobacco use [15], immunization [16] and sunbed use [17].
More recently, the actual video content has begun to be explored epidemiologically, providing insight into an asphyxiation 'game' in teenagers (using 65 video clips) [18], and into dietary messages given by adults to children playing with toy kitchens (115 clips) [19]. The current study goes further, using a larger sample size, plus a control group, to examine the characteristics of and responses to tail-chasing in domestic dogs. My aims were to describe (i) canine breed/morphological and (ii) behavioural characteristics, and the (iii) animal welfare implications and (iv) broad environmental contexts, associated with tail-chasing; and also (v) to describe human responses to it on YouTube™. I made no clinical diagnoses from the videos, but could broadly infer certain animal welfare implications from visible injuries and characteristics commonly associated with perseverative abnormal behaviours, including both frequent performance and persistence in the face of distraction.

Description of tail-chasing videos I identified tail-chasing videos using the search term "dog chasing tail" on YouTube, which returned 3340 hits in November 2009. The videos were continually but gradually shuffled by YouTube's confidential search algorithms. [Figure 1. Screenshot of a video of a Golden Retriever chasing its tail on YouTube™. The sidebar on the right also offers links to related videos, showing a thumbnail of the video content, the video title, and the number of times the video has been viewed. The usernames are withheld here for privacy reasons, but on YouTube™ they are hyperlinked to the uploaders' homepages, which usually contain information about their age, sex, country, and their other videos.] Between Nov 2009 and August 2010, I collected data from the first 400 videos of the returned hits, subject to the following exclusion criteria: only one video was used per 'uploader' (person who uploaded a video to their YouTube™ account); very dark or pixelated videos, or those not showing a domestic dog tail-chasing or spinning, were discarded; photographic collages, professional videos, and advertisements were excluded; and in video collages, only the first continuous shot was used. It is worth noting that in some cases, the uploader may neither have owned the dog, nor have taken the footage themselves. The following details were recorded from the videos (further details in Table S1):
- clip ID and URL
- the reported sex, age and nationality of the uploader
- dog breed, sex and age
- dog tail morphology
- relevant human and dog behaviour observed in the video (summarized in Table 1)
- environmental context (indoors or outdoors; television switched on, off or unknown)
- relevant descriptive comments by the uploader and viewers (summarised in Table 2).
I structurally defined all the behaviours scored according to an ethogram (Tables S1 and S2), and systematically categorized human comments after data collection using defined criteria (Table 2).
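One convenient way to hold the per-video record just listed is a small data structure; the field names and types below are illustrative only and are not the author's actual coding scheme.

```python
# Illustrative record for one scored video; fields mirror the list above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TailChasingVideo:
    clip_id: str
    url: str
    uploader_sex: Optional[str] = None      # self-reported, often missing
    uploader_age: Optional[int] = None
    uploader_country: Optional[str] = None
    dog_breed: Optional[str] = None
    dog_sex: Optional[str] = None
    dog_age: Optional[str] = None           # e.g. "puppy" or "adult"
    tail_morphology: Optional[str] = None   # e.g. "docked", "long"
    indoors: Optional[bool] = None
    screen_on: Optional[bool] = None        # TV/computer visibly or audibly on
    behaviours: List[str] = field(default_factory=list)  # ethogram codes
    comments: List[str] = field(default_factory=list)    # uploader/viewer text
```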
Comparisons of tail morphology and environmental context in breed-matched controls I compared tail-chasing videos against 400 breed-matched control (non-tail-chasing) videos, to investigate associations between tail-chasing and tail morphology, such as whether docked tails were more or less frequently seen in tail-chasing versus control videos. The control videos were also used to identify whether dogs were more frequently indoors, and whether a television, computer, radio or music was switched on when tail-chasing. Breed- and age-matching was important because these factors affect the likelihood that dogs are taken outdoors and that their tails are docked. My control search terms were "[dog breed name]" plus "dog" or "puppy" as appropriate to match each tail-chasing video. The first control video not yet scored for that breed was used in each case. Exclusion criteria were as before, but additionally, videos were excluded if the tail could not be clearly seen; if the control video included tail-chasing or spinning; or, for ethical reasons, if the video seemed to involve animal cruelty (e.g. dog fights). The ensuing control videos included diverse footage: for example, dogs playing, vocalising, performing 'tricks', eating, dreaming, exercising, exploring novel stimuli, or interacting with other dogs, other pets, or humans.

Observer reliability A subset of the variables described in Tables S2 & S3, encompassing the more subjective aspects of dog and human behaviour, were checked for inter- and intra-observer reliability using 10% of the tail-chasing videos. Kappa observer reliability statistics are meaningless in overly homogeneous samples [20][21][22], so Hoehler [21] suggests that investigators should "concentrate on obtaining populations with trait prevalence near 50% rather than searching for statistical methods to rescue inefficient experiments." The 40 videos were therefore selected (using my ratings as the primary observer) to optimize the prevalence index for as many variables as possible, avoiding overly homogeneous samples and allowing even rare scores to be tested [20,21]. For example, only 46 videos had comments revealing the dog's tail-chasing frequency as well as having a potentially distracting event occurring during the video, so 35 of these videos were included in the reliability sample (representing habitual, periodic and rare tail-chasing, in both perseverative (difficult to distract) and non-perseverative dogs). This meant that for key variables, such as tail-chasing frequency, distractibility, or play behaviour, the prevalence index was <0.4 [20], so no variable was too rare to test. The order in which videos were re-watched was randomized. The other observer (OHB; see Acknowledgements) was an experienced observer of animal behaviour, and was blind to the hypotheses being tested. He received five practice videos for which he could see my original scores, and he was given a detailed description of the scoring criteria for each variable (Table S2), but he received no other training. Intra- and inter-observer agreement was tested using Fleiss' Kappa statistics for binary variables, and Kendall's W for ordinal variables (Minitab 15). Thresholds for clinical acceptability were defined as Moderate (k or W ≥ 0.4), Substantial (≥0.6), or Excellent (≥0.8) according to convention, e.g. [22]. Only scores for panting behaviour failed to attain at least Moderate reliability, so results for that variable are not reported. The observer reliability scores are shown in Table S3.
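For a binary variable, the two screening quantities described above can be computed directly from paired ratings. The sketch below uses the two-rater form of kappa (a simplification of the Fleiss statistic named in the text) and the prevalence index |a − d|/N of Sim & Wright, where a is the count of videos both raters scored as showing the trait and d the count both scored as absent; Kendall's W for the ordinal variables is omitted, and the example ratings are hypothetical.

```python
# Two-rater kappa and prevalence index for one binary variable (sketch).
def kappa_and_prevalence(r1, r2):
    n = len(r1)
    a = sum(x and y for x, y in zip(r1, r2))              # both scored present
    d = sum((not x) and (not y) for x, y in zip(r1, r2))  # both scored absent
    p_obs = (a + d) / n                                   # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n                     # marginal rates
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)                 # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return kappa, abs(a - d) / n                          # (kappa, prevalence index)

# Hypothetical scores for ten reliability videos (1 = behaviour present).
r1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
r2 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
k, pi = kappa_and_prevalence(r1, r2)
print(f"kappa = {k:.2f}, prevalence index = {pi:.2f}")    # kappa = 0.58, PI = 0.20
```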
Statistical methods Within the 400 tail-chasing videos, I tested associations between specific tail-chasing behaviours and their predictors (other behaviours, dog characteristics, and human responses) using generalized linear mixed models (glmmPQL and glmmML in R). I included breed as a random factor in every model to control for non-independence of similar dogs, and compared breed groups (defined according to both the UK Kennel Club and the genetic groupings found by Parker et al. [23]) either as random or as fixed factors in alternative models. Breed was nested within breed group. Video length was always included, because certain events (e.g. play behaviour or potential distractions) will have been more likely to be observed in longer videos. For analyses of clinically relevant predictors, dogs with objects attached to their tails were excluded, because their tail-chasing was not necessarily ever a self-initiated behaviour. I also used generalized linear mixed models, as before, to compare tail-chasing and control videos. In these analyses, tail morphology, the in- or outdoor location, and television/computer/radio activity were used as predictors. I selected models using Akaike information criteria, and identified (and thus avoided) multicollinearity using inflated standard error terms. The α-level for statistical significance was set at P ≤ 0.05 in this exploratory study [24]; the number of independent tests for each dependent variable ranged from six to 16, depending on the hypotheses relating to that variable. Of the total 76 tests carried out, just under four (5%) of the seemingly significant results can therefore be expected to be Type I errors, but follow-up studies will be required to reveal which results can and cannot be replicated. No correction for multiple testing has been done here, because the risk of Type II errors, failing to report potentially significant results, is considered more serious in exploratory studies than that of Type I errors [24].

Uploader and video characteristics Of the 400 uploaders of the tail-chasing videos, 69.0% were from the USA, 13.8% from the UK, and 5.8% from Canada.

Tail-chasing characteristics and their associations Associations between dog behaviour characteristics and context (excluding dogs with objects attached to their tails) are shown in Table 1. Of the 86 tail-chasing videos that had comments describing the frequency of tail-chasing, about 30% of dogs were stated as chasing their tails habitually (e.g. daily or 'all the time', rather than 'periodically' or 'rarely'; Table 1; Table S1), which is a clinical criterion for classifying tail-chasing as compulsive [7,25]. Approximately 38% of dogs appeared difficult to distract, or 'perseverative', during tail-chasing. Perseverative dogs were more likely to tail-chase habitually and to collide with objects when tail-chasing, and they were less likely to show play behaviours than were other tail-chasing dogs (Table 1). Hair loss from the tail or hindquarters was seen in 1.25% of the tail-chasing dogs, and no comments suggested that uploaders or viewers considered this an indication that the tail-chasing was a potential clinical problem. Play behaviours (defined in Table 1) were interspersed with tail-chasing bouts in 17% of videos, and were more likely to be seen in puppies than older dogs. When indoors, tail-chasing was less likely to include play behaviour than when outdoors, and with a screen switched on, tail-chasing dogs were less likely to bark but more likely to wag their tails (Table 1). Problematic tail-chasing (as indicated by the percentage of all tail-chasing videos that appeared perseverative or habitual per breed group) was distributed widely across diverse Kennel Club breed groups (Table 3).
The highest proportion of perseverative tail-chasing was observed in toy breeds (56% of videos), followed by crossbreeds (43%) and terriers and working dogs (42% of both), but around one quarter of videos of gundogs, hounds, and utility breeds also showed evidence for perseveration. Few breed groups contained enough videos to enable assessment of tail-chasing frequency, but of those with at least 10 such clips, the highest proportion of habitual tail-chasing was observed in crossbreeds (52%) and terriers (38%). The five dogs with visible hair loss or injury to the tail or hindquarters comprised two German Shepherds, one Labrador-Staffordshire Bull Terrier cross, one Labrador and one Parsons Jack Russell Terrier.

Human responses and descriptions of tail-chasing videos While 69.3% of tail-chasing videos were categorized as 'Pets and Animals', 18.8% were categorized as 'Comedy' and 6.3% as 'Entertainment'. Human responses to tail-chasing are shown in Table 2. In 55% of videos, laughter could be heard, and this was significantly more likely to be female (in 81.6% of 114 clips with only one sex laughing; Binomial test: P < 0.001). Laughter was positively associated with encouragement of the dog (Odds ± S.E. = 2.83 ± 1.28; DF = 234; P < 0.001), but there were no significant associations with tail-chasing frequency or perseveration. Verbal or physical encouragement or praise was noted in 43% of videos, including attaching objects to the tail in almost 4% of videos (Table 2). Uploaders described 59% of tail-chasing videos as 'Funny', 26% as 'Crazy', 19% as 'Cute' and 15% as 'Stupid'. Similarly, 46% of videos with comments from viewers were described as 'Funny' by the viewers, and 42% as 'Cute'. Viewers were 6.8 times more likely to describe perseverative dogs as 'Funny' (defined in Table 2) compared with more easily distracted dogs. Uploaders described dogs that tail-chased habitually as 'Stupid' (defined in Table 2) 6.5 times more often than other dogs. Examples of uploader comments describing habitual chasing are as follows: "Ya it's funny she does this all the time:)"; "… my puppy does this ALL THE TIME. I've never seen a dog chase its tail so much. Maybe he enjoys the dizzyness??"; "This is just 1/100th of the allotted time [my dog] spends chasing his tail every day"; "This is him on a normal day. Chasing His Tail, Then eats his food, Watches a little TV, Chase's his tail some more then eat…"; and (audible, rather than written) "It's amazing how long he'll do that for… he never stops… it's your favourite game; you take it everywhere with you". In nine videos (2.3%), at least one comment offered clinical explanations for the behaviour or suggested that the dog should be checked by a veterinarian (three such comments were by uploaders, and seven videos had at least one such comment by viewers). However, none of the descriptions indicated that uploaders had posted their video on YouTube™ specifically to raise awareness of clinical aspects of tail-chasing.
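The binomial test on the sex of audible laughter reported above is easy to reproduce: 81.6% of the 114 single-sex-laughter clips corresponds to 93 clips with female laughter, tested against a 50:50 null. A sketch using SciPy:

```python
# Check of the laughter sex ratio: 93 of 114 single-sex clips, null p = 0.5.
from scipy.stats import binomtest

n_clips = 114
n_female = round(0.816 * n_clips)  # = 93
result = binomtest(n_female, n_clips, p=0.5, alternative="two-sided")
print(n_female, f"p = {result.pvalue:.1e}")  # p is far below 0.001, as reported
```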
Comparisons of environmental context and tail morphology against breed-matched controls Videos showing tail-chasing were approximately 6.5 times less likely to be outdoors than were breed- and age-matched control videos (8.8% of tail-chasing videos were outdoors versus 38.8% of controls; Odds ± S.E. = 0.15 ± 1.25; DF = 317; P < 0.001); and when indoors, tail-chasing videos were over three times more likely to show a television or computer switched on than were controls (32.1% of indoor tail-chasing videos showed one switched on versus 9.1% of controls; Odds ± S.E. = 3.35 ± 1.34; DF = 106; P < 0.001). [Table 2 caption: The percentages of videos are arranged in order of magnitude for each general category. The words accepted as valid synonyms for each comment category are shown; these were accepted only if they were consistent within the context of the whole comment, e.g. a comment was not included in the counts for 'funny' if the comment actually stated that the video was 'not funny', even though the keyword was present.] Control and tail-chasing videos showed no significant differences in tail morphology, such as length, docking, or hair type (initial analyses had suggested that tails were longer in tail-chasing than control videos [26], but this relationship proved not to be robust when other significant variables were included in the final statistical models).

Descriptions of tail-chasing characteristics, context and human responses to it The results here reveal new clinically relevant information that has been difficult to discover previously. Approximately one third of the dogs with complete data tail-chased habitually or appeared perseverative, and were significantly more likely than other tail-chasers to be described as 'Stupid' or 'Funny', respectively. Comments suggesting clinical explanations for habitual, perseverative tail-chasing were only seen on 2.3% of videos, so it seems that public awareness must indeed be very low. Regardless of clinical signs, about one quarter (25.1%) of tail-chasing videos were classified as Comedy or Entertainment, laughter was recorded in over half (55%) of videos, and encouragement in 43%; and almost half of viewer comments described the videos as 'funny' or 'cute'. The vast and ever-growing numbers of viewings that these and similar videos receive on YouTube™ will likely reinforce these perceptions, normalising tail-chasing behaviour yet further [18]. The findings therefore indicate a gulf between public perception and indicators of poor welfare in tail-chasing dogs. This implies that many pathological tail-chasers may go untreated, and that the behaviour is widely assumed to be normal and amusing regardless of its persistence. These results are perhaps not surprising considering that some owners also incorrectly perceive the (arguably less ambiguous) separation-related behaviours in their dogs (barking, whining, howling, scratching the door, destructive behaviour and inappropriate elimination) to indicate neutral or even positive welfare [27]. Similarly, owners can describe frequent signs of breathing difficulties in their brachycephalic (short-muzzle) dogs, but most later report that this is not a 'breathing problem', being normal for the breed [28]. It appears that, although dogs seem readily to understand aspects of human behaviour [29,30], humans do not necessarily interpret all important aspects of canine behaviour accurately.
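The environmental-context odds ratios given earlier in this section can be sanity-checked from the raw percentages. The outdoor comparison reproduces the reported estimate almost exactly; the screens-on comparison does not, which is expected because the published value comes from the mixed model (controlling for breed and video length), so the crude ratio need not match:

```python
# Crude odds ratios from the raw percentages reported above.
def odds(p):
    return p / (1 - p)

print(round(odds(0.088) / odds(0.388), 2))  # 0.15: outdoors, matches the model
print(round(odds(0.321) / odds(0.091), 2))  # ~4.72: screens on; model gave 3.35
```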
Results in Table 3 show that problematic tail-chasing, as a proportion of all the tail-chasing videos per breed group, was prevalent in Bull Terrier breeds, consistent with the clinical literature [4,9,12], but it was also widely distributed across other breed groups, including Toy and other groups little represented in studies to date. The prevalences here should not be taken as absolute values, because some breeds may be owned by a more technologically active demographic than others, and might thus be over-represented on YouTube™. Also, if owners of breeds known to tail-chase compulsively are more aware of the clinical implications of this behaviour than other owners, they may be reluctant to post videos of it (e.g. being embarrassed or saddened by it), so those breeds could be under-represented. Nevertheless, the results indicate the degrees to which tail-chasing videos show problematic signs in the different breed groups and suggest that it would be worthwhile investigating whether there are hitherto unrecognized clinical implications of tail-chasing across diverse breeds. Possibly, behavioural anomalies in small or toy dogs may be less likely to be referred for veterinary attention than in larger, heavier breeds, whose behaviour may be more disruptive and obviously problematic to the owners. A previous survey indicated that owners of smaller dogs may also be less attentive to their dogs' behaviour and training in general [31]. In 17% of videos play behaviours were interspersed with tail-chasing; playing was less likely in perseverative dogs, but more likely in puppies than adult dogs. This is consistent with tail-chasing sometimes forming part of play, especially in puppies [4]. In these cases, as long as dogs infrequently chase their tails, owners need not necessarily be concerned about their dog's tail-chasing, because play is often (but not always) an indicator of positive welfare [32]. A caveat is that even play can be a response to stress, lack of exercise or under-stimulation (a 'do-it-yourself enrichment', cf. [33]), so owners should assess the context of the behaviour in case the trigger could be a negative one. Encouragement of tail-chasing was recorded in 43% of videos, and laughter, which could also inadvertently be reinforcing for dogs, was heard in 55%. The true prevalence of encouragement and laughter will depend on how frequently people manipulate the dog for the film (e.g. attaching objects to the tail), play up to the camera, or deliberately remain quiet or off-screen during filming. Some encouragement seen on YouTube™ may have directly distressed the dogs: in almost 2% of videos, humans 'growled' at dogs, and almost 20% of people physically manipulated the tail (Table 2), often appearing to pull or pinch it with considerable force. In any case, whether reinforcement is through negative or positive means, it should be minimized to prevent tail-chasing from becoming compulsive. Equally, frequent tail-chasing must not be punished or prevented without addressing its cause, as this can increase stress and poor welfare in the affected dog, e.g. [34].

Comparisons of environmental context and tail morphology in breed-matched controls Compared with breed- and age-matched controls, tail-chasing videos were approximately 6.5 times less likely to be outdoors, and, when indoors, televisions or computers (but not radios or music players) were more frequently switched on.
The breed- and age-matching was intended to control for some breeds being kept indoors to a greater extent than others. However, the environmental differences could still be Type I errors (falsely significant) if, for example, tail-chasing were one of the few canine behaviours that people tend to record indoors while watching television, rather than it being performed more in that situation per se. Some control videos were by nature likely to be filmed outdoors, such as dogs exercising or interacting with other dogs, but others showed more typically indoor activities, such as eating, dreaming, or interacting with other pets, so further research will be necessary to confirm the environmental contexts of tail-chasing. Nevertheless, the observed environmental differences are consistent with tail-chasing being triggered by a lack of exercise, under-stimulation, and/or insufficient attention from humans [4,7,9,11]. If so, the behaviour might indeed predominantly occur when dogs are indoors while humans are engaged in the sedate, non-interactive pastimes of television and computer use. Lack of exercise, stimulation and attention as triggers for tail-chasing have apparently not yet been tested empirically. If tail-chasing genuinely is associated with insufficient exercise, this would also be consistent with tail-chasing dogs having raised cholesterol levels, as found by Yalcin et al. [25]. The usual treatment for compulsive tail-chasing is drug therapy combined with behavioural therapy, such as increased owner attention and walks; the drugs may treat the clinical signs, but behavioural change addresses the cause of the problem. However, owner compliance with behavioural recommendations is often poor, e.g. [7], and in general many dogs are walked very seldom (e.g. fewer than half of Australian owners surveyed walked their dogs at all [35], and 70% of dogs with acral lick dermatitis were never walked [36]). The finding that tail-chasing on YouTube™ appears to occur predominantly indoors with screens switched on might therefore reinforce the importance of exercise and stimulation for dogs. Tail morphology and docking showed no significant differences between tail-chasing and control videos. A previous small-scale study [37] found neuromas in the docked tails of dogs showing 'tail-directed behaviour', so neuromas should be considered as a potential cause of tail-chasing in docked dogs, but no such association was found here (indeed, the non-significant trend was in the opposite direction). A study focussing on breeds with frequently docked tails will be necessary to investigate whether a significant association exists.

Conclusions In summary, YouTube™ has offered the first large study population of dogs chasing their tails in non-clinical contexts. Approximately one third of the dogs showed signs of clinical relevance, but this was rarely recognised openly by uploaders or viewers; indeed, dogs showing problematic tail-chasing were more likely than other dogs to be described as 'Stupid' or 'Funny'. In 43% of videos tail-chasing was actively encouraged, which could risk reinforcing the behaviour excessively, and in some cases it included rough handling or goading the dog. The study also reveals that diverse dog breeds chase their tails on YouTube™, and that this seems predominantly to occur indoors when televisions or computers are switched on.
Future research could record more detail about the clinical signs: for example, details of tail-mouthing behaviour could indicate tail or hindquarter discomfort, and persistently chasing in one direction could help diagnose compulsivity [12]. It will also be necessary to determine what really triggers tail-chasing, to obtain meaningful prevalences of pathological and non-pathological tail-chasing, and to identify the most reliable indicators of whether the behaviour is of welfare concern. In the meantime, awareness of the clinical implications of frequent tail-chasing should be increased in the public domain if the associated canine welfare problems are to be addressed.

Supporting Information Table S1 Condensed descriptions of all the data collected concerning YouTube™ videos of dogs chasing their tails. * indicates that the data were also collected for breed-matched control videos. (DOC) Table S2 The detailed description of criteria for scoring the presence or absence of particular characteristics in YouTube™ videos of dogs chasing their tails. This includes a subset of the behavioural ethogram used to score the dog behaviour throughout the study. This summary was sent to the animal behaviour expert (OHB) who scored the 40 videos to allow inter-observer reliability to be tested. (DOC) Table S3 Intra- and inter-observer reliability for selected variables describing dogs chasing their tails on YouTube™. For each variable, the raw percentage agreement (%), the prevalence index (P.I.) and the k value (for categorical variables) or W value (for ordinal variables) is shown. * indicates that the k value fell below the clinically acceptable threshold of 0.4 (e.g. Sim & Wright, 2005), so the variable should be discarded from further analysis. ¥ indicates that the variable is ordinal, rather than categorical. (DOC)
2014-10-01T00:00:00.000Z
2011-11-09T00:00:00.000
{ "year": 2011, "sha1": "74840631d6db3049effebd77e26f92000c66b621", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0026553&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "74840631d6db3049effebd77e26f92000c66b621", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
151177195
pes2o/s2orc
v3-fos-license
How do misconceptions about the discontinuous nature of matter differ by gender among elementary school teacher candidates? This study aims to determine gender differences in conceptions of the discontinuous nature of matter among prospective elementary school teachers. Using a descriptive-qualitative research method, data were collected from third-year elementary school teacher candidates at Sumedang Regional State University, with 84 participants (20-21 years old). The instruments used were a four-tier multiple-choice test, questionnaires, and interviews. The test results were then processed by evaluating each teacher candidate's answer to each item and grouping the answers into four-tier categories, combined with the questionnaire and interview data. The results of this study indicate that, for both female and male teacher candidates, the largest average percentage of answers fell into the 'misconception' category. Consequently, it can be concluded that the third-year teacher candidates' conception of the discontinuous nature of matter is still very low. In short, many teacher candidates do not understand the material well.

Introduction In science, recognizing the full scope of the foundations underlying a new concept is vitally important for learning that concept [1][2]. In the course of mastering science concepts, students commonly have difficulty understanding them, which then leads to misconceptions. The main sources of misconceptions can be grouped into student factors (lack of personal knowledge, preconceptions, lack of motivation and interest, use of everyday language for scientific issues), teacher factors (inadequate content knowledge, poor ordering of concepts, giving excessive attention to details), and textbook factors [1,3]. Misconceptions also arise around the concept of the particulate state of matter, also known as the discontinuous nature of matter. The particulate nature of matter is recognized in science education standards as one of the indispensable concepts [4][5]. Thus, many students have difficulty understanding material about the nature of particles. Regarding the particulate nature of matter, students hold mental models that differ from the scientific concepts, since their experience of matter is only macroscopic [4][5][6]. This is in line with the results of a test on the discontinuous nature of matter administered to prospective elementary school teachers: many of them could not draw an image showing the particle structure of a solid, a liquid, or a gas. In discussing particles, elementary school teacher candidates must also understand how particles are counted. The number of particles is expressed in units of moles: one mole of a substance contains a number of particles equal to the number of atoms in 12.0 grams of the isotope C-12, i.e., 6.02 × 10²³ particles. Many elementary school teacher candidates do not grasp this from a microscopic viewpoint. Students' ability to reason about matter microscopically may be influenced by their gender; girls, for example, report being more harassed in their science classes than boys [7][8]. Hence, it can be said that gender differences can affect the mastery of concepts, as reflected in student learning outcomes [9].
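As a quick worked example of the mole relation cited earlier in this introduction, the particle count for n moles is simply n multiplied by Avogadro's number:

```python
# Number of particles N = n * N_A, with N_A = 6.02e23 particles per mole.
N_A = 6.02e23

def particles(moles):
    return moles * N_A

print(particles(1.0))  # 6.02e+23, e.g. the atoms in 12.0 g of C-12
print(particles(0.5))  # 3.01e+23
```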
Building on these preliminary findings, this study aims to determine the impact of gender on conceptions of the discontinuous nature of matter among third-year elementary school teacher candidates. The results are expected to inform related research aiming to improve student conceptions of the discontinuous nature of matter.

Method This study uses a descriptive-qualitative method, which aims to observe students without intervening and to report the results directly and faithfully. The subjects were third-year students at a state university in the Bandung area, 38 participants in total (20-21 years old): 19 men and 19 women with prior knowledge of the discontinuous nature of matter. The instruments used were a four-tier multiple-choice test, questionnaires, and interviews. The test consists of seven four-tier multiple-choice items, each question representing one indicator. Test results were evaluated item by item by examining the answers, the explanations, any drawings or symbols, and the confidence levels, and then classified into one of the four-tier categories: "Understand", "Partial Understanding", "Misconception", "Not Understanding", and "Uncoded".
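The classification step just described can be expressed as a small decision rule. The paper does not spell out its exact decision table, so the mapping below follows a common scheme from the four-tier literature (answer tier, reasoning tier, and a confidence rating for each) and is offered only as an illustrative assumption; responses that cannot be scored would fall into the "Uncoded" category, omitted here.

```python
# Hypothetical four-tier scoring rule (an assumed scheme, not the authors' own).
def classify(answer_ok, reason_ok, conf_answer, conf_reason):
    """conf_* are True when the candidate reports confidence in that tier."""
    confident = conf_answer and conf_reason
    if answer_ok and reason_ok:
        return "Understand" if confident else "Partial Understanding"
    if confident:
        return "Misconception"        # confidently wrong or inconsistent
    return "Not Understanding"        # wrong or mixed, and not confident

print(classify(True, True, True, True))      # Understand
print(classify(False, False, True, True))    # Misconception
print(classify(False, False, False, False))  # Not Understanding
```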
Knowing the concept of the constituent particles in solids For item 1, many of the elementary school teacher candidates fall into the "Partial Understanding" category, as shown in figure 3. Figure 3 also shows that most male students are in the "Understand" category. Therefore, it can be said that the concept mastery of the male teacher candidates is better than that of the female teacher candidates. One study reports that men represent 54% of all earned doctorates in the biological and biomedical sciences and 48% of all earned medical degrees since 2012 [10][11].

Knowing the concept of the constituent particles in a liquid For item 2, many of the answers are in the "Understand" category. When the item 2 answers are broken down by gender, the result is shown in figure 4: 31.57%, or 12 male students, are in the "Understand" category. According to the interview and questionnaire data, many of the teacher candidates are aware of the existence of particles in a liquid [12] because they had already learned this in high school. Thus, it can be said that the performance of the male students on item 2 is better than that of the female students. Figure 4. Test results by gender for item 2.

Knowing the concept of the constituent particles in gases For item 3, many of the teacher candidates are in the "Misconception" category. When the item 3 answers are broken down by gender, the result is shown in figure 5: only 7.89%, or 3 male students, are in the "Understand" category. Based on the interviews, many of the teacher candidates are convinced that there are no particles in a gas, consistent with reports that gases are among the chemistry topics students commonly fail to understand [13]. Thus, it can be said that the male students' concept mastery is better than that of the female students.

Knowing the concept that particles in solids vibrate For item 4, student answers are dominated by the "Misconception" category. This is supported by interview data indicating that the candidates had never learned about this topic before. When the item 4 answers are broken down by gender, the result is shown in figure 6, which indicates that the female teacher candidates have better concept mastery than the male students. This may be because most teachers often avoid chemistry topics owing to a lack of knowledge, interest, and confidence [14].

Knowing the concept of free space between particles in solids For item 5, the answers are dominated by the "Misconception" category. When the item 5 answers are broken down by gender, the result is shown in figure 7: in this case, many female students are in the "Partial Understanding" category. Most of the students think that there are no spaces between the particles of a solid [15]. This analysis shows that on item 5 the conception of the female teacher candidates is better than that of the male teacher candidates. Figure 7. Test results by gender for item 5.

Knowing the concept of free space between particles in a liquid For item 6, many of the teacher candidates gave answers categorized as "Misconception". When item 6 is viewed in terms of gender, the result is shown in figure 8. One male teacher candidate showed good concept mastery, while most of the students perceive that there are no gaps between the particles of a liquid [15]. Therefore, it can be said that on item 6 the male teacher candidates performed better. Figure 8. Test results by gender for item 6.

Knowing the concept of free space between particles in gases For item 7, more than half of the answers are in the "Misconception" category; most students answered the third tier incorrectly, which asks them to write down the name of a gas and its symbol. When item 7 is viewed in terms of gender, the result is shown in table 1, which indicates that one female teacher candidate is in the "Understand" category. Most of the students perceive that there are no gaps between the particles in a gas, e.g. [15]. Hence, it can be said that on item 7 the ability of the female students is better than that of the male students.

Conclusion On almost all indicators, the elementary school teacher candidates hold misconceptions. Based on the percentages by gender for each item, it can be said that the conception of the discontinuous nature of matter among the male teacher candidates is, overall, better than that among the female teacher candidates. Therefore, further research on the relationship between gender differences and students' conceptual understanding would be worthwhile.
The Combination of Bioavailable Concentrations of Curcumin and Resveratrol Shapes Immune Responses While Retaining the Ability to Reduce Cancer Cell Survival The polyphenols Curcumin (CUR) and Resveratrol (RES) are widely described for their antitumoral effects. However, their low bioavailability is a drawback for their use in therapy. The aim of this study was to explore whether CUR and RES, used at a bioavailable concentration, could modulate immune responses while retaining antitumor activity and to determine whether the effects of CUR and RES on the immune responses of peripheral blood mononuclear cells (PBMCs) and on tumor growth inhibition could be improved by their combination. We demonstrate that the low-dose combination of CUR and RES reduced the survival of cancer cell lines but had no effect on the viability of PBMCs. Although T lymphocytes showed an enhanced activated state following CUR + RES treatment, RES counteracted the increased IFN-γ expression induced by CUR in T cells, and the polyphenol combination increased IL-10 production by T regulatory cells. On the other hand, the combined treatment enhanced NK cell activity through the up- and downregulation of activating and inhibitory receptors and increased CD68 expression levels on monocytes/macrophages. Overall, our results indicate that the combination of CUR and RES at low doses differentially shapes immune cells while retaining antitumor activity, supporting the use of this polyphenol combination in anticancer therapy and suggesting its possible application as an adjuvant for NK cell-based immunotherapies.

Introduction Polyphenols are a large group of natural compounds found in foods and beverages of vegetal origin. Numerous studies have shown that polyphenols have potent antioxidant, anti-inflammatory, antimicrobial and anticancer properties [1], so their consumption is considered beneficial for the human body [2]. However, the beneficial effects of polyphenols are limited by their poor bioavailability. Indeed, polyphenols have poor biodistribution and absorption, as well as quick metabolism and elimination in the human body. The mechanisms that limit the bioavailability of orally administered polyphenols encompass their metabolism in the gastrointestinal tract and liver, their binding to blood cell surfaces, the action of the microbial flora in the mouth and gut, and additional regulatory factors that reduce the toxicity of high doses of compounds on mitochondria or other organelles [14]. Further, in addition to endogenous factors, dietary variables, such as the food matrix and food preparation methods, might also alter the bioavailability of polyphenols [4,14]. Thus, after dietary intake, only nano- or micromolar quantities of polyphenols and their metabolites are detected in plasma [15]. In this regard, as reported by several reviews, the results of the investigations in humans differ significantly, with a Cmax in plasma ranging from 1.17 µM to 5.6 µM (oral intake of 2-5 g) for RES [16] and from 2.7 nM to 8.7 µM (oral intake of 2-10 g) for CUR [17]. Although these discrepancies could be attributable to different quantification approaches, their actual causes are not known [16,17].
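To put these plasma levels in more familiar mass units, the short Python sketch below converts a molar concentration to µg/mL; it is an illustration added here, not part of the original study, and uses the standard molecular weights of roughly 228.25 g/mol for RES and 368.38 g/mol for CUR.

    # Convert a micromolar plasma concentration to a mass concentration.
    # 1 uM = 1 umol/L, so mass (ug/L) = conc (uM) x MW (g/mol); divide by
    # 1000 to get ug/mL.
    def micromolar_to_ug_per_ml(conc_um: float, mw_g_per_mol: float) -> float:
        return conc_um * mw_g_per_mol / 1000.0

    for name, mw in [("RES", 228.25), ("CUR", 368.38)]:
        print(f"5 uM {name} ~ {micromolar_to_ug_per_ml(5.0, mw):.2f} ug/mL")
    # 5 uM RES ~ 1.14 ug/mL; 5 uM CUR ~ 1.84 ug/mL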
Since the low bioavailability of polyphenols negatively impacts the effective dose delivered to cancer cells and is regarded as one of the main factors limiting their effectiveness in cancer patients [4], several attempts have been made to develop formulations, derivatives and analogues with enhanced bioavailability, solubility and stability [3,5,[18][19][20]. Another strategy for enhancing polyphenols' effects on cancer cells is their use in combination, since different polyphenols combined together at low doses might have a synergic or additive effect [21][22][23][24]. In fact, several studies demonstrated that treatment with polyphenol combinations is more effective in suppressing cancer growth than treatment with a single polyphenol compound [3,25,26]. For instance, the combination of CUR and RES had more potent cytotoxic effects than either compound alone on hepatocellular carcinoma [21] and colorectal cancer cells [22] and was reported to be able to synergistically restrain cervical cancer cell proliferation and migration [23,24]. CUR plus RES treatment was also demonstrated to be more effective than the single drugs in reducing the proliferation of colon cancer cells in vitro and in vivo [22]. Combination treatments also suppressed chemoresistance to cisplatin of ovarian cancer cells [27]. In this context, our group previously demonstrated that the combinations of diallyl disulfide (DADS) plus RES, DADS plus CUR, and RES plus CUR displayed stronger in vitro anticancer activity on malignant rhabdoid or osteosarcoma cell lines than the single polyphenols and that RES and DADS increased the apoptotic effects of CUR [28]. We also reported that RES enhanced CUR anticancer activities on head and neck cancer in vitro and in vivo. Moreover, RES plus CUR therapy inhibited the development of transplanted salivary gland cancer cells in mice more effectively than either CUR or RES alone [29]. Furthermore, we showed that CUR with RES affected the PI3K/AKT/mTOR pathway, autophagy, intracellular reactive oxygen species (ROS) and ER stress/UPR both in breast and salivary gland tumor cell lines derived from Her-2/neu transgenic mice and that RES increased the CUR cytotoxic effect by suppressing CUR-induced pro-survival autophagy [30]. Still, it should be emphasized that in most of the in vitro studies aimed at evaluating the anticancer effects of these compounds, the polyphenols are used at concentrations higher than those attainable in vivo [6].

Recently, several studies have shown that polyphenols, including CUR and RES, also have the ability to modulate immune responses and may enhance antitumor immunity while preventing or delaying the development of tumor-supporting leukocytes by influencing the activity of immune cells, the production of cytokines, and the regulation of other elements of the immunological defense system [8,10]. Indeed, the tumor immune microenvironment is composed of different immune cells, which can play a dual role in the development of cancer. Anticancer cells, such as Natural Killer (NK) cells and CD8+ T lymphocytes, can recognize and eliminate tumor cells; on the other side, immunosuppressive cells, such as regulatory T cells (Tregs), myeloid-derived suppressor cells (MDSCs) and tumor-associated macrophages (TAMs), can support the evasion of immune surveillance by neoplastic cells and promote tumor growth [2].
Given this evidence, the aim of our study was (a) to explore whether CUR and RES, used at their bioavailable concentration (5 µM) [16,17], could modulate immune responses while retaining antitumor activity and (b) to determine whether the effects of CUR and RES on the immune responses of peripheral blood mononuclear cells (PBMCs) and on tumor growth inhibition could be improved by their low-dose combination.

Effect of Low-Dose CUR and RES on Tumor Cell Survival The effects of CUR and RES on tumor cell growth were evaluated at the 5 µM bioavailable concentration and, for comparison, at a 5-fold higher concentration using a panel of ten human cell lines including head and neck carcinoma (SCC-15, A253), breast cancer (MCF-7, MDA-MB-468), malignant mesothelioma (MM-B1, MM-F1, H-Meso-1), prostate cancer (PC-3, DU 145) and colon cancer (HCT 116) cell lines. Cell survival was assessed by the SRB assay after 96 h of treatment with the polyphenols, alone or combined in equimolar concentrations, or with DMSO used as solvent of the compounds. In all tumor cell lines, both compounds, either alone or in combination, were able to significantly reduce cell survival when used at the high dose (25 µM) (Figure 1). At 25 µM, CUR was more effective than RES at inhibiting cell survival. The effect obtained with 25 µM CUR + RES was significantly higher than the effect of CUR in SCC-15, MCF-7, H-Meso-1 and MM-B1 cells.

As for the effects of the low dose of CUR and RES (Figure 1), when the two compounds were used alone at 5 µM, a modest but significant cell survival inhibition was observed in only 3 out of the 10 cell lines tested, i.e., the head and neck carcinoma cell lines A253 and SCC-15 and the mesothelioma cell line MM-B1. Still, when CUR and RES were combined at the 5 µM bioavailable concentration, their inhibitory effect was significant on all cell lines. Furthermore, on seven cell lines (A253, SCC-15, MCF-7, H-Meso-1, DU 145, PC-3, and HCT 116) the low-dose combination of CUR and RES was more potent than either compound used alone. Even though, on the whole, the percentage reduction in tumor cell survival obtained with CUR + RES at the low dose was modest, the reported findings suggest that long-term supplementation with this combination of polyphenols may have a clinical impact in cancer patients.
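The study reports the low-dose combination as more potent than either agent alone without computing a formal synergy score. For context only, a common way to ask whether such a combined effect exceeds additivity is the Bliss independence model, sketched below in Python; the inhibition fractions are hypothetical and are not taken from the paper's data.

    # Bliss independence: if two agents act independently, the expected
    # combined fractional inhibition is f_a + f_b - f_a * f_b. An observed
    # combined inhibition above this expectation suggests synergy.
    def bliss_expected_inhibition(f_a: float, f_b: float) -> float:
        return f_a + f_b - f_a * f_b

    f_cur, f_res = 0.10, 0.08          # hypothetical single-agent inhibitions
    f_combo_observed = 0.25            # hypothetical combination inhibition
    expected = bliss_expected_inhibition(f_cur, f_res)   # 0.172
    print("synergy" if f_combo_observed > expected else "additive/antagonistic")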
Effects of Low-Dose CUR and RES on Proliferation and Death of PBMCs The effects of low-dose CUR and RES on PBMC proliferation were next evaluated. Resting PBMCs were treated with the polyphenols at 5 µM for 96 h. Flow cytometric measurement of CFSE dye dilution was then used to assess cell proliferation of the total lymphocyte population as well as that of helper T lymphocyte (CD3+CD19−CD14−CD4+), cytotoxic T lymphocyte (CD3+CD19−CD14−CD8+), B lymphocyte (CD3−CD14−CD19+) and NK cell (CD3−CD19−CD14−CD56+) subpopulations (Supplementary Figure S1). CUR, alone (1.2 ± 1.0%) or combined with RES (1.1 ± 0.9%), reduced the percentage of proliferating CD4+ T cells, as compared to DMSO (3.2 ± 2.4%) and RES alone (2.7 ± 1.7%), without significantly affecting CD8+ T, B and NK cells (Figure 2A-E). In PBMCs treated with 5 µM CUR and RES, alone or in combination, a very low level of cell death was detected, which was not significantly different from that of the DMSO-treated controls (Figure 2F). According to these findings, PBMC survival is not affected by the low, bioavailable concentrations of the two polyphenols. Interestingly, the low-dose combination of CUR and RES was able to decrease oxidative stress in PBMCs, whereas the single compounds had no significant effects in this regard (Supplementary Figure S2).

Effects of Low-Dose CUR and RES on Activation and Functional Properties of Resting T Lymphocytes The CD25 receptor, also known as the Interleukin-2 receptor (IL-2R), is not expressed by quiescent mature T lymphocytes, but its expression is rapidly induced upon cell activation [72]. Thus, the percentage of lymphocytes expressing CD25 provides an indication of their activation status. Hence, the modulation of CD25 expression was evaluated in resting lymphocytes from healthy donors after 96 h of treatment with 5 µM CUR and RES, alone or in combination. CUR (4.3 ± 3.1%) and CUR + RES (3.6 ± 2.2%) significantly increased the percentage of CD3+CD19−CD14−CD8+ T lymphocytes expressing the activation marker as compared to DMSO (1.9 ± 0.7%) (Figure 3A). Moreover, the percentage of CD3+CD19−CD14−CD4+ T helper cells positive for CD25 was not significantly modified by either compound used alone, while it was significantly increased after treatment with the CUR + RES combination (9.4 ± 3.1%) as compared to DMSO (6.8 ± 1.5%) (Figure 3B).
Effect of Low-Dose CUR and RES on Frequency and Functional Properties of Regulatory T Cells Regulatory T cells (Tregs) have a crucial role in peripheral immune tolerance and mediate the establishment of an immunosuppressive microenvironment that favors tumor immune escape [73]. The treatment of PBMCs with CUR and RES, alone or in combination at 5 µM for 96 h, did not affect the frequency of Tregs, identified by the combined expression of the markers CD4+CD25highCD127low/neg (Figure 5A). However, both the single and combined treatments increased the expression of the immunosuppressive cytokine IL-10 by Tregs, the highest increase being induced by the combined treatment. In fact, the frequency of IL-10-positive Tregs in PBMCs treated with CUR + RES was approximately 3-fold higher than that observed in PBMCs treated with DMSO only (CUR + RES vs. DMSO: 6.9 ± 4.8% vs. 2.4 ± 0.7%) (Figure 5B).

Effect of Low-Dose CUR and RES on NK Cell-Mediated Recognition of Tumor Target Cells To further assess the immunomodulatory effect of CUR and RES on NK cells, human PBMCs treated for 48 h with 5 µM CUR and RES, alone or in combination, were used in degranulation assays against K562 target cells and stained to specifically assess the functional contribution of the NK cell subset (the gating strategy is shown in Supplementary Figure S3). As evaluated by the percentage of cells positive for the degranulation marker CD107a, NK cells treated with the CUR + RES combination were significantly more activated than control cells, while no significant differences were observed between DMSO-treated NK cells and NK cells treated with either CUR or RES alone (Figure 6).
Then, we evaluated whether the treatment with CUR and RES could affect the expression of molecules involved in the recognition of tumor target cells by NK cells, including activating receptors (NKG2D, DNAM-1, NKp30 and NKp46), inhibitory receptors (NKG2A, KIRs) and exhaustion receptors (PD-1 and TIGIT) [74]. The combined CUR + RES treatment significantly increased the expression of activating receptors such as NKG2D and NKp30 (Figure 7A). Conversely, the expression of inhibitory receptors such as NKG2A, KIR2DL2/L3/S2 and KIR3DL1 as well as the expression of exhaustion receptors such as TIGIT were significantly reduced (Figure 7B,C). Of note, CUR treatment alone induced a significant reduction in NKG2A expression, consistent with previously reported results [31]. These findings indicate that the combined low-dose CUR + RES treatment significantly enhanced NK cell activation through the upmodulation of activating receptors and concomitant reduction of inhibitory and exhaustion receptors.

Effect of Low-Dose CUR and RES on Monocytes/Macrophages To further explore the effect of the polyphenols at low doses on innate immune cells, human PBMCs were treated for 48 h with 5 µM CUR and RES, alone or in combination, and then stained to evaluate the expression of CD68, a glycosylated type I transmembrane glycoprotein associated with the endosomal/lysosomal compartment, in the monocyte/macrophage subset [75]. Both RES and CUR + RES induced a significant increase of CD68 expression levels in CD3−CD56−CD19−CD14+ monocytes/macrophages (Figure 8).
Discussion Polyphenols are a large group of compounds and secondary plant metabolites responsible for the color and flavor of fruits, flowers, and vegetables [1,2]. They also play roles in plant defense against pathogens, possess antioxidant properties and modulate multiple signaling processes. Among these molecules are stilbenes, like RES, and curcuminoids, like CUR. Herein, we explored the antitumor efficacy of a combined, bioavailable low-dose treatment with CUR and RES, evaluating their in vitro effects on tumor cell survival as well as on growth, death, and functional properties of lymphocytes from healthy donors' PBMCs. As compared to the strong reduction in tumor cell survival obtained with high-dose (25 µM) CUR and RES, the two compounds used at a low dose (5 µM) retained a modest efficacy on selected cell lines when used individually but had significantly more consistent effects when used in combination. Remarkably, when the same low-dose treatment conditions were used on PBMCs from healthy donors, CUR, either alone or in combination with RES, reduced the proliferation of CD4+ T lymphocytes, but had no significant effects on CD8+ T lymphocyte, B lymphocyte and NK cell proliferation. Moreover, the percentage of viable vs. necrotic/apoptotic PBMCs was not affected by the single or combined treatment with the compounds at low doses. While it has been previously reported that CUR and RES do not affect the viability of PBMCs when used individually at concentrations up to approximately 20-25 µM [76][77][78], to our knowledge, this is the first study demonstrating the absence of toxic effects on human PBMCs treated with the two compounds combined at a bioavailable dose. Additionally, the antioxidant properties of RES appeared to be potentiated by its combination with CUR.
In summary, the combination of bioavailable concentrations of CUR and RES retained the ability to reduce cancer cell survival, while it had no effects on PBMC viability and negatively affected the proliferation of the CD4+ T lymphocyte subset only. Still, a more complex scenario emerged with regard to the impact of the combined treatment on lymphocytes' functional properties, since the effects of the low-dose combination of CUR and RES in vitro appeared at the same time beneficial and unfavorable if translated into the context of the antitumor immune response. As for the beneficial effects, the combined treatment resulted in an increased frequency of CD4+ and CD8+ T cells expressing the activation marker CD25. In particular, while the percentage of CD8+CD25+ T lymphocytes was increased to a similar extent by CUR + RES and by CUR alone, the frequency of CD4+CD25+ T cells was significantly increased only by the polyphenol combination. Worthy of note, this increase of CD4+CD25+ T lymphocytes was not associated with an increased frequency of CD4+CD25highCD127low/neg immunosuppressive Tregs, whose amount was indeed not affected by the compounds, either alone or in combination. On the other hand, as compared to CUR or RES administered singularly, the combined treatment resulted in a greater increase in the fraction of Tregs expressing the immunosuppressive cytokine IL-10. Moreover, RES counteracted the increased IFN-γ expression induced by CUR in both CD4+ and CD8+ T cells. The increase of IL-10 induced by CUR and RES in vitro could prospectively reflect a potential effect of these polyphenols in vivo, regulating inflammatory processes in autoimmune diseases and tumor-associated inflammation [79,80]. CUR induces IL-10 expression and production in different tissues, thereby modulating several inflammatory pathophysiologic conditions [81,82], while RES, by inducing IL-10 production, exerts a beneficial function on microglia cells in ischemic brain injury [83,84].

The anti-inflammatory properties of RES, in terms of pro-inflammatory cytokine downregulation, have been previously reported. Although obtained in different experimental settings, our results are in agreement with previous reports [49,[85][86][87]; to our knowledge, however, the data reported here represent the first report of increased IFN-γ production by T cells after a 96-h low-dose CUR treatment of resting PBMCs.
Interestingly, in NK cells, the combined treatment had a different outcome on IFN-γ expression, since in this innate lymphocyte subset, CUR did not modulate IFN-γ expression when used alone, as previously reported in NK92 human NK cells [88], but it was able to abolish the decreased expression of the cytokine induced by RES. In fact, on the whole, the favorable effects of the polyphenol combination were more consistently observed in NK cells than in adaptive lymphocytes. Indeed, unlike the single compounds, the CUR + RES treatment was able to improve NK cell-mediated recognition of tumor target cells, with a concomitant upregulation of the activating receptors NKG2D and NKp30 and downregulation of the inhibitory and exhaustion receptors KIR2DL2/L3/S2, KIR3DL1 and TIGIT. Among the investigated receptors, only the NKG2A inhibitory receptor was downregulated to a similar extent by CUR + RES and CUR alone. Of note, in previously published studies, RES has been shown to exert beneficial effects on NK cells, in terms of cytotoxic activity, modulation of activating receptor expression and cytokine release, even when administered alone at low doses [40,48,52,55]. Similarly, the single treatment with low-dose CUR was previously reported to improve NK cell cytotoxic activity [48]. While the discrepancies between these and our findings may be ascribed to different experimental conditions and sensitivity of the assays, our results add to those previously published, indicating that the favorable effects of RES and CUR on NK cells may be potentiated by their low-dose combination. Worth mentioning, unlike the low-dose treatments, high concentrations of CUR or RES have been reported to inhibit NK cell functions [40,52,89], further highlighting the importance of investigating polyphenols' effects in vitro using bioavailable concentrations of the compounds. NK cells are critical components of the innate immune system with a well-established role in tumor surveillance. Indeed, these cells have the ability to eliminate tumor cells and have been shown to exert a protective role against the metastatic spread of cancer cells [90][91][92]. Accordingly, there is a growing interest in immunotherapy strategies aimed at exploiting the anticancer potential of NK cells [74,91,92], including the very promising engineered NK cells [74,93], and, based on the results presented here, the efficacy of such approaches could be potentiated by combined supplementation with CUR and RES. In addition, we observed a significant increase in CD68 expression in the monocyte/macrophage subset following RES and CUR + RES treatments, which may suggest an increased monocyte/macrophage activation mediated by low-dose polyphenols [94].

Conclusions Overall, herein, we demonstrate that the combined use, at low doses, of CUR and RES can simultaneously reduce tumor cell growth [95,96] and shape immune responses (Figure 9).
For a prospective clinical use, the absence of toxicity of low-dose CUR and RES and the ease of application based on oral administration [97] make these polyphenols suitable tools that may be added to standard antitumor treatments including chemotherapy and radiotherapy [98], to biological agents such as immune-checkpoint inhibitors [99] and to adoptive immune therapies with T/CAR-T and NK/CAR-NK cells [100]. In this context, our findings may support the use of these polyphenols in combination with NK cell-based immunotherapies.

Peripheral blood mononuclear cells (PBMCs) were obtained from buffy coats collected from anonymous healthy blood bank donors, in accordance with the Institutional Review Board of Bambino Gesù Children's Hospital, IRCCS, Rome, Italy. PBMCs were isolated through density gradient centrifugation with Ficoll-Paque Plus (Lympholyte, Cedarlane, Burlington, NC, USA) and cryopreserved in liquid nitrogen until further analysis. All the antibodies were used according to the manufacturers' protocols. Prior to surface staining, PBMCs and NK cells were pre-stained with Fixable Viability Dye eFluor™ 780 (Thermo Fisher Scientific). Before IFN-γ or IL-10 intracellular staining, cells were supplemented overnight with 1 µg/mL Brefeldin A (BFA, Merck-Italy-Sigma Aldrich, Milano, Italy) in order to enhance intracellular cytokine retention. Flow cytometry was performed using a FACSCanto (BD Biosciences) or a Cytoflex (Beckman Coulter) and analyzed with FlowJo software, version 10.0.8r1 (Treestar, Ashland, OR, USA), or CytExpert version 2.5 software. Before the assays, both CUR and RES were incubated individually or in combination with PBMCs for 96 h at 5 µM. DMSO, used as a solvent for both polyphenols, served as a control.

Sulforhodamine B Assay Tumor cell survival was evaluated by the sulforhodamine B (SRB) assay, as previously described [34]. Briefly, tumor cells were plated in flat-bottomed 96-well plates at 2500 cells/well in 200 µL of medium. After 24 h, cells were incubated with 5 or 25 µM RES (cat. no. R5010, purity ≥ 99%, Merck-Italy-Sigma Aldrich) and CUR (from Curcuma longa, cat. no. C1386, purity ≥ 65%, Merck-Italy-Sigma Aldrich) for 96 h. Cells were then fixed by adding 50 µL/well of 50% trichloroacetic acid (TCA, Merck-Italy-Sigma Aldrich) and incubated for 1 h at 4 °C. After 4 washes with distilled water, cells were dried and stained for 30 min with 100 µL of a 0.4% (w/v) SRB (Merck-Italy-Sigma Aldrich) solution in 1% acetic acid. The plate was washed 4 times with 1% acetic acid and left to dry. The dye was finally solubilized by adding 100 µL/well of 10 mM Tris pH 10. Cell density was then determined by spectrophotometric reading of the absorbance (O.D. values) at 492 nm with a reference filter at 620 nm. The percentage survival of the cultures treated with RES and/or CUR was calculated by normalization of their O.D. values to those of the control cultures treated with DMSO [34].
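The final normalization step described above can be written out explicitly. The following Python sketch uses invented O.D. triplicates purely to illustrate the calculation; it is not code from the study.

    # Percentage survival in the SRB assay: mean O.D. of treated wells
    # expressed relative to the mean O.D. of the DMSO control wells
    # (readings at 492 nm with the 620 nm reference already subtracted).
    def percent_survival(od_treated, od_control):
        mean = lambda xs: sum(xs) / len(xs)
        return 100.0 * mean(od_treated) / mean(od_control)

    od_dmso    = [1.20, 1.18, 1.25]    # hypothetical control triplicate
    od_cur_res = [0.85, 0.80, 0.83]    # hypothetical CUR + RES triplicate
    print(f"survival: {percent_survival(od_cur_res, od_dmso):.1f} %")  # ~68 %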
PBMC Proliferation and Cell Death Assays PBMC proliferation was evaluated through flow cytometric measurement of carboxyfluorescein succinimidyl ester (CFSE) dye dilution. PBMCs were thawed, counted, and stained with 0.5 µg/mL CFSE (CellTrace Cell Proliferation Kit, Invitrogen-Thermo Fisher Scientific) for 15 min at 37 °C. At the end of the incubation, the cells were washed in complete RPMI medium, plated in 96-well U-bottom plates at a concentration of 300,000 cells/well and treated with 5 µM CUR and/or RES for 96 h. DMSO was used as a control. The percentage of necrosis and apoptosis of PBMCs treated with CUR and/or RES for 96 h was evaluated by using a PE-conjugated Annexin V/7-AAD apoptosis detection kit (Biolegend) and flow cytometric analysis.

Reactive Oxygen Species Detection Assay The production of reactive oxygen species (ROS) was evaluated in PBMCs plated in 96-well U-bottom plates at a concentration of 300,000 cells/well. Cells were pre-treated with 5 µM CUR and/or RES, or DMSO, for 96 h and then incubated with phorbol 12-myristate 13-acetate (PMA, 50 ng/mL, Merck-Italy-Sigma Aldrich) for 90 min. During the last 30 min of incubation with PMA, the fluorogenic probe dichlorodihydrofluorescein diacetate (DCFDA, 20 µM, Merck-Italy-Sigma Aldrich) was added to the cultures and then green emission was detected by flow cytometry.

NK Cell Degranulation Assay The degranulation assay was performed by co-culturing PBMCs, untreated or pre-treated with 5 µM CUR and/or RES for 48 h, with K562 target cells at a 1:1 ratio for 3 h in complete medium in the presence of anti-CD107a at a 1:100 dilution. During the last 2 h of co-culture, GolgiStop (BD Biosciences), used at a 1:500 dilution, was added. Cells were then washed, centrifuged, and stained with anti-CD56, anti-CD16, anti-CD3, anti-CD14 and anti-CD19 to evaluate CD107a expression in the CD56+CD16+CD3−CD14−CD19− subset by flow cytometry.

Statistical Analysis The data distribution of the cell growth and apoptosis assays was preliminarily verified using the Kolmogorov-Smirnov test, and the datasets were analyzed by one-way analysis of variance (ANOVA) followed by the Newman-Keuls test. For all other data, statistical significance was evaluated with the unpaired or paired two-tailed Student's t-test. Normalized values were analyzed for correlation by regression analysis using GraphPad Prism version 5.0 software. Values with p ≤ 0.05 were considered statistically significant.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms25010232/s1. Institutional Review Board Statement: Ethical review and approval were not required for this study on human participants in accordance with the local legislation and institutional requirements. Informed Consent Statement: Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. Data Availability Statement: Data are contained within the article or Supplementary Material.
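As a minimal sketch of the statistical workflow just described, the Python snippet below runs a one-way ANOVA and an unpaired two-tailed t-test with SciPy on invented survival percentages; the Newman-Keuls post hoc step is omitted because it is not implemented in SciPy.

    from scipy import stats

    # Hypothetical percentage-survival values for four treatment groups
    dmso  = [100, 98, 102]
    cur   = [92, 90, 94]
    res   = [95, 93, 97]
    combo = [80, 78, 82]

    # One-way ANOVA across the four groups (as used for the cell growth
    # and apoptosis assays)
    f_stat, p_anova = stats.f_oneway(dmso, cur, res, combo)

    # Two-tailed unpaired Student's t-test (as used for the flow cytometry data)
    t_stat, p_t = stats.ttest_ind(dmso, combo)

    print(f"ANOVA p = {p_anova:.4f}; t-test (DMSO vs CUR+RES) p = {p_t:.4f}")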
Figure 1. Effect of low-dose CUR and RES on tumor cell survival. Cell survival was evaluated by the SRB assay on a panel of tumor cell lines including the head and neck carcinoma (SCC-15, A253), breast cancer (MCF-7, MDA-MB-468), malignant mesothelioma (MM-B1, MM-F1, H-Meso-1), prostate cancer (DU 145, PC-3) and colon cancer (HCT 116) cell lines, after 96 h of treatment with DMSO or CUR and/or RES at 5 and 25 µM. The percentage survival of polyphenol-treated cells was calculated relative to that of DMSO-treated control cells. Results are expressed as the mean ± SD of three independent experiments performed in triplicate. Statistical significance was calculated with one-way ANOVA (* p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001). The effect of CUR, RES, and CUR + RES at 25 µM vs. DMSO was always significant.

Figure 2. Effect of low-dose CUR and RES on PBMC proliferation and cell death. (A-E) Cell proliferation of resting PBMCs was assessed by flow cytometry using the dilution of CFSE dye after 96 h of treatment with DMSO, CUR and/or RES (5 µM). The results are presented as the mean ± SD of the frequency of cell subsets in PBMCs from five or eight healthy donors. (A) Total lymphocytes identified based on morphological characteristics on FSC/SSC; (B) CD3+CD19−CD14−CD4+ helper T lymphocytes, (C) CD3+CD19−CD14−CD8+ cytotoxic T lymphocytes, (D) CD3−CD14−CD19+ B cells and (E) CD3−CD19−CD14−CD56+ NK cells, identified by positive staining for the respective markers. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated with the two-tailed unpaired Student's t test (* p ≤ 0.05). (F) Percentages of viable, necrotic, early and late apoptotic cells after 96 h of treatment with DMSO, CUR and/or RES (5 µM) as assessed by the Annexin V/7-AAD assay and flow cytometry. Results are expressed as the mean ± SD of the independent analysis of PBMCs from eight healthy donors. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated with one-way ANOVA.

Figure 3. Effect of CUR and RES on CD25 activation marker expression in T lymphocytes. CD25 expression in resting (A) CD3+CD19−CD14−CD8+ and (B) CD3+CD19−CD14−CD4+ T lymphocytes was assessed by flow cytometry after 96 h of PBMC treatment with DMSO, CUR, and/or RES (5 µM). The results are presented as the mean ± SD of the frequency of cell subsets in PBMCs from eight healthy donors. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated with the two-tailed unpaired Student's t test (* p ≤ 0.05).

Figure 4. Effect of CUR and RES on IFN-γ expression in T lymphocytes and NK cells. IFN-γ expression was assessed by flow cytometry after 96 h of PBMC treatment with DMSO, CUR and/or RES (5 µM) on (A) resting CD3+CD19−CD14−CD4+ T lymphocytes, (B) resting CD3+CD19−CD14−CD8+ T lymphocytes, and (C) NK cells. The results are presented as the mean ± SD of the frequency of IFN-γ+ cell subsets in PBMCs obtained from six to eight healthy donors. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated with the two-tailed unpaired Student's t test (* p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001).

Figure 5. Effect of CUR and RES on Treg frequency and IL-10 production. PBMCs were treated for 96 h with DMSO, CUR, and/or RES (5 µM) and analyzed by flow cytometry to assess (A) the frequency of CD4+CD25highCD127low/neg cells; (B) the frequency of IL-10+ Tregs. The results are presented as the mean ± SD of the frequency of cells in PBMCs from eight healthy donors. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated with the two-tailed unpaired Student's t test (* p ≤ 0.05; *** p ≤ 0.001).

Figure 6. Enhanced degranulation of NK cells upon CUR and RES treatment. (A) PBMCs, pre-treated with 5 µM CUR and/or RES for 48 h, were evaluated in an NK cell-mediated degranulation assay against K562 cells or medium alone as control. The percentage of CD107a+ cells in the NK cell subset is indicated in each plot. A representative experiment out of six performed with PBMCs isolated from six healthy donors is shown. (B) Summary of degranulation studies of NK cells from PBMCs isolated from six healthy donors. Dots correspond to the percentage of CD107a+ NK cells in PBMCs from each donor, and the mean ± SD is also reported. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated vs. those obtained with DMSO-treated cells by the two-tailed unpaired Student's t test (** p < 0.01).

Figure 7. Surface expression of NK cell receptors upon CUR and RES treatment. PBMCs, pre-treated with CUR and/or RES at 5 µM for 48 h, were stained for (A) NK cell activating receptors such as NKG2D, DNAM-1, NKp46 and NKp30, (B) NK cell inhibitory receptors such as NKG2A and KIRs, and (C) NK cell exhaustion receptors such as PD-1 and TIGIT. Dots correspond to the mean fluorescence intensity (MFI) of the indicated receptors expressed on NK cells in PBMCs isolated from six healthy donors. Data are expressed as the mean ± SD. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated vs. those obtained in DMSO-treated cells with the two-tailed unpaired Student's t test (* p < 0.05, ** p < 0.01).

Figure 8. Effect of CUR and RES on expression of CD68 in monocyte/macrophage cells. PBMCs, pre-treated for 48 h with DMSO, CUR, and/or RES (5 µM), were analyzed by flow cytometry to assess the expression of CD68 on monocytes/macrophages (CD3−CD19−CD56−CD14+). Dots correspond to the MFI of CD68 expressed on the monocyte/macrophage subset in PBMCs isolated from seven healthy donors. Data are presented as the mean ± SD. Statistical significance of the effects obtained with CUR and RES, alone or in combination, was calculated vs. those obtained in DMSO-treated cells with the two-tailed unpaired Student's t test (* p < 0.05).

Figure 9. Proposed model illustrating the effects of CUR and RES combined treatment on tumor cells and PBMCs. The combination of bioavailable concentrations of CUR and RES reduced cancer cell survival without affecting PBMC viability, while increasing T cell activation (CD25+) and NK cell-mediated recognition of tumor cells through the concomitant upregulation of the activating receptors (NKG2D and NKp30) and downregulation of the inhibitory and exhaustion receptors (KIR2DL2/L3/S2, KIR3DL1 and TIGIT). This figure was created using BioRender.com (accessed on 11 December 2023).
Standard headache and neuralgia treatments and SARS-CoV-2: opinion of the Spanish Society of Neurology’s Headache Study Group Introduction In recent months, doubts have arisen among patients, general practitioners, and neurologists as to whether some drugs commonly used in patients with headaches and neuralgia may favour or complicate the disease caused by SARS-CoV-2. Material and methods We collected information on the opinions of scientific societies and medicines agencies (American, European, and Spanish) to clarify doubts regarding the use of drugs such as lisinopril, candesartan, ibuprofen, corticosteroids, carbamazepine, and monoclonal antibodies targeting the calcitonin gene–related peptide in the context of the COVID-19 pandemic. Results We make recommendations about the use of standard headache treatments in the context of the COVID-19 pandemic, based on the current scientific evidence. Conclusions At present, there is no robust scientific argument to formally contraindicate any of the standard treatments employed for headaches and neuralgias. Introduction In the light of numerous questions raised in recent months by patients, family doctors, and neurologists about the possibility that certain drugs habitually used to treat headache and neuralgia may promote or complicate SARS-CoV-2 infection, the Spanish Society of Neurology's Headache Study Group wishes to make the following statement: This hypothesis is based on a limited number of studies that do not address this subject as the main objective, and use in vitro or experimental models of poor quality, which do not contribute scientific evidence. Nonetheless, their conclusions have been exaggerated by certain media outlets, causing concern among the general population and healthcare professionals. No scientific or neurological organisation has yet taken a position on the treatment of existing headache in patients with COVID-19; so far, the literature only includes opinion articles. We also consider it important to address this issue in the light of recent evidence of new-onset headache in patients with COVID-19, and the implications for their treatment. 1-3 Therefore, we issue the following detailed recommendations on the drugs concerned. Lisinopril and candesartan These drugs are indicated for the preventive treatment of migraine, with level of evidence 2 and grade of recommendation B. 4 Lisinopril is an angiotensin-converting enzyme (ACE) inhibitor and candesartan is an angiotensin II receptor blocker (ARB). These are the only renin-angiotensin system inhibitors used in the treatment of migraine. ACE inhibition prevents the conversion of angiotensin I into angiotensin II, which has potent vasoconstrictive effects and regulates aldosterone release by the adrenal glands. ACE is produced in various tissues, including in the central nervous system, kidneys, and lungs. ACE2 is an enzyme homologous to ACE, and is responsible for the degradation of angiotensin II to angiotensin-(1-7); therefore, its physiological effect is opposite to that of ACE. ACE2 is a membrane enzyme that regulates the entry of the SARS-CoV-2 virus into cells. In humans, ACE inhibitors inhibit ACE but not ACE2, although animal studies have found that both ACE inhibitors and ARBs upregulate ACE2 expression when administered in high doses.
[5][6][7][8] International, European, and American organisations 9-12 consider ACE inhibitors and ARBs not to be contraindicated in hypertensive patients with suspected or confirmed COVID-19: these drugs may even be beneficial, if upregulation of ACE2 leads to decreased angiotensin II levels in the lungs. Furthermore, withdrawing these drugs may exacerbate the underlying cardiovascular or kidney disease in some patients, leading to greater mortality rates. Therefore, their suspension is also not indicated in patients with migraine. Nonetheless, we recommend not beginning preventive treatment of migraine with these drugs in the context of the COVID-19 pandemic. Other treatment options are available when first-line treatments have been exhausted; these include venlafaxine (level of evidence 2, grade of recommendation B), zonisamide (level of evidence 4), and lamotrigine (level of evidence 4). 4 We also recommend evaluating other treatment options in patients already taking lisinopril or candesartan for migraine and presenting risk factors for severe COVID-19 (Table 1). Ibuprofen Ibuprofen is indicated for acute episodes of migraine of moderate intensity, with level of evidence 1 and grade of recommendation A. 4

Table 1 Risk factors for severe COVID-19 proposed by the United States Centers for Disease Control and Prevention.
• Individuals aged 65 years and older
• Individuals living in nursing homes for the elderly or long-term care institutions
• Individuals of all ages with any of the following conditions, particularly if they are poorly controlled:
-Chronic lung disease or moderate-to-severe asthma
-Severe heart conditions
-Immunocompromised state: cancer treatment, organ transplantation, immune deficiencies (HIV/AIDS), long-term use of corticosteroids or other systemic immunosuppressants
-Severe obesity (body mass index ≥ 40)
-Diabetes mellitus
-Haemoglobin disorders
-Chronic kidney disease under treatment with dialysis
-Liver disease

Without a doubt, this is the drug that has been subject to the greatest controversy and raised the most concern, as it is one of the most widely used drugs worldwide for the symptomatic treatment of migraine attacks. The concerns about ibuprofen originated in an article published in The Lancet on 11 March 2020, warning that ibuprofen may increase ACE2 expression, leading to clinical worsening of COVID-19. 13 Eight days later, the World Health Organization announced this finding, but issued a correction shortly thereafter, stating that there was no scientific evidence supporting this claim. The United States Food and Drug Administration, European Medicines Agency, and Spanish Agency of Medicines and Medical Devices have issued statements to the same effect, and recommend that research be done on the subject. [14][15][16] The possible association between exacerbation of SARS-CoV-2 infection and treatment with ibuprofen and ketoprofen is being evaluated by the European Medicines Agency's Pharmacovigilance Risk Assessment Committee. In rats, ibuprofen increases ACE2 levels in the heart only; this effect has not been observed in other organs. 8,17 It should be noted that the dose used in the experimental animals is equivalent to 3 g in humans, whereas the maximum recommended dose for migraine treatment is 1800 mg/day. 4 Therefore, we recommend avoiding new indication of ibuprofen only in patients with migraine and presenting risk factors for severe COVID-19 (Table 1).
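The rat-to-human dose equivalence cited above is conventionally obtained by body-surface-area scaling. The sketch below applies the standard FDA Km-factor formula; the 250 mg/kg rat dose is a hypothetical figure chosen for illustration and is not taken from the cited studies.

    # Human equivalent dose (HED) by body-surface-area scaling:
    # HED (mg/kg) = animal dose (mg/kg) x (animal Km / human Km),
    # with Km = 6 for the rat and Km = 37 for an adult human.
    def human_equivalent_dose(rat_dose_mg_per_kg: float) -> float:
        return rat_dose_mg_per_kg * 6 / 37

    rat_dose = 250.0                          # mg/kg, hypothetical
    hed = human_equivalent_dose(rat_dose)     # ~40.5 mg/kg
    print(f"total dose for a 70 kg adult: {hed * 70 / 1000:.1f} g")  # ~2.8 g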
In patients already using the drug, we do not recommend changing the treatment, although physicians should advise patients to avoid excess consumption of ibuprofen (more than 15 pills/month). It should also be noted that alternative drugs are available for treating moderate migraine attacks; these include naproxen, indometacin, and diclofenac. Finally, triptans can also be used: these drugs are indicated for moderate-severe migraine attacks. Corticosteroids Corticosteroids are indicated as a transitional preventive treatment for cluster headache and for the treatment of status migrainosus (level of evidence 4 and grade of recommendation C for both indications). 4 Indication of these drugs to treat COVID-19 is controversial. The United States Centers for Disease Control and Prevention contraindicate corticosteroids based on previous experience with the 2003 SARS-CoV epidemic, when several articles reported that early use of corticosteroids was associated with higher plasma viral load and delayed viral clearance. 18 However, they may be beneficial in the early stage of SARS-CoV-2 infection, reducing the duration of mechanical ventilation and overall mortality rates in patients with established moderate-to-severe adult respiratory distress syndrome 19 ; no randomised studies have been performed, however. 20 As corticosteroids are known to cause immunosuppression, we recommend avoiding the habitual indication of 1 mg/kg orally in patients with cluster headache during the pandemic and extending the use of occipital nerve block with triamcinolone, methylprednisolone, or betamethasone; these drugs have the same level of evidence and practically no systemic effects, since the corticosteroid is injected locally. 4 Clinical guidelines also mention other treatment options for delayed preventive treatment, including verapamil (level of evidence 1, grade of recommendation A), lithium carbonate (level of evidence 2, grade of recommendation B), topiramate (level of evidence 2, grade of recommendation B), valproic acid (level of evidence 4, grade of recommendation C), gabapentin (third-line treatment), and botulinum toxin A. 4 Parenteral or oral corticosteroids are also used to treat status migrainosus in emergency departments 4 ; however, as they are prescribed for very short schedules, we do not recommend avoiding them, except in cases of strong suspicion or diagnosis of severe COVID-19 (Table 1). Carbamazepine Carbamazepine is indicated for the treatment of trigeminal neuralgia, with level of evidence 1 and grade of recommendation A. 4 No study has addressed the relationship between the drug and COVID-19; however, given that its potential adverse effects include leukopaenia, we recommend considering other options before indicating it as a new treatment, especially in patients with risk factors for severe COVID-19 (Table 1). COVID-19 typically presents with lymphocytopaenia, which may be exacerbated by carbamazepine-induced leukopaenia. Patients not presenting risk factors for severe COVID-19 (Table 1) and who have already been receiving the drug for over 3 months should be monitored with laboratory blood analyses. This recommendation should also be extended on an individual basis to patients with trigeminal neuralgia taking sodium channel blockers: eslicarbazepine acetate, oxcarbazepine (level of evidence 4, grade of recommendation C), and lamotrigine (level of evidence 2, grade of recommendation B). 
Other treatment options are available, including pregabalin, gabapentin, baclofen (level of evidence 4, grade of recommendation C), and botulinum toxin A. 4

Anti-CGRP monoclonal antibodies

Published trials and clinical experience have shown that treatment of migraine with anti-CGRP monoclonal antibodies does not increase the risk of infection or immunosuppression, as these agents do not appear to compromise the immune system. Therefore, we do not recommend suspending this treatment during the pandemic. The literature includes studies of MERS-CoV and SARS-CoV infection in patients receiving immunomodulatory or immunosuppressive treatment. Disease progression and mortality rates were not worse in patients receiving immunosuppressants or chemotherapy or undergoing organ transplantation. 21 In the current pandemic, case series have been published of patients undergoing organ transplantation 22,23 or receiving immunomodulatory treatment for multiple sclerosis 24 ; poorer prognosis is not reported in these patients. Furthermore, it has been suggested that these treatments may have a protective effect: through their influence on lymphocytes, they may block the cytokine storm occurring in the most severe cases of infection. A clinical trial is also underway into the treatment of COVID-19 with fingolimod, which induces lymphocytopaenia through lymphocyte sequestration. 25 However, insufficient information is available to support or reject this hypothesis.

Interactions between preventive drugs for headache and COVID-19 drugs

It is essential to take precautions against potential interactions between preventive treatments for migraine, such as amitriptyline, beta blockers, verapamil, mirtazapine, and valproic acid, and drugs used to treat COVID-19, such as lopinavir/ritonavir, hydroxychloroquine, chloroquine, and azithromycin. 26 The majority of these drug-drug interactions are classed as risk category C; therefore, the joint use of these drugs is not contraindicated, but close monitoring is needed to prevent adverse reactions or loss of efficacy.
Challenges for the care delivery for critically ill COVID-19 patients in developing countries: the Brazilian perspective

Background

The delivery of critical care is a major challenge for developing countries [1]. The inequity of access to an ICU bed, heterogeneous triage policies, a low staff/patient ratio and suboptimal adherence to evidence-based practices contribute to disproportionally high mortality of sepsis and acute respiratory distress syndrome in these countries [2][3][4][5]. In addition, the limited availability of step-down and specialized ward beds further widens the gap between critical and non-critical care inside hospitals. As the COVID-19 pandemic spreads through the world, developing countries are challenged with the surge of pneumonia cases, where up to 30% of all hospitalized cases will require ICU admission [6]. In August 2020, Brazil is a hotspot of COVID-19, with more than 100,000 deaths. Other Latin American countries such as Mexico, Peru, Colombia, and Chile are also among the 10 countries with the most cases worldwide. Several factors seem to have contributed to the dramatic progress of the epidemic in the country. Initial measures of social distancing were adopted at the beginning of the epidemic in several states. However, the lack of central coordination and, at a certain point, the denial of the pandemic by a populist government meant that more effective measures such as lockdown were not adopted, whereas use of unproven therapies such as hydroxychloroquine was encouraged. Also, the low availability of tests and progression towards the interior and peripheries of large cities made the epidemic hard to control, overwhelming hospitals and ICUs.

What are the challenges?

Despite its high absolute number of ICU beds [7], even in comparison with western European countries, the heterogeneous regional distribution and payor-based access are major barriers to a more equitable delivery of critical care.
Although increases in the number of ICU beds were recently made across the country in preparation for the pandemic, they are still insufficient to compensate for regional differences (the North region has 50% fewer ICU beds per capita than the Southeast) or the imbalances between the public and private sector. The number of public ICU beds per capita is 72% lower than in the private sector, and only 22% of the population has access to private healthcare. Additionally, ICU staffing is an important shortcoming in the COVID-19 pandemic in developing countries. First, ICU staffing can be considered low compared to developed countries: the current national norm defines a minimum of 1 nurse for each 10 ICU beds, and nursing technicians are the central workforce in several ICUs. Compounding this, turnover is significant because a large number of healthcare professionals are sick due to COVID-19, and the new ICU beds implemented for the pandemic further increase the need for healthcare personnel with ICU training. Another aspect to take into consideration is the increasing complexity due to changes in the case-mix of ICUs, where a large number of patients with multi-organ failure surge simultaneously. Data from the Brazilian ICU registry (www.utisbrasileiras.com/en, which covers one third of all ICU beds in the country) show an average mechanical ventilation rate of around 19% in 2019, increasing to 41% with COVID-19 [8]. This sudden shift in case-mix and increase in severity of illness of ICU patients could partially explain the high mortality rates for mechanically ventilated patients with COVID-19. The recently published results of the registry, in 13,941 COVID-19 patients requiring ICU admission, show a mortality rate of 32% for all patients and 67% for those requiring MV. This high mortality represents an excess even when compared with more recent data on sepsis epidemiology, where rates were approximately 55% [4]. Recently, clinical characteristics and outcomes of COVID-19 patients from national registries of critical care in LMICs (Brazil, Argentina, Sri Lanka, India) and HICs (Australia, New Zealand, Netherlands) were made publicly available by an international benchmarking initiative (icubenchmarking.com). Overall ICU mortality rates are comparable (ranging from 26% to 33%, except for Australia/New Zealand, with rates of 7.8%). However, when the mortality of ventilated patients is analyzed, it tends to be higher in LMICs. In Brazil, lessons learned from the pre-COVID-19 period may be helpful and represent actionable information that hopefully can be translated into improved outcomes. Several studies performed in Brazilian ICUs demonstrate that there are opportunities to improve the quality of care. While the low baseline adherence to protective ventilation in ARDS [9,10], sepsis protocols (less than 60% of patients) [5], or light sedation (less than 40% of patients) [9] may be seen as bad news, these also represent potentially modifiable factors (Fig. 1). Recent studies demonstrate that the use of quality improvement (QI) interventions is associated with improved outcomes in LMICs. Brazilian data show that interventions such as early sepsis triage and treatment [5], the use of protocols for the prevention of ICU-acquired complications, and organizational changes (i.e., promoting autonomy for ICU nurses or adding a pharmacist to the staff) [11,12] are associated with lower hospital mortality and shorter lengths of stay.
In addition, the use of a structured checklist during multidisciplinary rounds and the presence of an intensivist may improve adherence to sedation and protective ventilation protocols, both key factors to decrease the duration of ICU stay and to improve survival [13,14].

Conclusion

In conclusion, there are no easy solutions, and developing countries such as Brazil need to fix the system from "access to bedside care" in order to improve outcomes during and after the pandemic. In this regard, better triage procedures and improved ward care could help to get the right patients to the ICU on time. Also, providing better ward care could minimize long-term complications due to physical and cognitive impairment in this population. Using proven quality improvement interventions should be the number one priority in the ICU, as they represent a cost-effective strategy that usually does not require fancy technology or expensive drugs.
Canopy and knowledge gaps when invasive alien insects remove foundation species

The armored scale Aulacaspis yasumatsui invaded the northern range of the cycad Cycas micronesica in 2003, and epidemic tree mortality ensued due to a lack of natural enemies of the insect. We quantified cycad demographic responses to the invasion, but the ecological responses to the selective removal of this foundation species have not been addressed. We use this case to highlight information gaps in our understanding of how alien invasive phytophagous insects force cascading adverse ecosystem changes. The mechanistic role of unique canopy gaps, oceanic island examples and threatened foundation species with distinctive traits are three issues that deserve research efforts in a quest to understand this facet of ecosystem change occurring across multiple settings globally.

Introduction

Alien invasions are one of the critical means by which anthropogenic activities are altering biodiversity, and the direct and indirect effects of biological invasions on ecological processes are manifold. 1 Despite a growing interest in this subject, the extent of ecosystem change in response to invasions may be much greater than is often assumed. 2 A recent literature survey of invasive alien insects and consequential ecological changes revealed that almost two thirds of published reports were confined to North American case studies, and the vast majority of these were restricted to responses of native biodiversity. 3 The authors pointed out that effects of invasive insects on ecosystem processes were rarely explored.

Cycas micronesica is an island endemic cycad species 4 that was the most abundant tree species on Guam 5 prior to the invasion of Aulacaspis yasumatsui. 6 Cycas micronesica was assigned endangered status by the IUCN 7 only three years after the invasion due to the epidemic mortality of the plant population. 8 It is arguably already functionally extinct as a consequence of failure to recruit. 9 Mechanisms that mediate the impacts of exotic plant invasions are weakly understood, 10 and the knowledge gap is possibly more daunting for the impacts of exotic insect invasions on plant communities. 3 Here, we use the A. yasumatsui invasion into island habitats and the resulting widespread loss of C. micronesica trees to discuss three issues deserving focused attention for future research on the ecosystem responses to invasive alien insects.

Canopy Gaps

While light is a source of energy, it is also the language that plants use to decide how to grow. We propose that the role of increased light penetration following the selective removal of a common tree species by an alien phytophagous insect has not been adequately studied as a mediator of ecosystem change. While gap formation has been described as a consequence of European gypsy moth [11][12][13] and hemlock woolly adelgid 14 invasions, subsequent ecological changes such as shifts in forest species composition or alterations in biogeochemistry were not anatomized. Ecological processes following the formation of tree-fall gaps have been extensively studied. 16 However, gap formation through selective mortality of a single common forest tree species by an invasive phytophagous insect exhibits unique traits, and therefore general attributes of gap formation and subsequent closure may not translate fully to this form of disturbance. Our model species is a mid-story tree, and after widespread tree mortality the gap dynamics defined by the overstory tree species remain intact (see Fig. 1A and B). In contrast, gap formation by non-catastrophic defoliation during wind events is generally less selective, with most trees experiencing defoliation regardless of species. Moreover, small to moderate wind events deposit woody debris with the leaf litter layer, and leaf re-growth in the canopy is rapid, enabling rapid re-establishment of canopy shade. These phenomena do not occur during selective gap formation by invasive phytophagous insects.

Tree-fall gaps generated by catastrophic wind events or other causes lead to extensive soil disturbance as a component of gap formation. The pits and mounds associated with uprooting can mediate the changes in community structure following a tree-fall gap. 17 Alternatively, invasive herbivorous insect damage to a single forest taxon creates gaps with standing dead trees and no immediate disturbance of the soil surface. 13 These changes in light environment without concomitant disturbance of the soil surface pose interesting scenarios for comparing the various means by which gap formation occurs and how light regimes are influenced by those gaps.

Light quality. Contemporary research is beginning to show that UV radiation delivers highly specific information that orchestrates plant metabolism and morphology 18,19 and directly modulates the relations of plants and herbivores. 20 UV light is substantially reduced by plant canopy cover, so understory plants that persist through the formation of herbivory-induced canopy gaps will experience drastic shifts in UV radiation. Similarly, recent evidence reveals that green light is a discrete signal that affects plant biology, including counteracting the signaling from blue or red wavebands. 21,22 Green and far-red light pass through the plant canopy with greater efficiency than red and blue light; therefore, gap formation during mortality of a dominant canopy tree species will exert acute shifts in green, blue, red and far-red signals to the understory community. These shifts in light quality mediate many of the subsequent community changes following gap formation, and we contend that gap formation by phytophagous insect removal of a foundation canopy species may exhibit light quality alterations that differ drastically from those of other forms of gap formation.

Sunflecks. Effective use of sunflecks for carbon gain is critical for growth and survival of understory plants. Pearcy and Way 23 propose more research on sunfleck use by plants in the context of factors influencing global climate change, and specifically discussed heat, drought and elevated CO2. 24 We assert that alien insect invasions are another critical global change factor, and the sunfleck regimes created by selective removal of a single foundation tree species by an invasive insect are distinct from sunfleck regimes in undisturbed forests or those created by other forms of canopy disturbance.

Insolation. Selective loss of a foundation species alters the dynamics of forest ecosystems, including biogeochemical features. 15,25 Learning from descriptive papers reporting on biogeochemical changes following disturbances is maximized by identifying the functional or mechanistic basis for these changes. Furthermore, we have noticed that the post-invasion pulse of leaf litter decomposes rapidly, and subsequent to this initial litter pulse the soils in areas with a high density of dead C. micronesica trees exhibit an impoverished litter layer for several years (Fig. 1D). We suggest that increased insolation is one mediator of biogeochemical changes following alien phytophagous insect invasion that deserves further study. Exposure of soil surfaces to solar radiation increases the rate of litter decomposition. 26,27 Mechanisms may be direct effects on litter such as photodegradation of lignin 28 or rendering litter nitrogen more available to the detritus microorganism consortium. 29 Alternatively, landscape-scale soil warming as a result of greater insolation may mediate litter decomposition dynamics. 30 Seeking a greater understanding of the direct roles of changes in light quality and sunfleck dynamics, along with increased insolation following the loss of a foundation species due to an invasion of an alien phytophagous arthropod, should be of importance to modelers and ecologists.

Island Biology

Endemic flora on islands are acutely vulnerable to alien invasions, 31-34 as exemplified by our report. 8 Despite this defining characteristic of island biology, to our knowledge there are no published reports on ecosystem-level responses to alien phytophagous insect invasions on any island worldwide. For example, Kenis et al. 3 reviewed 403 primary research publications on invasions of alien insects of all types and the subsequent ecological responses. This list included 102 publications that addressed phytophagous insect invasions. Of these, only six investigations were conducted on islands, and none of these included ecosystem-level data. The authors noted that the few accessible primary research reports on ecosystem processes following phytophagous insect invasions were restricted to only three insect genera invading North America. The remainder of the globe has not avoided ecological changes following invasions of phytophagous insects. Clearly, these are global impacts that deserve vetting in case studies outside of North America.

Species migrations occur continuously, and the movement of a species into new territory does not necessarily lead to an invasion. A complex set of factors collide to determine whether an exotic species will become invasive following purposeful or inadvertent entry into a new geographic region. 35,36 A troublesome facet of some invasions is the subsequent regional spread from the initial successful population. 37 We predicted that A. yasumatsui would invade Guam three years prior to the 2003 invasion, a prediction that was founded on the invasion of Hawaii by the armored scale. Similarly, the nearby island of Rota was invaded in 2007 and the island nation of Palau was invaded in 2008. These islands fall within the endemic range of C. micronesica, 4 and are connected to Guam by daily airplane flights.

Manipulative experiments offer powerful approaches for clarifying ecological mechanisms, but they are limited for assessing cascading and compounding effects because manipulations are possible only at scales smaller than ecological processes. In this light, Sagarin and Pauchard 38 argued that ecologists should be poised to take advantage of 'natural' experiments that may arise from localized declines of certain species or groups of species.
Because islands are insular and offer disjunct biological populations for comparison, island studies offer powerful interpretive results for understanding how introduced predators exterminate native prey species. 33,39,40 We suggest that our case study serves as an example of how valuable islands may be for filling information gaps in how invasive phytophagous insects can decimate native host plant species, 8 and then how the loss of those species affects ecosystem change. Our continual reconnaissance efforts have allowed us to pinpoint the date that the spreading A. yasumatsui population entered the various C. micronesica populations in Guam and Rota, a process that spanned five years. By comparing the localized cascading effects of these sequential invasions with the C. micronesica habitats in Yap, which have not experienced the armored scale invasion to date, we can exploit a clearly defined natural experiment.

Unique Case Studies

According to the mass-ratio hypothesis, 41 controls on function are in proportion to the abundance of a species. In this light, many dominant tree species are widely considered foundation species where they exert a disproportionate control over ecosystem dynamics. Indeed, several case studies have documented ecosystem changes following the loss of foundation species. 14,15,25,42 If an invasion purges a foundation tree species possessing unique functional traits that are not redundant among sympatric tree species, the community-level ecological changes following the invasion may be greater than for cases where a foundation species possesses traits similar to the remaining species. Our model species, C. micronesica, is the only native gymnosperm in the Mariana Islands and the only widespread native tree supported by nitrogen-fixing endosymbionts. Prior to the armored scale invasion, the population greatly exceeded 3,500 trees per hectare in some locations. At least two arthropods depend on the C. micronesica population; the endemic stem borer Dihammus marianarum exploits cycad stem tissue for larval food, and the cone borer Anatrachyntis sp. requires microstrobili for larval food. 6 The tree's ability to resist damage during, and its resilience after, the frequent tropical cyclones 43 that define the forest physiognomy of the region 44 render it an important food source following major tropical cyclone damage. 45

Nitrogen cycling. Of particular interest is the influence of C. micronesica on nitrogen cycling in the forests it occupies. Although all cycads associate with nitrogen-fixing cyanobacteria endosymbionts, 46 we are aware of only one attempt to quantify the rate of nitrogen fixation. For the Australian Macrozamia riedlei, annual nitrogen fixation was up to 8.4 kg nitrogen per hectare, and was sufficient to fully replace natural volatilization losses of nitrogen in the cycad habitat. 47 The literature on plant invasions and their influence on ecosystem processes is extensive. [48][49][50][51] The disruption of established nitrogen cycling by alien plant invasions is acute for species that associate with nitrogen-fixing symbionts, a notion that has been verified by studying invaded habitats 52,53 and the recovery of altered communities following restoration of sites that were formerly occupied by invasive tree species. 54 Our case study represents a previously unstudied inverse of this phenomenon, in that an endemic nitrogen-fixing foundation tree species is being selectively removed from natural communities.
We are unaware of any published reports on the consequences to nutrient cycling and other ecosystem traits during widespread mortality of a foundation species with these characteristics. Our case study also highlights the risk of coextinction of the organisms that depend on the tree for survival, and a clear understanding of how widespread mortality of this dominant plant species will damage mutualisms may support more successful future restoration programs.

Benchmarking. Understanding the direction and magnitude of change in any ecosystem trait in response to any driver is enabled by appropriate benchmarks. The population of A. yasumatsui insidiously floated and crawled across the island of Guam with unexpected rapidity and in patterns that defied explanation. During the onset of this invasion and subsequent occupation, we received funds for mitigation efforts from many federal and conservation entities, but we were unsuccessful in convincing any of them to provide funds for research. This unfortunate Guam experience was due to the wholesale focus on eradication attempts, and likely occurs all too often in other newly invaded regions. In our case, the changes in ecosystem properties ensued and progressed to a point that effectual benchmarking was unachievable for many habitats. The invasion of Guam Island by the coconut rhinoceros beetle (Oryctes rhinoceros) was detected in 2007 and by the Little Fire Ant (Wasmannia auropunctata) in 2011. These two insects possess characteristics that predict immense detrimental ecological responses to their invasions. We experienced the same phenomenon as with the A. yasumatsui invasion, where funds were available for surveys and attempts at containment or eradication, but funds for establishing benchmarks or any other form of ecological research could not be acquired. We are now forced to rely on the isolated Yap Island C. micronesica populations as our benchmark for some long-term community-level changes because we have no data from many invaded habitats that have already experienced many ecological changes. We recommend that agencies interested in supporting conservation projects recognize that benchmarking during the nascent stages of an invasion is an endeavor worthy of funding.

Conclusions

The increasing rate at which alien insect invasions are modifying Earth's forests highlights the urgency and necessity of increasing our understanding of how invasions influence ecosystem function. 55 Predicting how plant communities will respond to global changes, and then how the community responses feed back to ecosystem processes, is of great importance, 56 yet most available information is anecdotal and restricted to effects on biodiversity while ignoring ecosystem function and processes. Herein we argue that a priority on case studies that exhibit unique traits may better fill the current void in knowledge and enable our ability to define generalizations across systems. The goal of being able to identify dominant forest species that are vulnerable, and to anticipate the consequences of an invasion with at least some predictive power, may be best achieved by prioritizing these unique case studies. Canopy gaps created by the selective removal of a dominant tree species are distinct from canopy gaps created by other means, and the roles of changes in light quality, sunfleck dynamics and total insolation are subjects that deserve further study.
Oceanic islands offer insular living laboratories for studying disturbance, yet we are unaware of any published reports on how ecosystem processes have responded to the removal of an island foundation forest species by an alien phytophagous insect invasion. The invasion of A. yasumatsui has selectively removed a foundation species that was not only the most abundant tree at the time of the invasion, but one that exhibited many unique characteristics, such as association with a nitrogen-fixing endosymbiont. The remaining taxa that will define the recovery of canopy structure exhibit few functional redundancies with the cycad species. Therefore, we predict that the changes to ecosystem processes will be vast and, if documented adequately, will serve to inform the current information void on how invasive phytophagous insects alter ecosystem properties.

Disclosure of Potential Conflicts of Interest

No potential conflicts of interest were disclosed.
Evaluation of vanillin as a probe drug for aldehyde oxidase and phenotyping for its activity in a Western Indian Cohort

BACKGROUND: Aldehyde oxidase (AO), a molybdoflavoenzyme, is emerging as a key player in drug discovery and metabolism. Despite having several known substrates, there are no validated probes reported for studying the activity of AO in vivo. Vanillin (4-hydroxy-3-methoxybenzaldehyde) is an excellent substrate of AO in vitro. In the present study, vanillin has been validated as an in vivo probe for AO. Subsequently, a phenotyping study was carried out using vanillin in a subset of the Indian population with 100 human volunteers. METHODS: For the purposes of in vitro probe validation, the metabolism of vanillin was initially characterized in a partially purified guinea pig AO fraction. Further, vanillin was incubated with partially purified xanthine oxidase and AO fractions, and with liver microsomes obtained from different species (in the presence and absence of specific inhibitors). For the phenotyping study, an oral dose of 500 mg of vanillin was administered to the participants, and cumulative urine samples were obtained up to 8 h after the dose. The samples were analyzed by high-performance liquid chromatography and metabolic ratios were calculated as the peak area ratio of vanillic acid/vanillin. RESULTS: (a) The results of the in vitro validation studies clearly indicated that vanillin is preferentially metabolized by AO. (b) Normal distribution tests and probit analysis revealed that AO activity was not normally distributed and that 73.72% of the participants were fast metabolizers, 24.28% intermediate metabolizers, and 2% slow metabolizers. CONCLUSIONS: Data from the phenotyping study suggest the existence of AO polymorphism in a Western Indian cohort.

Introduction

Aldehyde oxidase (AO) and another related enzyme, xanthine oxidoreductase/xanthine oxidase (XO), are members of the family of "molybdenum hydroxylases" and are known to catalyze the metabolism of both endogenous compounds and compounds foreign to the body, namely xenobiotics. These are cytosolic enzymes that are widely distributed in the animal kingdom and belong to the non-cytochrome P450 family of enzymes. Both enzymes require a flavin cofactor for catalytic activity. [1] Although the two enzymes have a high degree of homology with respect to their amino acid sequences, they exhibit diverse substrate affinities. [1,2] XO is responsible for the metabolism of drugs such as doxorubicin, mitomycin-C, 6-thioguanine, and menadione, while drugs such as brimonidine, carbazeran, famciclovir, and zaleplon are some of the known substrates for AO. [3][4][5] For xenobiotics metabolized by the CYP450s, there are well-established methods for prospective estimation of parameters such as clearance, inter-patient variability, and drug-drug interactions. However, methods for such pharmacokinetic predictions with respect to compounds metabolized by non-cytochrome enzymes (one of which is AO) are not yet well established. [1,6,7] Regardless of the metabolic pathway, there are two key requirements for such predictions: (a) identifying a suitable probe drug, one whose enzymatic biotransformation pathway is exclusively catalyzed by the non-cytochrome enzyme, to assess the in vitro activity of the enzyme, and (b) subsequent generation of phenotypic and genotypic data, with respect to the enzyme, in healthy participants.
Relative to AO, XO is a better-studied enzyme, and phenotyping and genotyping data for XO are currently available for a few ethnic populations, including Caucasians, Ethiopians, Japanese, and Greeks. [8][9][10][11] The realization of the role played by human AO in xenobiotic metabolism is more recent and rapidly growing. Thus, identification of a suitable probe drug for AO and subsequent generation of phenotype and genotype data for AO would be of considerable interest. The well-known flavoring agent vanillin is a natural product obtained from the vanilla bean and is a "Generally Recognized as Safe" (GRAS) molecule. [12] It has been found to be an excellent substrate of AO in vitro, in guinea pig AO-containing subcellular fractions, [13] which are very similar to human liver AO. [14] Based on this knowledge, the present study was carried out with the dual objectives of validating vanillin as an AO probe drug/substrate in an in vitro model and subsequently using it for phenotyping of AO activity in normal, healthy, human participants.

Methods

The study was carried out in two parts: (I) the in vitro probe validation study and (II) the phenotyping study in normal, healthy, human participants.

In vitro probe validation study

We first studied the in vitro metabolic conversion of vanillin to vanillic acid to establish that vanillin was exclusively metabolized to vanillic acid by AO.

Materials used in the experiments

Vanillin was obtained from Himedia, and vanillic acid and allopurinol from Sigma Aldrich, USA. Raloxifene was obtained from Clearsynth Laboratories Ltd, and nicotinamide adenine dinucleotide phosphate reduced tetrasodium salt (β-NADPH) from Sisco Research Laboratories Ltd. Human, dog, and monkey liver microsomes and human liver cytosol were obtained from Xenotech LLC, USA. The AO fraction was obtained from the livers of guinea pigs, and the XO fraction was isolated from hepatic tissue of Wistar rats, by partial purification using ammonium sulfate precipitation as described by Kadam and Iyer. [15,16] Rat liver microsomes were prepared as reported by Walawalkar et al. [17] Animal ethics committee permission was obtained.

Methods

The Liquixx kit was used to estimate the protein content of the isolated enzyme fractions and microsomes by the Biuret method. This part of the study included the following experiments. Note that, unless otherwise specified, all enzyme incubations were done in potassium phosphate buffer (0.05 M, pH 7.4) in a volume of 600 µl at a temperature of 37°C. Furthermore, during sample preparation, all centrifugation steps were done at 8000 × g for 10 min.

Evaluating metabolism of vanillin to vanillic acid (the primary metabolite of vanillin) by aldehyde oxidase using the AO-rich hepatic cytosolic fraction obtained from guinea pigs

This was conducted by incubating vanillin in the partially purified guinea pig AO fraction and establishing both the Km and Vmax enzyme kinetic parameters for the conversion of vanillin to vanillic acid. Briefly, Vmax is the theoretical maximum initial velocity of the reaction when the enzyme is exposed to infinitely high concentrations of the substrate, while Km (the Michaelis constant) is the concentration of the substrate at which one is expected to observe half the theoretical value of Vmax (thus, an enzyme with a high Km has a lower affinity for its substrate and needs a higher concentration of substrate to achieve Vmax).
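For reference, these two parameters are defined by the Michaelis-Menten rate law; the following is the standard textbook form (not reproduced from the study), with $v$ the initial velocity and $[S]$ the substrate (vanillin) concentration:

$$v = \frac{V_{\max}\,[S]}{K_m + [S]}$$

Setting $[S] = K_m$ gives $v = V_{\max}/2$, which is the half-maximal-velocity interpretation used above.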
Initial experiments were carried out to establish the linearity of vanillic acid formation with different concentrations of AO protein (0.05, 0.1, 0.25, and 0.5 mg/ml protein) and at varying time points (0, 2.5, 5, 10, 15, and 20 min). Post incubation, 200 µl aliquots were withdrawn, and the reactions were terminated with 100 µl of 0.5 M perchloric acid containing p-nitrocatechol as the internal standard (IS, 3 µg/ml). The samples were vortexed and centrifuged, and 50 µl of supernatant was injected onto the high-performance liquid chromatography (HPLC) system. All incubations were done in a final volume of 1.6 ml. An incubation protocol was also designed to assess the inhibition of formation of vanillic acid using an AO-specific inhibitor, raloxifene. Raloxifene (at concentrations of 0, 1, 3, 10, 30, 100, 300, and 1000 nM, based on IC50 values reported in the literature [18]) was co-incubated with vanillin (7 µM) in the presence of the AO fraction (0.1 mg/ml protein) for 10 min. Post incubation, the samples were processed as described above.

Confirming lack of metabolism of vanillin to vanillic acid by xanthine oxidase

XO is also a molybdenum hydroxylase enzyme and can potentially metabolize vanillin to vanillic acid. Vanillin was incubated with the partially purified rat liver XO fraction. Briefly, incubations were done in a volume of 500 µl at final concentrations of vanillin (25 µM) and XO fraction (0.2 mg protein/ml). Incubations were conducted for 30 min and terminated with 250 µl of 0.5 M perchloric acid containing p-nitrocatechol as internal standard (IS, 3 µg/ml). The samples were vortexed and centrifuged, and an aliquot of the supernatant (50 µl) was injected onto the HPLC. Further, inhibition of formation of vanillic acid was also studied using allopurinol, a specific inhibitor of XO. The incubation conditions were similar to those mentioned above and contained allopurinol at a final concentration of 5 µM. Post incubation, the samples were similarly processed. Since XO-rich fractions can potentially be contaminated by small amounts of AO, [15,16] the inhibition of formation of vanillic acid by raloxifene was also studied in the XO fractions. The incubation conditions were identical to those mentioned earlier, except that they contained raloxifene at a final concentration of 10 µM. Post incubation, the samples were similarly processed.

Confirming lack of metabolism of vanillin to vanillic acid by CYP450s

This experiment was conducted by incubating vanillin in hepatic microsomal enzyme fractions obtained from four different species (human, rat, dog, and monkey). Briefly, incubations were conducted in a final volume of 500 µl, at concentrations of 25 µM vanillin, 0.5 mg microsomal protein/ml, and 0.6 mM NADPH. Incubations were conducted for 30 min and terminated with 250 µl of 0.5 M perchloric acid containing p-nitrocatechol as internal standard (IS, 3 µg/ml). The samples were vortexed and centrifuged, and an aliquot of the supernatant (50 µl) was injected onto the HPLC. Since microsomal fractions have also been reported to be contaminated by XO and AO during isolation procedures, inhibition of formation of vanillic acid in microsomes was studied in the presence of both of their inhibitors, allopurinol (XO) and raloxifene (AO). Separate incubations with the same conditions mentioned above, containing allopurinol (5 µM) and raloxifene (10 µM), respectively, were conducted.
It should be noted that both vanillin (25 µM; Km = 7 µM) and the inhibitors (allopurinol and raloxifene) were used at concentrations higher than their reported IC50 values to detect even fairly low-affinity formation of vanillic acid. This was done to detect metabolite formation by even a minuscule amount of AO or XO contamination in XO/CYP450 enzyme preparations. Overall, incubations in the presence and absence of NADPH (to detect CYP450-mediated metabolism), and with and without inhibitors of XO and AO (to distinguish the roles of XO vs. AO contaminants), were conducted in microsomal preparations.

High-performance liquid chromatography analysis of vanillin and vanillic acid for in vitro assays

The analysis was carried out on a Waters Alliance 2695 separation module with an ultraviolet/visible detector at 280 nm, and chromatograms were analyzed with Empower software, version 2.0. A mobile phase containing 1% acetic acid (A) and acetonitrile (B) was used at a ratio of A:B 85:15 and a flow rate of 1 ml/min. The retention times of vanillin, vanillic acid, and p-nitrocatechol were 10.2, 6.7, and 13.5 min, respectively.

Statistical analysis

Km and Vmax were derived by linear regression analysis of the double-reciprocal (Lineweaver-Burk) plot and a related plotting method, the Eadie-Hofstee plot of the velocity and substrate concentration data. IC50 values were calculated from the data plotted as log inhibitor concentration on the x-axis and percent inhibition on the ordinate. The plotting and analysis functions of the Microsoft Excel program of the Microsoft Office Suite 2007 were used.
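The two linear transformations named above have the following standard forms (textbook identities, not taken from the study), from which Km and Vmax are read off the fitted slope and intercepts:

$$\text{Lineweaver-Burk:}\qquad \frac{1}{v} = \frac{K_m}{V_{\max}}\cdot\frac{1}{[S]} + \frac{1}{V_{\max}}$$

$$\text{Eadie-Hofstee:}\qquad v = V_{\max} - K_m\,\frac{v}{[S]}$$

In the Lineweaver-Burk plot the intercept on the $1/v$ axis is $1/V_{\max}$ and the slope is $K_m/V_{\max}$; in the Eadie-Hofstee plot of $v$ against $v/[S]$, the intercept is $V_{\max}$ and the slope is $-K_m$.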
Phenotyping study

Ethics

The study protocol was presented to and approved by the Institutional Ethics Committee. Written, informed consent was obtained from all participants before the start of dosing. The protocol approval number is EC/OA/-48/2015, and the trial is registered with the Clinical Trials Registry of India (CTRI/2015/09/006225).

Choice of vanillin as a substrate

Vanillin is a flavoring agent that is known to be GRAS. It was chosen after the in vitro studies proved its specificity for metabolism by AO.

Study procedure

A total of 100 normal, healthy participants (46 women and 54 men) aged between 18 and 45 years were recruited. Those with a history of drug or alcohol use in the 48 h before the study, vanillin allergy, or treatment for any chronic condition were excluded from the study. A general medical and physical examination was then carried out. Participants were asked to abstain from vanillin-containing foods and beverages from 24 h before the study until the last urine sample collection. After an overnight fast, on the day of the study (day 1), the participants were given 500 mg of vanillin in a capsule on an empty stomach, and cumulative urine was collected over the next 8 h. Aliquots were kept at −80°C and later analyzed by HPLC.

Sample preparation and high-performance liquid chromatography analysis

This method was developed in-house. Briefly, a 2 ml aliquot of each participant sample was heated in the presence of 1 ml of 0.5 M perchloric acid containing p-nitrocatechol as internal standard, at 95°C for 5 h, to cleave the glucuronide and sulfate conjugates. At the end of 5 h, the samples were cooled to room temperature, the volume was made up to 5 ml with water, and the samples were centrifuged at 8000 × g for 5 min. The supernatants were then analyzed by HPLC at 300 nm using a suitably modified version of the method reported by Farthing et al. [19] The retention times of vanillin, vanillic acid, and p-nitrocatechol were 11.4, 9.4, and 16.4 min, respectively.

Evaluation of phenotype

Metabolic ratio (MR) values were calculated as the ratio of the peak area ratios of vanillic acid/internal standard and vanillin/internal standard. MR values were tested for normality using the Kolmogorov-Smirnov (KS), D'Agostino-Pearson omnibus, and Shapiro-Wilk tests in GraphPad Prism version 5.0, using the column statistics option in column analyses. Further, a probit analysis was carried out to determine the value of the antimode (cutoff point) to distinguish between slow and fast metabolizers.

Results

In vitro probe validation study

Evaluating metabolism of vanillin to vanillic acid by aldehyde oxidase in the hepatic AO fraction from guinea pig

Formation of vanillic acid was found to increase linearly with increases in protein concentration and time. From the results of the linearity experiments, a protein concentration of 0.1 mg/ml and a time of 10 min were chosen for further experiments. The kinetic parameters Km and Vmax were estimated to be 7 µM and 0.232 nmol/mg/ml, respectively. Protein concentration (0.1 mg/ml), incubation time (10 min), and vanillin concentration (7 µM) were the optimized conditions for the inhibition assay. Using these conditions, the IC50 of raloxifene was found to be 98.62 nM. Allopurinol did not inhibit the formation of vanillic acid to any significant extent.

Evaluating metabolism of vanillin to vanillic acid by xanthine oxidase

On incubation of vanillin with the rat hepatic XO fraction, approximately 4% conversion to vanillic acid was seen. This conversion was not significantly inhibited by allopurinol, a known inhibitor of XO, indicating that vanillic acid formation is not XO mediated. However, this conversion was significantly inhibited (up to 76%) by raloxifene, a specific inhibitor of AO.

Evaluating metabolism of vanillin to vanillic acid by CYP450s

Vanillic acid formation was seen in rat and monkey liver microsomes to a small extent (3%-4%), both in incubations with and without NADPH. CYP450-catalyzed production of a metabolite necessarily requires the NADPH cofactor. Further, this conversion was not inhibited by allopurinol. Raloxifene, on the other hand, showed up to 80% inhibition of vanillic acid formation, both with and without NADPH. Human liver microsomes showed insignificant formation of vanillic acid. No vanillic acid formation was seen in dog liver microsomes. The results of all in vitro experiments are summarized in Table 1.

Human phenotyping studies

A representative HPLC chromatogram of a participant urine sample is presented in Figure 1. AO phenotyping data, expressed as the MR of vanillic acid/vanillin, are presented as a frequency distribution in Figure 2. MR values were not normally distributed, as reflected by the P values of the three statistical tests: KS test (P = 0.027), Shapiro-Wilk test (P = 0.0008), and D'Agostino-Pearson omnibus test (P = 0.005). Probit transformations of the log MR values resulted in nonlinear probit plots with a bimodal distribution, as shown in Figures 3 and 4. Participants with an MR value <9.7 (i.e., the antimode/cutoff value, log MR 0.99) were assigned as slow metabolizers, and those above this value were assigned as extensive or fast metabolizers.
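To make the classification step concrete, the following is a minimal sketch in R of how such a metabolic-ratio analysis could be reproduced. It is illustrative only: the study's statistics were done in GraphPad Prism, and the data frame, column names, and simulated peak areas below are assumptions, with only the antimode of 9.7 taken from the results above.

```r
# Illustrative sketch only; simulated HPLC peak areas stand in for real data.
set.seed(1)
n <- 100
urine <- data.frame(
  area_vanillic_acid = rlnorm(n, meanlog = 6, sdlog = 0.7),  # metabolite peak area
  area_vanillin      = rlnorm(n, meanlog = 3, sdlog = 0.7),  # parent drug peak area
  area_is            = rlnorm(n, meanlog = 5, sdlog = 0.1)   # internal standard peak area
)

# Metabolic ratio: (vanillic acid/IS) divided by (vanillin/IS)
urine$MR <- (urine$area_vanillic_acid / urine$area_is) /
            (urine$area_vanillin / urine$area_is)

# Normality tests on the MR distribution, as in the study
shapiro.test(urine$MR)                         # Shapiro-Wilk
ks.test(as.numeric(scale(urine$MR)), "pnorm")  # Kolmogorov-Smirnov vs standard normal

# Classify phenotypes against the reported antimode (MR = 9.7, log MR = 0.99)
urine$phenotype <- ifelse(urine$MR < 9.7, "slow", "fast")
table(urine$phenotype)
```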
Subjecting the data from the probit analysis to Hardy-Weinberg analysis yielded expected allele frequencies in the sample population of 0.1414 for the defective alleles and 0.8586 for the wild-type alleles. This corresponds to 1.99% of the sample population being homozygous for the defective alleles, 24.28% being heterozygous, and 73.72% being homozygous for the wild-type alleles. In the absence of a gene-dosing effect, the heterozygotes appear to segregate with the metabolizers homozygous for the wild-type allele, resulting in a bimodal distribution with only 2% of the population being detected as poor metabolizers. No adverse events were reported or observed in any of the participants.
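As a worked restatement of this arithmetic (ours, not the authors'), taking the observed slow-metabolizer fraction as the frequency of the homozygous defective genotype, $q^2 \approx 0.02$:

$$q = \sqrt{0.02} \approx 0.1414, \qquad p = 1 - q = 0.8586$$

$$q^2 \approx 0.0199, \qquad 2pq = 2(0.8586)(0.1414) \approx 0.2428, \qquad p^2 \approx 0.7372$$

These three genotype frequencies reproduce the reported 1.99%, 24.28%, and 73.72% figures.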
Discussion

Several literature reports have shown that AO catalyzes the metabolism of drugs containing aldehydes and nitrogen heterocycles. The conversion of the anticancer drug used to treat metastatic colorectal cancer, 5-fluoropyrimidine, to its active form (5-fluorouracil) is mediated by AO. AO is also implicated in the metabolism of antimalarials (quinine), anticancer drugs (methotrexate, 6-mercaptopurine, and cyclophosphamide), antiviral drugs (famciclovir, zidovudine (AZT)), antipsychotic drugs, and antiepileptic drugs (zonisamide). [20] Although the number of metabolic reactions catalyzed by AO is minuscule relative to those catalyzed by the CYP450 superfamily, AO has evolved into an important enzyme in drug metabolism over the past few years. The primary reason for this is a paradigm shift in drug discovery and development. Pharmaceutical corporations have avoided new chemical entities (NCEs) whose metabolism is CYP450 mediated, with the goal of attenuating CYP450-mediated drug-drug interaction potential and population differences in the clearance of drugs due to phenotypic/genetic differences in CYP450 alleles. As a result, molecules that are metabolized by non-CYP450 enzymes, for example AO and XO, are increasingly emerging as leads. [1,2,4] Several studies have reported in vitro variability in the levels of AO, similar to that observed for other drug-metabolizing enzymes, which may result in clinical pharmacokinetic variability of drugs metabolized predominantly by AO. Liver cytosolic fractions prepared from 13 Caucasian livers showed 16.6-fold and 2.75-fold variation in intrinsic clearances when assayed with N-[(2'-dimethylamino)ethyl]acridine-4-carboxamide and benzaldehyde as substrates, respectively. [21] This was partially attributed to lability of AO activity during homogenization and storage, or to inter-individual variability. Further, inter-individual variability may be due to differences in AO gene expression or the presence of genetic polymorphism. Similarly, conversion of methotrexate to 7-hydroxymethotrexate, determined in six human liver cytosol samples, showed high variability (48-fold). Since 7-hydroxymethotrexate is cytotoxic and pharmacologically active, it has been suggested that this variation should be taken into consideration when prescribing methotrexate. [20] Variability in the PK parameters of AO-cleared drugs has also been reported, and the extent of variability seems to depend on the substrate in question. [22]

In order to better predict the pharmacokinetic attributes and clinical variability of NCEs/drugs metabolized by AO, one crucial piece of information is a quantitative understanding of the potential inter-individual variability in the AO-mediated metabolism of drugs due to genetic polymorphisms in AO. [2] One reported genotyping study in an Italian population has indicated a potential for differences in human AO activity between individuals carrying some of the detected SNPs, indicative of variant alleles. [23] In contrast, a phenotyping study for the AO enzyme using a probe substrate has not yet been reported in the literature, to the best of our knowledge. For the generation of phenotyping data, it is necessary to first evaluate a substrate probe's utility based on in vitro studies using suitable in vitro drug metabolism models and consequently validate the substrate probe. Vanillin (4-hydroxy-3-methoxybenzaldehyde) is a flavoring agent procured from the vanilla bean. Vanillin is established as a good substrate of AO in several animal species in vitro. We chose partially purified guinea pig AO fractions, as guinea pig AO is reported to behave similarly to human liver AO. [14] The choice of validation parameters was based on the steps outlined by Pelkonen and coworkers. [24] The metabolism of vanillin was first elucidated in the guinea pig AO fraction, and kinetic parameters were established. Further, the lack of metabolism of vanillin by the XO fraction and liver microsomes was studied to establish the selectivity of AO towards vanillin. In this regard, although conversion of vanillin to vanillic acid was observed in the XO fraction, the reaction was not inhibited by allopurinol, a specific inhibitor of XO. However, production of vanillic acid was strongly inhibited by raloxifene, an AO inhibitor. This indicated that the XO fraction was probably contaminated by a small amount of AO, due to the similar procedures involved in their isolation, yielding the observed results. The results of the experiments in microsomes of rat, monkey, and human, with and without the NADPH cofactor and/or specific inhibitors, revealed similar rates of low-level conversion of vanillin to vanillic acid, irrespective of whether NADPH was present or absent in the incubation. The conversion in the absence of the cofactor NADPH indicates that the production of vanillic acid is not CYP-mediated and supports the involvement of a non-CYP enzyme. Microsomal preparations from commercial sources have been reported to be contaminated with cytosolic enzymes. [25] This explains the observation of vanillic acid formation in the liver microsomes, since CYP450s have not been reported to date to be involved in the biotransformation of vanillin to vanillic acid. In addition, since formation of vanillic acid was inhibited by raloxifene alone, the conversion observed is due to AO contamination, akin to that observed in the XO fractions. The notable exception was that dog liver microsomes did not show any observable production of vanillic acid, either in the presence or absence of NADPH. Importantly, dogs have been reported to be devoid of AO activity. [14] Hence, this observation further validates that the metabolism of vanillin to vanillic acid in microsomes is probably due to AO contamination of the microsomal samples, being seen in all microsomal samples except those obtained from AO-deficient dogs. Overall, guinea pig AO fractions showed facile conversion of vanillin to vanillic acid.
The rat liver XO fraction and microsomes from all species except dog showed a much lower extent of formation of vanillic acid. In all cases, vanillic acid production was inhibited by raloxifene alone, while allopurinol did not inhibit this reaction. Thus, the results of the probe validation studies confirmed that AO alone is responsible for the bioconversion of vanillin to vanillic acid. Vanillin was therefore used as a validated substrate probe in this phenotyping study of 100 normal, healthy participants. Vanillin was found to be safe, with none of the participants reporting any adverse events. Hardy-Weinberg analysis yielded expected allele frequencies in the sample population of 0.1414 for the defective alleles and 0.8586 for the wild-type alleles. Based on the probabilities of the presence of different allele combinations in the population, and in the absence of a gene-dosing effect, 2% of the population appeared to be poor metabolizers. Variations in AO activity can have serious clinical implications for the clearance of drugs metabolized by AO. A standard dosage of these drugs may lead to either therapeutic failure or toxicity, which may have fatal consequences. This is of particular concern for drugs with a narrow therapeutic index. Genotyping data from the Italian population study have indicated a potential for differences in human AO activity between individuals. The present phenotyping data further indicate that inter-individual differences exist in AO activity, at least in a subset of the Indian population. These findings, combined with the knowledge that the number of drugs metabolized by AO will probably rise significantly over the next few years, point to the importance of this data set, which may allow for better in vitro-in vivo correlation of the pharmacokinetics of AO substrates.

Conclusion

Aldehyde oxidase phenotyping was done using vanillin as a probe substrate in a Western Indian population, and the percentage of slow metabolizers was 2%, suggesting the existence of a polymorphism in AO.
Undergraduate R Programming Anxiety in Ecology: Persistent Gender Gaps and Coping Strategies

The ability to program in R, an open-source statistical program, is increasingly valued across job markets, including ecology. The benefits of teaching R to undergraduates are abundant, but learning to code in R may induce anxiety for students, potentially leading to negative learning outcomes and disengagement. Anecdotes suggest a gender differential in programming anxiety, with women experiencing greater anxiety. Currently, we do not know the extent to which programming anxiety exists in our undergraduate biology classrooms, whether it differs by gender, and what instructors can do to alleviate it. Instructor immediacy has been shown to mediate related anxieties such as quantitative and computer anxiety. Likewise, students' use of adaptive coping strategies may mitigate anxieties. We investigated students' R anxiety within a lower-division ecology course and explored its relationships with gender, instructor immediacy, classroom engagement, and reported coping strategies. Women reported significantly higher R anxiety than men, a gap that narrowed, yet persisted, over the semester. In addition, several specific coping skills were associated with decreases in R anxiety and increases in self-concept and sense of control; these differed by gender identity. Our findings can guide future work to identify interventions that lessen programming anxiety in biology classes, especially for women.

INTRODUCTION

Rapid increases in technology, data accumulation, and data science have led to a world that is steeped in information ready to be accessed and employed (Friedman, 2017). Indeed, no other time in history has been marked by such a sheer amount of available information. Yet, because of this acceleration in technology and data collection, organizing, accessing, and analyzing information has increasingly become a challenge. Computer programming skills are essential to tackle this challenge and are increasingly recognized as a valuable skill set for undergraduate students (Barone et al., 2017; Wilson-Sayres et al., 2018; Wright et al., 2019). Programming skills can lead to more job opportunities (seven million openings called for programming in 2015) and higher-paying jobs ($22,000/year more, on average, just within career-track jobs) and are useful across a variety of job categories (Burning Glass Technologies, 2016). By introducing a programming language in undergraduate courses, identifying the barriers that inhibit the development of programming skills, and designing instructional elements that improve student learning and engagement in programming, universities can help their graduates access these more abundant and higher-paying jobs and can contribute to national and global efforts to better use the information currently available. One program used frequently in the field of ecology, and thus likely being introduced more regularly into ecology classrooms, is the statistical and graphical program R (Auker and Barthelmess, 2020; also see Touchon and McCoy, 2016). R is open source, robust, and adaptable; it can be used for a range of purposes, including linguistic analysis of text files, complex statistical analyses, and imputation of missing data for data sets with thousands of entries (e.g., with packages in R such as Quanteda and MICE). R programmers across the world contribute to and iteratively improve R on a daily basis.
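To illustrate the two example uses named above, here is a brief, hypothetical R sketch using the quanteda and mice packages; the toy two-document corpus and the nhanes demonstration data set bundled with mice are stand-ins, not materials from this study.

```r
library(quanteda)  # text analysis
library(mice)      # multiple imputation of missing data

# Linguistic analysis: tokenize a toy corpus and count word frequencies
corp <- corpus(c(doc1 = "Students learn R in ecology labs.",
                 doc2 = "Practice in R can reduce coding anxiety."))
word_counts <- dfm(tokens(corp, remove_punct = TRUE))
topfeatures(word_counts)

# Imputation: fill in missing values in mice's built-in 'nhanes' data set
imp <- mice(nhanes, m = 5, seed = 123, printFlag = FALSE)
head(complete(imp, 1))  # first completed (imputed) data set
```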
While the benefits of learning to program, and specifically learning to program in R, are clear, anxiety experienced while learning R may present an obstacle for many students (Connolly et al., 2008; Nolan and Bergin, 2016). In academic contexts, anxiety is often associated with the emphasis on right answers and fear of failing, not understanding, and not being capable of executing an expected task, all of which can threaten students' sense of self-worth and competence (McInerney, 1997; Harper and Daane, 1998; Cooper et al., 2018). Programming-specific anxiety is "a psychological state engendered when a student experiences or expects to lose self-esteem in confronting a computer programming situation" (Connolly et al., 2008, p. 3). Students can experience and develop anxiety in programming contexts because they often arrive at college unprepared in programming skills, a situation that is then exacerbated by frequent negative feedback from programming software (e.g., frequent error messages when learning how to code).

Programming anxiety in undergraduate biology courses can also relate to quantitative or math anxiety and computer anxiety. Quantitative or math anxiety, described as "feelings of tension and anxiety that interfere with the manipulation of numbers and solving mathematical problems" (Kelly et al., 2015, p. 173), can arise and overlap with programming anxiety when students use a programming language to conduct statistical analyses, such as in R. Mathematical abilities have also been shown to predict programming skills (Owolabi et al., 2014, p. 715). Additionally, computer anxiety, or the "generalized emotion of uneasiness, apprehension, anxiousness of coping, or distress in anticipation of negative outcomes from computer-related operations" (Chang, 2005), is similarly related to programming anxiety due to computing-specific challenges that students can encounter. Thus, in working across these constructs, we define R programming anxiety as "feelings of uneasiness, apprehension, or distress in anticipation of negative outcomes from programming within R; such feelings may be associated with anticipation of programming, statistical, or computer-related difficulties."

Conversely, a positive sense of control and self-concept, two constructs related to and influential for anxiety, can help combat anxiety and are critical indicators when considering ways to decrease anxiety (McInerney, 1997). Sense of control relates to students' sense that they have control over their own actions when engaging with R, and self-concept measures students' overall confidence and beliefs about themselves in the context of a given R task. Students' task-specific sense of control and self-concept are inversely correlated with the anxiety they experience when engaging with that task; positive increases in these two constructs can help reduce learners' fear of task failure and thus their anxiety (McInerney, 1997).
While low levels of anxiety can sometimes motivate students who are interested in the topic at hand (Pekrun et al., 2007), moderate to high anxiety typically demotivates students and has a negative effect on student learning (Yerkes and Dodson, 1908; Seipp, 1991; Akgun and Ciarrochi, 2010; England et al., 2017), especially when the task at hand, such as learning to use R, is cognitively difficult (Teigen, 1994). Indeed, anxiety in science and biology classrooms is well documented and may be one factor that contributes to low rates of student retention in science, technology, engineering, and mathematics (STEM) fields, exacerbated by low student preparedness for quantitative course work upon entry to college (Kelly et al., 2015; England et al., 2017, 2019; Cooper et al., 2018; Cooper and Brownell, 2020). Further, programming anxiety may be a particularly salient issue in biology classrooms, as degree programs often vary widely in their statistics pedagogy, and many students view programming as a means to study biology rather than an intrinsically motivating activity in and of itself (Metz, 2008). Finally, the way in which R coding is taught may have an effect on student programming anxiety. There are increasing calls for biology classrooms to move to active styles of instruction (e.g., American Association for the Advancement of Science [AAAS], 2011; Theobald et al., 2020), and while these offer clear benefits in terms of cognitive gains (Freeman et al., 2014; Theobald et al., 2020), they can also result in increased student anxiety while learning, via fear of negative evaluation by one's instructors or classmates (Cooper et al., 2018; Downing et al., 2020).

Despite its likely importance and demonstrably increasing role in students' biology education experience, most research relating to programming anxiety 1) is focused on computer anxiety and not specific to programming; 2) was published more than 10 years ago, at a time when programming was less ubiquitous; or 3) if published within the last 10 years, was carried out within the context of computer science classrooms (Chua et al., 1999; Connolly et al., 2008). This research leaves a large gap in our understanding of how undergraduates learn and build programming efficacy in modern contexts and in biology classrooms. Although there is a paucity of formal studies on biology undergraduates' programming anxiety, with related research showing mixed results and maintaining the same limitations of era and scope described (Chua et al., 1999; Stoilescu and McDougall, 2011), instructors often cite anecdotal evidence that suggests trends in students' experience. Notably, it is generally assumed that women experience higher degrees of programming anxiety in the biology classroom. Although biology majors are typically 60% female (Wright et al., 2016), there is evidence that women are underrepresented in computational biology (Bonham and Stefan, 2017) and that, within the field of biology, women underperform on exams (while controlling for grade point average) and participate less in class (Wright et al., 2016). The fact that gender gaps remain a factor in biology student success and representation in computational biology necessitates further investigations into mechanisms underlying these gaps and requires that we periodically test our assumptions about how gender affects different facets of the student experience (Flanagan and Einarson, 2017).
Indeed, the relationship between gender and programming anxiety is likely to change over time as societal contexts shift, computers become ubiquitous, computer skills gain importance, and more diverse identities are represented in science and technology (Powell, 2013). Furthermore, it is also important to recognize that there is a paucity of research on whether such gaps exist for the populations of biology students who identify as nonbinary, gender fluid, transgender, or elsewhere on the gender spectrum: identities that are not well-represented or well-characterized within the field. As biology programs and training incorporate more programming into their curricula, we posit that gender-mediated differences in programming anxiety are an important consideration when attempting to support the achievement and participation of students of all genders in our classrooms.

Understanding how our teaching techniques influence student anxiety when learning quantitative skills is critical to students' skill development and their self-efficacy. It has been posited that instructors may be able to mitigate student anxiety in several ways, such as by introducing group work (e.g., Cooper et al., 2018) or validating students' thinking (e.g., Downing et al., 2020). One factor that has been shown to decrease quantitative anxiety specifically is instructor immediacy, or the perceived social and/or physical distance between the student and instructor (Witt et al., 2004; Williams, 2010; Kelly et al., 2015). Instructor immediacy is measured as verbal and nonverbal behaviors exhibited by the instructor that have been correlated with a decrease in perceived social distance, having strong positive impacts on student learning (Gorham, 1988; McCroskey et al., 1995; Chesebro and McCroskey, 2001; Witt et al., 2004). In addition to decreasing student anxiety, instructor immediacy leads to increased student engagement in the classroom, which may serve to combat student disengagement in the face of tasks that induce anxiety (Roberts and Friedman, 2013). This may be especially important for programming anxiety, as computer anxiety is correlated with computer avoidance (Chua et al., 1999). While instructor immediacy has been shown to decrease student math and quantitative reasoning anxiety (Witt et al., 2004; Williams, 2010; Kelly et al., 2015) and to increase student engagement, we have encountered few, if any, studies examining how instructor immediacy impacts student anxiety or engagement while learning to program.

In addition to instructor behaviors, the specific coping strategies students use when confronting challenging tasks, such as programming in R, likely influence the degree of anxiety they experience. Coping, or an individual's behavioral response to a stressor, is described by many researchers as context specific, with the individual context influencing the coping behavior (Lazarus, 1993). Coping responses are also malleable, meaning that they can be influenced, learned, and changed (Lazarus, 1993), but they tend to become more stable over time with repeated use (Spencer et al., 1997). Coping strategies can either be maladaptive (leading to negative well-being and negative outcomes) or adaptive (leading to positive well-being and productive outcomes; Skinner et al., 2003; Henry et al., 2019). Thus, due to their malleability and the positive outcomes associated with adaptive coping strategies, students' coping responses to R-induced stressors may be fruitful targets for instructional interventions.
In this study, we examine both instructor immediacy and student coping behaviors as they relate to students' reported R anxiety, R self-concept, and R sense of control over the course of the semester.

Research Questions, Aims, and Predictions

Here, we investigated whether Q1) gender (being a man or woman) is related to R programming anxiety, Q2) instructor immediacy is related to changes in R programming anxiety, Q3) R programming anxiety or instructor immediacy are related to student engagement when learning to code in R, and Q4) coping strategies employed when experiencing R coding challenges relate to changes in reported anxiety metrics. Together, these questions build a framework for understanding whether students who identify with specific genders are more at risk of experiencing programming anxiety, and whether instructors can alleviate student anxiety and increase student engagement in their classrooms by modifying their interpersonal relationships with students. Further, by identifying the coping skills that students do and do not use, we can work to improve instructional materials in ways that facilitate student persistence in challenging or anxiety-producing tasks. Notably, Q4 arose as a later addition to the study as a result of early findings that led us to seek additional information to explain observations in anxiety metrics.

We explored this framework in an undergraduate ecology lab class at a large research university. This class affords most of these students their first opportunity to write code for an independent project. We specifically addressed student anxiety while learning to code in the statistical program R, due to its increasing prevalence across STEM and its widespread use in our classrooms. We predicted that: P1) women would report higher programming anxiety and lower sense of control and self-concept in R (hereafter collectively referred to as "R anxiety") than men, but that anxiety for all students would decrease and their sense of control and self-concept would increase over the semester due to more R exposure; P2) students with more immediate instructors would report greater decreases in R anxiety over the semester and greater increases in R sense of control and self-concept; P3) students with higher R anxiety would engage less in the classroom, but students with more immediate instructors would engage more; and P4) students who use adaptive coping strategies would show decreases in R anxiety over the semester and increases in R sense of control and self-concept.

METHODS

University and Course Context

We conducted this study in the Principles of Ecology course at the University of Colorado Boulder (CU-Boulder), a large research university (>34,000 students). This course enrolls between 90 and 150 students per section and is a required course (typically taken in the second year) for undergraduates seeking a degree in ecology and evolutionary biology. Two lecture sections of the course are taught in the Fall by two different faculty instructors, with one lecture section in the Spring. Students also enroll in laboratory sections of the course, which are capped at 14 students per section and taught by graduate teaching assistants (TAs). Beginning ∼5 weeks into the semester and continuing for the remainder of the course, every student works as part of a small lab group within the laboratory section to conduct an independent research project as part of the lab requirements.
This includes every step of the scientific process, including statistical analysis, which students must do in the program R. Most students have not had any experience with R before entry to this course, and even if they have, it is many students' first opportunity to write their own code. Students are taught basic programming in R as part of the lab curriculum during two 2- to 3-hour lessons. TAs in the Ecology and Evolutionary Biology department at CU-Boulder (around seven per semester, each teaching two lab sections) are the sole instructors of the labs and have almost complete control of how content is taught (i.e., they choose among active-learning techniques and lecture techniques to deliver the same content across labs). All TAs are experienced in using R for statistical analysis and have ample access to the R code they need to know to teach the class before facilitating the R lessons. Overall lesson length varies slightly due to TA instructional style and students' pace. In this study, all TAs used workshop-style active-learning strategies while teaching R, leading students through a script and checking in with their students frequently, and based on our observations, only small variations in teaching strategy and lesson length existed. Though teaching strategies were similar, active learning presented opportunities for potential differences in instructor immediacy, because TAs could control the pace and style of content presentation. Students are also exposed to R code during the lecture sections but are not asked to write their own code for lecture activities. Thus, the majority of their R learning occurs during the laboratory sections. This research was reviewed and approved by the University of Colorado Boulder's Institutional Review Board (IRB no. 18-0471).

Study Design

We employed a pre-post study design to examine changes in programming anxiety over the semester as related to our factors of interest (gender, engagement, etc.). At the beginning and end of each semester for three semesters, we deployed a survey to students we recruited from the Principles of Ecology course, with a total of 376 students taking part in our presemester survey and 362 taking part in our postsemester survey across semesters (credit for completion was given). Of the students invited to participate in the study, 63% (215 students) participated in and completed both pre- and post-semester surveys; we only analyzed data from these full responses. In all three semesters, students were surveyed on: demographics (pre- and postsemester surveys; see Table 1 for a demographic summary), R anxiety (pre- and postsemester surveys), and instructor immediacy (postsemester survey only). In the third semester of this study, we added a coping skills scale to the postsemester survey and observations of student engagement in the lab sections of this course.

R Anxiety Measure

While there is no developed R-specific anxiety measure to our knowledge, much work has been done to understand the factors that contribute to related anxieties, such as math, programming, and computer anxiety (Heinssen et al., 1987; Connolly et al., 2008). As explained in the Introduction, these anxieties are likely to intersect when students are engaged in using R.
Thus, for this study, we drew upon prior work in these areas and lightly edited the programming anxiety measure developed by Connolly and colleagues (2008) to focus on programming in R through simple language changes (e.g., "learning computer terminology" was changed to "learning R terminology"). We employed the lightly edited survey in the pre and post surveys at the start and end of the semester. Connolly and colleagues originally adapted the Computer Anxiety and Learning Measure (McInerney, 1997) for their research (they changed "computer" to "computer programming" for each item). This measure was originally written and validated by McInerney (1997) in a large lower-division undergraduate population, similar to the students in this study.

TABLE 1. Self-reported demographic breakdown of students (by semester) who participated in both pre- and post-semester surveys and whose data were analyzed. (a) Responses to an open-ended question asking students to report on their gender included woman, man, female, male, and nonbinary. The responses "female" and "male" were interpreted respectively as "woman" and "man" to align with the construct of gender. (d) Demographic data that were only collected in semesters 2 and 3 of this study, which is why there are no data present for semester 1 (denoted by a dash).

There are four constructs that this instrument measures in regard to computing situations: 1) gaining initial skills, 2) sense of control, 3) self-concept, and 4) state of anxiety. In testing the dimensionality of this measure, McInerney (1997) found that these four constructs were stable over multiple samples and showed the following structures. The "gaining initial skills" construct was explained by a model with a single higher-order factor (gaining initial computing skills) and four factors beneath it. The "sense of control" construct was explained by a two-factor model with one substantive factor and one methodological "artefactor," which stands for artificial factor; in other words, the two factors both measure "sense of control" and arise due to the fact that some items were positively worded and some were negatively worded. The "self-concept" construct was similarly explained by a substantive factor and a methodological artefactor arising from positively and negatively worded questions (artefactors can commonly arise from method effects associated with negatively worded items; e.g., Tomas et al., 2013). Finally, the "state of anxiety" construct was explained by a model with a single higher-order factor (state of anxiety) and four factors beneath it. These structures gave us confidence in using these scales to measure the four constructs.

Because we were interested in reporting on and analyzing the four overarching constructs, we averaged across items and reported overarching results for the four main constructs. Negatively valenced items were reflected so that all items reflect increases in the given construct (see Supplemental Material). Thus, our use of this instrument results in measurement of four constructs of R anxiety with Likert-scale responses, for which we calculated internal reliabilities for our population: "gaining initial skills in R" (α = 0.95), "sense of control in R" (α = 0.87), "R self-concept" (α = 0.96), and "state of anxiety in R situations" (α = 0.95). We refer to the groups of questions that measure each of these constructs as "scales" of the broader R anxiety measure throughout the paper.
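As an illustration, the scoring just described (reflecting negatively worded items, averaging items into a construct score, and checking internal reliability) can be done in a few lines of R. The item names and which items are reverse-keyed are hypothetical here, and the psych package's alpha() is one standard way to compute Cronbach's alpha:

library(psych)
items <- paste0("control_", 1:14)             # hypothetical 1-5 Likert items
reverse_keyed <- c("control_3", "control_8")  # hypothetical negatively worded items
# Reflect negatively worded 1-5 items so all items increase with the construct.
responses[reverse_keyed] <- 6 - responses[reverse_keyed]
# Construct score is the arithmetic average across items.
responses$sense_of_control <- rowMeans(responses[items], na.rm = TRUE)
alpha(responses[items])  # internal reliability (Cronbach's alpha)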
The gaining initial skills in R scale includes 20 items that ask students about how anxious they would feel while performing specific learning tasks in R, including learning about basic functions, using R, and receiving feedback on R skills. An example item from this scale is "Rate the extent to which taking a course in R would make you anxious." This scale has the highest possible score of 5, with 1 being least anxious about gaining initial skills in R and 5 being most anxious. McInerney (1997) included the gaining initial skills construct due to the assumption that anxiety about computers is context specific, necessitating exploration of the beginner-specific experience. We did not deploy this scale in the third semester of this study, in order to prevent survey fatigue when a coping skills scale was added. We chose to remove this scale as we felt that the other three anxiety scales best encapsulated potential overall, long-term changes in student self-efficacy and R anxiety, while anxiety about gaining initial skills in R is more specific to the initial introductory stages of a student's R learning trajectory.

The sense of control in R scale, with 14 items, is designed to measure students' sense of self-control over situations that include R, which are examined through asking students how often they engage in positive and negative self-talk. For example, students are asked to rate how often they think "I feel in control of what I do" or "What if I hit the wrong key?" while using R. This scale has the highest possible score of 5, with 1 being the lowest sense of control in R and 5 being the highest sense of control in R.

The R self-concept scale, with 22 items, is designed to measure students' self-image and self-efficacy in regard to situations with R. For example, students were asked to rate their agreement with items such as "I am sure I could solve any problems I had while I was using R." This scale has the highest possible score of 5, with 1 being the least positive R self-concept and 5 being the most positive R self-concept (McInerney, 1997; Connolly et al., 2008). McInerney (1997) describes sense of control and self-concept as two critical constructs when considering ways to decrease anxiety. That is, a student's sense of control and self-concept are correlated with anxiety; increases in these two constructs can help reduce the learner's fear of task failure and anxiety (McInerney, 1997).

Finally, the state of anxiety in R situations scale, with 22 items, is designed to measure student worry, distractibility, comfort, and physiological symptoms while using R. For example, students were asked to rate how often "I feel helpless when I use R" and how often they experience "sweaty palms." This scale has the highest possible score of 4, with 1 being the lowest state of anxiety and 4 being the highest state of anxiety. The state of anxiety in R situations construct was included by McInerney (1997) because of its importance in measuring situation-specific anxiety and its relevance to anxiety students experience when being evaluated. Together, these four constructs (78 total items) help us to assess and understand students' overall anxiety when coding in R.

Immediacy Measures

We used two scales to measure instructor immediacy: the Revised Non-verbal Immediacy Measure (RNIM; developed and validated by McCroskey et al., 1995) and the full Gorham's Verbal Immediacy Measure (developed and validated by Gorham, 1988).
The RNIM is a revised version of the Non-verbal Immediacy scale developed by Richmond, Gorham, and McCroskey (1987), asking students to score the frequency of instructor behaviors such as "gestures while talking to the class," "smiles at the class while talking," and "has a very relaxed body position while talking to the class," for 10 items on a five-point Likert scale, with 1 representing "never" and 5 representing "very often" (McCroskey et al., 1995). McCroskey et al. (1995) collected the original evidence of scale validity across 2,300+ undergraduate students from five different universities in five countries, demonstrating reliability across all samples, including the U.S. population (n = 365, α = 0.85; α = 0.82 in our study).

Gorham's Verbal Immediacy Measure uses 20 items on a five-point Likert scale for the frequency of the instructor's verbal behaviors, such as "uses personal examples or talks about experiences she/he has had outside of class," "addresses students by name," or "asks questions that solicit viewpoints or opinions" (Gorham, 1988). Gorham (1988) collected the original evidence of scale validity in a population of 387 undergraduate students, demonstrating stable dimensionality and reliability (α = 0.94 for a subset of the original items, with all items loading onto a single factor). We used the full version of Gorham's scale (α = 0.8 in our study population). These two measures (30 items total) were included in the post survey at the end of the semester (see Supplemental Material).

Coping Skills Measure

We used coping skills scales to quantify self-reported 1) Avoidance/Behavioral Disengagement (e.g., "I reduce the amount of effort I put into solving the problem"; five items), 2) Active Coping (strategies students employ in the moment; e.g., "I concentrate my efforts on solving the problem"; two items), 3) Planning (future-oriented, using organization and planning to approach a problem; e.g., "I try to come up with a strategy about what to do"; two items), 4) Instrumental Support Seeking (content-specific; e.g., "I get help and advice from my peers"; two items), and 5) Self-Blame (e.g., "I criticize myself"; two items; Carver, 1997). All scales were framed with a statement asking students to consider how they cope when encountering challenges in R. These questions were included in the post survey sent to students in the third semester of this study (n = 91 with full data; pre and post survey responses).

These scales were adopted largely from the Brief COPE (Carver, 1997), which is based on the COPE instrument (Carver et al., 1989). Notably, the Self-Blame scale is unique to the Brief COPE and was not part of the original COPE. Evidence of validity for the original COPE instrument was collected from 978 undergraduates at the University of Miami. Carver and colleagues used principal-factors factor analyses to investigate the dimensionality of their items. They formulated 11 scales, which included items represented in the Behavioral Disengagement, Active Coping, Planning, and Instrumental Support Seeking scales used in this work (with Active Coping and Planning loading onto a single scale). Item loadings were all above 0.3 for the items used in our study, and Cronbach's alpha was 0.62 or greater in these initial analyses assessing internal reliabilities (Carver et al., 1989). The Brief COPE drew on the COPE by selecting, from each scale, two items with high factor loadings and good performance in the field.
Items for the Brief COPE were edited slightly to sharpen their focus, and the Self-Blame scale was added. Evidence of dimensionality and internal reliability of the Brief COPE (from which we drew the vast majority of items used in this study) was gathered from a sample of 168 community residents recovering from Hurricane Andrew (David et al., 1996). Exploratory factor analyses yielded nine factors for the Brief COPE, which included the scales used in this study. Again, the Active Coping and Planning items loaded onto a single factor. Cronbach's alpha values investigating internal reliability for these scales were all above 0.64 (Carver, 1997).

For this study, we did not use the entire Brief COPE, but rather chose four scales that were relevant to our context, because we anticipated that 1) they would be used by undergraduates in the context of R challenges, and 2) they had potential to make reasonable targets for future instructional interventions. We chose the 1) Avoidance/Behavioral Disengagement scale, 2) Active Coping and Planning scale, 3) Instrumental Support Seeking scale, and 4) Self-Blame scale from the Brief COPE. In addition, we added back in a few relevant items from the full COPE. Because we chose only a subset of the scales, added back a few items from the full COPE instrument, and were curious about whether our students perceived the scales in the same way as prior populations, we chose to run a confirmatory factor analysis (CFA) to confirm the factor structure of the scales we chose (see Supplemental Material). Our CFA analyses confirmed that a five-factor structure fit our data. The scales included 1) Avoidance/Behavioral Disengagement, 2) Active Coping, 3) Planning, 4) Instrumental Support Seeking, and 5) Self-Blame. Notably, Active Coping and Planning were split in our best-fit structure, whereas they were previously combined in the Brief COPE.
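For reference, a CFA of this kind can be specified concisely with the lavaan package. This is a sketch with placeholder item names rather than the exact model we fit; the five-factor structure mirrors the one described above:

library(lavaan)
# Five correlated factors; item names are placeholders for the actual survey items.
coping_model <- '
  disengagement =~ bd1 + bd2 + bd3 + bd4 + bd5
  active        =~ ac1 + ac2
  planning      =~ pl1 + pl2
  support       =~ ss1 + ss2
  self_blame    =~ sb1 + sb2
'
fit <- cfa(coping_model, data = coping_items)
summary(fit, fit.measures = TRUE, standardized = TRUE)  # loadings and fit indices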
Classroom Engagement Observations

In the third semester of this study, we collaborated with the Technology in Education Group (authors S.S. and J.F.) on our campus to conduct the Behavioral Engagement Related to Instruction (BERI) protocol with all consenting students in each section. The BERI protocol was chosen over other protocols due to its specific focus on measuring students' engagement levels rather than students' performance of specific tasks. For example, while other protocols like the LOPUS define "listening" as "Listening to TA, video, or student presentations as a class," the BERI defines it as "Student is listening to lecture. Eye contact is focused on the instructor or activity and the student makes appropriate facial expressions, gestures, and posture shifts (i.e., smiling, nodding in agreement, leaning forward)," which specifically addresses the levels of student engagement while listening. The BERI was first designed for large classroom settings, but use in small classrooms allowed us to quantify student engagement for a majority of students in the third-semester sample while they learned to use R (Lane and Harris, 2015). The BERI was validated in undergraduate classrooms to ensure reliability across courses and class sizes, making it valid for use in our study population and class size. Specifically, this protocol quantifies time spent performing engagement behaviors, including listening, writing, reading, engaged computer use, engaged student interaction, and engaged interaction with the instructor (see the BERI observation protocol in Lane and Harris, 2015). Conversely, it also tracks the time that students spend exhibiting disengaged behaviors: settling in/packing up, unresponsive, off-task, disengaged computer use, disengaged student interaction, and distracted by another student (Lane and Harris, 2015).

Every 2 minutes, observers recorded what behavior each consenting student was exhibiting using the Generalized Observation Reflection Platform (developed and copyrighted by UC Davis). Each observer (four observers: C.F., S.S., J.F., L.C.) went through the same formal training on how to use the scale and online program. The training consisted of a preliminary discussion of the observation protocol and codes; watching and discussing a video of a classroom in which the students displayed the various engagement codes (the video was paused and codes were discussed throughout); and, finally, watching videos together with other trainees while using the protocol. Video examples were watched and coded until all coders in the training felt confident applying each code and until consensus coding was consistent (i.e., the coders applied the same codes consistently to student behaviors and reached >90% agreement). After the training, class observations were conducted. One observer attended each class, observed, applied engagement codes, and took notes related to student engagement, especially when something notable happened or the observer was unsure of a code. Observers communicated frequently before and during the observation process to discuss codes, ensure that codes were being applied similarly across contexts, and address any questions.

In total, we conducted three classroom observation sessions in each lab section (as recommended by commonly used protocols like the COPUS; Smith et al., 2013), two of which took place during students' R data analysis classes at the beginning of the semester, and one of which occurred in a lab section that did not include the use of R, which we used as a baseline (control) measurement of typical engagement (total number of observations = 43). Through these observations, we captured almost the entire time that TAs instructed students in R over the course of the semester, as well as an additional observation in a non-R lecture.
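From records of this kind, each student's percent engagement can be computed directly. A minimal sketch in base R, assuming (hypothetically) a data frame with one row per student per 2-minute interval and a column holding the recorded behavior code:

engaged_codes <- c("listening", "writing", "reading", "engaged computer use",
                   "engaged student interaction", "engaged instructor interaction")
# Flag each 2-minute interval as engaged or disengaged.
obs$engaged <- obs$behavior %in% engaged_codes
# Percent of observed intervals each student spent engaged, per session.
aggregate(engaged ~ student_id + session, data = obs,
          FUN = function(x) 100 * mean(x))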
Statistical Analyses

Due to the continuous nature of our data and our desire to include random effects in our statistical tests, we used linear mixed-effects models for each of our analyses. All assumptions were met for these regressions: linear relationships between variables, multivariate normality, and little multicollinearity or autocorrelation (all VIF < 5). Fixed effects (e.g., verbal immediacy) included in each model were determined through a priori hypothesis generation. Variable values for constructs measured via surveys represent the arithmetic average of items within the survey, while values for demographics (e.g., gender) represent responses based on a single question.

In an initial analysis for research Q1 ("Is gender related to programming anxiety?"), we included student gender (man or woman), ethnicity, first-generation status, and learning disability status in an initial trial model, because we hypothesized that these may all have an effect on anxiety (Onwuegbuzie, 1999; England et al., 2019). However, because of small sample sizes and lack of statistical significance, we removed the other demographic factors to focus only on gender. Similarly, we were interested in whether immediacy interacted with gender to influence anxiety and engagement. We initially ran all models predicting anxiety and engagement with gender and a gender × immediacy interaction. These models were overparameterized and did not meet the assumptions of linear mixed-effects models described earlier, so we report models without these interaction terms. In addition, we were interested in how anxiety might interact with gender to predict engagement and how coping might interact with gender to predict changes in anxiety. We did not add gender and all of the gender × anxiety or gender × coping interactions to these models (Q3A and Q4), because this would have resulted in models with too many predictor variables given our sample size (overparameterization). Instead, we ran separate models for men and women. (Gender was self-identified through an open-response question in the survey. Responses of "female" and "male" were interpreted as "woman" and "man," respectively, to align with the construct of gender.) Notably, we were unable to include nonbinary students in our statistical analyses out of caution to not overextend information gained from a small sample size (n = 3); however, nonbinary students are represented in our graphs, because gender is a spectrum. Thus, when we refer to gender in our results and models, we specifically refer to the difference between students identifying as women and men. Results did not differ by gender for engagement, so we report only the model with gender identities combined. However, results differed by gender for the coping model, so we report those results by gender.

In all analyses, we included semester as a random effect (denoted "1|Semester") to account for random variation in student responses between semesters, except in our Q4 model, because we only deployed the coping skills scale in the third semester. For Q3, which tested how instructor immediacy and anxiety relate to student engagement, we included student nested within semester as a random effect to account for random variation within individual students, because we conducted multiple observations of each student. In this model, we used presemester measures of student anxiety, as we thought that anxiety toward the start of the semester would more strongly influence how students behaved and that experience in the classroom would likely influence end-of-semester anxiety more than vice versa. In some cases, metrics predicting our dependent variables were highly correlated (e.g., Nonverbal Immediacy and Verbal Immediacy were correlated, as confirmed through linear regression). To avoid potential type II errors that might arise from distribution of variance among correlated predictors, we analyzed each correlated predictor in its own model as opposed to a larger multivariate model.

The equations that we used to test each research question are presented below, with "Anxiety Metric" referring to each of our four R anxiety scales (gaining initial skills in R, sense of control with R, R self-concept, and state of anxiety in R situations). Thus, when the term "Anxiety Metric" is used, it indicates that four similar models were run, one with "gaining initial skills in R" as the metric, another with "sense of control with R" as the metric, and so on (note, however, that in model 3A, these metrics are used as predictors and referred to by name). Timing in Q1 refers to a dummy variable with "pre-semester" and "post-semester" as the values. The response variable for Q1 includes both pre and post scores to allow for examination of the Gender × Timing interaction in this model. "Change in anxiety metric" was calculated as the difference between pre and post survey scores for each metric for each student. We ran four models corresponding to each R anxiety scale for each question below.

Q1. Is gender related to R programming anxiety, and how did this change over time?

Anxiety Metric = Gender + Timing + Gender × Timing + (1|Semester)

Q2. Is instructor immediacy related to changes in R programming anxiety?

Change in Anxiety Metric = Nonverbal Immediacy + (1|Semester)
Change in Anxiety Metric = Verbal Immediacy + (1|Semester)

Q3. Are R programming anxiety and/or instructor immediacy related to student engagement in the classroom? (Analysis run separately for both R and control classroom observations; unique ID removed as a random effect for control observations due to the presence of only one control observation; analyses conducted for the third semester only.)

A. Programming Anxiety:
Percent Engagement = Presemester Initial Skills Anxiety + Presemester Sense of Control + Presemester R Self-Concept + Presemester State of Anxiety + (1|Semester) + (1|Unique ID)

B. Instructor Immediacy:
Percent Engagement = Nonverbal Immediacy + (1|Semester) + (1|Unique ID)
Percent Engagement = Verbal Immediacy + (1|Semester) + (1|Unique ID)

Q4. Are specific coping skills correlated with greater changes in programming anxiety? (Analysis run for women and men separately; analyses conducted for the third semester only.)

Change in Anxiety Metric = Behavioral Disengagement + Active Coping + Planning + Instrumental Support Seeking + Self-Blame
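In R, mixed-effects models of this form are commonly fit with lme4 (here loaded via lmerTest, which adds p values for fixed effects). The following sketch mirrors the Q1 and Q2 models; the variable and data frame names are illustrative, not the study's actual objects:

library(lmerTest)  # loads lme4's lmer() and adds Satterthwaite p values
# Q1: anxiety metric ~ gender, timing, and their interaction, with semester as a random effect
q1_fit <- lmer(state_of_anxiety ~ gender * timing + (1 | semester),
               data = survey_long)   # hypothetical long-format data (pre and post rows)
# Q2: change score ~ instructor immediacy (a separate model per immediacy metric)
q2_fit <- lmer(change_state_of_anxiety ~ nonverbal_immediacy + (1 | semester),
               data = survey_wide)   # hypothetical one-row-per-student data
summary(q1_fit)  # fixed-effect estimates, degrees of freedom, and p values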
"Change in anxiety metric" was calculated as the difference between pre and post survey scores for each metric for each student. Thus, we ran four models corresponding to each R anxiety scale for each question below. Q1. Is gender related to R programming anxiety, and how did this change over time? Change in Anxiety Metric = Nonverbal Immediacy + (1|Semester) Change in Anxiety Metric = Verbal Immediacy + Q3. Are R programming anxiety and/or instructor immediacy related to student engagement in the classroom? (Analysis run separately for both R and control classroom observations, unique ID removed as random effect for control observations due to the presence of only one control observation, analyses conducted for third semester only.) A. Programming Anxiety: Percent Engagement = Presemester Initial Skills Anxiety + Presemester Sense of Control + Presemester R Self-Concept + Presemester State of Anxiety + (1|Semester) + (1|Unique ID) B. Instructor Immediacy: Percent Engagement = Nonverbal Immediacy + (1|Semester) + (1|Unique ID) Percent Engagement = Verbal Immediacy + (1|Semester) + (1|Unique ID) Q4. Are specific coping skills correlated with greater changes in programming anxiety? (Analysis run for women and men separately, analyses conducted for third semester only.) Change in Anxiety Metric = Behavioral + Active + Planning + Instrumental Support Seeking + Self-Blame How Does Gender Affect R Programming Anxiety, and How Does This Change over Time (Q1)? For each of the results, we ran both the full model including the interaction term for gender and time and simple effects models within gender and within time period (pre or post). This approach allowed us to fully characterize what was happening within each gender at each time point. When a significant (p < 0.05) or nearly significant (statistically insignificant but with a p value below 0.15) result was found from the full model, it is noted, and results from the simplified models are used to describe the specific effects of the interaction. Additionally, for all scales, nonbinary students are represented in our graphs to acknowledge that gender is a spectrum beyond man and woman, despite the fact that we could not include them in our statistical analyses (n = 3). Gaining Initial Skills in R. This scale has the highest possible score of 5, with 1 being least anxious about gaining initial skills in R and 5 being most anxious. Women reported significantly higher anxiety presemester but not postsemester when gaining initial skills in R, with 24% higher gaining initial skills anxiety presemester and 11% higher gaining initial skills anxiety postsemester as compared with men (pre p < 0.001, pre df = 97; post p = 0.13, post df = 97; see Figures 1 and 2). Importantly, women reported significantly lower anxiety about gaining initial skills in R after the semester as compared with before the semester (p = 0.02, df = 125, beta = 0.37). Men did not report a significant difference in gaining initial skills anxiety between pre and post surveys (p = 0.8, df = 65). There was a nearly significant interaction between gender and time (pre/ post) for this metric (p = 0.1, df = 191, beta = −0.41), which results from women reporting greater decreases in their anxiety about gaining initial skills than men despite their consistently higher anxiety overall (Table 2). for men (green), women (orange), and nonbinary (purple) students. Nonbinary students are shown here for representation purposes, but conclusions cannot be drawn due to a small sample size (n = 3). 
Gaining Initial Skills in R. This scale has the highest possible score of 5, with 1 being least anxious about gaining initial skills in R and 5 being most anxious. Women reported significantly higher anxiety presemester but not postsemester when gaining initial skills in R, with 24% higher gaining initial skills anxiety presemester and 11% higher gaining initial skills anxiety postsemester as compared with men (pre p < 0.001, pre df = 97; post p = 0.13, post df = 97; see Figures 1 and 2). Importantly, women reported significantly lower anxiety about gaining initial skills in R after the semester as compared with before the semester (p = 0.02, df = 125, beta = 0.37). Men did not report a significant difference in gaining initial skills anxiety between pre and post surveys (p = 0.8, df = 65). There was a nearly significant interaction between gender and time (pre/post) for this metric (p = 0.1, df = 191, beta = −0.41), which results from women reporting greater decreases in their anxiety about gaining initial skills than men despite their consistently higher anxiety overall (Table 2).

FIGURE 1. Pre- and postsemester anxiety metric scores for men (green), women (orange), and nonbinary (purple) students. Nonbinary students are shown here for representation purposes, but conclusions cannot be drawn due to a small sample size (n = 3). Statistically significant differences in each metric between students identifying as women and men are denoted with asterisks. Women consistently reported higher anxiety (state of anxiety, gaining initial skills anxiety) and lower confidence (sense of control, self-concept) both pre- and postsemester compared with men.

FIGURE 2. Changes in anxiety scores (rows) from pre- to postsemester surveys for men (green), women (orange), and nonbinary (purple) students. Nonbinary students are shown here for representation purposes, but conclusions cannot be drawn due to a small sample size (n = 3). An increase in a given metric is shown above 0 on the y-axis, while a decrease is shown below 0 on the y-axis. Statistically significant differences between pre- and postsemester scores for men and women separately are denoted with asterisks. All students' reported anxiety (state of anxiety, gaining initial skills anxiety) and confidence (sense of control, self-concept) generally improved over the course of the semester, although changes did not remove the gender gap.

Sense of Control in R. This scale has the highest possible score of 5, with 1 being the lowest sense of control in R and 5 being the highest sense of control in R. Women reported a significantly lower sense of control in R, with women reporting 17% lower sense of control presemester and 7% lower sense of control postsemester as compared with men (pre p < 0.0001, pre df = 190; post p = 0.025, post df = 190; see Figures 1 and 2). Both women and men had significant increases in their sense of control between pre- and postsemester survey responses (women's values: p < 0.0001, df = 252, beta = −0.53; men's values: p = 0.04, df = 124, beta = −0.22). We observed a nearly significant interaction between gender and time (pre/post) for this metric (p = 0.07, df = 378, beta = 0.31), which results from women making greater gains in sense of control than men despite their consistently lower sense of control in R overall (Table 2).

R Self-Concept. This scale has the highest possible score of 5, with 1 being the least positive R self-concept and 5 being the most positive R self-concept. Women reported a significantly lower R self-concept, reporting a 17% lower self-concept presemester and 12% lower self-concept postsemester as compared with men (pre p < 0.001, pre df = 190; post p = 0.02, post df = 190; see Figures 1 and 2). Women reported a significant increase in R self-concept after the semester as compared with before the semester (p = 0.007, df = 250, beta = −0.29). Men also reported a significant increase in their R self-concept (p = 0.05, df = 126, beta = −0.22).

State of Anxiety in R Situations. This scale has the highest possible score of 4, with 1 being the lowest state of anxiety and 4 being the highest state of anxiety. Women reported a significantly higher state of anxiety, reporting a 16% higher state of anxiety presemester and an 11% higher state of anxiety postsemester as compared with men (pre p < 0.001, pre df = 190; post p = 0.04, post df = 190; see Figures 1 and 2). Both men and women reported lower levels of anxiety after the semester as compared with before the semester, although the effect was not significant for men according to results from the simple effects model (women's values: p < 0.001, df = 252, beta = 0.32; men's values: p = 0.09, df = 124, beta = 0.16).

Is Instructor Immediacy Related to Changes in R Programming Anxiety (Q2)?

Because we hypothesized that nonverbal and verbal immediacy would be highly correlated (which we confirmed through a linear regression), we ran separate models for each of these two metrics.

Nonverbal Instructor Immediacy. The nonverbal immediacy scale has the highest possible score of 5, with 1 being the least nonverbally immediate and 5 being the most nonverbally immediate. The mean nonverbal immediacy score reported by women was 4.16, with men reporting an average nonverbal immediacy score of 3.83. Higher nonverbal immediacy scores were correlated with greater increases in students' sense of control with R (p = 0.02, df = 191, beta = 0.22) and R self-concept (p = 0.04, df = 191, beta = 0.21; see Figure 3). Nonverbal immediacy was not correlated with changes in students' state of anxiety while using R (p = 0.22, df = 191) or anxiety about gaining initial skills in R (p = 0.68, df = 98).

Verbal Instructor Immediacy. The verbal immediacy scale has the highest possible score of 4, with 1 being the least verbally immediate and 4 being the most verbally immediate. The mean verbal immediacy score reported by women was 2.94, with men reporting an average verbal immediacy score of 2.76. Student-reported verbal immediacy was not correlated with changes in students' sense of control with R (p = 0.23, df = 191), R self-concept (p = 0.11, df = 191), students' state of anxiety (p = 0.17, df = 191), or gaining initial skills in R anxiety (p = 0.29, df = 98).
Do Programming Anxiety or Instructor Immediacy Affect Student Engagement in the Classroom (Q3)?

Overall, students were highly engaged during the lab classes in both sections where they were learning R, with 94% mean engagement in each of the two R-focused sections. Students were 8% less engaged, but still highly engaged, in the "control" observation during a lab that focused on poster creation and no R content, with a mean percent engagement of 86%.

R Anxiety. Percent engagement in the classroom during R sessions was not correlated with presemester R self-concept (p = 0.22, df = 46), sense of control with R (p = 0.32, df = 46), or students' state of anxiety while using R (p = 0.28, df = 46). No significant relationships between any metric of anxiety and percent engagement were found in control sessions either (p values all > 0.7). Here, we do not report on gaining initial skills in R, because we did not deploy that scale in the semester that we did classroom observations (so we could add the coping skills scale without increasing survey fatigue).

Instructor Immediacy. Percent engagement in the classroom was not correlated with nonverbal immediacy (p = 0.42, df = 46) or verbal immediacy (p = 0.56, df = 46) for the R sessions. No significant relationships were found when examining engagement in control sessions either (p values all > 0.09).

Are Coping Skills Correlated with Changes in Programming Anxiety (Q4)?

Behavioral Disengagement. For women, lower reported rates of avoidance/behavioral disengagement were associated with greater increases in their sense of control in R (beta = −0.58, p = 0.01, df = 57), greater increases in R self-concept (beta = −0.54, p = 0.003, df = 57), and greater decreases in their state of anxiety while using R (beta = 0.42, p = 0.01, df = 57; see Figure 4). For men, avoidance/behavioral disengagement was not associated with changes in any metric of anxiety (all p > 0.05).

Active Coping. Active coping was not associated with changes in any metric of anxiety for men or women (all p > 0.05).

Planning. For men, higher rates of planning were associated with greater increases in sense of control in R (beta = 0.41, p = 0.03, df = 23), greater increases in R self-concept (beta = 0.47, p = 0.008, df = 23), and greater decreases in their state of anxiety while using R (beta = −0.54, p = 0.0002, df = 23; see Figure 5). For women, planning was not associated with changes in any metric of anxiety (all p > 0.05).

FIGURE 5. Relationship between men's planning scores and changes in their state of anxiety with R, sense of control in R, and R self-concept (in order left to right). For men, higher rates of planning when encountering challenges with R were associated with greater decreases in their state of anxiety while using R (left; beta = −0.54, df = 23, p = 0.0002), greater increases in sense of control in R (middle; beta = 0.40, df = 23, p = 0.03), and greater increases in R self-concept (right; beta = 0.48, df = 23, p = 0.008). Data are shown with linear regression lines (dark gray line) and 95% confidence intervals (gray shading).

Instrumental Support Seeking. For women, higher rates of instrumental support seeking were associated with greater decreases in their state of anxiety while using R (beta = −0.33, p = 0.02, df = 57), but not with changes in R self-concept or sense of control in R (p > 0.05; see Figure 6). For men, instrumental support seeking was not associated with changes in any metric of anxiety (all p > 0.05).
Self-Blame. For men, higher rates of self-blame were associated with greater decreases in their state of anxiety while using R (beta = 0.14, p = 0.05, df = 23), but were not associated with R self-concept or sense of control (p > 0.05). For women, higher rates of self-blame were related to greater increases in R self-concept (p = 0.05, df = 57, beta = 0.18), but were not related to their state of anxiety or sense of control (p > 0.05). In both cases, the effect of self-blame on the anxiety metrics was very small (beta < 0.2).

Methodological Limitations

This study was an observational study confined to a single course context that used mixed-model regression analysis to identify relationships among variables. Because randomization and a control/comparison group were not used in this study, we cannot infer causation. For example, we cannot infer that the R instruction in the course under investigation resulted in the observed improvements to R anxiety, skills, self-concept, or sense of control. Nor can we infer that specific coping strategies caused changes in self-concept, sense of control, or R anxiety. Furthermore, we recognize that administering the pretest, which asked students about anxiety, might have induced anxiety via suggestion (i.e., via the self-fulfilling prophecy of anxiety or fear suggested by some of the items) and inflated the levels of anxiety measured. While this condition was true for all students in this study, it nonetheless may have influenced our results.

We also cannot extend the results of this study to other courses; our results should only be considered within the context of the course in question. Additionally, our results should only be considered for the student identities present in our data set. Students at our institution are predominantly white and ages 18-24. Our results may not apply to persons from underserved ethnic or racial groups or nontraditional students. Likewise, our results apply primarily to cisgendered individuals; gender-nonconforming and nonbinary individuals made up a very small portion of our sample (n = 3). Furthermore, due to a highly unbalanced design (i.e., all except one of the instructors identified as women), we could not investigate the effect of instructors' gender identities on the outcomes of interest. Despite these limitations, the significant relationships observed as a result of our mixed models allow us to hypothesize mechanisms that may have caused these relationships and propose interventions and future studies for further investigation. Also, while our results are not broadly applicable, they can lend insight into courses similar to ours in which R skills make up a small but significant portion of instruction. In addition, our results allow us to add to the body of studies that characterize patterns in R anxiety across demographic variables, such as gender, which we report on here.
Another limitation of this study is that we ran ∼50 regressions to avoid overparameterized models; at an alpha of 0.05, this means that roughly 2.5 of these ∼50 inferences (5%) may be incorrect by chance. However, following Gotelli and Ellison's (2013) suggestion, we did not globally reduce our alpha from 0.05. Specifically, a global reduction of alpha is excessively conservative, as it assumes that all tests are independent of one another and that all of the null hypotheses are true. Further, alpha is an important standard for comparison across scientific literature, and each test provides an important piece of information in distinguishing between scientific hypotheses.

Another limitation in our statistical analyses relates to our analysis of Q3: whether presemester anxiety scores were correlated with percent engagement. We hypothesized that presemester anxiety scores would affect engagement more than post survey anxiety scores, because the labs these observations were conducted in represented the first R learning opportunity for many of our students and were closer in time to the pre survey. However, this does not take into account that anxiety scores are likely to change over time, and student anxiety scores could have been different at the exact time of observation.

A last limitation of our analyses is that data for Q4, which investigates coping mechanisms, were only collected for one semester (semester 3), and the small sample size could lead to overparameterization. However, this model met the assumptions of linear analyses as described in the statistical methods.

An additional limitation of this study is that our participants were drawn from a volunteer sample. Students were not required to participate and were instead offered the opportunity to participate and self-selected into the study. This has the potential to bias study results if the students who choose not to enroll represent a distinct subpopulation with different experiences in comparison with those who enroll. We have no reason to believe that this is the case. Of students who were invited to participate in this study, 63% participated, and their demographics were not different from those typical of the course. Thus, we are reasonably confident that our participant sample is not biased due to students' self-selection into the study.

A final limitation of our study arises from the observational nature of this work. Our research questions and predictions are aimed at understanding how variation in one observed variable (the predictor) relates to another (the response).
If variation in the observed measures of either the predictor or response variable is limited, it becomes more difficult to ascertain if there is or is not a relationship between the two variables, because we are limited to looking only at the range of values represented by the data. Within our data, variation is somewhat limited in our measures of instructor immediacy (all instructors had relatively high immediacy values) and quite limited in our measures of engagement (almost all students displayed very high engagement). Thus, our investigations are limited to relationships among instructors with relatively high immediacy and among relatively engaged students. If we had more variation in our observed measures of these predictor variables or conducted an experiment, we would have more potential to observe significant relationships where currently we see none. More investigations examining a broader spectrum of immediacy and engagement could be done to elucidate whether these factors might affect R anxiety and other metrics under different conditions.

DISCUSSION

All Students Report Improvements in Anxiety over the Course of the Semester, but Women Consistently Report Significantly Higher R Anxiety, Lower Self-Concept, and Lower Sense of Control in R

We found that, relative to their men classmates, women consistently reported 1) higher anxiety about gaining initial skills in R (24% pre and 11% post, although post was not a significant difference), 2) a lower R self-concept (17% pre and 7% post), 3) a lower sense of control in R (17% pre and 12% post), and 4) a higher state of anxiety in R situations (16% pre and 11% post) both before and after the semester. Our findings of a narrowed, yet persistent, gender gap between women and men in programming anxiety and self-efficacy are similar to related prior research showing gender gaps in statistics anxiety (Ralston et al., 2016), math self-efficacy (Pajares, 2005), and computer anxiety (Chua et al., 1999; He and Freeman, 2010; Powell, 2013). Here, we complement the existing literature by investigating anxiety associated with learning to use a common data analysis tool and coding language (R). We also contribute evidence that can help us to understand how students' gender identity can impact the experience of learning to program in a biology course. We show that, despite greater numeric representation of women in biology (Spini et al., 2021) and even as technological literacy in the general population has advanced along with the importance of teaching undergraduate students coding skills (Auker and Barthelmess, 2020), women in biology experience their R course work differently from men. Notably, however, due to sample size, we cannot comment on the experiences of individuals with gender identities other than "man" and "woman," nor whether gender gaps in R anxiety existed, persisted, or dissipated over the term of the study for such students.

The continuation of the historic trend of women reporting lower levels of self-efficacy than men and experiencing negative psychological states when engaging in quantitative tasks is concerning. Notably, experiencing a negative physiological and psychological state during task engagement (i.e., experiencing anxiety) combined with a lack of self-efficacy and self-concept development at a task can decrease motivation and engagement (Wigfield and Eccles, 2000; Usher and Pajares, 2008).
This, in turn, could preclude women from accessing mastery experiences while learning R, leading to a positive feedback loop that further threatens self-efficacy and leads to greater disengagement (Wigfield and Eccles, 2000;Usher and Pajares, 2008). While lower self-efficacy and higher anxiety in R are not direct measures of student performance or grades in courses that use programming skills, it is important to recognize that self-efficacy, self-concept, and identity development are important predictors of persistence in STEM (Graham et al., 2013). Thus, this trend has the potential to limit women's persistence in statistics-heavy aspects of ecology and evolutionary biology and other subdisciplines such as computational biology, which are increasingly valued in the workforce (Burning Glass Technology, 2016). Unfortunately, there is evidence that this negative cycle begins early, with gender gaps between women and men in math self-efficacy starting as early as middle school, contributed to by factors including media representation and parents' perceptions of their children's abilities (Meece and Courtney, 1992;Wigfield et al., 1996;Pajares, 2002, 2005). However, we can potentially break this cycle by providing interventions that target programming anxiety. While we did not directly investigate methodology for, or results of, direct interventions on anxiety, several of our results discussed later suggest potential targets for such interventions.

Despite the presence of persistent gender gaps between women and men, anxiety for all students decreased and self-concept and sense of control increased over the course of the semester. We also found evidence that the magnitude of gender gaps between women and men decreased postsemester for anxiety associated with gaining initial skills in R and sense of control in R (as indicated by significant or nearly significant interaction terms). Thus, aspects of the course curriculum may have served to reduce these gender gaps. However, given that we did not have a comparison group in this study, we cannot say with certainty whether this was the case. Given our results, however, we do hypothesize that low-stakes practice in R, frequent feedback, more mastery experiences, learning in an active-learning environment (workshop-style sessions in their lab course), and applying R skills to independent work for which students felt a sense of ownership (using R to analyze data from their independent projects) all may have contributed to these positive outcomes (Pajares, 2002;Corwin et al., 2018). However, again, due to the observational nature of this study, we cannot say whether these outcomes were the result of successful pedagogical techniques or simply of increased R exposure over time. Additional experimental or quasi-experimental studies involving interventions may better serve to elucidate beneficial teaching practices (see the Implications for Teaching and Research section).

Instructor Immediacy Has Minimal Effects on Student R Anxiety

Contrary to a wide array of literature showing that instructor immediacy can decrease students' quantitative anxiety and positively impact student learning (Witt et al., 2004;Williams, 2010), we found minimal evidence in the context of our study for instructor immediacy impacting student R anxiety or confidence.
Two exceptions were that nonverbal instructor immediacy was positively associated with students' R self-concept and sense of control in R; however, those effect sizes were extremely small and unlikely to be of high impact. Thus, we do not discuss these in depth. We hypothesize that we did not find evidence of a relationship between instructor immediacy and student R anxiety due to two main factors. First, there was a lack of variation in instructor immediacy: in general, immediacy levels were high. Statistically, it is harder to detect an effect when there is limited variation in the data. This could also suggest there is a threshold beyond which immediacy does not make a significant impact on student anxiety or self-efficacy. Second, class context likely affected our findings. We conducted this research in small lab courses, with a maximum of 14 students in each section. These small class sizes, taught by graduate TAs, likely meant greater opportunities for immediate behaviors, and because students received almost personalized instruction in R through workshop-style lessons, the effects of immediacy could have been eliminated (Furlich, 2016). To this effect, small class sizes may have precluded the effect of immediacy acting on anxiety in part by causing students to feel that they could easily access the instructor (Furlich, 2016). In this course, it is arguable that all students had access to their TAs, despite some small variation in how immediate they may have felt their instructors were.

R Anxiety and Instructor Immediacy Are Not Correlated with Student Engagement

We found no evidence that R anxiety and instructor immediacy impact student engagement in the classroom. Instructor immediacy has been shown to increase student willingness to communicate with the instructor (Allen et al., 2008), so we predicted that it might increase student engagement during class. However, we did not see an effect, likely because of extremely high overall levels of engagement in our observation sessions. Class sizes were very small, and each lesson in R was active and highly structured: students went through an R script at the same time as the TA, and TAs frequently checked in. One example of this is that some TAs asked students to put different-colored sticky notes on their monitors to denote their progress with a given section of code (e.g., red for stuck and in need of help from the TA, green for done and ready for the next section). Thus, it was difficult for students not to be fully engaged for the duration of the R workshop. This aligns with prior work suggesting that active-learning approaches increase student engagement (Armbruster et al., 2009).

Adaptive Coping Skills Are Associated with Improvements in Anxiety but Differ by Gender

We found several relationships between coping responses and anxiety. In addition, we found that men and women reported using different coping strategies, which has also been shown in other research (Lawrence et al., 2006;Eschenbeck et al., 2007;Madhyastha et al., 2014;Martínez et al., 2019). Notably, avoidance/behavioral disengagement, predicted to be a maladaptive coping response in academic contexts (Skinner et al., 2003;Henry et al., 2019), was associated with changes in anxiety metrics for women, but not for men. For women, less frequent avoidance/behavioral disengagement was associated with greater improvements in sense of control, greater increases in R self-concept, and greater decreases in state of anxiety.
This has important implications for student persistence, as students who experience a lower sense of control and higher sense of anxiety are more likely to disengage in the long term (Wigfield and Eccles, 2000;Bonneville-Roussy et al., 2017). This also supports predictions that avoidance/behavioral disengagement is maladaptive in academic contexts, because women who engaged in more frequent disengagement did not experience decreases in anxiety or improvements in R self-concept and sense of control (Henry et al., 2019). Further, this aligns with empirical findings examining the experiences of medical students and college students indicating that emotion-focused coping, which includes denial, avoidance, and disengagement, resulted in lower motivation and satisfaction and was associated with higher rates of failure in comparison with other coping strategies (Struthers et al., 2000;Alimoglu et al., 2010).

Patterns in self-blame did not align with our predictions of self-blame as maladaptive, standing in contrast to our results regarding disengagement. We found evidence that higher rates of self-blame were associated with greater decreases in men's state of anxiety and greater increases in women's R self-concept. While the effects we found for self-blame are very small, this counterintuitive result is notable and may be due to the complex relationship between self-blame and attribution of control. On one hand, self-blame is known to lead to rumination and inaction (Legerstee et al., 2010) and is associated with multiple types of anxiety in children (Rodriguez-Menchon et al., 2021) and increased stress in college students (Straud and McNaughton-Cassill, 2019). However, self-blame may also empower individuals to act if they blame their own behavior and perceive their responsibility in causing the stressor (as opposed to blaming their character or disposition; Shaver and Drown, 1986). If individuals perceive an issue to be their own fault as a result of some changeable, controllable behavior, as opposed to the result of some uncontrollable external factor, they may feel a greater sense of control over the situation, and thus can feel empowered to improve it (Rotter, 1966;Weiner, 1985). Thus, the dual nature of self-blame may help to explain our finding of weak associations between self-blame and improvements in anxiety and self-concept.

Responses such as active coping, planning, and instrumental support seeking are predicted to be adaptive in STEM academic contexts (Henry et al., 2019) and are typically associated with positive outcomes such as avoiding burnout (Sevinç and Gizir, 2014;Shin et al., 2014), increased motivation (Struthers et al., 2000), satisfaction (Alimoglu et al., 2010), and increased academic achievement (Brdar et al., 2006). We found some support for this in our data. For men, higher rates of planning were related to greater increases in sense of control and R self-concept and greater decreases in their state of anxiety. This corroborates findings that planning is one component that negatively predicts burnout (anxiety is one correlate of burnout; Shin et al., 2014;Lyndon et al., 2017) and positively predicts academic achievement (Struthers et al., 2000), but adds granularity regarding who may benefit the most from planning.
Similarly, we also found that instrumental support seeking was associated with decreases in women's state of anxiety when using R, again corroborating evidence that seeking support often occurs in response to anxiety in an effort to alleviate it (Rijavec and Brdar, 1997) and helps to avoid burnout (Shin et al., 2014). These findings suggest that women in particular may have found that the support provided in class helped to alleviate their anxiety. Further, this finding suggests that targeted interventions that build in explicit, readily available opportunities for instructor support may assist in reducing gender gaps in anxiety.

Surprisingly, active coping did not predict any of the anxiety metrics measured in this study, despite our prediction that it might alleviate anxiety and increase R sense of control. We initially predicted that active coping would help students to solve their problems in R, which we assumed would lead to a greater sense of control and lower anxiety. However, we failed to consider an important characteristic of the classrooms we studied. In the active-learning format, students were expected to be active in troubleshooting and solving problems in R. This explicit expectation could have increased the frequency of reporting this coping mechanism, dampening the ability to observe an effect. It could also be that the timescale of our study was too short to adequately observe relationships between these variables.

IMPLICATIONS FOR TEACHING AND RESEARCH

Our findings show a persistent gender gap in the anxiety that undergraduate ecology students experience while learning to code in the statistical program R: a gap that narrows over the course of the semester but is still maintained. Specifically, we found that women reported significantly higher anxiety and lower confidence than men, and that this gender gap remained over the course of a semester. However, our findings suggest that coping strategies share a moderately strong relationship with anxiety and the observed gender gap. As a result, teachers would benefit from focusing on interventions that improve coping strategies. We found that women who reported instrumental support seeking more frequently showed greater decreases in R anxiety, and those who reported disengaging less frequently showed greater increases in R sense of control and R self-concept and greater decreases in R anxiety. This result is particularly notable for R self-concept, given that students displayed lower levels of R self-concept in comparison with other anxiety metrics. Thus, women may especially benefit from interventions that discourage avoidance/behavioral disengagement and encourage help-seeking. Men seemed to benefit specifically when planning was used, especially with regard to improvements in their very low levels of R self-concept. All in all, interventions that provide training for how to develop a plan when facing a coding issue, build in opportunities for seeking instrumental support, and provide alternatives to disengaging from difficult R tasks may help students to alleviate or avoid anxiety while also working toward closing the observed gender gap. In previous work, growth mindset interventions have been shown to be particularly effective in encouraging instrumental support seeking and decreasing achievement gaps (Yeager et al., 2016;Casad et al., 2018;Fink et al., 2018;Covarrubias et al., 2019;Henry et al., 2019).
For example, sending an email inviting students to join a peer-led tutoring program in a gateway STEM course that used growth language (e.g., "this program helps you and your peers to build a learning community," "mastering course material is a process that takes hard work and effort," "our job is to grow your understanding of the material step-by-step and to support you in this process") led to significantly higher rates of instrumental support seeking in women compared with when students received an email excluding growth language (Covarrubias et al., 2019). In another study, with first-year college students in an introductory chemistry course, assignments that included growth mindset information (e.g., reading an article titled "You Can Grow Your Brain," which explains that the brain is malleable and knowledge is not fixed) and reflection practices (e.g., students were asked to reflect on how the growth mindset article would inform their study strategies) eliminated an achievement gap between underrepresented minority and white students that was present when these interventions were not conducted. Toward increasing engagement and decreasing achievement gaps, more structured courses and greater employment of active-learning techniques may be particularly effective (Tanner, 2013;Eddy and Hogan, 2014;Gavassa et al., 2019).

Future research on 1) programming anxiety across biology programs, not just introductory ecology courses; 2) different contexts including, for example, community colleges and larger classes; 3) interventions that target adaptive coping strategies to alleviate programming anxiety; and 4) interventions to decrease disengagement in the face of challenging materials would help to elucidate how widespread gender gaps in programming anxiety are across our discipline and aid in the development of solutions for mitigating them.
Converting health risks into loss of life years - a paradigm shift in clinical risk communication

For facilitating risk communication in clinical management, a ratio-based risk measure becomes easier to understand if expressed as a loss of life expectancy. The cohort, consisting of 543,410 adults in Taiwan, was recruited between 1994 and 2008. Health risks included lifestyle, biomarkers, and chronic diseases. A total of 18,747 deaths were identified. Chiang's life table method was used to estimate the loss of life expectancy. We used Cox regression to calculate hazard ratios (HRs) for health risks. The increased mortality from cardio-metabolic risks such as high cholesterol (HR=1.10), hypertension (HR=1.48), or diabetes (HR=2.02) can be converted into a loss of 1.0, 4.4, and 8.9 years in life expectancy, respectively. The top 20 of the 30 risks were associated with a loss of 4 to 10 years of life expectancy, with 70% of the cohort having at least two such risk factors. Smoking, drinking, and physical inactivity each carried a 5-7 year loss. Individuals with diabetes or an elevated white count had a loss of 7-10 years, while prolonged sitting, the most prevalent risk factor, had a loss of 2-4 years. Those with diabetes (8.9 years) and proteinuria (9.1 years) present at the same time showed a loss of 16.2 years, a number close to the sum of each risk. Health risks, expressed as life expectancy loss, could facilitate risk communication. This paradigm shift in expressing risk intensity can help set public health priorities scientifically and promote a focus on the most important risks in primary care.

INTRODUCTION

Much of our effort in primary care goes to identifying health risks and reducing their health impact. Thus, understanding, quantifying, and managing health risks have become among the most important activities in clinical settings, particularly the effective communication of these risks to the lay person. Currently, "relative risk" has been the most commonly used expression, and the size of relative risks reported in different studies is perceived as the degree of harmfulness. Relative risk, a ratio-based measure, is not only difficult for the public to understand, but also cannot be directly compared with other relative risks, or across different reference groups. In contrast, life expectancy, derived from collapsing age-specific mortality, is an absolute measure with implications well understood by most people. Life expectancy is the average number of years a cohort of people is expected to live, which has intuitive meaning and can be compared [1,2]. Prolonging one's life is an overarching goal of the public health community [3]. Overcoming the loss of life expectancy, on the other hand, is a clinical goal shared by everyone. Loss of life expectancy in years can be a universal yardstick across different disciplines in clinical practice, reflecting the severity of a given risk. Nevertheless, life expectancy has not been extensively used in cohort studies, and its potential has not been fully recognized, mainly because its calculation requires a large cohort with an extended follow-up time, yielding stable results with a sufficient number of deaths in each age group [4][5][6][7][8]. In contrast to the methods adopted by the Global Burden of Disease Study [9], or the study of potential years of life loss (PYLL) at a population level [10], results that are directly relevant to individuals in their daily lives have been limited.
Priorities based on global or societal issues are important but different, and they are not perceived by individuals as motivation for behavioral change. The literature has seldom applied PYLL to quantify the impact of risk factors at a personal level. In addition, morbidities identified in the population are not necessarily the same as the registered causes of death. As the reduction of health risks to extend life expectancy is among the most important clinical and public health goals, the loss of life expectancy is easy to understand and intuitively memorable. More importantly, direct comparison between risks can be made using life expectancy. In this study, 30 health risks were identified from a standard medical screening program on half a million participants consecutively recruited between 1994 and 2008 in Taiwan. These risks represented common behavioral risk factors and medically screened risks. The objective of this study is to quantify the negative impact of each of these health risks on life expectancy. Loss of life expectancy was the difference in life expectancy between those with the risk and those without it. Years of life lost for individuals with two or three risks were also assessed against those without either risk.

RESULTS

The study population consisted of 543,410 adults, including 48% men and 52% women, with an average follow-up of 8.1 years. A total of 18,747 deaths were identified. Approximately a third of the study population was middle-aged (40-59 years) at the beginning of follow-up. The elderly (60 years and older) accounted for 13% of the cohort but over 60% of the deaths (Table 1). The magnitude and pattern of hazard ratios in the univariate and multivariate analyses were largely similar for each of the 30 risks (Supplementary Table 1). The top 20 of 30 risks were associated with a loss of 4 or more years of life expectancy (Table 2 and Figure 1). Six of the top 10 risks were identical between men and women, with at least 6 years of life lost, and three were found to shorten life by 9 years: high heart rate, proteinuria, and diabetes. Ninety-one percent of the cohort had at least one risk that shortened life by 4 or more years, 68% had two, and 41% had three or more (Supplementary Table 2). Individuals with a heart rate greater than 90 beats/minute had a substantial loss of life expectancy, 10 years for men and 9 years for women. Individuals with an elevated WBC count (>9,000/mm³) lost 7 years, and those with an elevated CRP (>3 mg/L) lost 5-7 years. Proteinuria, determined by dipstick, carried a 9-10 year loss as a whole, but a 7-year loss for those with trace proteinuria (data not shown). In the middle range of life years lost were long-duration sleep (5 years), physical inactivity (5 years), hypertension (4-5 years), obesity (3-4 years), and prolonged sitting (2-3 years). Among the risks with minimal loss of years were high cholesterol (1 year), pre-hypertension, and pre-diabetes (1-2 years). In contrast, those with low cholesterol lost 4-5 years, while being underweight carried a loss of nearly 8 years in men and 4 years in women.

The loss of life expectancy for individuals with two coexisting risks is shown in Figure 2 for selected risks. Those having diabetes (8.9 years) and proteinuria (9.1 years) present at the same time showed a loss of 16.2 years, a number close to the sum of each risk. Similarly, inactive individuals (4.7 years) with hypertension (4.4 years) had a loss of 9.2 years, or with proteinuria (9.1 years) a loss of 13.8 years.
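As a quick arithmetic check on the near-additivity just described (a simple illustration using the values reported above, not an analysis from the paper):

```r
# Naive sums of single-risk losses vs. the observed joint losses (years)
8.9 + 9.1   # diabetes + proteinuria: 18.0, vs. 16.2 observed
4.7 + 4.4   # inactivity + hypertension: 9.1, vs. 9.2 observed
4.7 + 9.1   # inactivity + proteinuria: 13.8, vs. 13.8 observed
```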
DISCUSSION

We identified 30 risks from data collected in routine medical screening. By converting these health risks into life expectancy loss, patients can easily understand the magnitude of each of their risks. If interpreted by clinicians, risk communication can be greatly facilitated. Given the reality of multiple risks in life, one is usually at a loss as to their relative importance. For example, three risks commonly encountered in men (hypertension, diabetes, and high cholesterol) are often interpreted as conferring a 48%, 102%, and 13% increase in mortality, respectively. Expressing the risk as years of lost life expectancy serves several purposes: simplifying the size of the risks, setting priorities for proactive action among competing interests, and motivating behavioral changes. The benefits of risk reduction will be in the number of years gained, which is an understandable term. It is a potential paradigm shift in risk communication that can facilitate preventive action during teachable moments.

The seemingly additive feature of life years lost was intriguing, as these results were not expected [4]. Whether it is additive or not should, in theory, depend on the interdependencies or interactions between the two risks. Positive synergistic effects of certain coexisting risks, e.g., smoking and diabetes, have a greater impact on life expectancy loss than the sum of the individual conditions. On the other hand, overlapping risks, e.g., smoking and drinking, have a smaller joint effect than the sum of the individual risks. Since the exact relationship between risks is not fully understood, their joint impact on life expectancy requires further study. This finding highlights the seriousness of the combined impact of multiple risks and underscores the utility of life years lost, with some risks more life-threatening than others. For the top 20 risks, an overwhelming majority of the study subjects, 9 out of 10, had at least one risk, losing 4-10 years of life; over two-thirds had two or more, losing 8-16 years; and one-third had three risks, losing 12-25 years. At least in theory, two-thirds of the cohort could gain up to 16 years if their two coexisting risks were substantially modified. Armed with this information, health educators or clinicians can better motivate patients to set priorities and make behavioral changes. While each of the 30 health risks is important, we focused on the top ten risk factors for males and females. In addition, a few particular risks of general interest are also selected for discussion.

High heart rate

The independent relationship between resting heart rate and mortality has been reported [23,24]. Heart rate reflects the known relationship between cardiorespiratory fitness and mortality, since low heart rates are characteristic of fit individuals. For those with a normal heart rate on the high side, 80-99 beats/min, a large loss of 5-8 years was found. A rapid heartbeat overloads the heart, with an extra 300 million beats in a span of 20 years for those with 90 beats/min compared to those with 60 beats/min. Indeed, heart rate has been inversely associated with longevity across mammal species, from rats, with 500 beats/min for 2 years, to whales, with 10 beats/min for 80 years [23,24].
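The extra-beats figure above follows from one line of arithmetic (an illustration, not a calculation from the paper):

```r
# Additional beats accumulated over 20 years at 90 vs. 60 beats/min
(90 - 60) * 60 * 24 * 365 * 20   # 315,360,000, i.e., roughly 300 million extra beats
```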
Proteinuria and low GFR

Proteinuria and low GFR are components of chronic kidney disease. With proteinuria prevalence at 7-8%, its large life-shortening effect of 9-10 years has often been overlooked [25,26]. Most people with proteinuria are unaware of such a risk, a risk easily detected by urine dipstick [26,27]. The only report on the risk of proteinuria in the form of life-shortening came from Canadian insurance data, with results similar to our study [28]. The prevalence of low GFR increased with age, and the associated life shortening is about 4 to 7 years. Low eGFR and proteinuria, when found in diabetics, are known as diabetic kidney disease (DKD). DKD has been reported to shorten life expectancy by up to 16 years, as the presence of proteinuria makes diabetes behave like a different disease [29].

Diabetes

The 9 years of life lost from diabetes is much higher than that from hypertension or elevated cholesterol. The differences in life loss observed between diabetic and nondiabetic participants were similar to those found in previous studies [30][31][32]. In the Framingham Heart Study, diabetic men and women 50 years and older lived on average 7.5 and 8.2 years less than their nondiabetic equivalents [31].

Inflammatory markers: CRP and WBC

Both CRP and WBC count are well-known inflammatory markers, associated with increased mortality for all causes and for CVD [19][20][21][22]. In clinical settings, each of these two markers is routinely measured, but the large loss of life expectancy, 5-7 years for CRP (≥3.0 mg/L) and 6-7 years for WBC (≥9,000/mm³), was surprising and has not previously been reported.

Anemia

The prevalence of mild anemia (hemoglobin 10-13.4 g/dL) is 5.3% for males and 10.2% for females. Anemia was primarily caused by iron deficiency; low oxygen-carrying capacity may also result from other chronic cardiovascular diseases. Though the prevalence for females is higher than for males, the loss of life expectancy is larger for males, at 7.5 years, than for females, at 4.9 years.

Obesity or underweight

The 3-4 years of life lost for obese individuals in this cohort is slightly smaller than that reported for Western populations [33]. In contrast, men who were underweight had a loss of 7 years, much larger than obese men. Such a paradoxical observation, with underweight worse than obesity, has been reported from Japan [34]. Underweight, quite common in Asian women (12%), carried a loss of 4.3 years, larger than the 2.6 years of their obese counterparts.

Smoking, regular drinking, and betel quid chewing

Smoking, drinking, and betel quid chewing were three major lifestyle risk factors for males. All are high-risk factors for several cancer sites. The life loss of a regular drinker is slightly larger than that of betel quid chewers or smokers and may be due to accidents from drunk driving. Most betel quid chewers were also smokers, and they showed a similar loss of life of 5 to 6 years.

COPD

COPD is not only a risk factor for lung diseases but also a systemic factor for several causes of death [35,36]. COPD carries 6 years of life loss, but when concurrent with smoking, the years of life lost may increase to ten or more.

Hypertension

In a recent publication of the Global Burden of Disease Study (2020), high systolic blood pressure (SBP) was, among 87 risk factors analyzed, the leading risk factor for attributable deaths globally [37]. A similar conclusion a decade ago was that high BP was the biggest single contributor to all-cause mortality [38,39]. However, by life-shortening effect it ranked only about tenth among risk factors in this cohort, with 5 to 6 years of life loss.
Sleep duration

Sleeping longer than 8 hours has been reported as an independent risk for mortality [40,41]. It was equivalent to a 5-year loss of life in this study. The mechanism is not well known but may reflect underlying health conditions. Hazard ratios for sleeping less than 4 hours were comparable to those for sleeping longer than 8 hours (Supplementary Table 1), suggesting that 6 to 7 hours of sleep a day is the goal to achieve.

Low and high total cholesterol

The minimal loss of life years (1 year) for high cholesterol (≥240 mg/dL) and the larger loss (4-5 years) for low cholesterol (<160 mg/dL) may seem puzzling but have been repeatedly reported among Asians [42,43]. High cholesterol is known to be a major risk for heart disease. However, the relative proportion of heart disease among Asians is small, with heart disease mortality in Taiwan (11.7%) half that in the United States (23.1%). Low cholesterol is known to increase mortality from hemorrhagic stroke and from liver cancer, as reported in other Asian populations [42,43].

Prolonged sitting

Prolonged sitting has received increasing attention and has been reported as a mortality risk independent of physical activity [44,45]. Prolonged sitting could shorten one's life by nearly 8 years when coupled with physical inactivity. Prolonged sitting and inactivity were two of the most common health risks, and their personal and public health implications cannot be overemphasized.

There are some limitations to this study. First, the loss of life expectancy was calculated without adjusting for confounding factors. However, adjusting life expectancy for comorbidities is not commonly done and is more of an academic exercise. Second, the life-shortening effect was calculated from differences in life expectancy, and, technically speaking, health risks based on different reference groups cannot be directly compared. Third, the life expectancy from this study is cohort-specific and may not be applicable to other populations, such as non-Asians. However, by focusing on differences in life expectancy, our study should have minimized cohort-specific concerns [5]. Fourth, risk factors were determined at initial entry to the cohort and may have changed during the study. However, with such a large set of data, the impact would be very limited [25,46].

In conclusion, behavioral modification to reduce health risks is of prime importance in daily clinical practice, and yet effective counseling has been difficult. Life-year loss, when used to represent the size of each risk, offers a different perspective and should be readily understandable to patients for prioritizing treatment strategy. This new mode of practice can be a paradigm shift in conducting effective clinical management. For the top 20 risks, 9 out of 10 individuals had at least one risk, losing 4-10 years; over two-thirds had two, losing 8-16 years; and one-third had three risks, losing 12-25 years. The message of loss of life expectancy is more intuitive and could be a powerful motivator for behavior changes.
MATERIALS AND METHODS

Study population

The study cohort consisted of 543,410 adults, age 20 or older, who participated in a comprehensive health screening program run by a private firm, MJ Health Management Institution, Taipei, Taiwan. MJ attracted self-paying participants from all over Taiwan. All participants paid to become members and to have health examinations, and may appear to be of higher socioeconomic status than the average population. However, many members paid for their parents and relatives, who could have been in different or lower socioeconomic classes. MJ also accepted individuals paid for by different companies, constituting occupational cohorts. A detailed description of this cohort has been reported elsewhere [47]. This is an open (dynamic) cohort, and study subjects have been successively recruited from all walks of life since 1994. Every individual's identification number was matched with the National Death file between 1994 and 2008. A self-administered questionnaire gathered demographic, lifestyle, and medical history information, including levels of physical activity, developed from frequency, duration, and intensity information [25,46]. Physical activity in this study referred to leisure-time physical activity only [46]. All clinics in the program used a centralized laboratory for consistency. Overnight fasting blood was collected and analyzed by a Hitachi 7150 auto-analyzer for a standard panel of blood tests. EKG recording for heart rate, dipstick testing for proteinuria, and spirometry for lung function were carried out, with details published elsewhere [25,46].

Health risks

Selected health risks included behavioral risks and medically screened cardio-metabolic risks. The definitions, cut-points, and reference group for each risk are listed in Table 3. Selection of the risks for this study was based on: (1) finding a statistically significant hazard ratio (HR) for mortality; and (2) meeting a minimum requirement of a prevalence of 5% in either men or women in the cohort. The latter was set to ensure that the risk was commonly encountered, with at least one case found in every 20 individuals.

Statistical analysis

Life expectancy is one of the most widely used demographic measures, summarizing age-specific mortality rates into one number. The calculation of life expectancy and its confidence interval was carried out using the life table method developed by Chiang [48] (a brief illustrative sketch of this conversion is given at the end of this article). Confidence intervals for the years of life lost were calculated based on a comparison of two population means, i.e., life expectancies, using the standard independent t-test. The life expectancy calculated in this study referred to the average remaining years a 30-year-old person would be expected to live. Those with two coexisting risks, for selected risk factors, were compared against those with neither risk. The above analysis was limited to men, due to low risk prevalence and smaller numbers of deaths in women. The hazard ratio for each risk factor, for men and for women, was also calculated using Cox regression, in both univariate (adjusted for age only) and multivariate (adjusted for age, smoking, BMI, systolic blood pressure, fasting blood glucose, and total cholesterol) analyses. All statistical analyses were performed with SAS, version 9.4 (SAS Institute, Cary, NC, USA).

AUTHOR CONTRIBUTIONS

SPT and CPW designed the study and conceived the idea. MKT and PJL analyzed and interpreted the data. SPT, CPW, JPMW, CW, WG, TDC, CHC, and XW provided administrative, technical, or logistical support. SPT and CPW had final approval of the article. MKT and PJL were responsible for the collection and assembly of data.
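To make the conversion from hazards to years of life concrete, the following is a minimal abridged life-table sketch in the spirit of Chiang's method. All mortality rates are hypothetical illustration values, and, unlike the study itself, which estimated life expectancy directly from the observed mortality of each risk group, the "with risk" rates here are generated simply by scaling baseline rates by a hazard ratio.

```r
# Remaining life expectancy from age-specific mortality rates (5-year age bands),
# using a simplified abridged life table in the spirit of Chiang's method.
life_expectancy <- function(mx, width = 5) {
  qx <- width * mx / (1 + width * mx / 2)            # probability of dying within a band
  qx[length(qx)] <- 1                                # the last, open-ended band is absorbing
  lx <- c(1, cumprod(1 - qx))[seq_along(qx)]         # proportion surviving to each band
  dx <- lx * qx                                      # deaths within each band
  Lx <- width * (lx - dx / 2)                        # person-years lived in each band
  Lx[length(Lx)] <- lx[length(lx)] / mx[length(mx)]  # person-years in the open-ended band
  sum(Lx) / lx[1]                                    # expected remaining years at entry age
}

# Hypothetical rates from age 30 upward, roughly doubling every ten years
mx_baseline <- c(0.001, 0.0015, 0.002, 0.003, 0.005, 0.008,
                 0.013, 0.022, 0.040, 0.070, 0.120, 0.250)
mx_with_risk <- mx_baseline * 1.48   # e.g., scaling by the hypertension hazard ratio

life_expectancy(mx_baseline) - life_expectancy(mx_with_risk)   # years of life lost
```

The difference between the two calls is the loss of life expectancy attributable to the risk, which is the quantity reported throughout this paper.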
Discovery and replication of dopamine-related gene effects on caudate volume in young and elderly populations (N=1198) using genome-wide search

The caudate is a subcortical brain structure implicated in many common neurological and psychiatric disorders. To identify specific genes associated with variations in caudate volume, structural MRI and genome-wide genotypes were acquired from two large cohorts, the Alzheimer's Disease NeuroImaging Initiative (ADNI; N=734) and the Brisbane Adolescent/Young Adult Longitudinal Twin Study (BLTS; N=464). In a preliminary analysis of heritability, around 90% of the variation in caudate volume was due to genetic factors. We then conducted a genome-wide association analysis to find common variants that contribute to this relatively high heritability. Replicated genetic association was found for right caudate volume at SNP rs163030 in the ADNI discovery sample (P=2.36×10⁻⁶) and in the BLTS replication sample (P=0.012). This genetic variation accounted for 2.79% and 1.61% of the trait variance, respectively. The peak of association was found in and around two genes, WDR41 and PDE8B, involved in dopamine signaling and development. In addition, a previously identified mutation in PDE8B causes a rare autosomal-dominant type of striatal degeneration. Searching across both samples offers a rigorous way to screen for genes consistently influencing brain structure at different stages of life. Variants identified here may be relevant to common disorders affecting the caudate.

Introduction

Human brain structure is under strong genetic control (1,2), but the specific genetic variants influencing individual differences are largely unknown. Genes that contribute to structural brain variation are important to identify, as several known examples confer protection or risk for mental illness or brain degeneration. Carriers of the prevalent epsilon 4 allele of the apolipoprotein E gene, for example, have a three-fold increased risk for Alzheimer's disease (3). They also show cortical thinning even in childhood, which may influence their vulnerability to later illness (4). Searching the genome for associations with biological traits, such as measures of brain structure, may also help to identify genetic variants related to brain disorders (5). Here we used a genome-wide search to identify common genetic variants associated with caudate nucleus volume. We used two large independent samples to verify any associations and guard against false positive findings. As the caudate is implicated in several neurodegenerative, motor, affective, and developmental disorders, factors that influence its structure in human populations are of great interest.

The volume of the caudate is a highly heritable feature of brain structure (2). The strong contrast between the caudate and the surrounding white matter in standard MRI allows it to be accurately and reliably delineated by automated recognition programs. In addition, caudate degeneration is characteristic of several rare Mendelian disorders: Huntington's disease (6,7), pantothenate kinase-associated neurodegeneration (8), neuroferritinopathy (9), and autosomal dominant striatal degeneration (10,11). In these cases, linkage analysis in affected families revealed specific causal genetic variants associated with caudate degeneration and impaired cognition. These disorders are rare, however, and the genetic variants identified so far are not widely carried in the general population.
Caudate volume is altered in several more common disorders such as major depression (12), Alzheimer's disease (13), ADHD (14), and schizophrenia (15,16). These disorders are highly heritable, but their onset and trajectory are thought to be influenced by a large number of genetic polymorphisms, each with a small effect (17), as well as by environmental factors. These features of the caudate make it extremely interesting and tractable for genome-wide association studies.

Previous studies have explored how variants in genes expressed in the brain's monoamine neurotransmitter pathways may also influence caudate volume. The serotonin transporter polymorphism (5-HTTLPR) was associated with reduced caudate volumes in patients with depression (N=61) (18), and a DRD2 polymorphism was associated with reduced left caudate volume in memory-impaired elderly subjects (N=49) (19). A DAT1 polymorphism was also associated with caudate volume in ADHD patients, their unaffected siblings, and healthy controls (N=90) (20). These reports suggest specific candidate genes that may affect caudate volume, but the studies are limited by small sample sizes. To date, no studies have attempted to replicate the findings in new samples, and the findings would be more credible if verified in larger samples (21,22).

Here we used an unbiased search across the entire genome, in two separate cohorts scanned with brain MRI, to find common genetic variants associated with caudate volume. We used a large discovery sample of elderly subjects and a replication sample of young adult twins. By design, these samples differ by around 50 years in mean age. As such, they could fail to replicate genetic effects present during only one part of the human lifespan. Even so, the joint use of both samples, young and old, offers a rigorous way to screen for genes that consistently influence brain structure at different stages of life.

Materials and Methods

Subjects

We analyzed two independent samples with neuroimaging and genome-wide genotype data: the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Brisbane Adolescent/Young Adult Longitudinal Twin Study (BLTS). The ADNI sample has been described previously (23), as detailed in the Supplementary Information. The ADNI cohort included multiple diagnostic groups, and the genome-wide analysis was deliberately not split into diagnostic groups, as the goal was to analyze the full phenotypic continuum (24), maintaining the greatest power to detect genetic associations. Our final analysis included 734 individuals (average age ± s.d. = 75.5 ± 6.8 years; 432 male/302 female), including 172 AD patients (75.5 ± 7.6 years; 94 male/78 female), 357 MCI subjects (75.2 ± 7.3 years; 227 male/130 female), and 205 healthy elderly controls (76.1 ± 5.0 years; 111 male/94 female). Effect sizes for individual genetic variants on brain measures are expected to be small, so the large phenotypic variation in this continuum of subjects should increase power to detect genetic effects (20,23).

Genotyping

Genotyping for both the ADNI and BLTS samples was performed using the Illumina 610-Quad BeadChip. After quality control, as previously outlined (23,25) (see Supplementary Information), 546,314 single nucleotide polymorphisms (SNPs) remained in ADNI and 542,478 SNPs in the BLTS sample, where only autosomal SNPs were analyzed. 520,459 SNPs were jointly analyzed in both samples.

Imaging Acquisition and Pre-Processing
High-resolution structural brain MRIs were acquired in both the ADNI and BLTS samples. Standard pre-processing was applied, including registration to a standard template (26), so that images were globally matched in size and mutually aligned, but local differences in shape and size remained intact. Acquisition parameters and pre-processing details are found in the Supplementary Information.

Automatic Delineation of Caudate Volume

We extracted models of the caudate from each structural MRI using an automated segmentation method based on adaptive boosting, which learns the features that best differentiate the caudate from expert manual delineations of a small subset of the MRI scans (13,27). Caudate nuclei were traced according to previously published anatomical protocols (28,29); manual tracing guidelines and algorithm usage details are found in the Supplementary Information and Supplementary Figure 1.

Heritability Analyses

The heritability of caudate volume was calculated using structural equation modeling (SEM) implemented in the Mx software (version 1.68; http://www.vcu.edu/mx/). This method estimates path coefficients in the widely used "ACE" model (30,31), fitted to the observed covariance matrices of MZ and DZ twin traits (see Supplementary Information).

Genetic Analysis

For the ADNI sample, association was conducted using Plink software (32) (version 1.05; http://pngu.mgh.harvard.edu/purcell/plink/) to run a regression at each SNP with the number of minor alleles, age, and sex as the independent variables and the quantitative phenotype (caudate volume) as the dependent variable, assuming an additive genetic model (a minimal sketch of this per-SNP regression appears just before the Genome-Wide Association results below). In the BLTS sample, we performed mixed-model regression to conduct genetic association while adjusting for family relatedness (33), sex, and age. This analysis was performed using Efficient Mixed-Model Association (34) (EMMA; http://mouse.cs.ucla.edu/emma/) within the R statistical package (see Supplementary Information). Note that if all subjects had been unrelated, the results from the mixed-model regression would be equivalent to the results from the standard regression in Plink. Methods for additional genetic analyses, including within-group permutation to control for diagnostic status and meta-analysis of genetic results, can be found in the Supplementary Information.

Results

Reliability of Caudate Volume Measurement

To assess the reliability of the volume measurements, 40 BLTS subjects were scanned twice (time between scans ± s.d.: 120±55 days) and the left and right caudate were separately segmented using the algorithm. Measured volumes were highly reproducible for the left (ICC=0.986), right (ICC=0.985), and average bilateral (ICC=0.990) caudate volumes (Supplementary Figure 3).

Heritability Estimates

Heritability analyses use the classical twin design to ascribe proportions of the observed variance (e.g., of the volume or shape of a brain structure) to several factors: additive genetic effects (A), common environment shared by both twins (C), and unique environment/experimental error (E) (30,35). Heritability estimates, computed for caudate volume using the BLTS sample, were high (around 90%) relative to other brain structures (2); additive genetic effects significantly contributed to the model fit (Table 1). The heritability is also evident in Supplementary Figure 2, where caudate volumes in MZ twins resemble each other (black dots) more closely than those in DZ twins (open dots).
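Before turning to the genome-wide results, the following is a minimal sketch of the per-SNP additive regression described in the Genetic Analysis section. The data are simulated and the variable names hypothetical; this illustrates the model being fitted, not the original Plink pipeline.

```r
# One SNP's association test: phenotype ~ minor-allele count + covariates
set.seed(1)
n <- 500
age <- runif(n, 60, 90)
sex <- rbinom(n, 1, 0.5)
snp <- rbinom(n, 2, 0.3)   # additive coding: 0, 1, or 2 copies of the minor allele
caudate <- 3500 + 2 * age - 50 * sex + 40 * snp + rnorm(n, sd = 300)  # simulated volume (mm^3)

fit <- lm(caudate ~ snp + age + sex)
summary(fit)$coefficients["snp", ]   # beta, SE, t statistic, and p-value for the SNP
# A genome-wide scan simply repeats this regression at every genotyped SNP.
```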
Genome-Wide Association

Given the high heritability of caudate volume, we conducted a genome-wide association analysis to search for common genetic variants that might explain some modest proportion of the substantial genetic influence on caudate volume. We analyzed the 734 ADNI subjects as a discovery sample and the 464 BLTS subjects as a replication sample (1198 subjects in total). For ADNI, we conducted a standard regression of phenotype on the additive allelic effect at each SNP, after statistically controlling for age and sex. Genome-wide association results for the ADNI sample are shown in Figure 1, and the most significant SNPs are presented in Table 2 at a threshold of P<1×10⁻⁵. Subsequent replication of the findings was conducted using the BLTS sample. As noted in the Methods, a mixed-effects model was used to regress the phenotype on the additive allelic effect at each SNP, after statistically controlling for age, sex, and genetic relatedness, through the kinship matrix. Q-Q plots and λ inflation factors (36) show no inflation of statistical significance (Supplementary Figure 4). Meta-analytic methods were used to combine the two groups rather than a combined "mega-analysis" (i.e., pooling all volume measures from both studies), due to differences in subject demographics and image acquisition. Note that in Table 2, if the reference allele is the same in both samples and the sign of the β coefficient is the same in both samples, then the effect is in the same direction in both samples. Similarly, if the reference allele is different in the two samples and the sign of the β coefficient is opposite, then the effect is also in the same direction in both samples.

A large peak of replicated association is found in and around two genes: WDR41 and PDE8B (Figure 2). Association is strongest for the right caudate, but it is also found at a slightly weaker significance level for the left caudate. A large region encompassing the most highly associated region of the genome contains both the untranslated region of PDE8B and several exons of WDR41. Many of these variants have functional relevance. rs335636 is found within a 5850 base pair deletion region (rs71823322) within the untranslated region of both the WDR41 and PDE8B genes (Figure 2). Several other SNPs within WDR41 are coding, non-synonymous base pair changes, meaning that they change the amino acids formed by the WDR41 gene. A SNP in the same locus, associated at a slightly weaker significance level, rs919224 (P=1.82×10⁻⁵), is directly adjacent to two SNPs that code for missense mutations in one of the exons of WDR41 (rs35774719 and rs17856057). These SNPs are not directly genotyped in the HapMap database, so it is not possible to determine the exact linkage disequilibrium (LD) pattern between them. However, they do reside within an LD block, so they are likely to be in high LD. Other genes of interest for caudate volume identified here, but with little evidence for replication, include GMDS, C10orf46, and TMSB4X. Intergenic SNPs on chromosomes 3 and 4 were also identified but not replicated. Using a permutation-based approach, there was little change in the P-values of the replicated SNPs when using a null distribution preserving the effect of diagnosis in the ADNI sample (see the P | diag column in Table 2; a sketch of this permutation scheme follows below).
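The within-group permutation referred to above can be sketched as follows. This is a minimal illustration under stated assumptions: the function is hypothetical, the covariates are omitted for brevity, and the actual analysis (described in the Supplementary Information) may differ in detail.

```r
# Permutation p-value for one SNP, shuffling volumes only within diagnostic groups,
# so any effect of diagnosis on caudate volume is preserved under the null.
perm_p_within_group <- function(y, snp, group, n_perm = 10000) {
  t_obs <- abs(summary(lm(y ~ snp))$coefficients["snp", "t value"])
  t_null <- replicate(n_perm, {
    y_perm <- ave(y, group, FUN = function(v) v[sample.int(length(v))])  # shuffle within strata
    abs(summary(lm(y_perm ~ snp))$coefficients["snp", "t value"])
  })
  mean(t_null >= t_obs)   # proportion of within-group null statistics as extreme as observed
}
```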
Additionally, we tested whether the effect of the top SNP found in this study, rs163030, was present in each of the three diagnostic groups of the ADNI cohort. The effect of the most significant SNP on right caudate volume (controlling for age and sex) was found in the AD group (N=170; β=168.8; P=0.0139) and the MCI group (N=357; β=143.5; P=6.41×10⁻⁴), and as a strong statistical trend in the healthy elderly group (N=204; β=119.1; P=0.0537). As expected, the significance levels are affected by the number of subjects in each group: the MCI group has a lower p-value than the AD group, which itself has a lower p-value than the healthy elderly group. Notably, though, the effect is in the same direction and of similar magnitude across all the subsamples split by diagnosis. This shows that the results are driven by all three groups jointly rather than originating as an effect of one group alone.

In a post hoc exploratory analysis, we also examined an alternative analysis that used the BLTS cohort as the discovery sample and the ADNI cohort as the replication sample. For this "switched" analysis, we selected only those SNPs that had strong association with caudate volume (P<1×10⁻⁵) within the smaller BLTS cohort and sought replication in the ADNI cohort. Using the BLTS cohort as the basis for selecting candidate genes resulted in no replications within the ADNI cohort (see Supplementary Table 1). Selecting the sample with the largest sample size as the discovery sample maximized the likelihood of replication.

Replication attempts of previously tested candidate genes

Neither the 5-HTTLPR polymorphism (18) nor the DAT1 variable number of tandem repeats polymorphism, rs28363170 (20), was genotyped on the chip used in either sample, so replication of those associations could not be tested. The DRD2 Taq1A polymorphism (rs1800497) on chromosome 11 was previously associated with caudate volume (19); we found little evidence for this association in either cohort (see Discussion).

Discussion

This study identified specific genetic variations associated with caudate volume in the human brain, in 1198 subjects. This is one of the largest brain imaging studies ever performed. There was sufficient power to trace heritable variation to specific variations on the genome, though not at a genome-wide significance level. We replicated the same genetic associations in samples from two continents (U.S. and Australia), separated in mean age by 50 years, and using data collected on scanners with different field strengths (4 Tesla and 1.5 Tesla). Additional replication in still larger samples would be advantageous, but this confirmation in two independent samples suggests that these associations may be robust and may persist throughout life.

The caudate volume is a reasonable starting point for investigating genetic influences on brain structure because it is highly heritable (Table 1), reliably delineated by automated recognition programs (37,38) (Supplementary Figure 3), and has an established link to psychopathology. The estimates of caudate volume heritability from the BLTS cohort (shown in Table 1) are around 0.76 for the ACE model (one of the standard classical twin models used to assess heritability) and around 0.90 for the best-fitting AE model; the standard twin-model decomposition behind these figures is summarized below. This agrees with a prior study assessing caudate volume in twins (2), which showed caudate heritability of 0.70 to 0.79 in an ACE model. That study analyzed many other brain structures as well and, though the heritability coefficients of different regions were not directly compared for statistical significance, the caudate showed consistently high heritability relative to other structures.
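For reference, the expectations behind the ACE decomposition mentioned above are standard twin-model identities; the following summary is textbook background rather than a derivation from the present study.

```latex
% Standardized ACE model: a^2 + c^2 + e^2 = 1, with expected twin correlations
\[
r_{MZ} = a^2 + c^2, \qquad r_{DZ} = \tfrac{1}{2}\,a^2 + c^2 ,
\]
% which yield Falconer's moment estimates
\[
\hat{a}^2 = 2\,(r_{MZ} - r_{DZ}), \qquad
\hat{c}^2 = 2\,r_{DZ} - r_{MZ}, \qquad
\hat{e}^2 = 1 - r_{MZ}.
\]
```

The SEM approach used in Mx fits a², c², and e² to the observed MZ and DZ covariance matrices by maximum likelihood rather than by these moment formulas, which is why the fitted estimates (and the best-fitting AE model, with c² dropped) can differ from the simple Falconer values.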
A relatively large region on chromosome 5 was found to have replicated significance in its association with caudate volume in each of the independent populations, including the genes WDR41 and PDE8B (Table 2 and Figure 2). Functionally, the region containing both of these genes is essential to dopaminergic neuron development in zebrafish (39). WDR41 was also useful in improving the performance of a diagnostic classification algorithm that used gene expression patterns to distinguish schizophrenia patients from healthy controls (40). PDE8B is highly expressed in the rat brain and in neuronal cells (41). The protein product of the gene, phosphodiesterase 8B, is a key protein in the dopamine signaling cascade: dopamine binding to receptors stimulates or inhibits cAMP production, which is subsequently degraded by phosphodiesterase 8B (42)(43)(44). PDE8B is associated with susceptibility to major depression and antidepressant treatment response (45), and has higher expression in Alzheimer's disease relative to controls (46). Additionally, autosomal-dominant striatal degeneration is caused by a mutation in PDE8B (10).

The possible relation of these genes to a Mendelian disorder is also of great interest. Although specific variants known to cause Mendelian disorders do not necessarily influence normal variability or psychopathology, the same genes may be relevant to both. In genetic studies of obesity, for example, common variants have subtler but similar effects to highly penetrant rare Mendelian mutations (47). In that study, common SNPs within ABCG8 and LCAT increased risk for dyslipidemia, and Mendelian mutations within those same genes are causal for dyslipidemia. Similarly, in our study, common SNPs within the PDE8B/WDR41 region were associated with differences in caudate volume, and a Mendelian mutation within the PDE8B gene is causal for an autosomal dominant form of striatal degeneration. This shows that Mendelian mutations may be clues for selectively picking genes to understand normal variability, even though the specific Mendelian mutations themselves may not be involved in that variability.

Such replicated genetic hits suggest that our findings are consistent with the literature on dopamine function in the caudate. The caudate receives projections from the dopaminergic neurons of the substantia nigra and has a high concentration of D1 and D2 dopamine receptors (48). These genes are crucial for the development and function of dopamine neurons, which provides biological plausibility that they may also contribute to variations in caudate anatomy. WDR41- and PDE8B-mediated differences in caudate structure accounted for 2.79% and 1.61% of the trait variance in the ADNI and BLTS samples, respectively, at the most associated SNP. These genetic influences on dopamine function and brain structure may also influence behavior, as dopamine is essential for normal cognitive function (49,50). As such, the genes identified here may become candidates for examination in studies of disorders that affect the caudate, to determine whether they are over-represented in subjects with developmental insufficiencies or deterioration in caudate function.
TMSB4X is expressed in the brain and is involved in corticogenesis (53) and in actin polymerization (54). Lack of replication in both cohorts may be due to false positive findings or age-specific gene effects. Additionally, though the DRD2 Taq1A allele was previously identified as putatively affecting caudate volume (19) as well as the availability of striatal dopamine D2 receptors (55), we found little evidence for a DRD2 Taq1A association with caudate volume in either cohort.

Some strengths and limitations of this study deserve comment. First, we identified some variants of interest for caudate volume; however, we are unable to provide mechanistic evidence for how these single base pair differences in the genome affect brain structure. Further mechanistic understanding could be derived by studying both the expression and the protein function of the gene products that lie downstream of the SNP variations identified here. Unfortunately, no expression or protein data are currently available in either cohort to directly test these hypotheses. Second, we provide strong support for a particular region in the genome associated with caudate volume, yet it remains to be demonstrated that the genetic factors identified here are of interest for pathophysiology. Third, the two neuroimaging samples are taken from different parts of the lifespan. Replications of SNPs may indicate gene effects that persist, or have different modes of action, throughout life. Lack of replication could reflect either true negatives or age- or cohort-specific effects. In a sense, the use of two very diverse samples on two different continents presents a very high bar for replication. Due to the large difference in the mean age of the samples, it would be logical to assume that some robust genetic effects may not be simultaneously found in both of these young and old cohorts. For example, there may be a greater preponderance of aging or apoptotic events in the ADNI sample and more developmental or synaptogenic processes in the BLTS sample. As such, the use of two very different samples is likely to identify genes of enduring relevance across the lifespan, but may miss, or fail to replicate, effects that occur, or are more dominant, only in late or early life. On these grounds, replication should not be taken to imply that the genes found in our study operate on the same biological processes over the lifespan. Nor should it be taken to mean that genes not found in our study are not influential; other genes could impact caudate structure only during one phase of life. Fourth, as with other multifactorial traits such as height (56), individual common variants have small effect sizes and account for only a small proportion of the overall heritability, so they can only be detected with large sample sizes. Missing heritability might be attributed to low power, rare variants, un-genotyped variants, epistatic interactions, or epigenetic contributions to heritability (57). Finally, the ADNI cohort includes subjects across the continuum from healthy aging to mild cognitive impairment to Alzheimer's disease. Any genetic association in ADNI could be mediated by the normal atrophy that occurs with healthy aging or by the disease. To account for this, we performed an analysis controlling for diagnosis through permutation. This showed little change in the degree of association, implying that illness category is not driving the association.
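The permutation approach just mentioned, controlling for diagnosis, can be sketched as follows: the genotype-volume association is recomputed many times with genotypes shuffled only within each diagnostic group, so that volume differences attributable to diagnosis are preserved under the null. This is a minimal numpy illustration, not the study's actual implementation; the test statistic (a simple correlation) and the input arrays are simplified placeholders.

```python
import numpy as np

def permutation_p_within_strata(genotype, volume, diagnosis,
                                n_perm=10_000, rng=None):
    """P-value for the genotype-volume association, permuting genotypes
    only within diagnostic strata so diagnosis effects are preserved.

    genotype, volume: 1-D float arrays; diagnosis: 1-D label array.
    """
    rng = rng or np.random.default_rng(0)
    observed = abs(np.corrcoef(genotype, volume)[0, 1])
    count = 0
    for _ in range(n_perm):
        permuted = genotype.copy()
        for group in np.unique(diagnosis):
            idx = np.where(diagnosis == group)[0]
            permuted[idx] = rng.permutation(permuted[idx])
        if abs(np.corrcoef(permuted, volume)[0, 1]) >= observed:
            count += 1
    # add-one smoothing gives a valid (slightly conservative) P-value
    return (count + 1) / (n_perm + 1)
```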
Furthermore, the broad range of imaging phenotypes in ADNI is sensitive to effects that may be overlooked if the discovery sample were more narrowly defined. As single genes are likely to have small effects on behavior, several studies advocate examining multiple cohorts where the spectrum of observable variation is larger than that in the general population (20,23,58,59), especially in the discovery phase. Even so, we replicated the association in our young sample (healthy twins), so the gene effects are not restricted to those who are elderly or ill, and are also detectable in young people.

In this study, we assessed caudate volume rather than surface morphology because volume is an easily measured summary phenotype that is known to associate with disease. Additionally, performing simultaneous searches across both surface vertices and the genome requires complex statistical methods (60,61) not yet optimized for surfaces. Volume effects are also more interpretable and can be readily verified by many other groups.

Large GWAS commonly use a genome-wide significance threshold of P < 5×10^−8 (56), but less conservative thresholds have been established using permutation testing or by estimating the effective number of tests on the genome (62). Here we used a search criterion to select SNPs that were highly associated in the larger cohort, at P < 1×10^−5, and then tested for replication in a separate cohort. This threshold does not represent a genome-wide significance threshold, but rather the first step of a two-stage process that identifies interesting SNPs to carry forward to a second stage in which they can be replicated. The threshold value is somewhat arbitrary but has been used previously in the literature to identify interesting SNPs in large association studies (63). It is of interest that although we found a replication across samples, the individual associations did not reach the genome-wide significance level in each smaller sample. Similarly, in a previous GWAS (64), a top SNP was found in one cohort that was not genome-wide significant but replicated in other cohorts at a lower threshold. The meta-analysis of the individual cohorts in our study did not reveal genome-wide significant values for any SNP (Table 2). Thus, despite the replication in two samples, further studies are needed to verify this association.

The marginally greater effect size for genetic association in the right versus the left caudate may be due to the known asymmetries in caudate volume. As we found in a recent non-genetic study of a partially overlapping sample (400 ADNI subjects), the right caudate was on average 3.9% larger than the left in controls, and 2.1% larger in MCI subjects, an asymmetry not found in AD (13). This same asymmetry is reported in most, but not all, large morphometric studies (65-69). In the ADNI cohort, which focuses on elderly subjects, lower right caudate volume was associated with conversion from MCI to AD, with baseline ratings of dementia severity, with immediate and delayed logical memory scores, with future decline over one year in MMSE scores, and with tau and p-tau protein levels in the cerebrospinal fluid (13). Taken together, these observations suggest that a depletion in caudate volume may be associated with deteriorating cognition, but cognitive associations may not be detectable in healthy subjects, as other brain systems may compensate functionally for mild atrophy or developmental insufficiency.
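The two-stage procedure described above (select SNPs at P < 1×10^−5 in the discovery cohort, then test only those in the replication cohort) is straightforward to express in code. A minimal sketch assuming two pandas data frames of per-SNP association results with hypothetical column names; requiring a consistent direction of effect mirrors the same-sign observation reported earlier in this discussion, and the Bonferroni correction in the second stage is over the carried-forward SNPs only, which is the point of the design.

```python
import pandas as pd

def two_stage_replication(discovery: pd.DataFrame,
                          replication: pd.DataFrame,
                          discovery_p: float = 1e-5,
                          alpha: float = 0.05) -> pd.DataFrame:
    """Two-stage SNP selection: carry forward discovery hits, then require
    replication at a Bonferroni level over only the carried-forward SNPs.

    Both frames are assumed to have columns 'snp', 'beta', 'p'.
    """
    hits = discovery[discovery["p"] < discovery_p]
    merged = hits.merge(replication, on="snp", suffixes=("_disc", "_rep"))
    threshold = alpha / max(len(merged), 1)   # Bonferroni on stage-2 tests
    same_sign = merged["beta_disc"] * merged["beta_rep"] > 0
    return merged[(merged["p_rep"] < threshold) & same_sign]
```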
Future meta-analyses in even larger samples may be sufficiently powered to relate genetic differences in brain structure to observable differences in cognition or risk for the diseases in which the caudate is implicated. Here we demonstrate a replicated, though not genome-wide significant, association in a sample that is much smaller than those used in some current GWAS studies (56). This strongly suggests that MRI-based measures of brain structure are powerful, genetically informative tools with which to search the genome, and may be used successfully to find genetic variants in multi-site genetic meta-analyses such as the Enigma project (http://enigma.loni.ucla.edu). Our results highlight a region of the genome that may provide a stronger understanding of caudate neurobiology, brain structure in humans, and predisposition for the development of psychiatric and neurological illness.

Manhattan plots show the significance of association of each SNP with caudate volume, from the genome-wide association analysis conducted in the ADNI cohort. Each marker is represented as a dot and the −log10(P-value) is displayed on the y-axis. Association was conducted separately for average bilateral (top), left (middle), and right (bottom) caudate volumes. Markers above the blue line represent a P-value < 1×10^−5. ChrXY represents the pseudo-autosomal region of the X chromosome, and ChrMT represents mitochondrial SNPs. BLTS Manhattan plots are shown in Supplementary Figure 4.

Detailed view of the associated locus. Markers are represented as circles (SNPs with no known function) or downward-pointing triangles (coding non-synonymous mutations). Markers are placed at their position on chromosome 5 (x-axis) and graphed based on the −log10(P-values) of their association to the phenotype (y-axis). The level of linkage disequilibrium to the most associated SNP (rs163030) is represented in color using the CEU panel from HapMap Phase II. The location of genes is shown below the plots. Images were created using LocusZoom (http://csg.sph.umich.edu/locuszoom/).

Most associated SNPs from the ADNI discovery cohort (P < 1×10^−5) and the BLTS replication cohort. Genes close to the SNPs are displayed, and cells are empty if no gene is within ±50 kb. A1 is the reference allele, and Freq shows the frequency of the reference allele; β shows the mean increase in each volume (in mm³) per added reference allele, controlling for age and sex; SE gives the standard error of the beta coefficient; and P | diag gives the P-value controlling for diagnosis using a permutation-based method. Information was pooled across samples using inverse-variance-weighted meta-analysis, and the pooled β is given for the ADNI reference allele. Deviations from Hardy-Weinberg equilibrium are shown in Supplementary
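The table legend above states that information was pooled across samples using inverse-variance-weighted meta-analysis. A minimal fixed-effects sketch of that pooling, given per-cohort effect sizes and standard errors; the two-sided P-value uses a normal approximation, and the numeric inputs below are illustrative rather than the published ADNI/BLTS estimates.

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effects inverse-variance-weighted meta-analysis.

    Each study is weighted by 1/SE^2; returns the pooled beta, its
    standard error, and a two-sided normal-approximation P-value.
    """
    weights = [1.0 / se**2 for se in ses]
    pooled_beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled_beta / pooled_se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided P from the normal CDF
    return pooled_beta, pooled_se, p

# Illustrative per-cohort values (beta in mm^3 per allele):
print(inverse_variance_meta(betas=[150.0, 120.0], ses=[35.0, 55.0]))
```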
Advances in the Research and Development of Natural Health Products as Main Stream Cancer Therapeutics

Natural health products (NHPs) are defined as natural extracts containing polychemical mixtures; they play a leading role in the discovery and development of drugs for disease treatment. More than 50% of current cancer therapeutics are derived from natural sources. However, the efficacy of natural extracts in treating cancer has not been explored extensively. Scientific research into the validity and mechanism of action of these products is needed to develop NHPs as mainstream cancer therapy, and their preclinical and clinical validation would be essential for this development. This review summarizes some of the recent advancements in the area of NHPs with anticancer effects, focusing on NHPs that have been studied to scientifically validate their claims as anticancer agents and emphasizing their efficacy in targeting the multiple vulnerabilities of cancer cells for a more selective, efficacious treatment. The studies reviewed here have paved the way for the introduction of more NHPs from traditional medicine to the forefront of modern medicine, in order to provide alternative, safer, and cheaper complementary treatments for cancer therapy and possibly improve the quality of life of cancer patients.

History of Natural Health Products (NHPs) in Cancer

Natural health products (NHPs) and natural products (NPs) play a leading role in the discovery and the development of drugs for the treatment of human diseases. Traditional medicines in the Native American, Chinese, and Indian cultures have utilized numerous natural products, including dozens of spices and plant extracts. Scientific research into the validity of these traditional products has shown that many do indeed have potent anticancer effects [1,2]. An extract from the Mayapple, Podophyllum peltatum, was traditionally used by Native Americans to combat skin cancers and other malignant neoplasms, as well as a host of other ailments. The major component of this extract was podophyllotoxin, which was the first in a series of effective anticancer agents called podophyllins [3]. Likewise, numerous natural products used in traditional Indian Ayurvedic medicine have been shown to have strong anti-inflammatory and anticancer properties. Curcumin, the active ingredient in turmeric, has been widely studied for its anticancer properties. Turmeric (Curcuma longa) itself was widely used in Ayurvedic medicine, and its therapeutic benefits, now attributed to the presence of curcumin, include the ability to suppress tumor growth in a wide variety of cancer types [4,5]. A total of 27 anticancer drugs from 1940 to 2010 were obtained from natural sources, for instance actinomycin D, paclitaxel, and vincristine, now one of the most commonly used chemotherapy agents in cancer treatment, while topotecan HCl, dexamethasone, etoposide, and even tamoxifen are mimics of natural products [1] (Figure 1). Camptothecin, found in extracts from Camptotheca acuminata and used in traditional Chinese medicine, has been found to have antitumor activity, and its derivatives topotecan and irinotecan are routinely used to treat ovarian and colon cancers [6]. The discovery of the anticancer activities of so many traditional medicines and natural products has been supported by scientific evidence and validation.

Figure 1: Sources of anticancer drugs from the 1940s to 2010. *Categories: natural product (N); derived from a natural product, usually a synthetic derivative (ND); natural product "botanical" (NB); natural product mimic (NM); totally synthetic drug (S); and totally synthetic drug whose pharmacophore is/was from a natural product (S*).
This was in part successful due to the initiation of the Cancer Chemotherapy National Service Center (CCNSC) in 1955 by the National Cancer Institute (NCI). The mandate of this program was to screen for antitumor agents on a larger scale by establishing a strict standardized protocol for testing potential anticancer compounds [7]. Since the 1980s, research into the anticancer effects of natural products has yielded many promising results. For example, resveratrol, a polyphenol present in grapes, shows potential as both a preventative and an antitumor agent [8]. Similarly, piperlongumine, extracted from Piper longum, selectively induces reactive oxygen species in cancerous cells, leading to apoptotic cell death [9]. In the 1980s, Bagshawe et al. developed a novel use for natural products, antibody-directed enzyme-prodrug therapy (ADEPT). This technique used tumor-specific antibodies bound to an enzyme that would convert noncytotoxic prodrugs into their cytotoxic forms once in contact with the tumor [10,11]. Many natural products were successfully used as prodrugs, including doxorubicin and Taxol [3]. These earlier studies have paved the way for the introduction of more NHPs from traditional medicine to the forefront of modern medicine. The scientific validation of these NHPs in terms of their efficacy, safety, and mechanism of action will seal their position in modern medicine, especially in the field of cancer research and therapy.

Current Trends in NHP Research and Cancer

Even with all the incoming evidence, herbal drugs and other NHPs and NPs are usually shunned during systemic chemotherapy because of herb-drug interactions and exaggeration of chemotherapy-related toxicity. Current research is focused on the development of new and more effective chemotherapeutic agents that have little to no associated toxicity to the patient. Lately, this focus has been centered on NHPs and herbal formulations, mainly in the form of plants and other biological sources from around the world. NHPs have been used for centuries by a variety of cultures to treat a great number of illnesses; some of these products continually provide new medicinal applications and intriguing anecdotal evidence that merit further investigation. Today, there are numerous natural health products that fall under the umbrella of traditional medicine, such as the Indian herbs Tulsi (Ocimum sanctum), Neem (Azadirachta indica), and Ashwagandha (Withania somnifera), commonly known as Indian ginseng or winter cherry. These herbs have shown an incredible diversity of treatments for diseases in both ancient and modern times. Ayurvedic medicine has been very informative in the introduction of numerous NHPs. Tulsi, also referred to as "Holy Basil," has in past decades been studied for its many health benefits, which include but are not limited to treatments for bronchitis, pain, malaria, asthma, arthritis, cancer, diabetes, and numerous microbial infections [12,13].
One study claims that the health benefits of Tulsi are owed primarily to the phenolic compound eugenol [12]; however, more recent research suggests that an additional range of compounds is at work, including the phytochemicals rosmarinic acid, apigenin, myretenal, luteolin, β-sitosterol, and carnosic acid, all of which have been shown to be valuable in the reduction of chemically induced cancers through initiating apoptosis and maintaining antioxidative and antiangiogenic effects [14]. Along the same lines, Neem leaves have been shown to possess a strikingly similar range of pharmacological effects to Tulsi and have been referred to in one study as a "living pharmacy" in themselves [15]. The benefits of Neem range from reductions in inflammation, microbial infection, progression of diabetes, and oxidative stress to reductions in cancer proliferation and tumor development, indicating chemopreventive benefits. Some of the active compounds within Neem are azadirone, nimbidin, nimbolide, and the polysaccharides GIa and GIb [15,16].

Ashwagandha has been a staple in traditional Indian medicine for decades and has been widely used owing to the various properties attributed to it. Ashwagandha is proposed to have antioxidant, anti-inflammatory, anticancer, antistress, and adaptogenic properties [17,18]. The extracts of this plant have been studied intensely to validate the claims that have been the backbone of its use in Ayurvedic medicine. A study in 2013 showed the efficacy of Withania extract against metastatic breast cancer: the ethanolic extract of this plant was efficient in preventing the invasion of breast cancer cells in a spheroid invasion assay, while inhibiting the metastasis of breast tumors to the lungs and lymph nodes in animal models [19]. In Phase II clinical studies, this herb was shown to promote the "general well-being of patients" when used in combination with chemotherapy, as well as to enhance the cytotoxicity of chemotherapy in breast cancer patients. This combination treatment led to an increase in the quality of life of the breast cancer patients in this study [18,20].

Another example of an NHP that has been used for centuries is the extract of dandelion, a perennial weed known for its curative properties. The dandelion species have been used in many traditional and modern herbal medicinal systems, and this use has been documented across the continents. Various parts of this plant have been used in the treatment of different ailments, with the root being used in gastrointestinal diseases and the leaves as a diuretic and digestive stimulant. The whole plant has been taken as a cure for hepatitis and anorexia as well, although some of the claims associated with this weed have gone unsubstantiated [21,22]. Preclinical research on dandelion has credited this plant with numerous properties, demonstrating the anti-inflammatory, prebiotic, antiangiogenic, and antineoplastic activities of dandelion root [21]. However, some studies do not agree with others, leading to the publication of conflicting reports on this NHP. More recently, studies have shown selective efficacy of dandelion root extract (DRE) against several cancer cell types in a dose- and time-dependent manner. The mechanism of action of dandelion root extract in cancer cells is under study, with focus on identifying the apoptotic pathway through which this extract selectively targets cancer cells.
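Dose- and time-dependent efficacy claims such as the DRE results above are typically quantified by fitting a dose-response curve and reporting an IC50. A minimal sketch using a four-parameter log-logistic (Hill) model with scipy; the viability data below are synthetic placeholders, not measurements from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ic50, slope):
    """Four-parameter log-logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Synthetic viability data (% of untreated control) at increasing doses (mg/mL).
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
viability = np.array([98.0, 92.0, 75.0, 48.0, 22.0, 9.0])

params, _ = curve_fit(hill, doses, viability,
                      p0=[0.0, 100.0, 3.0, 1.0], maxfev=10_000)
bottom, top, ic50, slope = params
print(f"Estimated IC50 = {ic50:.2f} mg/mL (Hill slope {slope:.2f})")
```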
It has been shown that DRE targets the death-receptor-mediated extrinsic pathway of apoptosis and that its mechanism is dependent on the activation of caspase-8 [23-25]. The overwhelming scientific evidence for the aforementioned NHPs is paving the way for other NHPs, especially in cancer treatment, introducing these compounds and products as safe alternatives and effective contenders in the fight against cancer.

Mechanistic Efficacy of NHPs against Cancer

The different hallmarks of cancer and tumor cells include evading growth suppression signals, evading programmed cell death processes, inducing angiogenesis, and sustaining proliferative signaling, to name a few [26,27]. These hallmarks have been studied in great detail and therefore provide multiple means to target cancer cells selectively. The study of NHPs against cancer cells and xenograft models has therefore focused on identifying NHPs that can target pathways that convey survival protection to cancer cells, so as to selectively and effectively eradicate cancer cells. This review focuses on the role of NHPs in several pathways that are involved in cancer, including inflammation and the inflammatory response, oxidative stress and the mitochondrial response, and receptor agonists/antagonists. The studies and results presented in this review are meant to highlight the importance of NHPs in the targeting of multiple pathways that are involved in cancer initiation and progression.

Natural Health Products in Inflammation and Inflammatory Response in Cancer

It is well known that the inflammatory response is vital in living organisms for their protection against foreign matter. Inflammation most commonly occurs in cases of infection and injury (acute inflammation), but in certain situations a more persistent, deregulated, and maladaptive inflammation (chronic inflammation) can arise. This chronic form of inflammation is usually associated with chronic diseases like cancer, where there is exacerbation of the disease due to the prolonged inflammatory response, leading to increased proliferation of the cancer cells, increased angiogenesis, and promotion of metastatic capabilities [28,29] and making chronic inflammation a hallmark of neoplastic transformation [30]. Unlike acute inflammation, not much is known about the processes and molecular pathways associated with chronic inflammation [29]; however, available anti-inflammatories might be able to target the inflammation, and possibly the molecular pathways involved, in order to reduce the extent of inflammation and its unwanted effects in chronic diseases like cancer. One of the most common examples of the role of inflammation in cancer progression has been the use of nonsteroidal anti-inflammatory drugs (NSAIDs) and COX-2-specific inhibitors to reduce the risk of developing some cancers and to prevent the mortality associated with those cancers [29]. These NSAIDs and COX-2 inhibitors act by interfering with eicosanoid signaling and metabolism, suppressing the formation of tumors and acting as antioxidants and antiangiogenics. It has been found that there is a shared pathology between cancer and inflammatory diseases, which is displayed in the gene expression signatures for cancer and those for inflammatory diseases [30]. These findings suggest that targeting the inflammatory response is a potential way to target different forms of cancer.
To combat unwarranted inflammation, anti-inflammatories like aspirin, ibuprofen, and prednisone are most commonly used, despite their side effects [31]. Alternatively, natural health products are used to heal a variety of ailments, including inflammation, in a safe and effective manner. This prompts continual research into the mechanism of natural anti-inflammatories as well as the discovery of new natural therapies. Several phytochemicals, including curcumin from turmeric and resveratrol from grapes, are used partly for their anti-inflammatory activity; they inhibit inflammation by suppressing the activity of NF-κB and possibly STAT-3 [30,32]. Long pepper, or Piper longum L., has been used as both a spice and a therapy for centuries. Historically, it was used as a topical treatment for muscle inflammation but has shown efficacy in a number of diseases and conditions, including diabetes, cancer, and obesity, without having any toxic effects [33]. More recently, the plant has been studied as an anti-inflammatory agent for carrageenan-induced paw edema in rats. In that study, researchers found a significant decrease in paw inflammation in rats treated with long pepper, indicating that long pepper suppressed acute and subacute inflammation [34]. In addition, other work has been done on piperlongumine, an important component of the long pepper fruit, as a therapy against atherosclerosis. This study found that the anti-inflammatory and antiplatelet-aggregation properties of piperlongumine prevented atherosclerotic plaque formation in mice, indicating it to be a possible therapy for this inflammatory disease [28,35]. Furthermore, in vitro studies in various cancer cell lines showed the anticancer effect of piperlongumine, where this compound was able to target the oxidative stress response of these cells, increase the levels of reactive oxygen species (ROS), and activate the expression of several key proapoptotic proteins. This anticancer effect was confirmed in in vivo models of breast adenocarcinoma [9]. Piperlongumine represents one of the compounds within long pepper that provides the plant's anti-inflammatory response; however, the effect of this compound alone is significantly less than that of the whole plant extract in reducing the inflammatory response [35,36].

In addition to long pepper and piperlongumine, a natural compound found in grapes, peanuts, and berries known as resveratrol (3,4′,5-trihydroxystilbene) is a fat-soluble compound that has also shown anti-inflammatory potential. Researchers became interested in elucidating the potential health benefits of resveratrol when it was reported to be present in red wine, which had previously been shown to reduce coronary heart disease by 20-30% [29,37]. It has been found that resveratrol can have a direct effect on the immune response of the body and thus can be used as an immunomodulator in patients with inflammatory diseases [29]. More specifically, resveratrol suppresses the expression of inflammatory biomarkers like TNF, COX-2, iNOS, and CRP, preventing inflammation [30]. Additionally, resveratrol has been shown to impede the activity of at least one type of matrix metalloproteinase, a family of enzymes that assist the invasion of normal tissue by cancer cells. The anti-inflammatory properties of resveratrol have also been demonstrated in vitro.
Studies are inconclusive on whether or not high intakes of resveratrol are effective in protecting against and preventing cancer in humans [38-40]; however, given the ability of this compound to act as a potential therapeutic measure for cancer, it is essential to pursue further studies on it. Indeed, due to the low bioavailability of resveratrol in humans, studies suggest that even high intakes of resveratrol may not result in the same anticancer effectiveness demonstrated in cell culture [41]. Dandelion extracts have also been found to have anti-inflammatory activity relevant to some cancer cells [22]. In a study by Jeon and colleagues in 2008, an ethanolic extract of the whole plant was shown to possess anti-inflammatory properties, leading to the downregulation of the production of NO and COX-2 in activated macrophage cells [42]. These findings suggest a great potential for NHPs with anti-inflammatory activity in the fight against cancer. The ability of these NHPs to target multiple pathways in inflammation and in cancer progression provides a potentially more efficacious way to selectively target cancer cells.

Natural Health Products in Oxidative Stress Response and Mitochondrial Involvement in Cancer

It has long been known that the mitochondria play a significant role in carcinogenesis and cancer progression [43-45]. A number of metabolic alterations associated with mitochondrial function can be used to differentiate cancer cell mitochondria from normal cell mitochondria. For instance, the activities of some enzymes required for oxidative phosphorylation are usually decreased in cancer cells, unlike in normal cells. It is, however, essential to note that although there are a large number of differences between cancer cell and normal cell mitochondria, these differences are not common to all cancer cells [44]. Mitochondrial respiration is coupled with the production of ROS, and under normal conditions these oxidative species are usually neutralized into harmless forms. Under abnormal conditions, there is a corresponding increase in ROS and oxidative stress, which leads to mutations in both mitochondrial and nuclear DNA, damage to proteins and lipids, and resistance to apoptotic induction. Oxidative stress and the resulting mutations are typically at the basis of malignant phenotypes, as can be seen in cancers [44,45]. It has been hypothesized that chronic inflammation (discussed above) is linked to oxidative stress and ultimately to carcinogenesis [44]. Data from a study carried out in 1994 showed an increase in mitochondrial DNA mutations, which are typically associated with ROS production [46]. Furthermore, studies have shown that some commonly known antioxidants, like vitamin E and some flavonoids present in natural extracts, can inhibit inflammation, thereby inhibiting carcinogenesis and the progression of cancers [47,48]. Much research has gone into antioxidant mechanisms and the roles they play in tumor development; some of this work has provided a better understanding of NHPs in the oxidative stress response, especially in cancer development and treatment. An increasing number of NHPs are implicated in the oxidative stress response.
One well-studied compound, piperlongumine, discussed above, has been shown to target and inhibit the endogenous oxidative stress response of cancer cells, leading to an increase in the levels of ROS and a corresponding increase in oxidative stress. The inability of the cells' oxidative stress response to detoxify these reactive oxygen species led to the induction of apoptosis in cancer cells. This effect was countered by the presence of the antioxidant N-acetylcysteine. More importantly, this effect on ROS generation and targeting of the oxidative stress pathway was not observed in noncancerous cells, suggesting a dependence on the oxidative stress response pathway in cancer cells [9]. These results confirm what has been previously proposed: targeting the mitochondria could provide a better selective target in cancer cells [49] for more efficacious treatment.

Some NHPs are proposed to contain both antioxidant and prooxidant properties. Dandelion (Taraxacum officinale) flower ethanolic extract is one such NHP [50]. The purpose of that study was to characterize the antioxidant properties of dandelion flower extract (DFE), which were attributed to the presence of luteolin and luteolin-7-glucoside. Interestingly, it was observed that higher concentrations of this extract had prooxidant effects in colon cancer cells. This is an important finding, as it shows the versatility of NHPs under different situations and conditions. It is also essential to note that the production of ROS can have both a proapoptotic and an antiapoptotic effect, depending on the conditions of the cells [51]. There is a growing body of evidence that ROS act not only as destructive agents but also as chemical messengers [51]. This information, along with the versatility of NHPs, further underscores the importance of NHPs as treatment options. Many studies have been carried out with other NHPs; for instance, epigallocatechin-3-gallate (EGCG) has been studied for its antioxidant capabilities for decades, and treatment with this compound significantly slowed the growth of breast cancer tumors in mice [52]. Other nutraceuticals, like resveratrol (discussed above), also have beneficial claims in antioxidant therapy. Several studies have shown that NHPs rich in flavonoids and phenolics play a significant role in the oxidative stress response, and concomitant use of NHPs with these components increases the activities of the other NHPs [48,52,53]. Many characteristics are attributed to curcumin in its role as an anticancer agent. Curcumin, from turmeric, is another natural compound that has both antioxidant and prooxidant characteristics, a capability that has been studied in various cell culture models and confirmed in in vivo studies [5,54,55]. These studies speak to the versatility of NHPs and natural compounds in targeting multiple cell pathways, especially in the fight against cancer. Even with advances in NHP research, we are still a long way from understanding the connections between some of these NHPs and oxidative stress, especially in cancer research. It is therefore essential to further investigate how NHPs are able to distinguish between different conditions and act accordingly to both scavenge and induce the production of reactive oxygen species.

Natural Health Products as Receptor Agonists and Antagonists in Cancer

Abnormal and excessive signal transduction is a common hallmark of cancer cells.
This is in part due to the ability of these cell types to upregulate the expression of both receptors and the ligands (usually growth factors) required to transmit downstream signals, thereby conferring hyperproliferative characteristics on cancer cells, as is observed, for instance, in aberrant Ras and Myc signaling [56]. This suggests that one way to target cancer cells effectively is to target the aberrant signaling pathways, either by knocking down the expression of ligands and/or receptors or by preventing signal transduction with an antagonist, again indicating the importance of NHPs as potential anticancer agents. Several NHPs have been reported to play an antagonistic role against several important receptors in the aberrant signaling pathways of cancer cells. Given the presence of many components in some of these NHPs, it is not surprising that one or more of these components could target different receptors and signaling pathways. One particularly interesting class of compounds is the sesquiterpene lactones (SLs), found in an initial screen by the National Cancer Institute (NCI), the same screening that led to the identification of Taxol [57]. These SLs are generally plant secondary metabolites and, although almost exclusive to the Asteraceae plant family, they can also be found in the Umbelliferae and Magnoliaceae families. These compounds are worth further investigation for their development as anti-inflammatory and selective anticancer agents [57]. Owing to this ability, extracts of plants high in SLs have received considerable interest, especially in cancer and inflammatory diseases [23-25, 42, 57]. SLs have been shown to selectively target cancer stem cells, and a host of them have successfully proceeded to Phase I and Phase II clinical trials. Some common ones include artemisinin (Artemisia annua L.), thapsigargin (Thapsia), and parthenolide (Tanacetum parthenium), and these have been successful in many studies involving various types of cancers: laryngeal, breast, colorectal, and non-small cell lung carcinoma (artemisinin); breast, kidney, and prostate cancers (thapsigargin); and AML and lymph node cancers (parthenolide). The activity of these SLs is attributed in part to their ability to target cell surface transferrin receptors (a distinct hallmark of rapidly proliferating cells), NF-κB signaling (by disrupting the recruitment of the IκB kinase complex to the TNF receptor, which is essential for tumor initiation, progression, and resistance), and the angiogenesis pathways, by inhibiting human vein endothelial cell proliferation and the expression of vascular endothelial growth factor and its receptor [57,58].

Antiangiogenic drugs have been of great interest, especially in the field of cancer. The generation of new blood vessels provides tumor cells with a growth and survival advantage, as tumor cells depend on an adequate supply of oxygen and nutrients for continued survival. This increase in tumor vasculature also increases the chances of metastasis to distant sites. Therefore, finding antiangiogenic drugs that could not only inhibit angiogenesis but also decrease the chances of tumor metastasis is of utmost importance. The efficacy and benefits of SLs as antiangiogenics are proof that angiogenesis is an essential target in the fight against cancer. More recently, there has been increasing evidence for the role of NHPs as antiangiogenics, especially those containing bioactive phytochemicals like sesquiterpene lactones.
Moreover, the fact that many NHPs contain multiple bioactive phytochemicals makes them a viable source of receptor agonists and antagonists. There are many reports of natural health products with direct and indirect effects on angiogenesis; see Table 1 [47, 59-65]. These NHPs act to target the different pathways involved in angiogenesis by decreasing the expression of target proteins and receptors. For instance, the epidermal growth factor and its corresponding receptor have downstream effects on urokinase-type plasminogen activator (uPA), which in turn can promote angiogenesis. In addition, COX-2 and lipoxygenase (LOX-5) tend to have stimulatory effects on cancer progression and angiogenesis, and an increase in COX-2 expression is associated with progression to invasive phenotypes in certain cancer types. Furthermore, vascular endothelial growth factor (VEGF) is associated with increased proliferation and migration of endothelial cells and an increase in the expression of metalloproteinases (MMPs), leading to increased vascularization within a tumor that promotes metastatic capabilities. This suggests that NHPs and NPs that can target the expression of key players in angiogenesis (such as those listed in Table 1) are viable options for the treatment of cancer.

Several other receptors play a role in the progression of cancers, especially the estrogen (ER) and androgen receptors (AR) seen in breast and prostate cancers [66]. There are two main isoforms of the ER, the alpha (ER-α) and the beta (ER-β) isoforms, which have been shown to play opposing roles in the initiation and progression of breast cancers: the alpha isoform promotes tumor formation and progression, while the beta isoform takes on the role of a tumor suppressor [67]. This information indicates that these receptors are another viable option in attempts to target cancer cell and tumor progression. Emerging research suggests that activation of the AR is able to inhibit breast cancer progression [67], and its expression is a significant prognostic marker in estrogen-receptor-positive breast cancers [68]. This suggests that downregulating the expression of the ER and/or activating the downstream signaling of the AR could be essential in the fight against breast cancer; however, the AR plays a significant role in the development and progression of therapy-resistant prostate cancer [69,70], indicating that activation of the AR to treat breast cancer could have dire effects on the prostate and suggesting that a focus on ER antagonists would be a more efficacious option. Lately, more research is going into identifying NHPs and NPs that can target these receptors, leading to a decrease in their downstream activity. Perhaps the most common example of an ER antagonist is tamoxifen, which is used especially for hormone-receptive breast cancer treatment. Recent studies have shown that administration of soy isoflavones, such as genistein and daidzein, can have an effect on the efficacy of tamoxifen. Some studies suggest that some isoflavones (genistein) can increase the potency of tamoxifen in ER− breast cancer cells and have the opposite effect in ER+ cells, while others indicate that daidzein, in combination with tamoxifen, has increased protection against both ER+ and ER− breast carcinomas [71].
This indicates that NHPs/NPs must be used with caution and advocates the necessity of scientific validation of the mechanisms of action, indications, and contraindications of NHPs/NPs, to ensure safety and lack of toxicity associated with the administered treatment. Several NHPs have been shown to induce different types of programmed cell death, including apoptosis, necrosis, and autophagy, in cancer cells, some in a selective manner. However, understanding the mechanism of action of these NHPs sometimes proves difficult, as the multiple components tend to have multiple targets. This has led to increasing mechanistic studies of effective NHPs, and some of these studies have identified active NHPs with receptor-antagonistic activities. For instance, withaferin A, a naturally occurring bioactive component isolated from Withania somnifera, was shown to knock down the expression of ER-α but not ER-β, leading to its role in chemoprevention and apoptosis induction [72]. As mentioned earlier, several compounds found in soy isoflavones have significant anticancer activity, with genistein and daidzein able to target HER2/neu and EGFR to inhibit angiogenesis [59]. Several studies have also shown that these compounds are able to target the ER, although not with great affinity. Another study showed that one of these compounds, daidzein, is converted to the corresponding equol by gut microflora. There are two isoforms of this compound: R-equol, with a moderate binding preference for ER-α, and S-equol, with a strong binding preference for ER-β; both have much higher binding affinities for the ERs than their biosynthetic precursor compound, daidzein [73]. This suggests not only that NHPs contain bioactive components that could have multiple targets, but also that these bioactive components could act as precursors for other compounds with even better activities against the different targets, as seen with the ER affinities of daidzein and its derived compounds.

Another source of bioactive components is the long pepper, from the genus Piper (Piperaceae). These species are among the most widely used NHPs worldwide, with many biologically active secondary compounds having been identified in them. The most common compound identified in Piper spp. is piperlongumine (PL). Scientific evidence has shown the anticancer efficacy of PL through targeting of the ROS stress response mechanism in cancer cells [9]. Further studies have also shown the ability of PL to target receptors, including the PDGF receptors, for the inhibition of angiogenesis; more importantly, treatment with this compound led to the depletion of the androgen receptor in prostate cancer cells through a proteasome-mediated, ROS-dependent pathway [74,75]. This not only suggests the role of this NP as a receptor antagonist, but also provides a link between oxidative stress and receptor targeting, as an alternative path to cancer-cell-specific targeting, further demonstrating the usefulness of NHPs and NPs in the fight against cancer. Aside from receptors involved with increased gene expression and cell growth and proliferation, death receptors are also key players in normal and cancer cell growth and survival. These receptors belong to the tumor necrosis factor (TNF) superfamily of receptors that play a role in signaling cell death and survival pathways and include TNFR1/TNF-α, FasR/FasL, and the TNF-related apoptosis-inducing ligand and its receptors, TRAIL/TRAIL-R1 (DR4) and TRAIL/TRAIL-R2 (DR5) [76,77].
These death receptors are ubiquitously and constitutively expressed in tissue types throughout the human body, with some tissues having higher expression than others. The main difference between normal tissues and tumor tissues is the increased expression of the noncanonical prosurvival signaling of these receptors in tumor cells; for instance, many tumor cells overexpress TRAIL-R3 and TRAIL-R4 (the decoy receptors DcR1 and DcR2), which have truncated cytoplasmic death domains that prevent the transmission of signals following binding of ligand to receptor. There is also evidence implicating several noncytotoxic pathways that are mainly facilitated by the activation of NF-κB and MAPK pathways, through the activation of RIP kinase [77,78]. This indicates that the death receptors provide another target in the fight against cancer. For instance, over the years, TRAIL and other ligands against TRAIL-R1/R2 have generated considerable interest due to the selectivity of these ligands towards cancer cells, with little to no toxicity to noncancerous cells [77], suggesting their usefulness as cancer therapies. Studies have shown that several NHPs are able to target death receptor signaling pathways, again confirming their usefulness as potential anticancer agents. The selective anticancer efficacy of dandelion root extract has been attributed to its ability to selectively induce death-receptor-mediated extrinsic apoptosis in cancer cells [23-25], and a loss of this activity was observed in cells with a dominant-negative Fas-Associated Death Domain (FADD) [24]. This capacity for death receptor targeting can be attributed to the presence of sesquiterpene lactones [58] and to the suppression of cellular FLICE-like inhibitory protein (cFLIP), which is highly expressed in several cancer cell types, including pancreatic cancer cells, by the triterpene lupeol [79]. This compound is one of the bioactive components of dandelion extracts [23,80], and the inhibition of cFLIP has been shown to render TRAIL-resistant cancer cells sensitive to TRAIL therapy [79]. A derivative of resveratrol was found to induce FADD-dependent apoptosis in several human leukemia cell lines, to a significantly greater extent than resveratrol itself. This dependence on FADD was not contingent on the expression of Fas, TRAIL, or TNF-α [81], suggesting that even in the absence of ligand and/or receptor, some NPs can still activate the death-receptor extrinsic pathway to selectively drive cancer cells to apoptosis. Taxanes (e.g., paclitaxel and docetaxel), isolated from Taxus, are effective cytotoxic agents, as they target and stabilize the microtubules and prevent their depolymerization, thereby interfering with normal cell functions and inducing apoptosis. Further studies on the mechanism of taxanes indicate that these compounds can induce the expression of TNF-α and alter the expression of certain TNF receptors [52], with pro-death TNFR1 expression decreased and pro-survival TNFR2 expression increased [82]. This could suggest a way by which cancer cells develop resistance to taxane treatment, although further investigation into this mechanism is still required, as TNFR2 is also implicated in cell death signaling [82]. Curcumin, discussed in previous sections of this review, has shown significant anticancer activity in several cancer cell types, with the ability to target multiple signaling pathways.
Not only does it antagonize cell surface receptors, like the epidermal growth factor receptor (EGFR), but it has also been shown to induce apoptosis in human melanoma cells through the activation of the Fas receptor and subsequent activation of caspase-8 [83], proving itself a worthy opponent in the fight against cancer progression. These indications of the roles of NHPs and NPs as agonists and antagonists of receptors, for the selective targeting of cancer cells to the process of cell death, give further validation to the use of these products as safer alternatives to current cancer treatments. However, these examples also indicate that much work is required to further understand how these NHPs/NPs are able to recognize the aberrant signaling systems in cancer cells and exploit these differences and vulnerabilities. These studies nevertheless provide a stepping stone for future mechanistic validation of the selective targeting of cancer cells for programmed cell death.

Characterization of Complex Natural Health Products: Fractionation and Metabolomics Profiling of Active Fractions

NHPs are usually complex mixtures that contain many bioactive substances. Dosage forms are difficult to characterize, as traditional preparation, dosage, and usage do not account for the presence of the various bioactive components. It is therefore imperative that identification of the pharmacologically active ingredients within any NHP be carried out. As mentioned earlier, in vitro studies of dandelion root extract have shown its efficacy in various cancer cell lines, giving it the potential to be developed as an anticancer agent. Dandelion root is a very complex biological material containing many bioactive substances, and there are several ways of preparing dandelion roots for consumption. Infusions or decoctions of dry roots have been used by cancer patients in case reports of individuals who have gone into remission. To provide a reproducible dosage form, a highly standardized decoction of roots can be prepared and lyophilized for clinical and phytochemical evaluation. The resulting standardized product, from dandelion root, is a brown powder, which was reconstituted in water; polysaccharides were precipitated by 1:1 addition of ethanol, using a procedure adapted from the purification of ginseng polysaccharides. The resulting polysaccharide fraction was obtained by centrifugation and represents 20% by weight of the product. This material is mostly inulin, a nondigestible polymer of glucose, rather than the starch found as a storage product in many plants. Small amounts of plant-cell-wall-derived hemicellulose and pectin are also extracted. While these substances have yet to be assayed, inulin is generally considered to be relatively inert, while cell-wall-derived polysaccharides may have immunostimulant activity, as has been found for ginseng acidic polysaccharides that act via toll-like receptors. The remaining 80% of the product consists of small-molecular-weight compounds, which were subjected to metabolomic analysis with a Waters Xevo UPLC MS QTOF. This fraction is highly complex, and 91 compounds were selected for identification based on the published literature. Identification of 14 compounds was achieved by elemental composition and monoisotopic mass observed following electrospray ionization. The compounds identified are found in Table 2 and, except for sucrose, they represent known or potentially bioactive phytochemicals.
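Identification by elemental composition and monoisotopic mass, as described above, amounts to matching observed neutral masses against a candidate library within a tight mass tolerance. A minimal sketch assuming [M+H]+ electrospray adducts and a hypothetical candidate list; real workflows would also score isotope patterns, fragmentation spectra, and retention times.

```python
PROTON_MASS = 1.007276  # Da, mass of a proton for [M+H]+ adducts

# Hypothetical candidate library: name -> monoisotopic neutral mass (Da).
CANDIDATES = {
    "luteolin": 286.0477,
    "luteolin-7-glucoside": 448.1006,
    "chlorogenic acid": 354.0951,
}

def match_peaks(observed_mz, candidates=CANDIDATES, tol_ppm=5.0):
    """Match observed [M+H]+ m/z values to candidate compounds by
    monoisotopic neutral mass within a ppm tolerance."""
    matches = []
    for mz in observed_mz:
        neutral = mz - PROTON_MASS
        for name, mass in candidates.items():
            ppm = abs(neutral - mass) / mass * 1e6
            if ppm <= tol_ppm:
                matches.append((mz, name, round(ppm, 2)))
    return matches

# Two synthetic peaks that fall within ~0.5 ppm of library entries:
print(match_peaks([287.0551, 449.1081]))
```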
Many of the compounds are sesquiterpene lactones, such as taraxifolide, or phenolic glycosides, such as cichorioside. Using standard pharmacognosy methods, identification of the active principles can be achieved. These studies provide the backbone required to ensure standardization of NHPs that could be beneficial in the treatment of diseases, especially cancer. In conclusion, antiproliferative activity has been found in both polar (water) and nonpolar (ethanolic) fractions of many NHPs, for instance with dandelion root extracts, which contain sesquiterpenes, phenolics, and triterpenes [42,50]. As with many medicinal plants, activity may be connected with the joint action of several compounds rather than a single active molecule. Further work is required to assay the antiproliferative activity of the identified compounds and examine their combined activity.

Combinatorial Activity of Natural Health Products

With all the evidence put forward in the previous sections of this review, it stands to reason that the multiple components present within an NHP play a role in, and are responsible for, the selective efficacy of these NHPs against cancer cells, in vitro and in xenograft models. It is therefore essential to carry out further studies on whole extracts of NHPs to determine whether an extract has better efficacy as a whole or whether the isolated and characterized compounds provide the efficacy and selectivity required for cancer treatment. Studies on Withania somnifera have led to the identification of several bioactive components (e.g., withaferin A); however, studies, including clinical trials, have shown that the effect of single compounds found within this extract does not compare to the benefits of using the whole extract, especially as an anticancer agent [17-20]. Unpublished data with some of the identified bioactive phytochemicals in dandelion root extracts also indicate that the efficacy of the whole extract, or of a combination of two or more bioactive components, is greater than that of each single compound alone. Traditional medicine practice has been known to combine multiple NHPs for better advantage [60,84]. These combinations might also prove to be more selective than single-component treatments. As seen with most of these NHPs that are able to target multiple signaling pathways, these characteristics are mainly due to the presence of multiple components. It is possible that precise combinations of these components prevent or decrease the toxicity associated with treatment, while proving to be more efficacious at lower treatment doses, due to the synergistic activity of the different compounds. Hence, NHPs appear to provide an alternative to current chemotherapy by offering safer, lower-dose treatment options or by providing a source of NPs that can be harnessed for cancer treatment. Furthermore, a combination of two or more NHPs might increase efficacy and selectivity while reducing the chances of developing resistance to treatment, as multiple NHPs could target even more pathways and be used at even lower doses. Extensive scientific validation will be required to determine the efficacy, safety, and mechanism of action of combined treatment options for the effective treatment of cancer. The effectiveness of these NHPs may be increased when multiple agents are used in optimal combinations.
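One common way to test combination claims like those discussed above is the Bliss independence model: if two agents act independently, the expected combined fractional effect is E_A + E_B − E_A·E_B, and an observed effect above that expectation suggests synergy. A minimal sketch; the effect values are hypothetical, and other frameworks (Loewe additivity, the combination index) are equally standard.

```python
def bliss_excess(effect_a: float, effect_b: float, effect_ab: float) -> float:
    """Bliss excess for two agents, with effects given as fractional
    inhibition in [0, 1].  Positive excess suggests synergy, negative
    suggests antagonism, and values near zero suggest independence."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_ab - expected

# Hypothetical single-agent and combination effects:
print(f"Bliss excess: {bliss_excess(0.30, 0.40, 0.70):+.2f}")  # +0.12 -> synergy
```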
Health Agencies and Regulatory Bodies Involved with Natural Health Products

The whole purpose of the scientific validation of NHPs against diseases, especially cancer, is to provide awareness of those NHPs and NPs that have been used for centuries in various traditional medicines. The scientific studies carried out provide the necessary evidence regarding the efficacy of these NHPs, their indications and contraindications, and information on their safe and effective use. In Canada, Health Canada is the governing agency for the introduction of drugs and NHPs to the public, with divisions dedicated to NHPs: the Natural Health Products Directorate (NHPD) and the Therapeutic Products Directorate (TPD). These divisions were created to assist and ensure that Canadians have access to NHPs that are "safe, effective, and of high quality, while respecting freedom of choice and philosophical and cultural diversity." Regulations for NHPs came into effect in 2004 and take into account their unique nature and characteristics. At the end of 2012, the NHPD published documents that outline how NHPs are assessed, with a focus on health claims, the use of risk information, and the use of NHPs in combination; these include the "Pathway for Licensing NHPs Making Modern Health Claims," the "Pathway for Licensing NHPs Making Traditional Health Claims," and the "Quality of Natural Health Products Guide," which summarizes the requirements for the standardization of high-quality NHPs. Even after clinical trials and progression to the market, Health Canada continues to collect adverse reaction reports for NHPs, tracking and analyzing these reports through the Canada Vigilance Program and other regulatory bodies, like the World Health Organization (WHO). This allows constant monitoring of NHPs to ensure the continued safety and efficacy associated with these forms of treatment. More information on applications to Health Canada and the requirements involved in getting an NHP to market can be found at their website: http://www.hc-sc.gc.ca/dhp-mps/prodnatur/index-eng.php.

In the United States, the Food and Drug Administration (FDA) is the agency in charge of regulating the production and provision of NHPs to the public. NHPs are referred to as complementary and alternative medicine (CAM), which is divided into five main domains: (a) whole medical systems, such as Ayurveda, homeopathic medicine, and traditional Chinese medicine (TCM), the most common domain of NHPs/CAMs, which requires rigorous review and validation by the scientific community; (b) mind-body medicine, such as meditation, prayer, and creative therapies such as dance; (c) biologically based practices, with herbs, foods, vitamins, and dietary supplements; (d) manipulative and body-based practices, such as chiropractic and osteopathic manipulation and massage; and (e) energy medicine, including therapeutic touch. These domains undergo the same level of rigorous review as described for Health Canada above. More information on applications to the FDA and their requirements can be found at their website: http://www.fda.gov/regulatoryinformation/guidances/ucm144657.htm. These regulatory agencies ensure that health claims made by traditional medicine have scientific validation for the anecdotal evidence presented for centuries. They ensure proper standardization in the production and usage of NHPs/NPs/CAMs to maximize the benefits of these products and medicines.
Significance and Conclusions

The toll of cancer on the human body and on society as a whole indicates a serious need for a more selective, effective, and cheaper mode of treatment. Natural health products hold great potential to provide nontoxic alternatives for the treatment of cancer. More importantly, NHPs, as complex polychemical mixtures of pharmacologically active compounds, may target multiple vulnerabilities of cancer cells without toxicity to noncancerous cells. The complete scientific and clinical evaluation of potential NHPs is essential to bring these products into mainstream cancer therapies, in order to provide alternative, safer, and cheaper complementary treatments for cancer therapy and possibly improve the quality of life of cancer patients. With the health regulatory agencies, including Health Canada and the FDA, providing the required regulatory framework for the development of NHPs for therapeutic purposes, the future will see the growth and expansion of many cancer-selective NHPs in mainstream cancer treatment.
Isomonodromic deformations and very stable vector bundles of rank two

For the universal isomonodromic deformation of an irreducible logarithmic rank two connection over a smooth complex projective curve of genus at least two, consider the family of holomorphic vector bundles over curves underlying this universal deformation. In a previous work we proved that the vector bundle corresponding to a general parameter of this family is stable. Here we prove that the vector bundle corresponding to a general parameter is in fact very stable (it does not admit any nonzero nilpotent Higgs field).

Introduction

Let E −→ X be a rank two holomorphic vector bundle over a compact Riemann surface X of genus g. The Segre invariant of E is defined as

κ(E) := min_L (deg(E) − 2·deg(L)),

where the minimum is taken over all holomorphic line subbundles L of E. This κ(E) is bounded above by g [Na]. If κ(E) > 0 then E is called stable, and it is called maximally stable if κ(E) > g − 2 with g ≥ 2. Now let π : X −→ T be a holomorphic family of compact Riemann surfaces of genus g, and let E be a holomorphic vector bundle over X. For every t ∈ T, set X_t := π^{−1}(t) and E_t := E|_{X_t}. We shall further denote

T_k := {t ∈ T | κ(E_t) ≤ k}  (1.1)

for every k ∈ Z. From a theorem of Maruyama and Shatz, [Maru], [Sh], it follows that T_k is a closed analytic subset of T for each k. The codimension of T_k in T will be denoted by codim(T_k, T). By convention, the empty set has infinite codimension. The Riemann-Hilbert type question addressed in [He2], [BHH], rephrased for the particular case of rank 2 vector bundles, would be the following: What is the maximal Segre invariant for a family of holomorphic vector bundles that can be endowed with a flat logarithmic connection inducing a prescribed logarithmic connection at a central parameter t_0 ∈ T? Indeed, the general theme of [BHH], continued here, is that isomonodromic deformations provide a good "transverse" family of deformations of a vector bundle, in which exceptional behaviors, such as instability, seem to occur with an appropriate codimension. Here we consider the stronger condition of very stability. Very stable bundles were introduced by Laumon [La] and play an important role in the study of Hitchin systems. They are generic but turn out to be extraordinarily hard to produce explicitly. In fact, the existence of very stable bundles is a landmark result [La]; the dominance of the Hitchin map was obtained as a consequence of the existence of very stable bundles.

Let D = {x_1, ..., x_n} ⊂ X be a finite subset of n ≥ 0 distinct points. The corresponding reduced divisor Σ_{i=1}^n x_i on X will also be denoted by D. Let δ be a logarithmic connection, singular over D, on a holomorphic vector bundle E −→ X (see Section 2.1 for its definition). An isomonodromic deformation of the quadruple (X, D, E, δ) is given by the following data:

• a holomorphic family of vector bundles E −→ X −→ T as above,
• a divisor D on X given by the sum of the images of n disjoint holomorphic sections T −→ X,
• a flat logarithmic connection ∇ on E singular over D, and
• an isomorphism of pointed curves f : (X, D) −→ (X_{t_0}, D_{t_0}), where t_0 ∈ T is a base point, together with a holomorphic isomorphism ψ : E −→ f*E of vector bundles such that f*∇ = ψ*(δ).

Any quadruple (X, D, E, δ) as above with 3g − 3 + n > 0 admits a universal isomonodromic deformation, satisfying a universal property for isomonodromic deformations as above, with respect to a germification (T, t_0) of the parameter space [He1].
This is a consequence of the logarithmic Riemann-Hilbert correspondence [De] combined with Malgrange's lemma [Mal]. For this universal isomonodromic deformation, the parameter space is the Teichmüller space Teich_{g,n}, with a central parameter corresponding to (X, D), and the family of pointed curves (X, D) over it being the universal Teichmüller curve. We recall the main theorem of [He2].

Theorem 1.1 ([He2]). Let (X, D, E, δ) be as above with 3g − 3 + n > 0 and δ irreducible. Let E −→ (X, D) −→ T be the holomorphic family of vector bundles over pointed curves underlying the universal isomonodromic deformation of (X, D, E, δ). Then for the filtration

⋯ ⊂ T_k ⊂ ⋯ ⊂ T_{g−1} ⊂ T_g = T

by closed analytic subsets defined in (1.1), the inequality codim(T_k, T) ≥ g − k − 1 holds; in particular, the vector bundle E_t is maximally stable for general t ∈ T.

Let again E be a holomorphic vector bundle on X of rank two. A Higgs field on E is a holomorphic section of End(E) ⊗ K_X = E ⊗ E* ⊗ K_X, where K_X denotes the holomorphic cotangent bundle of X. Given a Higgs field θ ∈ H⁰(X, End(E) ⊗ K_X), we have

trace(θ^i) ∈ H⁰(X, K_X^{⊗i}),  i = 1, 2.

A Higgs field θ on E is called nilpotent if trace(θ) = 0 = trace(θ²), or equivalently, if θ² = 0; the nilpotent cone plays a very important role in the Geometric Langlands program (see [Fr], [KW], [GW], [DP] and references therein). A holomorphic vector bundle E of rank two is called very stable if it does not admit any nonzero nilpotent Higgs field. For any g ≥ 1, there are maximally stable bundles which are not very stable. But a very stable vector bundle E is always maximally stable. Indeed, if L ⊂ E is a holomorphic line subbundle with deg(E) − 2·deg(L) ≤ g − 2, then

deg(Hom(E/L, L ⊗ K_X)) = 2·deg(L) − deg(E) + 2g − 2 ≥ g,

so H⁰(X, Hom(E/L, L ⊗ K_X)) ≠ 0 by the Riemann-Roch theorem. Now, for any nonzero element θ′ ∈ H⁰(X, Hom(E/L, L ⊗ K_X)), the Higgs field θ on E defined by the composition

E ↠ E/L −θ′→ L ⊗ K_X ↪ E ⊗ K_X

has the property that θ² = 0. By a similar argument one can show that E is very stable if and only if H⁰(X, Hom(E/L, L ⊗ K_X)) = 0 for every holomorphic line subbundle L of E.
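The nilpotency of the Higgs field produced this way can be seen directly by writing it in block form with respect to the filtration 0 ⊂ L ⊂ E; the following display is added here as an illustration and is not verbatim from the original text.

```latex
% Added illustration: with respect to the filtration 0 \subset L \subset E,
% the Higgs field constructed from \theta' is strictly upper triangular,
% hence nilpotent.
\[
\theta \;=\; \begin{pmatrix} 0 & \theta' \\ 0 & 0 \end{pmatrix}
 \colon E \longrightarrow E \otimes K_X,
\qquad
\theta' \in H^0\bigl(X, \operatorname{Hom}(E/L,\, L \otimes K_X)\bigr),
\]
\[
\theta^2 \;=\; \begin{pmatrix} 0 & \theta' \\ 0 & 0 \end{pmatrix}^{\!2} = 0,
\qquad
\operatorname{trace}(\theta) = 0 .
\]
```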
From now on, we are going to assume that the genus g of X is at least two. Note that then the maximally stable bundles are stable. Let E −→ X −→ T be a holomorphic family of rank two vector bundles over curves of genus g. The subset

T_nil := {t ∈ T | E_t admits a nonzero nilpotent Higgs field}  (1.2)

is a closed analytic subset of T [La]. The following is a natural question to ask: Is the subset T_nil of the parameter space of a given universal isomonodromic deformation proper? Our aim here is to prove the following stronger version of Theorem 1.1:

Theorem 1.2. Let (X, D, E, δ) be as above with g ≥ 2. Assume that δ is irreducible. Let E −→ (X, D) −→ T be the holomorphic family of vector bundles over pointed curves underlying the universal isomonodromic deformation of (X, D, E, δ). Then the closed analytic subset T_nil, defined in (1.2), is a proper closed subset of T. In particular, the vector bundle E_t is very stable for general t ∈ T.

In view of Theorem 1.1, it is enough to consider the case where the holomorphic vector bundle E corresponding to the central parameter is stable but not very stable. Following Donagi and Pantev [DP], stable vector bundles that are not very stable will be called wobbly vector bundles. Let E −→ X −→ T be a family of wobbly rank two vector bundles over compact Riemann surfaces of genus g. Then there is a polydisc U in T and a section Θ of (End(E) ⊗ Ω¹_X)|_U such that for each t ∈ U, the restriction Θ|_{X_t} defines a nonzero nilpotent Higgs field on E_t. Indeed, since one has a family of stable vector bundles on X, the family of possible Higgs fields for E is a holomorphic vector bundle V over T. The constraint θ² = 0 of nilpotence then gives us a cone N in V, which maps surjectively onto T since the family is wobbly. One can then choose a smooth point of N for which the tangent plane to N surjects onto that of T, and get our section Θ from there. For that reason, in order to prove Theorem 1.2, we may use the deformation theory of stable nilpotent Higgs bundles over curves. A Higgs bundle on X is a pair of the form (E, θ), where E is a holomorphic vector bundle on X of rank two and θ is a Higgs field on E. Such a Higgs bundle (E, θ) is called stable if deg(L) < deg(E)/2 for every holomorphic line subbundle L ⊂ E with θ(L) ⊂ L ⊗ K_X. More specifically, we are going to elaborate, in Section 4.2, the deformation theory of stable nilpotent Higgs bundles over pointed curves.

Isomonodromic deformations

We begin by recalling a few results from Sections 2 and 4 in [BHH] that will be needed later; note that in [BHH] the Riemann surface is denoted X_0 instead of X.

2.1. Logarithmic connections. Let (E, X, D) be as in the introduction. Let Diff^i(E, E) denote the holomorphic vector bundle on X defined by the sheaf of differential operators, of order at most i, from the sheaf of holomorphic sections of E to itself. Its symbol exact sequence is

0 −→ End(E) −→ Diff¹(E, E) −→ TX ⊗ End(E) −→ 0,  (2.1)

the projection being the symbol map σ₁. The logarithmic Atiyah bundle At_D(E) := σ₁^{−1}(TX(−D) ⊗ Id_E) ⊂ Diff¹(E, E) fits in the short exact sequence

0 −→ End(E) −→ At_D(E) −→ TX(−D) −→ 0,  (2.2)

where the projection σ is the restriction of the surjection σ₁ in (2.1). A logarithmic connection on E singular over D is a holomorphic homomorphism δ : TX(−D) −→ At_D(E) such that σ ∘ δ = Id_{TX(−D)}, in other words, a holomorphic splitting of (2.2). Similarly, we can construct the logarithmic Atiyah bundle At_D(E) over X for a family E −→ (X, D) −→ T as in the introduction. Note that TX(−log D) fits in the short exact sequence

0 −→ T_{X/T}(−D) −→ TX(−log D) −→ π*TT −→ 0,  (2.3)

where T_{X/T} is the relative tangent bundle of π : X −→ T. Again, a logarithmic connection ∇ on E singular over D is a splitting of the corresponding Atiyah exact sequence. Rather than recalling the definition of flatness for ∇ (see for example [BHH, p. 131, Lemma 4.1]), we just point out the following: a logarithmic connection δ as above over the curve X is automatically flat, since there are no nonzero holomorphic 2-forms on a Riemann surface.

2.2. Infinitesimal deformations. As before, E −→ X is a rank two holomorphic vector bundle over a compact Riemann surface X of genus g ≥ 2, and D is a reduced effective divisor of degree n on X; let δ be a logarithmic connection on E with polar divisor D. An infinitesimal deformation of the n-pointed Riemann surface (X, D) is a family X −→ B over B := Spec(C[ε]/(ε²)), together with a divisor D on X given by n disjoint sections B −→ X and an embedding f : X ↪ X of X as the central fiber such that f*D = D. The space of infinitesimal deformations of (X, D) is given by H¹(X, TX(−D)), which coincides with the fiber of the holomorphic tangent bundle of Teich_{g,n} at the point corresponding to (X, D). Any such nontrivial deformation naturally embeds into the universal Teichmüller curve. The space of infinitesimal deformations of the triple (X, D, E) is given by H¹(X, At_D(E)) [BHH, p. 127, (2.12)]. On the other hand, if we have an infinitesimal deformation of (X, D, E) that can be endowed with a flat logarithmic connection ∇ inducing δ on the central parameter, then by the universal property of the universal isomonodromic deformation, the deformation of (X, D, E) is already determined by the induced deformation of (X, D). Hence δ defines a homomorphism of infinitesimal deformations

T_δ : H¹(X, TX(−D)) −→ H¹(X, At_D(E)).  (2.4)

Lemma 2.1. The homomorphism T_δ in (2.4) coincides with the homomorphism of cohomologies δ_* induced by the splitting δ : TX(−D) −→ At_D(E).

Proof. Using the notation of [BHH, p. 131, Lemma 4.1] and the commutative diagram in [BHH, p. 128, Section 2], we have a commutative diagram of homomorphisms of sheaves, (2.5), comparing the exact sequence (2.2) with its relative analogue for the family. From (2.5) we obtain a commutative diagram of cohomologies, where a (respectively, b) is the connecting homomorphism in the long exact sequence of cohomologies associated to the top (respectively, bottom) exact sequence in (2.5).

Higgs bundles on a fixed curve

In this section, we recall the deformation theory of Higgs bundles over a fixed Riemann surface.

3.1. Stable Higgs bundles.
Let X be a compact connected Riemann surface of genus g, with g ≥ 2. For any integer d, let M_X^d denote the moduli space of stable vector bundles on X of rank two and degree d. It is an irreducible smooth quasi-projective variety defined over C of dimension 4g − 3. The locus of very stable bundles in M_X^d is a nonempty Zariski open subset [La], [BR, p. 229, Corollary 5.6]. Fix an integer d. Let M_X := M_X^d be the moduli space of stable vector bundles on X of rank two and degree d. Let

U^vs ⊂ M_X  (3.1)

be the moduli space of very stable vector bundles on X of rank two and degree d. As noted before, U^vs is a nonempty Zariski open subset of M_X. Let

W := M_X \ U^vs  (3.2)

be the complement consisting of wobbly bundles, which is a closed subscheme of M_X. For any E ∈ M_X, we have T*_E M_X = H⁰(X, End(E) ⊗ K_X), and hence the total space T*M_X of the holomorphic cotangent bundle of M_X is a Zariski open dense subset of the moduli space of stable Higgs bundles of rank two and degree d (openness follows from [Maru]). Let

N_X ⊂ T*M_X  (3.3)

be the locus of all (E, θ) such that θ is nonzero nilpotent; this N_X is a locally closed subscheme of T*M_X. The closure of N_X in T*M_X is the union N_X ∪ W, where W is the wobbly locus defined in (3.2).

3.2. Infinitesimal deformations of Higgs bundles. Take any Higgs bundle (E, θ) ∈ T*M_X. Consider the O_X-linear homomorphism

f_θ : End(E) −→ End(E) ⊗ K_X,  s ↦ θ ∘ s − s ∘ θ.  (3.4)

This produces the following two-term complex:

C_{(E,θ)}• : End(E) −f_θ→ End(E) ⊗ K_X.

The space of all infinitesimal deformations of the Higgs bundle (E, θ) is parametrized by the first hypercohomology H¹(C_{(E,θ)}•) [BR], [Mark], [Bo]. In particular, we have T_{(E,θ)} T*M_X = H¹(C_{(E,θ)}•).
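For a two-term complex of sheaves on a curve, the hypercohomology spaces appearing here sit in a standard long exact sequence; the following rendering of this standard fact is added for the reader's convenience and is not verbatim from the original text.

```latex
% Added reminder (standard fact): for a two-term complex
% C^\bullet : A \xrightarrow{f} B of sheaves on the curve X, the
% hypercohomology fits in the long exact sequence
\[
0 \to \mathbb{H}^0(C^\bullet) \to H^0(X, A) \xrightarrow{f_*} H^0(X, B)
  \to \mathbb{H}^1(C^\bullet) \to H^1(X, A) \xrightarrow{f_*} H^1(X, B)
  \to \mathbb{H}^2(C^\bullet) \to 0 .
\]
% Applied to C_{(E,\theta)}^\bullet, this exhibits the deformation space
% \mathbb{H}^1 as an extension of a subspace of
% H^1(X,\operatorname{End}(E)) (deformations of E) by a quotient of
% H^0(X,\operatorname{End}(E)\otimes K_X) (deformations of \theta).
```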
Now assume that (E, θ) ∈ N_X, where N_X is constructed in (3.3). Note that this implies that E ∈ W. The subsheaf of E defined by the kernel of θ is a line subbundle of E; this line subbundle of E will be denoted by L. Let

End^s(E) := {β ∈ End(E) | β(L) ⊂ L}  (3.5)

and End^n(E) := {β ∈ End(E) | β(E) ⊂ L, β(L) = 0} ⊂ End^s(E) be constructed using the line subbundle L ⊂ E defined above, so End^n(E) consists of nilpotent endomorphisms of E with respect to the filtration 0 ⊂ L ⊂ E; the superscripts "s" and "n" stand for "solvable" and "nilpotent" respectively. For the homomorphism f_θ in (3.4) we have

f_θ(End^s(E)) ⊂ End^s(E) ⊗ K_X.  (3.6)

The restriction of f_θ to End^s(E) will be denoted by f^s_θ. We have the two-term complex

D_{(E,θ)}• : End^s(E) −f^s_θ→ End^s(E) ⊗ K_X,  (3.7)

which is a subcomplex of C_{(E,θ)}• constructed in (3.4). Denote the degree of the line bundle L by ν. Consider N_X in (3.3); let Z_ν ⊂ N_X be the locus of all (E′, θ′) ∈ N_X such that deg(kernel(θ′)) = ν. So we have (E, θ) ∈ Z_ν. The tangent space to Z_ν at this point (E, θ) has the following description:

T_{(E,θ)} Z_ν = H¹(D_{(E,θ)}•),

where D_{(E,θ)}• is the complex in (3.7) [BR, p. 228].

Higgs bundles on a family of curves

We now recall the deformation theory of families of Higgs bundles over moving curves. As noted in Section 2.2, the infinitesimal deformations of a triple (X, D, E) are given by the first cohomology space of the logarithmic Atiyah bundle associated to this triple. Now we have to enrich this space with the data corresponding to deformations of a given Higgs field on E. Afterwards, we will calculate the obstruction space for an initial nonzero nilpotent Higgs field to extend to a nilpotent Higgs field on a germification of the family.

4.1. Infinitesimal deformation of an n-pointed curve and Higgs bundle. Let E be a rank 2 vector bundle on X of genus g ≥ 2, and let D = Σ_{i=1}^n x_i be a divisor on X. There is a natural homomorphism

η : At_D(E) −→ Diff¹(End(E) ⊗ K_X, End(E) ⊗ K_X),  (4.1)

where Diff¹(End(E) ⊗ K_X, End(E) ⊗ K_X) is the vector bundle on X defined by the sheaf of differential operators of order at most one mapping locally defined holomorphic sections of End(E) ⊗ K_X to itself. To explain this η, if

• α is a locally defined holomorphic section of At_D(E),
• β is a locally defined holomorphic section of End(E),
• ω is a locally defined holomorphic section of K_X and
• s is a locally defined holomorphic section of E,

then

(η(α)(β ⊗ ω))(s) := α(β(s)) ⊗ ω + β(s) ⊗ L_{σ(α)}ω − β(α(s)) ⊗ ω,  (4.2)

where L_{σ(α)}ω is the Lie derivative of ω with respect to the vector field σ(α) (the homomorphism σ is defined in (2.2)); note that both sides of (4.2) are sections of E ⊗ K_X. To prove that (4.2) defines η, substitute (fβ) ⊗ (1/f)ω in place of β ⊗ ω in (4.2), where f is a locally defined nowhere vanishing holomorphic function on X. Then on the right-hand side of (4.2), the first term becomes α(β(s)) ⊗ ω + (df(σ(α))/f)·β(s) ⊗ ω, while the second term becomes β(s) ⊗ L_{σ(α)}ω − (df(σ(α))/f)·β(s) ⊗ ω, and the third term is unchanged. From this it follows immediately that η is well-defined by (4.2). We will give another description of the homomorphism η which is more canonical. Let p : P_{GL(2)} −→ X be the holomorphic principal GL(2, C)-bundle on X corresponding to E; so for any x ∈ X, the fiber p^{−1}(x) is the space of all C-linear isomorphisms from C^{⊕2} to the fiber E_x. Let

dp : TP_{GL(2)} −→ p*TX  (4.3)

be the differential of the above projection p. The kernel T_rel := kernel(dp) ⊂ TP_{GL(2)} is identified with p*End(E). The pullback p*At_D(E) is identified with (dp)^{−1}(p*TX(−D)) ⊂ TP_{GL(2)}. The image of p*K_X under the dual homomorphism (dp)* : p*K_X −→ T*P_{GL(2)} (see (4.3)) will be denoted by S. Let θ be a Higgs field on the vector bundle E. Let

ψ_θ : At_D(E) −→ End(E) ⊗ K_X,  α ↦ η(α)(θ),  (4.4)

be the O_X-linear homomorphism, where η is the homomorphism in (4.1).

Lemma 4.1 ([Bi, p. 105, Proposition 2.4]). The space of infinitesimal deformations of an n-pointed Riemann surface equipped with a Higgs bundle z = (X, D, (E, θ)) as above is the first hypercohomology H¹(C_z•) of the two-term complex

C_z• : At_D(E) −ψ_θ→ End(E) ⊗ K_X.

Note that in the case where θ is the zero Higgs field, the space of infinitesimal deformations in Lemma 4.1 coincides with H¹(X, At_D(E)) ⊕ H⁰(X, End(E) ⊗ K_X); as shown in Section 2.2, the space of infinitesimal deformations of the triple (X, D, E) is H¹(X, At_D(E)).

4.2. Deformation of an n-pointed curve with a nilpotent Higgs bundle. Consider the data z = (X, D, (E, θ)) in Lemma 4.1. Assume that (E, θ) ∈ N_X, where N_X is defined in (3.3). As before, the line subbundle of E defined by the kernel of θ will be denoted by L. Let At_D(E, L) ⊂ At_D(E) be the holomorphic subbundle of co-rank one generated by the sheaf of differential operators α such that α(L) ⊂ L. Note that we have a commutative diagram

0 → End^s(E) → At_D(E, L) → TX(−D) → 0
        ↓               ↓ι                ‖
0 → End(E)   →  At_D(E)   → TX(−D) → 0
        ↓               ↓q
  Hom(L, E/L)  =  Hom(L, E/L)                (4.5)

where all the rows and columns are exact; the bottom exact row is the one in (2.2) and End^s(E) is the vector bundle in (3.5). It can be shown that the homomorphism ψ_θ in (4.4) satisfies the equation ψ_θ(At_D(E, L)) ⊂ End^s(E) ⊗ K_X. Indeed, this follows from the expression on the right-hand side of (4.2). The restriction of ψ_θ to At_D(E, L) will also be denoted by ψ_θ; this should not cause any confusion.

Lemma 4.2. The space of infinitesimal deformations of the n-pointed Riemann surface equipped with a nilpotent Higgs bundle z = (X, D, (E, θ)), such that the Higgs field remains nilpotent, is the first hypercohomology H¹(B_z•) of the two-term complex

B_z• : At_D(E, L) −ψ_θ→ End^s(E) ⊗ K_X.

Proof. Consider the short exact sequence of complexes

0 −→ B_z• −→ C_z• −→ Q_z• −→ 0,  (4.6)

where the middle complex is the one in Lemma 4.1, the quotient complex is Q_z• : Hom(L, E/L) −f′_θ→ Hom(L, E/L) ⊗ K_X, and the homomorphism f′_θ is induced by f_θ in (3.4); note that from (3.6) it follows immediately that f_θ induces such a homomorphism.
The homomorphism H¹(C_z•) −→ H¹(Q_z•) induced by the homomorphism of complexes in (4.6) coincides with the natural homomorphism from the space of infinitesimal deformations of (X, D, (E, θ)) (without assuming that the Higgs field remains nilpotent) to the failure of the Higgs field to remain nilpotent. The lemma follows from it.

Proof of Theorem 1.2

Let E be a holomorphic vector bundle on X of rank two and degree d, which is wobbly. Let θ be a nonzero nilpotent Higgs field on E. As before, let L ⊂ E be the line subbundle defined by the kernel of θ. Given a logarithmic connection δ on E with polar divisor D, consider the composition homomorphism

q ∘ δ : TX(−D) −→ Hom(L, E/L),  (5.1)

where q is the homomorphism in (4.5). Let

(q ∘ δ)_* : H¹(X, TX(−D)) −→ H¹(X, Hom(L, E/L))  (5.2)

be the homomorphism of cohomologies induced by q ∘ δ in (5.1).

Proposition 5.1. Let (X, D, E) be an infinitesimal deformation, with parameter space B = Spec(C[ε]/ε²), of a stable rank two vector bundle E over an n-pointed Riemann surface (X, D). Assume that E is endowed with a nonzero nilpotent Higgs field Θ, inducing a nonzero nilpotent Higgs field θ on E. Further assume that E is endowed with a flat logarithmic connection ∇, singular over D, inducing a logarithmic connection δ on E with polar divisor D. Then

(q ∘ δ)_* = 0,

where (q ∘ δ)_* is the homomorphism in (5.2).

Proof. Consider the complex B_z• in Lemma 4.2. The identity map of B_z⁰ = At_D(E, L) produces a homomorphism of complexes from B_z• to the one-term complex At_D(E, L) −→ 0. The i-th hypercohomology of the latter complex coincides with H^i(X, At_D(E, L)). Let

φ : H¹(B_z•) −→ H¹(X, At_D(E, L))  (5.3)

be the homomorphism of hypercohomologies associated to the above homomorphism of complexes. Since the nilpotent Higgs field θ on E extends as a nilpotent Higgs field Θ on E, from Lemma 4.2 it follows that the homomorphism T_δ in (2.4) factors as

T_δ = ι_* ∘ φ ∘ T_δ⁰,  (5.4)

where T_δ⁰ : H¹(X, TX(−D)) −→ H¹(B_z•) is a homomorphism, φ is the homomorphism in (5.3) and

ι_* : H¹(X, At_D(E, L)) −→ H¹(X, At_D(E))

is the homomorphism of cohomologies induced by the inclusion ι : At_D(E, L) ↪ At_D(E) (see (4.5)). From Lemma 2.1 we know that

q_* ∘ T_δ = (q ∘ δ)_*,  (5.5)

where q_* is the homomorphism of cohomologies induced by the homomorphism q in (4.5). Now from (5.4) and (5.5) it follows that (q ∘ δ)_* = q_* ∘ ι_* ∘ φ ∘ T_δ⁰. On the other hand, from (4.5) it follows immediately that q_* ∘ ι_* = 0. Hence we conclude that (q ∘ δ)_* = 0.

Let Teich_{g,n} be the Teichmüller space for n-pointed surfaces of genus g ≥ 2. Take (X, D, E, δ) as before. Let (X, D, E, ∇) be the universal isomonodromic deformation of (X, D, E, δ), with parameter space T = Teich_{g,n} and central parameter t₀ ∈ T, together with an isomorphism of n-pointed surfaces f : (X, D) −→ (X_{t₀}, D_{t₀}) and a holomorphic isomorphism ψ : E −→ f*E such that f*∇ = ψ*(δ), as in the introduction. Note that for any t₁ ∈ T, the universal isomonodromic deformation (X, D, E, ∇) is also the universal isomonodromic deformation of (X, D, E, ∇)|_{t₁}. As before, for any t ∈ T, we shall denote (X, E)|_t by (X_t, E_t). In view of Theorem 1.1 and the openness of the very stability condition, the following theorem implies Theorem 1.2.

Theorem 5.2. Let (X, D, E, δ) and (X, D, E, ∇) be as above with δ irreducible. Then there is a point t ∈ T such that the vector bundle E_t −→ X_t is very stable.

Proof. In view of Theorem 1.1 we can take E to be stable. Assume that E is wobbly. Let θ be a nonzero nilpotent Higgs field on E. In view of Proposition 5.1, it suffices to show that

(q ∘ δ)_* ≠ 0,  (5.6)

where (q ∘ δ)_* is the homomorphism in (5.2). Note that the homomorphism q ∘ δ in (5.1) is nonzero because the logarithmic connection δ is irreducible.
Consider the short exact sequence of coherent sheaves

0 −→ TX(−D) −q∘δ→ Hom(L, E/L) −→ T −→ 0,

where T is a torsion sheaf on X, so H¹(X, T) = 0. Hence the long exact sequence of cohomologies associated to it gives a surjection

H¹(X, TX(−D)) −(q∘δ)_*→ H¹(X, Hom(L, E/L)) −→ 0.

Consequently, to prove (5.6) it suffices to show that

H¹(X, Hom(L, E/L)) ≠ 0.  (5.7)

Since θ is nonzero nilpotent and L ⊂ E is the kernel of θ, it follows immediately that θ is a nonzero section of Hom(E/L, L) ⊗ K_X. By Serre duality, H¹(X, Hom(L, E/L)) is dual to H⁰(X, Hom(E/L, L) ⊗ K_X), which contains the nonzero section θ. Therefore, (5.7) is proved. This completes the proof of the theorem. When the rank is more than two, the very last part of the argument in the proof of Theorem 5.2 breaks down: the existence of a nonzero nilpotent Higgs field θ no longer implies that (q ∘ δ)_* ≠ 0.
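For completeness, the duality used in this last step can be written out as follows (an added rendering of a standard computation, in the notation of the proof).

```latex
% Added rendering of the final duality step (standard Serre duality on X,
% not verbatim from the original): since Hom(L, E/L)^* = Hom(E/L, L),
\[
H^1\bigl(X, \operatorname{Hom}(L, E/L)\bigr)^{*}
\;\cong\;
H^0\bigl(X, \operatorname{Hom}(L, E/L)^{*} \otimes K_X\bigr)
\;=\;
H^0\bigl(X, \operatorname{Hom}(E/L, L) \otimes K_X\bigr)
\;\ni\; \theta \neq 0 ,
\]
% so the target of $(q \circ \delta)_*$ is nonzero, and the surjectivity of
% $(q \circ \delta)_*$ then gives $(q \circ \delta)_* \neq 0$, i.e., (5.6).
```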
CHEMICAL REGENERATION OF BONE CHAR ASSOCIATED WITH A CONTINUOUS SYSTEM FOR DEFLUORIDATION OF WATER - Sources of fluoride contaminated water are found around the world and their treatment is required before human consumption. This paper contributes to advances in the use of bone-char as an adsorbent for fluoride, associating steps of chemical regeneration and fluoride adsorption in continuous systems, thereby making feasible the multiple use of the adsorbent. Following the development of low cost treatment of water defluoridation in a fixed bed column, using bone-char, regeneration was carried out with NaOH (0.5 mol/L) solution in subsequent adsorption/desorption cycles. The continuous system was modeled applying Thomas, Yoon-Nelson, Adams-Bohart, Wolborska and Yan models, and the Yan model showed the best adjustment. The adsorption capacity of 6.28 mg/g was obtained from the breakthrough curve. Chemical regeneration of bone-char was feasible, and a reduction in adsorption capacity of 30% was observed only after five adsorption/desorption cycles. INTRODUCTION Fluoride is an essential element for human health that in lower concentrations, between 0.4 and 1.0 mg/L, prevents dental diseases (Ghorai and Pant, 2004;WHO, 1996). The World Health Organization (WHO) recommends the value of 1.5 mg/L as the safe concentration of fluoride in water for human consumption (WHO, 1996). Prolonged intake of water containing fluoride above 2 mg/L may cause dental fluorosis and in extreme cases, serious diseases such as skeletal fluorosis, osteoporosis, arthritis, male infertility, Alzheimer's disease, and liver, kidney or parathyroid lesions may occur (Harrison, 2005;Tripathy et al., 2006;Nur et al., 2014;Xiong et al., 2007). Water contaminated by fluoride has been found naturally and causes serious environmental and health problems in several parts of the world (Nur et al., 2014). At least 25 countries around the world have been reported to be affected due to high fluoride concentrations in groundwaters, above the World Health Organization limit (Davila-Rodriguez et al., 2012;Gupta et al., 2007). Diseases due to fluoride contaminated water ingestion in many cities of the world are widely described in the literature. The occurrence of fluorosis is reported in the Brazilian states of São Paulo, Paraná, Santa Catarina, Rio de Janeiro and Minas Gerais (Santiago and Silva, 2009). Meneasse et al. (2002) pointed out high fluoride concentrations in São Francisco city in the northern part of Minas Gerais State, in the range of 1.17 to 5.2 mg/L. Several methods have been used to reduce the fluoride content in water, to make it suitable for human consumption (Agarwal et al., 2003). Among them, adsorption (Medellin-Castillo et al., 2007), chemical precipitation (Benefield et al., 1982), ion exchange (Kunin, 1990), reverse osmosis (Colla et al., 2016), electrodialysis (Gwala et al., 2011) and nanofiltration (Simons, 1993) are the most common methods. Compared to other techniques, adsorption stands out if its simple operating system is coupled with low cost adsorbents, which makes the application feasible for communities of limited financial resources (Bhatnagar et al., 2011;Tripathy et al., 2006;Vocciante et al., 2014). Several adsorbents have been studied and applied for fluoride uptake from contaminated waters (Bhatnagar et al., 2011;Loganathan et al., 2013). One of them is the bone char, which presents high fluoride removal capacity (Abe et al., 2004;Leyva-Ramos et al., 2010;Medellin-Castillo et al., 2007). 
Bone char is an association of carbonaceous and inorganic materials containing from 70 to 76% of calcium phosphate, as hydroxyapatite (HAP, Ca₁₀(PO₄)₆(OH)₂), that has been effectively used to reduce the content of fluoride in water and wastewater (Wilson et al., 2003). In addition, characteristics such as its specific surface area and its positively charged surface below pH 8.4 give bone char a better adsorption capacity than that observed for conventional coals (Brunson and Sabatini, 2009; Medellin-Castillo et al., 2007). Although the mechanisms and studies of bone char application for the removal of fluoride and other elements have been developed, efforts on spent adsorbent regeneration or its destination have not been adequately studied. There are only a few studies that have looked into these aspects, and this may hinder industrial applications. Regeneration methods are necessary to evaluate the application of bone char and its lifetime, which can reduce the cost of the process and decrease the amount of undesired wastes (Sheintuch and Matatov-Metal, 1999). Thermal and chemical regeneration of fluoride-saturated bone char have been studied recently (Feng et al., 2012; Kaseva, 2006; Nasr et al., 2011). In the study of Medellin-Castillo et al. (2007), the elevation of pH above 12 provides desorption of the fluoride present in bone char. This is because, with increasing pH, the fluoride ion inserted in the hydroxyapatite structure contained in bone char is released into solution, exchanged for the hydroxide ion, an ionic substitution mechanism already described in other studies (Kaseva, 2006; Medellin-Castillo et al., 2014; Sternitzke et al., 2012; Sundaram et al., 2008). Kanyora et al. (2014) verified that the use of NaOH solutions presented better results in the regeneration of bone char when compared to other compounds. An actual application performed by the Catholic Diocese of Nakuru Water Quality (Jacobsen and Müller, 2007) using NaOH presented satisfactory results in bone char regeneration. However, these processes still require knowledge about the reduction in adsorption capacity through a regenerative process and the extent of regeneration in successive adsorption-regeneration cycles. In order to recover the bone char adsorption capacity, the mechanisms of fluoride uptake by the bone char must be well known. The hydroxyapatite contained in the bone char exchanges OH⁻ groups for F⁻, which is one of the advantages of bone char compared to conventional activated coals applied to water treatment (Kaseva, 2006). The exchange of groups may be represented by equation 1:

Ca₁₀(PO₄)₆(OH)₂ + 2F⁻ → Ca₁₀(PO₄)₆F₂ + 2OH⁻  (1)

Fluoride removal can also occur on carbonaceous structures and through calcium fluoride (CaF₂) precipitation (Sternitzke et al., 2012). However, Medellin-Castillo et al. (2014) highlight that the fluoride removal by bone char is mainly associated with hydroxyapatite. They suggested that fluoride uptake occurs through complex formation on phosphate (≡P-OH) and hydroxyl (≡Ca-OH) sites and that CaF₂ precipitation is possible, since calcium ions are partially dissolved from bone char (hydroxyapatite and calcite) and may supersaturate the water if the calcium fluoride solubility limit is exceeded. Furthermore, they concluded that the removal of fluoride by the bone char occurs mainly through electrostatic interactions between the adsorbent surface charges and adsorbate ions and not through ionic interchange, as proposed by Kaseva (2006).
Below the pH of the point of zero charge, the protonation of sites is favored, forming positively charged complexes on the bone char surface (equations 2 and 3) that interact with fluoride ions:

≡P-OH + H⁺ → ≡P-OH₂⁺  (2)

≡Ca-OH + H⁺ → ≡Ca-OH₂⁺  (3)

Considering all these removal mechanisms, the term sorption is better applied instead of adsorption, since adsorption, precipitation and also complexation mechanisms are involved. This paper aims to present advances in the knowledge of defluoridation by bone char, integrating steps of sorbent regeneration and fluoride removal in continuous systems. The lifetime, or number of regenerative cycles supported by the bone char, is determined.

MATERIALS AND METHODS

Bone char supply

Bone char was supplied by Bonechar Carvão Ativado do Brasil, a Brazilian company that produces bone char from bovine bones pyrolyzed at 750°C. Bone char with particle sizes between 1.0 and 1.6 mm was washed with 0.1 mol/L HCl solution at a ratio of 40 g/L for 1 h under constant stirring. Then, the solid was washed with distilled water three times and dried at 50°C for 24 h, based on the methodology previously adopted by Nigri et al. (2017a).

Fluoride solutions and chemical analysis

A stock solution of 100 mg/L of fluoride was prepared by dissolving a weighed quantity of NaF in distilled water.

Column set up and modeling

Experiments in a fixed bed column were performed using a polycarbonate column of 2.3 cm internal diameter and 13 cm length, with an empty bed volume of 54.01 cm³. At the base and the top of the column, glass beads, followed by glass wool, were placed to enhance the solution distribution and keep the media intact. The column was packed with 30.36 g of bone char, and the experiments were carried out using a 10 mg/L fluoride solution previously prepared with NaF. The solution was pumped through the column from the bottom to the top at 5 mL/min using a variable-flow peristaltic pump (Spetec Perimax 121). Columns were packed with wet bone char to ensure consistent packing. Samples were collected periodically, every 180 min, for pH and residual fluoride concentration until the relative concentration reached the condition C_effluent = 0.9 C_influent. The breakthrough curve obtained in this experiment was evaluated through the models of Thomas, Yoon-Nelson, Adams-Bohart, Wolborska and Yan, and the design of the column was carried out in accordance with the best model. The OriginPro8 software was used to obtain the parameters of the breakthrough curve and the fits of the breakthrough curve models, which were inserted in the software.

Regeneration experiments

The same set up employed for column modeling was applied for the regeneration experiments. Bone char was loaded with fluoride until the outlet of the column presented a residual concentration of 1.5 mg/L, the maximum value permitted by the WHO for human consumption. The time to reach this condition was estimated from previous tests, and this value was considered in the subsequent adsorption experiment associated with the regeneration step. Regeneration of bone char was carried out in a packed column by pumping 2 L of NaOH 0.5 mol/L from the top to the bottom at a flow rate of 5 mL/min. This sodium hydroxide concentration was the same as applied by Nigri et al. (2017a) and is in agreement with recent studies by Kanyora et al. (2015) and Kanyora et al. (2014). The fluoride concentration was monitored during the regeneration cycle until the concentration of residual fluoride stabilized (390 min).
After that, distilled water was pumped through the column to wash out the remaining NaOH until the pH stabilized. Then, another adsorption cycle began by pumping the fluoride solution at 10 mg/L. Details of the bone char characterization and of the adsorption-regeneration process in batch and continuous operation can be found in Nigri et al. (2017a) and Nigri et al. (2017b).

Sorbent characterization techniques

The functional groups present on the surface of the bone char were identified by Fourier-transform infrared spectroscopy (FTIR) (Bruker Alpha, attenuated total reflectance - ATR, with a diffuse reflectance accessory - DRIFT) in the solid form. The particles were previously crushed for analysis. For the measurement of specific surface area and porosity, the data were obtained using a QUANTACHROME NOVA-1000 surface area and pore size distribution analyzer, based on adsorption condensation of nitrogen gas (N₂). Zeta potential measurements were performed using a ZM3-D-G meter, Zeta Meter system 3.0+, with direct video imaging. The morphology and the chemical composition of the bone char were assessed by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) microanalysis (Scanning Electron Microscope Model 6360LV coupled to an EDS spectrometer).

RESULTS AND DISCUSSION

Bone char characterization

Table 1 shows the specific surface area and porosity of fresh bone char and saturated bone char. The values of the surface area and pore volume for bone char are similar to values highlighted by other studies with bone char (Cheung et al., 2002; Medellin-Castillo et al., 2007; Rojas-Mayorga et al., 2013; Walker and Weatherley, 2001). After the adsorption process it is possible to observe a reduction in the specific surface area and porosity, which can be caused by the presence of fluoride on the surface of the bone char. The average pore diameter obtained is compatible with a mesoporous structure according to IUPAC (20 to 500 Å) (Burwell, 1976). The surface charge is an important characteristic to explain ion sorption onto bone char and comes from the interactions between the ions present in the solution and the surface functionalities. It also depends on the ion type, surface properties and solution pH (Medellin-Castillo et al., 2007; Nigri et al., 2017a). Fig. 1 shows the zeta potential for the fresh bone char. The surface is positively charged when the pH of the solution is below the isoelectric point and negatively charged above the isoelectric point. Hence, as shown in Fig. 1, the solution pH directly affects the fluoride sorption: in solutions with pH below the pH_PZC, the F⁻ anions are attracted by the positively charged surface of the bone char, caused by the protonation of the hydroxyapatite hydroxyl groups, thus favoring F⁻ accumulation onto the surface (Nigri et al., 2017a). The FTIR spectra (Fig. 2B) of the fresh bone char and of the saturated bone char, even after the fifth adsorption cycle, did not exhibit many differences. However, after the saturation and regeneration process, it is possible to observe that the bands at 3539, 3560, 3615, 3625 and 3640 cm⁻¹ are present only in the fresh bone char sample. The bands from 3000 to 3600 cm⁻¹ correspond to OH groups, and the bands between 3640 and 3610 cm⁻¹ correspond to free OH groups, which suggests the occurrence of an ion exchange process between F⁻ and OH⁻ groups in the doped bone char (Nigri et al., 2017b). A similar result was obtained by Rojas-Mayorga et al. (2015).
SEM/EDS analysis was performed to assess the particle morphology. Various irregularities, porosity variations and discrepancies between grains may be observed (Fig. 3). The EDS analysis (Table 2) showed elevated levels of phosphorus and calcium, owing to the composition of the hydroxyapatite present in the bone char, as already observed by Wilson et al. (2003) and Medellin-Castillo et al. (2007). Through Table 2 it is also possible to observe the molar Ca/P ratio of the raw bone char.

Table 1. Textural characteristics of the fresh bone char and saturated bone char.

Continuous column adsorption studies

Breakthrough curve and adsorption capacities

According to García-Sánchez et al. (2013), the breakthrough curve shows the loading behavior of fluoride ions onto bone char in a fixed bed, which is usually expressed in terms of adsorbed fluoride concentration or relative concentration (C/C₀) as a function of time or volume of effluent for a given bed height, where C₀ is the initial concentration of fluoride and C is the concentration of fluoride at time t. Fig. 4 shows the breakthrough curve for fluoride sorption by bone char. The breakthrough point corresponding to C/C₀ = 0.15 was adopted in view of the limit recommended by the WHO (1.5 mg/L). The fluoride breakthrough curve presents an asymmetrical S-shape, slowly approaching C/C₀ ≈ 0.9, which is a common characteristic of sorption processes in a liquid phase where the pore diffusion phenomenon is controlled by the mass transport process. The area under the breakthrough curve represents the total amount of fluoride removed from the feed solution (Kundu and Gupta, 2007; Lodeiro et al., 2006; Nur et al., 2014; Rojas-Mayorga et al., 2015). It is represented through equation (4):

q_tot = Q ∫ C_ad dt  (integrated from t = 0 to the exhaustion time)  (4)

where q_tot is the total amount of fluoride sorbed (mg), C_ad is the difference between the initial and the equilibrium concentration of fluoride (mg/L), t is time and Q is the feed flow rate (L/min). The fluoride sorption capacity q (mg/g) of bone char can then be determined by equation (5):

q = q_tot / M  (5)

where M is the sorbent mass (g). The length of the mass transfer zone, L_MTZ, a parameter frequently used to determine the effective height of the sorption column, is defined as the length of the ion exchange zone in the column and can be calculated by equation (6):

L_MTZ = L (t_e − t_bt) / t_e  (6)

where L is the bed length (cm), t_bt is the breakthrough time (min or h) and t_e is the column exhaustion time. Typically, the breakthrough point is determined when the effluent concentration reaches 5% of the feed concentration, while the exhaustion point is reached when the effluent concentration is equal to 95% of the feed concentration (Nigri et al., 2017a). Table 3 shows the adsorption capacities at the breakthrough time for the relative concentrations C/C₀ = 0.15 and C/C₀ = 0.90 and the length of the mass transfer zone (L_MTZ). An operation time of 944 min (15.7 h) was obtained at C/C₀ = 0.15 and 14640 min (244 h) at C/C₀ = 0.90. The adsorption capacity estimated from the area under the breakthrough curve, using OriginPro8, indicates loads of 1.47 mg/g and 6.28 mg/g at C/C₀ = 0.15 and C/C₀ = 0.90, respectively. The adsorption capacity for C/C₀ = 0.90 is very similar to the value obtained by Nigri et al. (2017a), who reported 5.96 mg/g. The length of the mass transfer zone was 10.6 cm for a bed depth of 13 cm. This unused bed fraction may indicate the existence of preferential paths inside the column and/or high intraparticle resistance.
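For illustration only (a hypothetical sketch, not the authors' code), the following Python snippet computes the column uptake and the mass-transfer-zone length from a breakthrough curve using the relations in equations (4)-(6); the breakthrough profile below is synthetic, standing in for measured effluent concentrations.

```python
import numpy as np

# Operating conditions mirroring the column described above
Q = 5.0 / 1000.0   # feed flow rate, L/min
C0 = 10.0          # feed fluoride concentration, mg/L
M = 30.36          # bone char mass, g
L = 13.0           # bed length, cm

# Synthetic S-shaped breakthrough curve C(t) in mg/L (placeholder data)
t = np.linspace(0.0, 15000.0, 1501)              # min
C = C0 / (1.0 + np.exp(-(t - 6000.0) / 1500.0))

# Eq. (4): q_tot = Q * integral of (C0 - C) dt  [mg], trapezoidal rule
C_ad = C0 - C
q_tot = Q * np.sum(0.5 * (C_ad[1:] + C_ad[:-1]) * np.diff(t))

# Eq. (5): sorption capacity in mg/g
q = q_tot / M

# Eq. (6): breakthrough at C = 0.05*C0, exhaustion at C = 0.95*C0
t_bt = t[np.searchsorted(C, 0.05 * C0)]
t_e = t[np.searchsorted(C, 0.95 * C0)]
L_MTZ = L * (t_e - t_bt) / t_e

print(f"q_tot = {q_tot:.1f} mg, q = {q:.2f} mg/g, L_MTZ = {L_MTZ:.1f} cm")
```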
Adsorption models

A successful adsorption column project requires the prediction of the profile evolution curve for the treated solution (Yan et al., 2001). Several mathematical models have been developed to describe and predict the dynamics of the continuous adsorption process (Aksu and Gonen, 2004). The main applied models are described in the following sections.

Thomas' model is one of the most applied models for a continuous system. The adsorption capacity is determined considering that the diffusion resistance in the liquid film around the adsorbent particles controls the adsorption, and that the axial dispersion may be neglected (Aksu and Gonen, 2004; García-Sánchez et al., 2013; García-Sánchez et al., 2014; Ghosh et al., 2015; Thomas, 1944). This model can be seen below in both non-linear (eq. 7) and linear (eq. 8) forms:

C/C₀ = 1 / [1 + exp(k_t q_t M/Q − k_t C₀ t)]  (7)

ln(C₀/C − 1) = k_t q_t M/Q − k_t C₀ t  (8)

In this model, k_t is the Thomas constant (L/min·mg), q_t is the adsorption capacity (mg/g), Q is the feed flow rate to the column (mL/min), M is the adsorbent mass (g), C₀ is the feed concentration of adsorbate and C is the concentration of the column effluent at time t (min), both expressed in mg/L. These constants (k_t and q_t) are determined from a plot of ln[(C₀/C) − 1] versus time at a known feed flow rate.

The Yoon-Nelson model (eq. 9) is one of the simplest models for column applications. It does not require any information about the system characteristics, such as the type of adsorbent and the physical properties of the adsorption bed (Samoraj et al., 2016; Yoon and James, 1984). Additionally, the Yoon-Nelson model is extremely concise in form, supposing that the decrease in the probability of each adsorbate molecule to be adsorbed is proportional to the probability of its adsorption and breakthrough on the adsorbent (Xu et al., 2013; Yoon and James, 1984):

C/(C₀ − C) = exp(k_YN t − τ k_YN)  (9)

The linear form of the Yoon-Nelson model is expressed by equation 10:

ln[C/(C₀ − C)] = k_YN t − τ k_YN  (10)

where C₀ is the initial solution concentration (mg/L), C is the concentration of the solution at time t (mg/L), τ is the time required for 50% adsorbate breakthrough (min), k_YN is the rate constant (1/min) and t is time (min). In addition, the Yoon-Nelson and Thomas models also share some similarities that could result in comparable correlation coefficient (R²) values (Lau et al., 2016).

Table 3. Adsorption capacities at the breakthrough time at C/C₀ = 0.15 and C/C₀ = 0.90 for a column feed flow rate of 5 mL/min, and length of the mass transfer zone. Column configuration: bone char mass: 30.36 g; bed volume: 54.01 cm³; bed height: 13 cm; flow rate: 5 mL/min; temperature: 25 ºC; fluoride initial concentration: (10 ± 0.15) mg/L.

The Adams-Bohart model (eq. 11 and eq. 12) has its significance recognized due to its simplicity. This model assumes that the chemical reaction of the adsorbate on the adsorbent surface controls the kinetic process, both solute diffusion into the porous layer and volume diffusion being negligible (Lodeiro et al., 2006). However, this model is better applied to describe the initial part of the breakthrough curve, limited to C < 0.5 C₀, since the adsorption rate is proportional to the adsorbed fraction (Bohart and Adams, 1920; Ghosh et al., 2015; Han et al., 2009):

C/C₀ = exp(k_AB C₀ t − k_AB N₀ Z/V)  (11)

ln(C/C₀) = k_AB C₀ t − k_AB N₀ Z/V  (12)

In these equations, k_AB is the kinetic constant (L/min·mg), V is the linear flow rate (cm/min), Z is the bed height (cm) and N₀ is the saturation concentration (mg/L). The model constants (k_AB and N₀) are found from a plot of ln[C/C₀] versus time. Building on these results, Wolborska (1989) developed a model to describe the breakthrough in the low concentration region.
They observed that the initial segment of the breakthrough curve is controlled by film diffusion with a constant kinetic coefficient, that the concentration profile of the initial stage moves axially in the column at a constant velocity, and that the width of the concentration profile in the column and the final breakthrough curve are nearly constant (Xu et al., 2013). The expression of the Wolborska model (eq. 13 and eq. 14) is equivalent to the Adams-Bohart relation if the coefficient k_AB is equal to β_a/N₀. This equation can also be linearized to give a relationship between ln[C/C₀] and time, from which the model parameters can be calculated (Ushakumary and Madhu, 2014):

C/C₀ = exp(β_a C₀ t/N₀ − β_a H/ν)  (13)

ln(C/C₀) = β_a C₀ t/N₀ − β_a H/ν  (14)

where C₀ is the initial solution concentration (mg/L), C is the concentration of the column effluent at time t (min), N₀ is the saturation concentration (mg/L), β_a is a kinetic coefficient of the external mass transfer (1/min), H is the bed depth (cm) and ν is the migration rate of the solute through the fixed bed (cm/min). Yan et al. (2001) proposed modifications of the Adams-Bohart model to minimize the fitting error, especially at the beginning and the end of the curve. Eqs. 15 and 16 represent the new model:

C/C₀ = 1 − 1/[1 + (Q_t/b)^a]  (15)

q₀ = b C₀ / M  (16)

with C₀ being the feed adsorbate concentration, C the adsorbate concentration at time t, both expressed in mg/L, and Q_t the throughput volume (L). Parameter b denotes the throughput volume (L) that produces a half-maximum response, a is the slope of the regression function, q₀ represents the adsorption capacity at equilibrium and M is the adsorbent mass.

Table 4. Thomas, Yoon-Nelson, Adams-Bohart, Wolborska and Yan model parameters for the breakthrough curves of fluoride ions by bone char. Column configuration: bone char mass: 30.36 g; bed volume: 54.01 cm³; bed height: 13 cm; flow rate: 5 mL/min; temperature: 25 ºC; fluoride initial concentration: (10 ± 0.15) mg/L.

Fig. 5 shows the Thomas, Yoon-Nelson, Adams-Bohart, Wolborska and Yan models fitted to the column experimental data for fluoride removal by bone char. Table 4 presents the modeling parameters obtained through linear and non-linear fitting. The best fit was found for Yan's model, with correlation factors of 0.94 and 0.98 for the linear and non-linear curves, respectively. The adsorption capacities determined from this model were 4.26 mg/g (linear) and 3.60 mg/g (non-linear). The Thomas and Yoon-Nelson model fittings presented lower correlation coefficients (R² < 0.9). These observations suggest that there is not just one mechanism controlling the adsorption in a fixed bed system. Essentially, the mechanism for fluoride removal, in accordance with the proposal of Sternitzke et al. (2012), comprises three main possibilities: fluoride adsorption at calcium sites, substitution of OH⁻ by F⁻, and also precipitation of fluorapatite and calcium fluoride. These associated mechanisms govern the fluoride removal and explain the fact that a single model is not able to represent the process, as reflected by the observed results.

Error analysis

Error analysis was used to evaluate the models employed in the present study. The error of estimation is given by equation 18:

error = Σ [(C/C₀)_exp − (C/C₀)_model]²  (18)

where C₀ is the initial solution concentration (mg/L) and C is the concentration of the solution at time t (mg/L), obtained from the applied models and from the experiment. The model of Yan presented the smallest error (0.0024), which confirms the best fit obtained. The models of Thomas and Yoon-Nelson presented similar errors (0.0128) and, finally, the models of Adams-Bohart and Wolborska presented the biggest error (0.0441) and, consequently, the worst adjustment to the experimental data.
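As an illustration of how such fits and error values can be produced (a hypothetical sketch, not the authors' OriginPro workflow), the following Python code fits the Thomas and Yan models by non-linear least squares and evaluates the squared error of equation (18) on synthetic breakthrough data.

```python
# Hypothetical sketch: non-linear fitting of the Thomas and Yan models
# with scipy, plus the squared error of eq. (18). t_exp/C_exp are
# placeholder measurements, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

Q_mL, C0, M = 5.0, 10.0, 30.36   # mL/min, mg/L, g
Q_L = Q_mL / 1000.0              # L/min, for unit consistency in eq. 7

def thomas(t, kt, qt):
    """Thomas model, eq. 7: C/C0 = 1 / (1 + exp(kt*qt*M/Q - kt*C0*t))."""
    return 1.0 / (1.0 + np.exp(kt * qt * M / Q_L - kt * C0 * t))

def yan(t, a, b):
    """Yan model, eq. 15, with throughput volume Qt = Q*t (in L)."""
    Qt = Q_mL * t / 1000.0
    return 1.0 - 1.0 / (1.0 + (Qt / b) ** a)

# Synthetic relative-concentration measurements C/C0
t_exp = np.linspace(100.0, 14000.0, 30)
C_exp = 1.0 / (1.0 + np.exp(-(t_exp - 6000.0) / 1500.0))

for name, model, p0 in [("Thomas", thomas, (1e-4, 10.0)),
                        ("Yan", yan, (2.0, 30.0))]:
    popt, _ = curve_fit(model, t_exp, C_exp, p0=p0, maxfev=10000)
    err = np.sum((C_exp - model(t_exp, *popt)) ** 2)  # eq. (18)
    print(name, popt, f"error = {err:.4f}")
```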
Bone char chemical regeneration

Chemical regeneration of bone char was conducted for five cycles, each adsorption run being carried out up to the rupture of the breakthrough curve at C/C₀ = 0.15 (Fig. 6). Table 5 shows the amount of water treated, the adsorption capacity and the time needed to reach the relative concentration of C/C₀ = 0.15 in each adsorption cycle. There was an overall reduction of 30% in the adsorption capacity from the first to the last adsorption experiment. The same behavior was observed for the treated water volume, which was reduced from 4.515 L to 1.350 L. It is important to note that the adsorption capacity was reduced mainly from the fourth to the fifth cycle. The chemisorption of fluoride by most sorbents is quite strong and not easily reversible. Therefore, stronger acids and bases, as well as a long interaction time between sorbent and sorbate, are required for efficient F⁻ elution (Dey et al., 2004; Loganathan et al., 2013; Nigri et al., 2017a). This fact could explain the low regeneration efficiency obtained, highlighting the necessity for further studies, possibly using a longer contact time with the sorbent (Nigri et al., 2017a). Fig. 7 shows the evolution of the fluoride concentration during chemical regeneration with 0.5 mol/L NaOH. All regeneration cycles (R1 to R4) present a high initial fluoride concentration, with the first sample containing 43-52 mg/L. There is also a pronounced decrease in fluoride concentration until 120 min, indicating significant fluoride removal through elution with the sodium hydroxide solution in the first few minutes of desorption.

CONCLUSIONS

Chemical regeneration of bone char in a fixed bed under continuous flow with 0.5 mol/L sodium hydroxide solution is technically feasible during 4 cycles. After the fourth regeneration cycle the bone char sorption capacity had decreased by 30%, which indicates the need to replace the sorbent in the process. The adsorption behavior of fluoride ions by bone char (particle size between 1.0 and 1.6 mm) in a fixed bed was modeled by the Thomas, Adams-Bohart, Yoon-Nelson, Wolborska and Yan models, the last one giving the best fit to the S-shaped curve. The experimental results demonstrated a better fit by Yan's model, and an adsorption capacity of 6.28 mg/g was obtained from the breakthrough curve. These results are useful for designing a real defluoridation system.

NOMENCLATURE

N₀ - Saturation concentration (mg/L)
pH_PZC - pH of the point of zero charge
Q - Feed flow rate to the column (mL/min)
Q_t - Throughput volume (L)
q_t - Adsorption capacity (mg/g)
q₀ - Adsorption capacity at equilibrium time
q_tot - Total amount of fluoride sorbed (mg)
t - Time (min)
The coactivator role of histone deacetylase 3 in IL-1 signaling involves deacetylation of p65 NF-κB

Histone deacetylase (HDAC) 3, as a cofactor in co-repressor complexes containing the silencing mediator for retinoid or thyroid-hormone receptors (SMRT) and the nuclear receptor co-repressor (N-CoR), has been shown to repress gene transcription in a variety of contexts. Here, we reveal a novel role for HDAC3 as a positive regulator of IL-1-induced gene expression. Various experimental approaches involving RNAi-mediated knockdown, conditional gene deletion or small molecule inhibitors indicate a positive role of HDAC3 for the transcription of the majority of IL-1-induced human or murine genes. This effect was independent from the gene regulatory effects mediated by the broad-spectrum HDAC inhibitor trichostatin A (TSA) and thus suggests IL-1-specific functions for HDAC3. The stimulatory function of HDAC3 for inflammatory gene expression involves a mechanism that uses binding to NF-κB p65 and its deacetylation at various lysines. NF-κB p65-deficient cells stably reconstituted to express acetylation-mimicking forms of p65 (p65 K/Q) had largely lost their potential to stimulate IL-1-triggered gene expression, implying that the co-activating property of HDAC3 involves the removal of inhibitory NF-κB p65 acetylations at K122, 123, 314 and 315. These data describe a novel function for HDAC3 as a co-activator in inflammatory signaling pathways and help to explain the anti-inflammatory effects frequently observed for HDAC inhibitors in (pre)clinical use.

INTRODUCTION

Pathogen-associated molecular patterns or damage/danger-associated molecular patterns are sensed by specific receptors, which in turn activate signaling cascades to induce the synthesis of inflammatory mediators such as tumor necrosis factor (TNF) or interleukin (IL)-1 and IL-8 (1). Once released, these cytokines in turn stimulate their cognate receptors and thus mediate the rapid amplification of the inflammatory response. One central inducible transcription factor system of major importance for the pro-inflammatory gene expression program is NF-κB (2). It consists of five different DNA-binding subunits that occur in different dimer combinations, but a heterodimer between p50 and the strongly transactivating p65 subunit is the most frequently detected form (3). The DNA-binding NF-κB dimer is retained in the cytosol by association with an inhibitory IκB protein, thus maintaining this transcription factor in an inactive state. A plethora of damage/danger-associated molecular pattern signals, ranging from proinflammatory signals to toxic, physical and oxidative stresses, leads to the phosphorylation and degradation of IκB and thus allows the subsequent nuclear translocation of the DNA-binding subunits and inducible gene expression (2,4). Specificity in NF-κB-triggered gene expression is achieved by several mechanisms, mainly occurring in the nucleus. These include the generation of distinct DNA-binding dimers, the interplay between NF-κB and other transcription factors, stimulus-specific modification and remodeling of chromatin, and posttranslational modifications (PTMs) of the DNA-binding subunits (4,5). Although PTMs have been reported for all five members of the NF-κB family, most information has been gathered for the p65 subunit, which is modified by phosphorylation, monomethylation, degradative and regulatory ubiquitination, as well as acetylation (6-8).
Historically, most of the PTMs were considered to exert stimulatory functions, but the underlying evidence is mainly based on the use of synthetic reporter genes and overexpression systems. Reconstitution of p65-deficient cells with p65 variants mutated at individual sites of modification has recently revealed a more complex picture and showed that the outcome of a given PTM is highly target gene specific. This is exemplified by p65 acetylation, which occurs at several lysines. Although an inhibitory function has been reported for acetylation at lysines 122 and 123 (9), overexpression experiments revealed a stimulatory function for acetylation of lysine 310 (10,11). Reconstitution experiments showed that the recently discovered acetylation sites at lysines 314 and 315 serve to augment or to dampen NF-κB-dependent transcription in a target gene-specific manner (12-14).

HDAC3 deacetylates histones in vitro (24) and has been purified as a catalytic subunit of SMRT-N-CoR-containing nuclear complexes, which mediate nuclear hormone receptor-dependent transcriptional repression (23,25,26). These initial observations and follow-up studies have firmly established HDAC3 as a transcriptional co-repressor (27). However, HDAC3 is also present in the cytoplasm (28) and was found to deacetylate a variety of protein substrates, suggesting that its physiological functions may reach beyond its role as a co-repressor (27). The high number of HDAC3 target proteins also explains the importance of HDAC3 for several fundamental epigenetic and genetic processes. Targeted deletion of HDAC3 showed its contribution to the maintenance of chromatin structure, genome stability and the cell cycle (29,30). In addition, organ-specific deletion revealed its role in cardiac energy metabolism (31), in metabolic transcription networks in the liver (32) and as a critical regulator of S phase progression and DNA damage control (33). Here, we show that the reported anti-inflammatory effects of HDAC inhibitors (34) are, at least in part, owing to the stimulatory role of HDAC3 for IL-1-induced gene expression. A variety of biochemical, pharmacological and genetic approaches showed an important role of HDAC3 for the efficient expression of IL-1 target genes. This effect is mediated mainly by the HDAC3-dependent deacetylation of NF-κB p65 at lysines 122, 123, 314 and 315.

MATERIALS AND METHODS

TSA, apicidin, tamoxifen and puromycin were from Sigma. Human recombinant IL-1α was used at 10 ng/ml in all experiments and was a kind gift from Jeremy Saklatvala, London, UK. Recombinant human TNFα was from R&D Systems or Hoelzel. Ni²⁺-NTA agarose was from Qiagen (1018244), and True Blot anti-mouse Ig IP beads were from eBioscience (00-8811-25). Transient transfections by the calcium phosphate method and the determination of luciferase reporter gene activity were performed as described previously (40). Equal amounts of plasmid DNA within each experiment were obtained by adding empty vector. For establishing a stable knockdown of HDAC3, HEK293IL-1R cells were transfected with SureSilencing shRNA plasmids against human HDAC3 (clone ID3) (Superarray Biosciences; KH05911P) or an empty control vector (pSuper-Puro) as a negative control, using the calcium phosphate method. Twenty-four hours post transfection, selection was started using 0.75 µg/ml puromycin, and stable cell lines were isolated. KB cells were stably transfected with the same plasmids using Lipofectamine (Invitrogen) and also selected in 0.75 µg/ml puromycin.
Transiently transfected HEK293IL-1R cells (Figures 1D and E, 3B, 5B and C) were harvested after washing in cold PBS and collected by centrifugation. The pellet was directly lysed in β-galactosidase lysis buffer as described (36). For coimmunoprecipitation experiments (Figure 4B), cells were harvested after washing in PBS and collected by centrifugation. The pellet was directly lysed in cell lysis buffer (50 mM HEPES pH 7.4, 50 mM NaCl, 1% Tween 20, 2.5 mM EGTA, 1 mM EDTA, 1 mM NaF, 10 mM β-glycerophosphate, 0.1 mM Na₃VO₄, 1 mM PMSF, 1 mM DTT, 1× protease inhibitor cocktail from Roche) and incubated for 20 min on ice. The DNA was sheared by three sonication steps of 20 s, and lysates were cleared by ultracentrifugation for 20 min at 100,000g and 4°C. HDAC3 was immunoprecipitated from 0.5 mg of precleared cell extracts using 2 µg mouse monoclonal anti-HDAC3 antibody (AbD Serotec) or 2 µg normal mouse IgG (Santa Cruz) coupled for 2 h to 25 µl True Blot anti-mouse Ig IP beads (eBioscience). After rotating for 2 h at 4°C, the supernatant was discarded, and the beads were washed three times with high-salt wash buffer (50 mM HEPES pH 7.4, 450 mM NaCl, 1% Tween 20, 2.5 mM EGTA, 1 mM EDTA, 1 mM NaF, 10 mM β-glycerophosphate, 0.1 mM Na₃VO₄, 1 mM PMSF, 1 mM DTT, 1× protease inhibitor cocktail from Roche). The precipitated proteins were eluted by boiling in 2× Roti-Load for 10 min before analysis by western blotting. NF-κB p65-deficient and reconstituted Mefs (Figure 9A) were harvested after washing in cold PBS and collected by centrifugation. The pellet was directly lysed in cell lysis buffer (composition as above) and incubated for 20 min on ice. The DNA was sheared by three sonication steps of 20 s, and, after ultracentrifugation for 20 min at 100,000g at 4°C, the lysates were further used for western blotting.

Immunoblotting

After electrophoresis on a denaturing polyacrylamide gel, proteins were transferred to a polyvinylidene difluoride membrane (Roth) by semi-dry blotting. After blocking with 5% bovine serum albumin or 5% dried milk in Tris-buffered saline/0.05% Tween for 1 h, proteins were visualized by sequential incubation with primary antibodies for 12-24 h, washing in Tris-buffered saline/0.05% Tween and incubation for 1 h with the peroxidase-coupled secondary antibody. Proteins were detected using enhanced chemiluminescence systems from Pierce, Millipore or GE Healthcare.

Data normalization

Microarray raw data extracted by the Feature Extraction software were normalized and analyzed in Genespring GX software, version 11.5.1 (Agilent Technologies). Expression values were log2-transformed, normalized to the 75th percentile of each array and inter-array median-centered according to the standard procedure in Genespring GX.
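The normalization scheme just described (log2 transformation, scaling to the 75th percentile of each array, inter-array median centering) is straightforward to reproduce outside Genespring. The following is a minimal sketch, assuming a hypothetical probes × arrays matrix `expr` of raw intensities; the median-centering step is one common reading of the description above and may differ in detail from the vendor's routine.

```python
import numpy as np
import pandas as pd

def normalize_like_genespring(expr: pd.DataFrame) -> pd.DataFrame:
    """Log2-transform raw intensities, scale each array (column) to its
    75th percentile, then median-center each probe (row) across arrays."""
    log2 = np.log2(expr.clip(lower=1.0))          # clip to avoid log2(0)
    scaled = log2 - log2.quantile(0.75, axis=0)   # per-array 75th-percentile shift
    return scaled.sub(scaled.median(axis=1), axis=0)  # inter-array median centering
```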
Figure 5. (Legend recovered from text interleaved by extraction.) (A) HEK293T cells were transfected with expression plasmids encoding His-tagged p65 wild-type or a mutated version thereof in which K122, K123, K310, K314 and K315 (K5R) were mutated to arginines, along with 0.75 and 1.5 µg of a plasmid for YFP-tagged CBP. One fraction of cells was lysed under denaturing conditions to analyze acetylation of p65 with the indicated acetyl-lysine-specific antibodies. The lower part shows the input control ensuring adequate protein expression. (B) HEK293IL-1R cells were transfected with the indicated p65 wild-type and acetylation-deficient expression vectors plus the IL-8 promoter reporter gene construct and the pSV40-β-galactosidase construct. Twenty-four hours later, cells were lysed, and luciferase reporter gene activity was determined and normalized for β-galactosidase activity. Normalized mean luciferase activities ± SEM relative to the vector control from five independent experiments (upper graph) are shown. In parallel transfections, cells were transfected to express HA-tagged p65 wild-type or the indicated acetylation-mimicking mutants thereof. The next day, cells were lysed, and one portion of the lysates was analyzed for mRNA expression of the endogenous IL-8 gene by RT-qPCR. Bars show mean ± SEM from two independent experiments (lower graph). Another portion was tested by immunoblotting for correct expression of p65. The positions of a molecular weight marker are shown; the HA-p65 migrates slightly slower than the endogenous p65. (C) HEK293IL-1R cells were transfected with additional p65 mutants and analyzed exactly as described in (B). IL-8 promoter activity represents the mean ± SEM from three independent experiments normalized for protein concentration (upper graph). IL-8 mRNA expression represents the mean fold change ± SEM relative to the vector control from three independent experiments (lower graph). All samples were analyzed for comparable expression of the constructs in parallel; one representative immunoblot is shown.

Identification of TSA- and IL-1-regulated genes in HDAC3 knockdown cells

In all, 29,421 probes with EntrezGeneID showed hybridization signals above the 20th percentile. Of these, 24,755 probes corresponding to 16,412 genes were measurable in at least 4 of the 16 microarray hybridizations. This data set was used for further analyses (Figures 7 and 8). The pS-Puro and pS-HDAC3 data sets were filtered separately for genes showing differential regulation by IL-1 by at least 1.5-fold, revealing a total of 75 probes corresponding to 70 genes with consistent regulation. Expression values for these genes above background had to be measurable in at least the stimulated condition (for upregulated genes) or in the unstimulated condition (for downregulated genes). Differential expression of genes in this final set was significant (P ≤ 0.01) in at least one out of four comparisons according to the Feature Extraction software algorithms. In all, 5,045 probes showed differential hybridization signals of at least 2-fold in TSA-stimulated pS-Puro or pS-HDAC3 cells.

Figure 8. (Legend recovered from text interleaved by extraction.) (A) The data set of Figure 7 was used to calculate the total number of genes which were regulated in the same direction by TSA by at least 2-fold in both independent experiments compared with the EtOH-treated vehicle control. This revealed 4,241 genes, whose overlap between control cells and HDAC3 knockdown cells is illustrated by the depicted Venn diagram. (B) The TSA-regulated set of genes identified in (A) was additionally filtered for HDAC3-dependent genes based on a ratio of pS-HDAC3/pS-Puro (lanes 13, 14) of at least 1.5-fold in two independent experiments, revealing 130 genes. Similarly, the set was filtered for genes which were regulated by IL-1 by at least 1.5-fold, revealing 24 genes. Depicted are color-coded ratio values of all 154 genes, which were calculated by dividing fluorescence intensity measurements of ... [remainder of the legend lost in extraction].
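The fold-change filters used throughout these methods sections (for example, at least 1.5-fold regulation by IL-1 in the same direction in two independent experiments) amount to simple ratio tests on the normalized values. A minimal sketch, assuming hypothetical per-experiment DataFrames of normalized log2 intensities indexed by probe:

```python
import numpy as np
import pandas as pd

def consistent_fold_change(ctrl: pd.DataFrame, stim: pd.DataFrame,
                           fold: float = 1.5) -> pd.Index:
    """Return probes changed by >= `fold` in the same direction in every
    experiment; columns of `ctrl`/`stim` are paired independent experiments."""
    log2fc = stim - ctrl                  # log2 ratios, experiment by experiment
    thr = np.log2(fold)
    up = (log2fc >= thr).all(axis=1)      # induced in all experiments
    down = (log2fc <= -thr).all(axis=1)   # repressed in all experiments
    return log2fc.index[up | down]

# Subset logic as described in the text (hypothetical variable names), e.g.
# genes IL-1-regulated in control cells but no longer after HDAC3 knockdown:
# il1_ctrl = set(consistent_fold_change(puro_unstim, puro_il1))
# il1_kd   = set(consistent_fold_change(hdac3_unstim, hdac3_il1))
# hdac3_dependent = il1_ctrl - il1_kd
```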
This data set was filtered for consistent regulation, resulting in 4,241 genes. The probes were separated into three subsets: (i) TSA-regulated, HDAC3 knockdown-dependent, not IL-1-regulated; (ii) TSA-regulated, HDAC3 knockdown-independent, IL-1-regulated; and (iii) TSA-regulated, HDAC3 knockdown-independent, not IL-1-regulated. In case of multiple probes per gene in the same subset, only one probe was selected for further analysis. Probe values were removed from other subsets if probe values for the same gene occurred in intersections of data sets (i-iii). Probes measuring identical genes but assigned to different subsets were removed from the data sets.

Identification of IL-1-regulated genes in HDAC3-deficient cells

In all, 25,415 probes with EntrezGeneID corresponding to 15,766 annotated transcripts showed hybridization signals above the 20th percentile and no flags, and were measurable in at least four out of eight microarray hybridizations (Supplementary Figure S6). This data set was filtered for genes showing differential regulation by IL-1 by at least 1.5-fold, revealing a total of 112 probes corresponding to 95 genes with consistent regulation. In all, 593 probes corresponding to 490 genes were regulated by tamoxifen treatment by at least 2-fold in both experiments. Seven of these genes (1.4%) were also regulated by IL-1.

Identification of p65 K4Q- or p65 K5Q-dependent genes

In all, 19,850 probes with EntrezGeneID corresponding to 14,761 annotated transcripts showed normalized hybridization signals above the 20th percentile and were measurable in at least 8 out of 24 hybridizations (Figure 9C and D and Supplementary Figure S7). This set of data was filtered for genes showing consistent 2-fold regulation in both experiments based on the ratio p65 wild-type/p65−/− (416 probes, 341 genes) or IL-1 + p65 wild-type/p65−/− (908 probes, 836 genes). Only unflagged probes and probes with hybridization signals above 2-fold background (>110 fluorescence intensity units) in p65 wild-type or p65 wild-type + IL-1 samples in at least one experiment were included, to identify an overlapping set of 975 probes corresponding to 851 IL-1-regulated or p65 target genes. These were further filtered to identify 135 K4Q-dependent or 109 K5Q-dependent genes based on 2-fold ratios (K4Q or K5Q/p65 wild-type) and p65 K4Q/p65 K4R ≥ 2.0 or p65 K5Q/p65 K5R ≥ 2.0.

Clustering and graphical visualization of microarray data

Log2-transformed ratio values were analyzed by hierarchical cluster analysis using MeV MultiExperimentViewer, version 4.6.2, 2011 (www.tm4.org), with 'average linkage' as the linkage method and 'Manhattan distance' as the distance measure (44).
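The clustering settings stated above (average linkage on Manhattan distances over log2 ratios) map directly onto standard libraries. A sketch with scipy, assuming `ratios` is a hypothetical genes × conditions array of log2-transformed ratio values:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

def cluster_log2_ratios(ratios: np.ndarray):
    """Average-linkage hierarchical clustering on Manhattan distances,
    mirroring the MeV settings quoted above (genes are rows)."""
    dist = pdist(ratios, metric="cityblock")   # Manhattan distance
    return linkage(dist, method="average")     # average linkage

# tree = cluster_log2_ratios(ratios); dendrogram(tree) draws the result.
```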
Statistics and quantification

Means and standard errors of the means were calculated using Sigma Plot, version 11.0. Bands detected by immunoblotting were quantified using ImageJ (http://rsbweb.nih.gov/ij/).

RESULTS

HDAC3 is required for IL-1-triggered expression of the human IL-8 gene

We previously found that HDAC3 associated with c-Jun and p65 NF-κB at IL-1-regulated target genes (45). To further unravel a functional role for HDAC3 in cytokine signaling, we established stable HEK293IL-1R cell lines with a partial shRNA-mediated downregulation of HDAC3 (Figure 1A). These cells and vector controls were stimulated with IL-1 either alone or in combination with TSA, a fungal antibiotic which inhibits class I and II HDACs roughly to the same extent (46,47). Analysis of IL-8 expression by RT-qPCR showed that IL-1-triggered IL-8 mRNA was further stimulated by TSA in a synergistic manner (Figure 1A), which is in accordance with the general role of HDACs as negative regulators of acetylation-dependent chromatin relaxation (48). In marked contrast, knockdown of endogenous HDAC3 by shRNA strongly inhibited IL-1-induced IL-8 expression, indicating an important role of HDAC3 for productive IL-8 transcription (Figure 1A). To investigate this finding in a time-resolved manner, control and HDAC3 knockdown cells were stimulated for various periods with IL-1, followed by analysis of IL-8 mRNA expression by RT-qPCR (Figure 1B). These experiments revealed a critical role of HDAC3 for gene expression during the induction and peak phases of the IL-8 response. As some HDAC3-dependent effects are independent from its enzymatic function as a deacetylase (49), it was then important to measure IL-1-induced IL-8 transcription in the presence of the inhibitory compound apicidin, which selectively inhibits HDAC3 and, with minor efficiency, also HDAC2 (50,51). Increasing concentrations of apicidin blocked induced IL-8 mRNA expression in a dose-dependent manner in HEK293IL-1R cells (Figure 1C) and also in the tumor cell lines KB and A549 (Supplementary Figure S1), thus revealing the importance of the enzymatic function of HDAC3 for its stimulatory effect on transcriptional regulation of gene expression. As HDACs can regulate gene expression by global effects on chromatin compaction (52), it was interesting to investigate whether HDAC3 inhibition also affects gene expression from an IL-8 promoter that is not embedded in its native chromatin context. To address this question, HEK293IL-1R cells were transfected with a luciferase plasmid under the control of an IL-8 promoter. IL-1-triggered luciferase expression was diminished in HDAC3 knockdown cells (Figure 1D) or in the presence of apicidin (Figure 1E), showing that HDAC3-mediated effects directly control transcriptional activation of the IL-8 promoter in a chromatin-independent manner.

HDAC3 is required for IL-1-triggered expression of the murine Cxcl2 gene

To investigate the contribution of HDAC3 to inflammatory gene expression by an independent experimental approach, we performed further experiments with immortalized Mef lines generated from mice harboring floxed Hdac3 alleles, which were crossed with mice containing a tamoxifen-inducible Cre recombinase gene (33). These cells contain a heterozygous floxed Hdac3 allele in the wild-type background (Hdac3 fl/+) or in the knockout background (Hdac3 fl/−) and thus become heterozygous for Hdac3 or lack both Hdac3 alleles after Cre-mediated deletion of the floxed gene. To determine the optimal dose of tamoxifen for efficient HDAC3 knockout, cells were treated for 3 days with increasing concentrations of tamoxifen and analyzed for HDAC3 protein levels by immunoblotting. These experiments revealed a strong reduction of HDAC3 levels at a concentration of 10 µM tamoxifen (Figure 2A). Ablation of HDAC3 under these conditions also resulted in reduced cell viability (data not shown), which is in line with its reported necessity for maintenance of genome stability and S phase progression (30,33). To study the role of HDAC3 in this genetically altered system after the induction of gene deletion by tamoxifen, these cells remained untreated or were stimulated with IL-1, followed by analysis of Cxcl2 transcription by RT-qPCR (Figure 2B).
Loss of one Hdac3 allele slightly reduced IL-1-triggered Cxcl2 expression, whereas deletion of both alleles strongly impaired the transcriptional response (Figure 2B). Control experiments ensured that these inhibitory effects are not due to any tamoxifen-mediated effects, as in two other Mef lines Cxcl2 expression was not suppressed but rather weakly induced on tamoxifen treatment (Supplementary Figure S2). Moreover, four shRNA oligonucleotides directed against murine HDAC3 also suppressed IL-1-inducible Cxcl2 expression (Supplementary Figure S3). Together with the experiments involving human cells (Figure 1 and Supplementary Figure S1), these data provide several independent lines of evidence for a positive role of HDAC3 in IL-1-mediated gene expression.

HDAC3-mediated p65 deacetylation augments NF-κB transcriptional activity

The pro-transcriptional activity of HDAC3 raises the question of the molecular target(s) that mediate this effect. The expression of IL-8 (36) and Cxcl2 (53) depends on NF-κB, and HDAC3 has been reported to interact with NF-κB p65 (9). Ectopic expression of p65 activated expression of the endogenous IL-8 and Cxcl2 genes (Figure 3). Moreover, co-expression of HDAC3 enhanced p65-mediated expression of both genes, an effect which was pronounced in the presence of p300 (Figure 3A and B). To identify the molecular basis for this activating effect of HDAC3, we tested whether HDAC3 can lead to the deacetylation of p65. Cells were transfected to express His-tagged p65 and CBP along with an expression vector for HDAC3, were lysed under denaturing conditions to prevent removal of acetyl groups after cell lysis, and His-tagged p65 was enriched on Ni-NTA matrices. The acetylation status of p65 was analyzed by immunoblotting with pan-acetyl-lysine-specific antibodies or with three different antibodies recognizing site-specific acetylation at lysines 310, 314 and 315 (Figure 4A). These experiments showed that CBP-induced acetylation of p65 was largely lost in the presence of HDAC3, as revealed by the use of the pan-acetylation antibody and also by the site-specific antibodies recognizing p65 acetylated at lysines 310, 314 or 315. These data revealed HDAC3 as a major and specific p65 deacetylase. Given the reported interaction between p65 and HDAC3 (9), we designed experiments to analyze the possible regulation of this association. As substrate binding of HATs can be regulated by binding to acetylated lysine moieties via bromodomains (21), we compared binding of HDAC3 between wild-type p65 and mutants thereof, which were changed in four (K122, K123, K314, K315) or all five (K122, K123, K310, K314, K315) known acetylated lysines, either in a lysine acetylation-inactivating way to arginine (p65 K4R, p65 K5R) or in a lysine acetylation-mimicking way to glutamine (p65 K4Q, p65 K5Q). Coimmunoprecipitation experiments confirmed the HDAC3/p65 interaction and showed similar HDAC3 binding of all p65 mutants (Figure 4B). The slightly impaired binding of the p65 K4Q or p65 K5Q mutants is attributable to a lower general expression of HDAC3 in the presence of p65 K4Q or p65 K5Q, which might be due to inhibitory effects on cryptic NF-κB binding sites in the vector sequences driving the expression of the HDAC3 cDNA. To investigate the role of p65 acetylation in more detail, we generated a p65 mutant that is widely refractory to acetylation. As the p65 K4R mutant still contains the acetylated lysine 310 (10), we used a mutant in which this lysine was also mutated to arginine (p65 K5R).
Although expression of CBP triggered acetylation of wild-type p65, the p65 K5R mutant was completely devoid of any acetylation (Figure 5A), thus excluding the existence of additional major acetylation sites in p65. This experiment therefore confirmed the suitability of K5 mutants for further experiments. In a next step, transcriptional activation mediated by p65 wild-type was compared with that of various p65 K/Q mutants mimicking acetylation at one or several lysines. Cells were transfected to express similar amounts of these different p65 variants (Figure 5B), followed by analysis of IL-8 transcription by reporter gene assays (Figure 5B, upper graph) or IL-8 mRNA expression by RT-qPCR (Figure 5B, lower graph). Although wild-type p65 strongly activated IL-8 mRNA expression and transcription, the p65 K310Q mutant was slightly more active in activating mRNA expression, which is in line with published data on a positive role of acetylation at K310 for NF-κB activity (10). In striking contrast, the p65 K4Q and p65 K5Q mutants had lost their transactivating capacity to induce IL-8 mRNA expression and transcription (Figure 5B), although in vitro their DNA-binding and dimerization capacities are known to be fully intact (15). To delineate the role of acetylation-mimicking mutations of K122, K123, K314 and K315 in more detail, these residues were mutated individually or in tandem combinations. Analysis of IL-8 transcription and mRNA expression revealed a strong inhibitory effect of the K122Q mutation, whereas mutations of K123, K314 and K315 were less inhibitory (Figure 5C).

HDAC3 is required for IL-1-induced recruitment of p65 NF-κB and RNA polymerase II phosphorylation at the IL-8 promoter

Acetylation of K122 and K123 has previously been implicated in reduction of p65 DNA-binding in vitro (9). To assess the effects of HDAC3 knockdown on NF-κB nuclear translocation, overall DNA-binding and recruitment to the endogenous IL-8 gene, we used KB cells, which we had previously found well suited for chromatin immunoprecipitation (ChIP) analysis of IL-1-induced genes (36). Similar to HEK293IL-1R cells, HDAC3 knockdown in KB cells diminished IL-1-induced IL-8 expression (Figure 6A). To test the impact of HDAC3 on IL-1-triggered translocation of p65 to the nucleus, cells were stimulated with this proinflammatory cytokine and fractionated. Western blot experiments showed that knockdown of HDAC3 did not affect IL-1-induced accumulation of p65 in the soluble nuclear fraction, whereas the amount of p65 in the insoluble nuclear fraction was reduced by 50% (Figure 6B). This result suggests that HDAC3 does not modulate cytosolic pathways mediating p65 nuclear translocation but rather is required for stable association of p65 with chromatin. To investigate this possibility further, ChIP experiments were performed, which showed reduced recruitment of p65 NF-κB to the IL-8 promoter (Figure 6C). Further ChIP experiments revealed that downregulation of HDAC3 also resulted in strongly reduced amounts of RNA polymerase II phosphorylated at serine 5 at the IL-8 promoter, which is in line with its negative effects on IL-8 mRNA synthesis. Control experiments showed no association of p65 binding and RNA polymerase II phosphorylation at a region 940 bp upstream of the IL-8 promoter, demonstrating the specificity of the ChIP analyses (Figure 6C).
Compared with this upstream region, the IL-8 promoter displayed a lower density of histone H3 and higher acetylation of histone H3 at lysine 9, which suits the needs of a more open chromatin structure for this rapidly inducible gene (Figure 6C). HDAC3 knockdown cells showed no changes in histone H3 loading and acetylation (Figure 6C), arguing for specific effects of HDAC3 on transcriptional activators of IL-8 rather than more general changes of the local chromatin structure. These effects seem to be independent from stable association of HDAC3 with chromatin, as we were not able to detect stable HDAC3 recruitment to the IL-8 locus by ChIP (Figure 6C) or by ChIP followed by deep sequencing of DNA (ChIP-seq) (data not shown) with the antibodies available to us. These data indicate that acetylation of at least one residue of lysines 122, 123, 314 and 315 accounts for inhibition of p65 activity at the IL-8 gene and suggest that removal of these acetyl groups by HDAC3 is a necessary positive regulatory step in the nucleus, which facilitates contact of p65 with specific DNA elements for activating p65-driven gene transcription.

A general role of HDAC3 for the induction of the IL-1 gene response

Previous studies have shown that the effects of NF-κB p65 modifications occur in a highly gene-specific fashion (15,54). It was therefore interesting to investigate the role of HDAC3 in IL-1-induced gene expression at a genome-wide level. HDAC3 knockdown cells and vector controls were stimulated with IL-1 and compared with cells in which global acetylation was boosted by TSA-mediated inhibition of all major HDACs. Successful TSA treatment was confirmed by TSA-induced histone H3 acetylation (Supplementary Figure S4). Gene expression was measured by two independent high-density microarray experiments, followed by data filtering for genes regulated consistently in the same direction. These experiments allowed the identification of 70 genes regulated by IL-1 by at least 1.5-fold in two independent experiments (Figure 7A). Only seven genes were still inducible in HDAC3 knockdown cells, showing that HDAC3 is required for the majority of all IL-1-induced genes in HEK293IL-1R stable cell lines (Figure 7A). A hierarchical cluster analysis of the groups of genes identified in Figure 7A visualizes the individual relevance of HDAC3 for virtually all IL-1-inducible genes (Figure 7B, yellow and green bars). These findings were confirmed in a time-resolved fashion by RT-qPCR for the selected target genes CXCL1, CXCL2, CXCL3 and NFKBIA (Supplementary Figure S5). To support this conclusion by an independent experimental approach, IL-1-induced gene expression was compared between control Mefs and Mefs treated with tamoxifen to obtain Cre-inducible ablation of the Hdac3 gene. As shown in Supplementary Figure S6A, we found 95 genes that were induced by IL-1 by at least 1.5-fold in two independent experiments. The expression of 40 of them was suppressed by at least 2-fold after tamoxifen-induced deletion of Hdac3. Tamoxifen itself had weak inducing, but no inhibitory, effects on several of these genes (Supplementary Figure S6B). Although 490 genes were regulated at least 2-fold by tamoxifen in both experiments, only seven genes (1.4%) were inducible by IL-1 (Supplementary Figure S6B). In summary, these data show that the broad co-stimulatory role of HDAC3 in the IL-1-triggered transcriptional program is not restricted to human cells and can also be found in mice.
Comparison with the pan-HDAC inhibitor TSA reveals differential and restricted effects of HDAC3 on specific sets of genes

In addition, we interrogated the entire data set obtained for HEK293IL-1R cells for genes whose expression depends on TSA-sensitive HDACs as compared with HDAC3 alone. Because the group of TSA-regulated genes was very large, we initially used a 2-fold cut-off, identifying 2,983 TSA-regulated genes in control cells and 3,755 TSA-regulated genes in cells depleted of HDAC3 (Figure 8A). Of those, 2,497 genes were regulated by TSA in both cell lines, corresponding to 83.7% of all genes from control cells and 66.5% of all genes from HDAC3 knockdown cells, revealing that transcriptome-wide TSA effects are largely independent of the HDAC3 expression level (Figure 8A). This interpretation was corroborated by visualizing ratios of expression for individual genes (Figure 8B). Using an additional 1.5-fold cut-off, we identified a group of 130 TSA-sensitive genes whose expression was also altered in HDAC3 knockdown cells (Figure 8B, green bar). Of these, 68 were up- and 62 were downregulated constitutively in HDAC3 knockdown cells compared with control cells, indicating that their constitutive expression levels are modulated by HDAC3 (Figure 8B, green bar, lanes 13, 14). Some genes were elevated by both TSA and HDAC3 knockdown (e.g. CTGF, CYR61, C21orf88, LOC100287487, VCX, SOX3), suggesting that their repression is under control of HDAC3. Importantly, in several cases genes were regulated in the opposite direction by TSA treatment or HDAC3 knockdown (e.g. CGA, HSPB3, INA, PIK3R3, NAPI2, SOX11, PRAMEF3, SPARC), indicating that HDAC3 plays a highly distinct role in their regulation (Figure 8B). This interpretation is in line with the result that the overall induction or repression pattern of these 130 TSA-regulated genes was almost identical between control and HDAC3 knockdown cells (Figure 8B, green bar, compare lanes 5, 6 with lanes 11, 12), adding further evidence for a highly selective role of HDAC3 in gene expression. Moreover, as shown by the ratio values indicated by the blue vertical bar, we also found 24 TSA-sensitive genes which were also inducible by IL-1 (Figure 8B, blue bar). Although their expression was suppressed by HDAC3 knockdown (Figure 8B, blue bar, compare lanes 3, 4 with lanes 9, 10), their induction by TSA was independent of HDAC3 (Figure 8B, green bar, compare lanes 5, 6 with lanes 11, 12). Hence, the data shown in Figure 8A and B suggest that only a small fraction of TSA-regulated genes is controlled by HDAC3. Further support for this interpretation comes from the analysis of the 150 most strongly up- or downregulated TSA-sensitive genes. These highly regulated genes were affected neither by HDAC3 knockdown nor by IL-1 (Figure 8C). Therefore, the top-ranking group of genes regulated by TSA-sensitive HDACs is clearly distinct from the smaller set of genes which is regulated by IL-1 and HDAC3. In conclusion, HDAC3 knockdown does not have a general repressive or inducing effect on the transcriptome per se and has little effect on global gene expression compared with the genome-wide changes of chromatin structure induced by pharmacological blockade of HDACs. Our data also show that HDAC3 rather serves as a co-activator for the majority of IL-1-induced genes and thus identify it as an IL-1-specific nuclear signal integration node.
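The overlap percentages quoted above follow from elementary set arithmetic on the reported counts; as a quick check:

```python
# Reported counts: TSA-regulated genes in control vs HDAC3 knockdown cells.
tsa_ctrl, tsa_kd, shared = 2983, 3755, 2497
print(f"shared / control:   {shared / tsa_ctrl:.1%}")  # -> 83.7%
print(f"shared / knockdown: {shared / tsa_kd:.1%}")    # -> 66.5%
```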
Acetylation of p65 downregulates IL-1 target genes

We then investigated whether the stimulatory effect of HDAC3 on gene expression is associated with its ability to control the acetylation status of p65 NF-κB. p65-deficient Mefs stably reconstituted to express comparable amounts of wild-type, acetylation-deficient (K/R) or acetylation-mimicking p65 (K/Q) (Figure 9A) were stimulated with IL-1 and analyzed for transcription of the endogenous Cxcl2 gene by RT-qPCR. These results revealed a robust p65-dependent activation of gene expression and slightly elevated basal Cxcl2 expression in cells expressing acetylation-deficient p65 proteins (Figure 9B). However, the p65 K4Q and K5Q mutants had lost their ability to induce Cxcl2 expression in response to IL-1 at all time points tested (Figure 9B). In line with the results from transiently transfected human cells (Figure 5), the acetylation-mimicking mutant K310Q was not inhibitory, suggesting that acetylation at at least one residue of K122, 123, 314 and 315 mediates the inhibitory effect (Figure 9B). We also tested the effects of reconstituted p65 K4Q or p65 K5Q mutants on the entire IL-1 response by performing 24 high-density microarrays from two independent series of experiments.

Figure 9. Specific inhibitory effects of p65 NF-κB acetylation-mimicking mutants on IL-1-induced gene expression. (A) NF-κB p65-deficient Mefs (p65−/−) were reconstituted to express p65 NF-κB wild-type (HA-p65) and its mutant forms as described in (15). A representative western blot ensuring expression of comparable amounts of p65 proteins is shown. (B) Mefs were stimulated for various periods with IL-1 as indicated. Expression of Cxcl2 was analyzed by RT-qPCR. Relative changes of mRNA expression compared with untreated cells reconstituted with p65 NF-κB wild-type were calculated. Shown are the relative mean values ± SEM from three independent experiments compared with cells reconstituted with wild-type p65 (set as 1). (C and D) Two RNA preparations from experiments performed as described in (B) were used to prepare cRNA and to hybridize whole-genome Agilent microarrays. A set of 851 p65-dependent or IL-1-induced genes was extracted based on 2-fold regulation in both experiments. This set was further filtered for genes affected by p65 K4Q or p65 K5Q. Color-coded ratio values for the top-ranking genes of these two subsets are shown. The entire sets of data are shown in Supplementary Figure S7A and B. Gray colors indicate genes with low hybridization signals on the microarrays. (C) Expression values were used to identify 851 genes which were regulated by IL-1 (ratio IL-1 p65 wild-type/p65−/−) or are dependent on p65 NF-κB (ratio p65 wild-type/p65−/−). These genes are sorted according to mean regulation by IL-1 in descending order and are indicated by the green vertical bar. Genes regulated by IL-1 and suppressed by p65 K4Q (blue vertical bars) or by p65 K5Q (brown vertical bars) were identified by ratio values of p65 wild-type + IL-1/p65 K4Q + IL-1 or p65 K5Q + IL-1 ≥ 2.0. (D) Expression values were used to identify 167 genes which were dependent on p65 NF-κB (ratio p65 wild-type/p65−/−) and regulated by p65 K4Q (yellow vertical bars) or by p65 K5Q (purple vertical bars), based on the ratio p65 K4Q or p65 K5Q/p65 wild-type, followed by p65 K4Q/p65 K4R and by p65 K5Q/p65 K5R. These genes are sorted according to mean regulation by p65 K4Q in descending order.
We filtered these data sets using 2-fold cut-offs for p65 NF-κB-dependent and for IL-1-regulated genes. In this group of 851 genes, the majority of IL-1-induced genes was suppressed by the p65 K4Q or K5Q mutants, suggesting that acetylation of p65 is a general negative regulator of inflammatory gene expression programs (Figure 9C). Of note, the K4R and K5R mutants reverted to a large extent the K4Q or K5Q effects, arguing that these mutations inactivate the acetylation phenotype (Figure 9C). We also used the same data set to explore whether the p65 K4Q or K5Q mutants were completely devoid of gene-activating capacity. Interestingly, in unstimulated cells, we found that p65 K4Q or p65 K5Q upregulated 77 genes and repressed 56 genes. As shown in Figure 9D and Supplementary Figure S7, a large part of these genes was repressed in p65-deficient cells and re-expressed upon reconstitution of p65 wild-type, proving that they are bona fide p65 target genes. In most cases, K/Q and K/R mutants had opposite effects on gene expression (Figure 9C and D). These data support the conclusion that acetylation-mimicking mutations confer gene-specific properties on p65. Moreover, such mutants had lost their ability to trigger IL-1-dependent gene expression, thus corroborating the concept that inhibitory p65 acetylations need to be relieved by HDAC3 to activate cytokine-responsive genes.

DISCUSSION

In this study, we provide several lines of evidence that, besides its well-defined role as catalytic subunit of co-repressor complexes, HDAC3 can also function as a co-activator in inflammatory signaling pathways. Our data further suggest that a substantial portion of the co-activating property of HDAC3 is mediated by the removal of inhibitory acetyl groups from NF-κB p65, as (i) HDAC3 deacetylates p65 NF-κB at all known lysine residues; (ii) 19 (48%) of 40 HDAC3-dependent IL-1 response genes require p65 for expression (Supplementary Table S1); and (iii) mimicking acetylation of p65 at these sites results in p65 variants with strongly diminished gene-activating properties at inflammatory genes. The p65 K5Q mutant is not generally transcriptionally inactive, as a recently published gene array analysis in reconstituted and TNF-stimulated cells showed that p65 K5Q was able to induce the TNF-stimulated genes Ccl9 and Mmp9 at levels comparable with the wild-type p65 protein (15). Here, we extended this analysis and found a larger group of genes whose constitutive p65 NF-κB-dependent expression was enhanced by either the K4Q or the K5Q mutant (Figure 9). These data exclude non-specific and general functional defects of the p65 K/Q mutants and underscore their suitability as gain-of-function mutants mimicking p65 acetylation. Of note, data acquired with the p65 K/R mutants are more difficult to interpret, as they reflect defective acetylation as well as strongly impaired ubiquitination (15). This also relates to findings on the role of K122 and K123, which have previously been implicated in a mechanism involving IκB-mediated removal of p65 from the IL-8 promoter on the basis of KK122/123/RR mutations (9). These mutations may either affect p65:chromatin interactions or alter degradation of promoter-bound p65 (55). This may also explain why the functional exploration of p65 acetylation (which thus far has only been addressed by the use of K/R mutants) has so far not revealed a coherent picture (5,7,8,12). Thus, the data provided in this study using K/Q mutations provide novel evidence for specific gene-regulatory functions of p65 acetylation.
Research on p65 acetylation has also been limited because primarily the acetylation site K310 was investigated, owing to the availability of a specific antibody (10,11,16,56). Here, using two other specific antibodies developed by us (12,14), we show that modifications at K314 and K315 play a different role. As for K310, acetylation at these sites is removed by HDAC3. But in contrast to acetylation of K310, which activates transcription of the cIAP-2 gene (57), we show here by means of the K4Q mutant and of individual K/Q mutants that acetylation at K314 and K315, in addition to acetylation of K122 and K123, negatively regulates p65 functions at inflammatory target genes. Of note, detection of p65 acetylation requires overexpression of HATs and p65 (Figure 4), suggesting that acetylation is a low-abundance and transient event that affects only a fraction of all p65 protein present in the cell. HDAC3 can be found in the cytosol and in the nucleus (28), but the subcellular compartment for HDAC3-mediated regulation of the p65 acetylation status is presently not known. Our data show that most of the HDAC3 protein in epithelial cells is found in the nucleus (Figure 6B) but does not stably associate with chromatin (Figure 6C). Moreover, a DNA-binding-defective p65 point mutant (p65 E39I) is still deacetylated by HDAC3 (Supplementary Figure S8), arguing that deacetylation need not necessarily occur at chromatin-bound p65. Like most classical HDACs, HDAC3 is contained in large high-molecular-weight co-repressor complexes of up to 1-2 MDa (21,23,25,26). HDAC5 and HDAC8 did not deacetylate p65 in our assay systems, whereas HDAC1 and HDAC2 also deacetylated p65 at lysines K314 or K315 (Supplementary Figure S9). As knockdown of HDAC1 and HDAC2 also impairs IL-1-inducible Cxcl2 expression (Supplementary Figure S10), it will be interesting to define the individual roles of other regulators of p65 acetylation in gene expression in future studies. Of note, we also observed HDAC3-regulated genes which are not p65-dependent (Supplementary Table S1), showing that HDAC3 also operates through further p65-independent mechanisms. During the course of this study, we also noted that IL-1-induced genes were more strongly affected by HDAC3 suppression than TNF-induced genes (Supplementary Figure S11). IL-1 and TNF activate overlapping sets of proinflammatory genes and share many signaling components, but emerging evidence also suggests specific pathways, as exemplified by the usage of different E2 and E3 ubiquitination enzymes in their NF-κB-activating pathways (58). It will therefore be interesting to clarify the specific role of HDAC3 in cytokine pathways in future studies. In line with emerging functions of HDAC3 in innate immunity, deletion of the enzyme in macrophages was recently shown to affect histone acetylation, leading to the expression of a set of IL-4-regulated genes. These findings demonstrate a specific role of HDAC3 in controlling macrophage polarization involving epigenetic mechanisms (59). Our study now reveals an unexpected role of HDAC3 in the control of IL-1-triggered pro-inflammatory gene expression. Apparently, the IL-1 pathway directs the recruitment of HDAC3 into gene-regulatory complexes distinct from the known co-repressor complexes. At present, the nature of these co-activator complexes is unknown, and they may not require stable association with chromatin.
This non-epigenetic effect is of biomedical relevance, as non-selective HDAC inhibitors are used as anti-cancer drugs and are currently being tested in a number of clinical trials for the treatment of further diseases (46,60). As an example, clinical data show anti-inflammatory properties of HDAC inhibitors that can be used for the treatment of graft-versus-host disease (61). In addition, animal models have revealed strong anti-inflammatory effects of pharmacological modulators of HDAC activity (62). HDAC inhibitors show antiphlogistic efficacy in models of inflammatory bowel disease (63) and of rheumatoid arthritis (64). Chemokines such as IL-8 or CXCL2 mediate leukocyte trafficking, leukocyte infiltration, angiogenesis or metastasis (65)(66)(67). Hence, IL-1-driven signaling mechanisms that control the magnitude and kinetics of expression of these chemokines are important modulators of disease processes. By demonstrating a general role of HDAC3 in the positive regulation of IL-1 responses at the cellular level, this study contributes novel insights into the molecular mechanisms underlying the anti-inflammatory effects of HDAC inhibition.

SUPPLEMENTARY DATA

Supplementary Data are available at NAR Online: Supplementary Tables 1-3, Supplementary Figures 1-11 and Supplementary Materials & Methods.
Tumor spread or siege immunity: dissemination to distant metastasis or not

ABSTRACT

Metastasis is the leading cause of cancer mortality. We have investigated the tumor microenvironment at all steps of the metastatic cascade (early metastatic dissemination, synchronous metastasis, metachronous metastasis) to delineate the impact of tumor and immune parameters on this process. Tumors with and without signs of early metastatic invasion (venous emboli, lymphatic invasion, perineural invasion; collectively, VELIPI) had similar levels of inflammatory and immunosuppressive molecules. Cancer mutations, gene expression levels and chromosomal instability did not significantly differ in primary tumors from patients with or without metastasis. In contrast, tumors without early metastatic invasion were highly infiltrated with Th1 and memory T cells and were associated with a good outcome. A cytotoxic immune signature, the Immunoscore, and increased lymphatic vessels at the invasive margin of tumors protected against the generation of distant metastases. The metastatic landscape was highly heterogeneous, with each of a patient's metastases bearing diverse tumor-cell clones and diverse immune microenvironments. The Immunoscore within a random metastasis significantly predicted major differences in patient survival, and the Immunoscore from the least immune-infiltrated metastasis was the most strongly associated with patient long-term survival. We propose an alternative theory of tumor evolution, in which an immune selection model best describes tumor evolution in humans. Metachronous metastasis revealed that immunoedited tumor clones are eliminated while immune-privileged clones progress; this underlines the relationship between clonal seeding and immune surveillance and advances the understanding of cancer evolution. A strong intratumoral immune infiltrate and Immunoscore prevent metastatic invasion at all its steps and are associated with prolonged survival.

KEYWORDS: T-cells; Immunoscore; clonal evolution; dissemination; metastasis; venous emboli; tumor microenvironment; prognosis; survival; immunity

Importance of the immune contexture against metastatic spread

Despite the major clinical importance of metastasis, the metastatic process remains largely unclear. The tumor develops in a complex microenvironment comprising fibroblasts, blood vessels, lymph vessels, and many immune cell types. The astounding complexity of multifactorial diseases such as cancer poses significant challenges to the development of precision therapies. We previously performed a clinical study on human colorectal cancer showing that intratumoral effector-memory T cells may control the early steps of the metastatic process. We further developed an analysis of the in situ immune reaction based on the location of memory T cells within distinct tumor regions. We showed that tumor recurrence and overall survival times were dependent on the presence of cytotoxic and memory T cells within the tumor. 1 The type, density, quality and location of immune cells within the tumor site predicted patients' survival better than the classical TNM system. [1][2][3][4] This led to the novel concept of the cancer immune contexture 1,3 and to the development of a consensus assay to measure the anti-tumor adaptive immune response, called the "Immunoscore." [4][5][6] We highlighted the continuum of cancer immunosurveillance, 1 from precancerous lesions, 7 to locally advanced disease, 4,6 to metastasis. 8
Genomic data, using whole-genome sequencing of tumors, did not reveal the selective pressures within the primary carcinoma that lead to the formation of mutations associated with progression into metastasis. Therefore, an appealing hypothesis was that the selective pressure was related to the microenvironment, especially to the immune response. We addressed three major questions. First, which primary tumor-related genes affect distant metastasis? Second, which factors among blood and lymphatic vascularization and the immune reaction are associated with distant metastasis? And third, is distant metastasis a cause or a consequence of an alteration of such factors? Integrative cancer immunology approaches allowed us to gain a comprehensive view of tumor chromosomal instability, gene expression patterns and the immune system's evolution along with tumor dissemination to distant metastasis. We performed a comprehensive analysis of both tumor and microenvironment factors, including angiogenesis (blood and lymphatic vessels) and many immune cell subpopulations, in relation to synchronous distant metastasis. This represents the most complete analysis of the tumor microenvironment in human cancer. 9 We analyzed three large independent cohorts of patients with colorectal carcinoma (a total of 570 patients). Our analysis of tumor-related gene expression and of chromosomal instability did not reveal over-expressed or amplified factors implicated in tumor spread. In fact, each tumor had a unique, different set of amplifications and deletions and a particular tumor-related gene expression profile. No mutations in cancer-associated genes or pathways were associated with M-stage. Instead, mutations of the FBXW7 gene were associated with the absence of metastasis (M0) and correlated with increased expression of antigen presentation-related genes and of T cell proliferation. In contrast, our comprehensive analysis of the tumor microenvironment revealed the importance of the immune contexture, lymphocyte cytotoxicity and lymphatic vessel densities in the metastatic process. Our data show that distant metastasis is a consequence rather than a cause of the decrease of lymphatic vessels and lymphocyte cytotoxicity in colorectal tumors, and that the immune response might be a major determinant preventing the synchronous spread of tumor cells to distant organs (Figure 1). Our comprehensive study on large cohorts of patients provides a fundamentally new understanding of the metastatic process in humans.

Parameters associated with early metastatic dissemination

Integrative analyses of the tumor microenvironment at all steps of the metastatic cascade, starting with local tumor cell invasion and vascular invasion, followed by colonization at distal sites, allowed us to draw a comprehensive picture of metastatic spread and to delineate the contribution of tumor and immune parameters to this process. Early metastatic invasion is recognized by the presence of vascular emboli (VE), lymphatic invasion (LI) and perineural invasion (PI). VELIPI-negative tumors were characterized by increased levels of Th1 effector cells and memory T cells and a higher Immunoscore, and were associated with a good outcome. Such a strong intratumoral immune presence prevents metastatic invasion and is associated with prolonged survival. The main parameters associated with dissemination to distant metastasis are in fact immune-related and not tumor-related. None of the known cancer-associated genes or pathways was associated with metastasis, and no significant differences in known cancer gene expression levels, chromosomal instability, or key cancer-associated mutations were observed. In contrast, the Immunoscore, a cytotoxic immune signature, and increased marginal lymphatic vessels protected against the generation of distant metastases, regardless of genomic instability. 9 The in situ T cell infiltrate can now be quantified with the 'Immunoscore,' an immune-based assay that is superior to the AJCC/UICC TNM classification for colorectal cancer patients. 3 The heterogeneous metastatic landscape, described by the number and size of metastatic lesions, their mutational pattern and their immune cell infiltrate, evolves under the immune pressure that sculpts the evolution of its clones. During metastatic progression, the immunoedited clones are eliminated, while the immune-privileged clones persist and progress, underlining the relationship between clonal seeding and immune surveillance. The immunoediting score was associated with an active immune response, implying a predictive potential for immunotherapy.

Figure 1. Prevention of metastasis circle. Parameters associated with an absence of dissemination and an absence of metastatic spread. Lymphatic vessels at the invasive margin of the tumors, cytotoxicity within the core of the tumor, generation of a memory T-cell response, and a high Immunoscore were associated with the generation of a productive immune contexture preventing the dissemination of tumor clones through immunosurveillance and the immunoediting process.
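The Immunoscore invoked throughout this section is, in its published consensus form, derived from the densities of CD3+ and CD8+ T cells measured in the tumor core and at the invasive margin, each converted to a percentile against a reference cohort and the mean binned into categories from I0 (low) to I4 (high). The sketch below illustrates that scoring logic only under those assumptions; the region keys, reference distributions and cutoffs here are placeholders, not the validated assay parameters.

```python
import numpy as np

REGIONS = ("CD3_CT", "CD3_IM", "CD8_CT", "CD8_IM")  # CT = tumor core, IM = invasive margin

def immunoscore(densities: dict, reference: dict) -> int:
    """Toy Immunoscore: convert each CD3+/CD8+ density (cells/mm^2) to a
    percentile against a reference cohort, average the four percentiles,
    and bin the mean into categories I0 (low) .. I4 (high)."""
    percentiles = [(np.asarray(reference[k]) < densities[k]).mean() * 100
                   for k in REGIONS]
    mean_pct = float(np.mean(percentiles))
    cutoffs = [10, 25, 70, 95]                       # placeholder category boundaries
    return int(np.digitize([mean_pct], cutoffs)[0])  # 0..4, i.e. I0..I4
```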
Any of the known cancerassociated genes or pathways were associated with the metastasis and no significant differences in known cancer gene expression levels, chromosomal instability, or key cancer-associated mutations were observed. In contrast, the Immunoscore, a cytotoxic immune signature, and increased marginal lymphatic vessels, protected against the generation of distant metastases, regardless of genomic instability. 9 The in situ T cell infiltrate can now be quantified with the 'Immunoscore,' an immune-based assay that is superior to the AJCC/UICC TNM classification for colorectal cancer patients. 3 The heterogeneous metastatic landscape, described as the number and size of metastatic lesions, their mutational pattern as well as their immune cell infiltrate, evolves under the immune pressure that sculpts the evolution of its clones. During metastatic progression, the immunoedited clones are eliminated, while the immune privileged clones persist and progress underlining relationships between clonal seeding and immune surveillance. The immunoediting score was associated with an active immune response, implying a predictive Figure 1. Prevention of metastasis circle. Parameters associated with an absence of dissemination and absence of metastatic spread. Lymphatic vessels at the invasive margin of the tumors, cytotoxicity within the core of the tumor, generation of a memory T-cell response, and high Immunoscore were associated with the generation of a productive immune contexture preventing the dissemination of tumor clones through immunosurveillance and immunoediting process. potential for immunotherapy. Based on the evolvogram, we have proposed a tumor clone development model, called parallel immune selection model, that, in contrast with existing tumor-cell centric models, is linked to the intra-metastatic immune microenvironment via the immunoediting process ( Figure 1). 8 An evolutionary maps of metastasis that guides clinical decisions require the investigation of primary tumors and matched metastatic lesions, as well as of the immune microenvironment, which sculpture their evolution. 8 The successes of immunotherapies boosting natural T-cell response against cancer have generated tremendous enthusiasm and combination immunotherapies will likely become in the future the standard for cancer treatment.
Polarization Independent Ground State Optical Transitions in Closely Stacked InAs/GaAs Columnar Quantum Dots

This work presents an analysis of the electronic and optical properties of InAs/GaAs columnar quantum dots (QDs) by performing multi-million-atom tight-binding simulations. The plots of the polarisation-dependent ground state optical transition strengths predict that a nearly zero degree of polarisation can be achieved at 1550 nm emission/absorption wavelength by engineering the number of QD layers in a columnar QD. These results are promising for the design of optical devices requiring a polarisation-insensitive optical response, such as semiconductor optical amplifiers.

INTRODUCTION

In the last couple of decades, single and stacked InAs quantum dots (QDs) have been extensively studied for their potential application in the design of optical devices working at the telecommunication wavelengths (1300-1500 nm). A recent focus has been to achieve polarization-insensitive optical transitions from InAs quantum dots by exploiting strain interactions between the QD layers in multi-layer quantum dot stacks. Experimental measurements [1,2] have shown that ground state optical transitions with an isotropic polarization response can be achieved by closely stacking InAs quantum dots in the form of a 'columnar' quantum dot. This work analyses the polarization response of columnar quantum dots consisting of up to 30 quantum dot layers. Our calculations indicate that a nearly isotropic polarization response can be achieved for about 11 quantum dot layers, which is in agreement with a recent experimental study where TM-dominant optical spectra were measured for similar columnar QD stacks containing more than 9 layers [1].

METHODOLOGIES

The modelling and simulation of columnar quantum dots is a significant computational challenge. The anisotropy of the charge density distribution |ψ|² inside the closely stacked quantum dots is significantly affected by the underlying crystal symmetry. The strain and piezoelectric potentials further lower the symmetry and need to be incorporated at the atomistic level. Continuum modelling techniques such as k·p or the effective mass approximation therefore fundamentally lack sufficient physics to quantitatively model such devices. Furthermore, these very large quantum dot stacks (more than ten QD layers) require simulation domains containing millions of atoms. The NEMO 3-D software package has been developed to handle large quantum dot systems with atomistic resolution [3]. It has demonstrated the capability to simulate strain in large QD systems consisting of up to 64 million atoms [3] based on a valence force field (VFF) method [4]. The electronic structure calculations are performed by solving a twenty-band sp³d⁵s* empirical tight-binding (TB) Hamiltonian [5]. The linear and quadratic piezoelectric potentials are included in the calculations. The interband optical transitions are computed using Fermi's golden rule from the squared magnitude of the optical matrix elements summed over degenerate electronic states [6,7].
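For reference, the transition-strength calculation named in the last sentence can be written out explicitly. In the usual dipole approximation, the strength of a transition between an electron state and a hole state for light polarized along a unit vector ê is proportional to the squared momentum matrix element summed over degenerate states; this is the standard textbook form, and the tight-binding implementations cited in [6,7] may differ in prefactors:

$$ T_{\hat{e}} \;\propto\; \sum_{\mathrm{deg.}} \bigl|\langle \psi_e \,|\, \hat{e}\cdot\hat{\mathbf{p}} \,|\, \psi_h \rangle\bigr|^2 $$

The TE (in-plane) and TM (growth-axis) strengths are obtained by choosing ê in the (001) plane or along [001], respectively.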
RESULTS AND DISCUSSIONS

Figure 1(a) shows a schematic diagram of the simulated system. An InAs columnar QD consisting of 'N' (1 ≤ N ≤ 30) quantum dot layers is placed inside a GaAs buffer. The columnar QD consists of 'N' 1.5 nm tall InAs QDs placed on top of 1 monolayer (ML) thick InAs wetting layers. The base diameter 'B' of each QD is 17 nm and the height 'H' is N×1.5 nm. The size of the GaAs buffer (60×60×100 nm³, containing around 23 million atoms) is chosen to be large enough to properly accommodate the long-range effects of the strain and piezoelectric fields. The atomistic relaxation is performed with realistic boundary conditions, i.e. the bottom fixed, the lateral dimensions periodic, and the top free to relax. The electronic structure calculations have closed boundary conditions in all directions.

Figure 1(b) plots the ground state optical transition wavelength (E1-H1) as a function of the number of QD layers N. As N increases, the wavelength also increases, until it saturates for N > 10. Similar results were shown in the experimental study [1]. The saturation of the wavelength at ~1550 nm is due to the change in the orientation of the electronic states from 2-D (xy) symmetry to 1-D (z) symmetry inside the columnar QD (see Figure 2).

The polarization response of QD samples is characterized in terms of the degree of polarization (DOP) [1,2] of the optical transitions. The DOP is defined as the ratio (TE-TM)/(TE+TM) [1]. Figure 1(c) presents plots of the DOP and the aspect ratio (AR = H/B) as functions of N. For relatively flat QDs (AR < 0.6), the (001) confinement is very strong. The topmost valence band state is dominantly a heavy hole (HH) like state with a weak light hole (LH) contribution; therefore, the TE mode coupling is much stronger than the TM mode coupling. As the number of quantum dot layers N increases, the (001) confinement is relaxed. For large values of the AR, the in-plane strain becomes strong and hence the electronic states start becoming more confined along the x-y directions. The reversal of the biaxial strain sign increases the LH contribution and, as a result, the TE mode decreases and the TM mode increases [8]. For N ~ 11, the magnitudes of the TE and TM modes become nearly equal (TE/TM ~ 1.07) and, as a result, the DOP is ~0.036. A further increase in N results in the TM mode becoming stronger than the TE mode, and hence the DOP takes negative values.

Figure 2 shows the wave function plots of the highest valence band state H1 and the lowest conduction band state E1 for N=1, N=11, and N=21. For N=1, the strong (001) confinement results in nearly flat wave functions in the xy plane. For N=11 and N=21, the elongation of the wave functions along the (001) direction is quite evident.

In summary, our calculations show that the desirable isotropic polarization response (DOP ~ 0) can be achieved from columnar InAs QDs for N ~ 11. The computed results are in good agreement with the reported measurements. The demonstration of an isotropic polarization response of the ground state optical transition at ~1550 nm wavelength clearly shows the promising potential of such columnar QDs for the design of optical devices such as semiconductor optical amplifiers (SOAs).
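Given TE and TM transition strengths from such a calculation, the DOP defined above is a one-line computation; a minimal sketch reproducing the quoted numbers:

```python
def degree_of_polarization(te: float, tm: float) -> float:
    """DOP = (TE - TM) / (TE + TM), as defined in the text."""
    return (te - tm) / (te + tm)

# With the quoted TE/TM ~ 1.07 near N = 11 layers:
print(degree_of_polarization(1.07, 1.0))  # ~0.034, i.e. nearly isotropic
```

The small discrepancy with the quoted DOP of ~0.036 simply reflects rounding of the TE/TM ratio.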
Figure 1: (a) Schematic of the simulated system. (b) Plot of the ground state optical wavelength (E1-H1) as a function of the number of quantum dot layers N. (c) Plot of the degree of polarization (DOP) and aspect ratio (H/B) as a function of the number of quantum dot layers N.

Figure 2: Plots of the wave functions for the highest valence band state H1 and the lowest conduction band state E1 for N=1, N=11, and N=21.
Judicial Intervention to Corporate Governance: Causes and Approaches

A corporation is a legal person with group personality created by the law and is considered to have the freedom of autonomy as a subject of private law. A corporation is also considered to be the shareholders' tool to pursue profit, like a kite flown by shareholders, especially substantial shareholders. A corporation is considered a contractual combination, a gathering place of stakeholders, and a cell of society. A corporation has even been described as a Utopia. With different interest claims, subjects such as management, shareholders, directors, creditors and employees coexist in a corporation through the formal structure of corporate governance.

Introduction

1. The need to protect the rights of minority shareholders under shareholder autonomy

The traditional theory of company law holds that a company's legal personality is just its superficial characteristic, that shareholders are the holders of owner's equity, and that the fundamental nature of the corporation is therefore that of a shareholders' investment tool. Shareholders occupy the core position as company owners in corporate autonomy, and hence modern company autonomy is, in essence, shareholder autonomy. The emergence and development of companies and of the company system took place with shareholders as the core driving force, and the performance of the company system mainly depends on the realization of shareholder autonomy. However, corporate governance with shareholder autonomy at its center means that the majority shareholders' opinions will be adopted to deal with company business and management affairs. Therefore, the majority shareholders in a company have the final decision-making right, and their decision on any matter made through an ordinary or special resolution is binding on all shareholders. As to matters decided by majority voting, the minority shareholders have to comply with the majority shareholders' will. 3 According to theoretical study and practical observation, majority shareholders take advantage of their controlling position to suppress the minority shareholders, to circumvent certain items in the articles of association, or to extract the maximum interest and wealth from the company for themselves. When the majority shareholders abuse these so-called democratic rights arbitrarily, the company will not ensure an even bargain and shareholders' individual rights will not be respected. The abuse of majority rule shows even more clearly in a non-listed company, because "In a closed setting, the corporate norms of centralized control and majority rule can become instruments of oppression. Some decisions vitally important to participants, such as their employment and salary, are left to the board of directors to make. When harmony between participants disappears, the minority participants may find that the majority interest can manage the affairs of the corporation in unexpected ways. The majority, dominating the board, can terminate minority shareholders' employment as officers, thereby diminishing the return on their investments. The corporation may not pay dividends to any shareholders to avoid double taxation, yet the majority shareholders will continue to receive a return on their investment in the form of salary or perhaps rent or interest on money loaned to the corporation. Indeed, these amounts may increase after the minority shareholders are excluded.
Traditionally, the minority shareholders have had no way to protect themselves against such an occurrence. If minority shareholders attempted to contract for protection against this possibility, such as by agreements that the minority shareholder retain a corporate office and a salary, courts earlier this century struck down the agreement as an unlawful interference with the unfettered discretion of directors. The permanence of the corporate form further compounds the minority shareholder's dilemma. Without a job and in the absence of dividends, the minority shareholder may face an indefinite future with no return on the capital he or she contributed to the enterprise. The majority may even be able to deny the minority shareholders any return in the long run by siphoning off corporate assets in the form of high salaries or rents, insulated from judicial review by the business judgment rule. In addition, the majority may force the minority to leave the company with unjust excuses." 4

There are many state-owned firms and family businesses among listed firms in mainland China. The traditional centralized economic system and cultural atmosphere have produced a highly concentrated ownership structure in listed companies. Moreover, listed firms have a large number of shareholders, most of whom are dispersed retail investors; they trade shares frequently and, as collective "free riders", participate only passively in corporate governance. All of these factors make "insider control" ubiquitous in listed firms. In these firms the controlling shareholders enjoy practical privileges far exceeding those of the minority shareholders: they not only hold shareholders' rights but also manipulate and control the exercise of the rights of supervisors, directors, and even managers. By taking advantage of their control over the general meeting of shareholders and the board of directors, the big shareholders push through resolutions to achieve their own special purposes while disregarding, and even infringing, the rights and interests of small investors. Such disputes arise repeatedly in listed companies, for example over related-party transactions and the misappropriation of company assets. The board of directors loses its independent status, making independent business decisions difficult to reach and, at the same time, leaving its supervisory function unperformed. 5 Company law also has an evident shortcoming: while it is reasonable for resolutions of the shareholders' meeting to substitute for shareholders' common-interest rights, substituting for or suppressing shareholders' self-interest rights lacks a legitimate basis.
The shortcomings of company law are these: because the relationships between controlling shareholders and minority shareholders, between companies and directors, between shareholders and directors, between creditors and companies, and between companies and staff are very complicated, and because the habitual approach to corporate governance centers on shareholders, company law neither attends to the rights and problems of many shareholders nor establishes an effective relief system. It is necessary to draw on certain concepts and values of Anglo-American law, such as the "fiduciary obligation", the "judgment of operators", and the "business judgment" rule, so that judges can exercise discretion in judging the legitimacy of shareholder conduct and protect shareholders' rights and interests in particular cases. In the judicial system of mainland China, the principles, approaches, and necessary limits of judicial intervention in corporate governance may be established first through Supreme Court cases guiding the lower courts, then by way of judicial interpretation, and ultimately by amending the Company Law.

The realization of lawful order in company governance comes through the unification of company autonomy and national enforcement (legislative regulation and judicial intervention). Judicial intervention in company management substantially reflects the state's will to correct company autonomy. Cheffins, a Canadian expert on company law, analyzes judicial intervention in the operation of companies from the dual goals of promoting efficiency and realizing fairness. 8 As to the goal of efficiency, first, judicial intervention is necessary because incomplete information, including systemic information problems and information asymmetry, may create gaps in the corporate contract, fraud in contracting, and a "lemons market", wasting company resources; second, the cost of privately concluding complete contracts is very high, and contracting costs can be reduced through the state's mandatory legal rules; and third, national enforcement can solve the problems of negative externalities and collective action (game theory explains why individual traders, each acting to maximize their own benefit, may arrive at an inefficient outcome). As to the goal of fairness, national enforcement can appropriately prevent fraudulent, misleading, and coercive conduct while paying closer attention to the disadvantaged. Moreover, the actual controllers of the company must shoulder strict fiduciary duties so that fair treatment of small and large shareholders, and a balance of interests between them, can be realized. Furthermore, national enforcement can restrain disorderly competition and create a fairly competitive market mechanism, ensuring that market participants observe basic moral codes.

Feasibility of judicial intervention in company management

(I) The judicial function includes an external safeguard for company governance

Corporate governance is a complex system covering different institutional structures inside and outside the company. The judiciary and the administration together constitute the external monitoring mechanism of company governance.
The judicial organs, and the legal system they represent, are also an external safeguard mechanism of company governance. Judicial organs represent the will of the state in adjudicating the company's autonomous (private) conduct, correcting autonomous conduct that is irregular or that unbalances interests, safeguarding the rights of the relevant parties, and maintaining a sound order of company autonomy. Judicial intervention in company autonomy through civil action can safeguard rights, coordinate interest relationships within the company and between the company and outside parties, and promote the orderly and effective operation of company autonomy.

(1) The rights-safeguarding function of the judiciary (rights-protection and relief intervention)

Rights are divided into substantive rights and procedural rights (rights of relief). "Where there is a right, there is a remedy; where there is no remedy, there is no right." The purpose of civil action lies in the protection of substantive rights by offering them the necessary remedies. 9 On the other hand, the purpose of civil action also lies in settling disputes, so the system of civil action becomes meaningless if disputes between individuals fail to be settled. 10 Disputes are the object of the judicial function, and safeguarding rights is the function of the judiciary.

There is a chain of rights in companies, and shareholders' rights are the base for the other rights in the company. It is through shareholders' conduct that the group personality of the company is established. The separation of shareholders' rights from the company's rights maintains each party's independent personality. The best organizational structure for maintaining the company's independent personality is the establishment of a powerful board of directors, not a board of shareholders; the members of a board of directors need not be shareholders themselves. Only an operating group not composed entirely of shareholders can independently take into consideration the interests of the other personality subject, the company itself. 11 Besides the main links in the company's chain of rights, such as shareholders, the company, and directors, there are creditors, employees, and other rights-holders. The "ecological balance" of the company's rights will be destroyed if the rights at any link are too strong or too weak. Therefore, the remedy of rights is an external means of maintaining the ecological balance of company rights. The fundamental standards for evaluating the effectiveness of judicial intervention in company governance are whether disputes among the relevant parties under company law are settled and whether rights are safeguarded.

(2) The order-maintaining function of the judiciary (order-maintenance intervention)

Civil action enables successful parties to obtain protection, so that parties can predict the results of planning and acting in accordance with the norms of substantive law; self-disciplined compliance results in an overall stable social life. 12 Company autonomy is a private order inside and outside the company supported by the corporate governance structure. Good company governance is a fair, orderly, and effective autonomous structural system formed by multiple rights-holders such as the company, shareholders, directors, managers, creditors, and staff.
Judicial intervention in company governance can correct the malfunction and disorder of company autonomy, besides settling disputes and safeguarding rights. As a part of the corporate governance system, the judicial body intervenes effectively and promptly in corporate autonomy and guides the company to perfect its governance structure, returning corporate autonomy to the goals of just, fair, and effective corporate governance.

Judicial activism has been defined as a philosophy of judicial decision-making whereby judges allow their personal views "about public policy, among other factors, to guide their decisions, usually suggesting that adherents of this philosophy tend to make constitutional violations and are willing to ignore precedent." 13 There exists a trade-off between judicial activism and judicial restraint. The application of judicial activism is an affirmation of judicial discretion, which allows judges to derive rules and bridge gaps in the law so as to integrate the concrete application of law with justice. Lord Denning explained the complementary function of judicial discretion within legal limits, describing its basic principle thus: judges must not change the material of which the law is woven, but they may iron out the creases. 14

While upholding judicial activism, one must be alert to conservative judicial conceptions. One such conception holds that the courts should not interfere in the company's internal management affairs, should not intervene in the company's operational decision-making, and should not touch business judgments on the company's commercial matters. Yet in the field of private law, the judiciary follows the criterion that where there is a dispute there is adjudication; "the court shall not refuse to adjudicate a civil dispute on the ground that the law provides no applicable provision" is a basic norm of civil justice. The conception that the judiciary never intervenes in corporate affairs plainly does not accord with the principle of final judicial adjudication. Another conservative conception equates any judicial consideration of corporate internal affairs with interference in enterprises, reminiscent of the blurring of government and enterprise and the excessive state control of the planned economy era. The judiciary should respect company operations and should not actively intrude into companies, but this does not mean that the judiciary, like the administration, should stay "as far away as possible" from companies; the judiciary and the administration, after all, perform very different functions.

In the US, judicial activism is mostly associated with the legal realism school founded by Holmes. When a judge deals with a specific dispute, a balance must be struck between the codes held tightly in his left hand and the integrated factors touched by his right hand, such as the specific facts of the case, social influences, morality, ethics, policies, and legal principles; on this balance a final decision is made. Therefore, when a judge becomes involved in corporate governance affairs in the name of judicial review, he or she must adhere to legality review first and rationality review second, and to formal examination first and substantive examination second, so as to remain cautious toward corporate affairs, respect corporate autonomy, and maintain corporate order.

Judicial intervention in corporate governance must also guard against opportunism by some shareholders.
If the cost of suing, or of judicial remedies short of suit, is low, it eases the protection of minority shareholders' rights but may also bring the disadvantages of opportunism. Once the courts intervene in a corporate dispute, management must devote effort to dealing with the intervention, causing some damage to the interests of the corporation as a whole. Aware of this, many parties may use the threat of such proceedings as a bargaining chip to extract improper benefits and, in effect, to blackmail the company.

The Company Law has provisions on the protection of substantive rights. In cases heard through ordinary civil procedure that involve active judicial intervention in the internal decisions of the company and the policymaking of its operators, however, the judicial bodies should specify and implement, through judicial interpretation, the mechanism of judicial remedies for substantive rights protection stipulated in the Company Law. The protection of shareholders' rights is an example. The existing Company Law gives considerable protection to shareholders' rights, but it is relatively conservative and rigid, lacking a broader remedy and protection mechanism for conduct that harms shareholders' rights but is not listed in the Company Law. For example, it lacks a mechanism to set aside invalid resolutions that damage part of the shareholders' rights; to confirm or forbid share transfers that damage the rights of other shareholders; or to force the company or other shareholders to purchase the shares of a shareholder who is unfairly prejudiced by a resolution of a shareholders' meeting. Within the framework of the existing Company Law, the following matters should be handled through ordinary civil procedure by way of judicial interpretation: the expansion of the shareholder derivative action, the confirmation and valuation of targeted share repurchases, and corporate governance disputes such as review of the legality of company rules, regulations, and articles of incorporation.

The unfair prejudice remedy in British company law is worth drawing on. Unfair conduct that violates the by-laws of a company usually infringes the personal rights of shareholders. At common law, shareholders exercise the personal rights conferred by the company's by-laws only in a limited way, and they cannot sue over a mere internal irregularity. British scholars believe that the personal rights of a shareholder should not be changed or removed by other shareholders, and they advocate protecting shareholders by expanding the rights conferred by the by-laws to other rights within reasonable expectation. In 1962 the Company Law Committee, also known as the Jenkins Committee, put forward a remedy for unfairly prejudicial actions, proposing to expand the court's power so that it could intervene in company affairs by granting remedies for unfairly prejudicial conduct on the principle of equity. The proposal was not adopted into British company law until 1980.
Article 994 of the British Companies Act stipulates that a member of a company may apply to the court by petition for an order on the grounds that the company's affairs are being or have been conducted in a manner that is unfairly prejudicial to the interests of members generally or of some part of its members (including at least himself), or that an actual or proposed act or omission of the company is or would be so prejudicial. 16 The Jenkins Committee's report points out that unfairness means an obvious departure from the standards of fair dealing, a violation of the rules of fair play on which the shareholders who put their money into the company relied. 17 Case law treats the test for unfair prejudice as objective rather than subjective, meaning that the applicant is not required to prove that the respondent acted out of malice: the court will find unfair prejudice established whenever the result is unfairly prejudicial, whether or not the respondent's conduct was malicious. The most commonly used remedies for unfair prejudice are orders and writs. When a shareholder's rights are infringed by unfair conduct, the court will order the company, or the other shareholders in the company, to buy the shares of that shareholder so as to release him or her from the awkward position of being forced out of the company; but this is fair only when the shareholder's stock is not sold at a discounted, low price. 18 With regard to the valuation date, the court may choose among the following: the date the unfair conduct occurred, the date the application was filed, the date the share purchase order was issued, or the date of the valuation itself. Judges' opinions vary as to which date to choose in a specific case. Vinelott J believed that the valuation date should be the date the application was filed, on the ground that on that day the applicant decided to sue over the unfair prejudice and the cooperative foundation between the two sides ceased to exist. 19 Nourse J believed that the base date should be the date the share purchase order was issued, on the ground that it is appropriate to value the interest at the time it is decided that it shall be purchased. 20

When the unfair prejudice arises from the misconduct of the company itself, that is, where an act the company is carrying out or proposes to carry out, or an omission, runs counter to shareholders' interests, the court will issue a writ requiring the company to perform a particular act. Between the petition for relief and the court's issuance of the writ, some days will pass, and the resulting delay may fail to prevent unfair conduct from actually harming the claimant, for example by his expulsion from the ranks of the company's shareholders.
In such circumstances, the claimant needs to apply to the court for an interim writ, and the court will make an appropriate decision after fully weighing the writ's effect on the claimant's interests against its effect on the company's normal operating order. More often, the court will issue a status-quo-maintaining writ, intended to freeze the company's present situation so that the petitioner's interests suffer no harm.

An unfair prejudice remedy system should therefore be adopted within ordinary civil procedure, granting the court the authority of command (writ) to intervene in the "tyranny of the majority" within the company and to protect the property rights of minority shareholders from unfair infringement. As to the share price, the principle should be that "it is fair only when one's shares are not sold at a discounted, low price". As to the valuation date, from the perspective of equity, once the claimant has instituted proceedings he or she should no longer be entitled to share in profits subsequently produced by the proper management of the respondent or other managers, given the breakdown of the relationship of trust between the claimant and the other shareholders or directors; conversely, the claimant should not bear the unfavorable consequences of any decline in managerial vigor, business activity, or company reputation caused by the proceedings. Accordingly, the valuation date should be fixed on the day after proceedings are instituted, a date on which the price is indeterminate for both sides; in this way the value goal of fair treatment of the shares can be better realized.

However, using ordinary civil procedure to resolve every company dispute faces enormous difficulties and challenges. Among the existing types of company disputes, at least the following are suited to a special procedure when the dispute cannot obtain a satisfactory judgment through ordinary civil procedure. In terms of function, the special procedure focuses on prevention, whereas the ordinary procedure focuses on after-the-fact remedy and supervision. The special procedure does not aim to adjudicate the rights and obligations in dispute between the parties, but takes certain concrete measures to intervene in the company's operation. By means of such judicial intervention, it is possible to promote effective operation, prevent unfair conduct, and avoid infringement of the interests of the company and of the related parties. The company-related cases suited to the special (non-contentious) business procedure are as follows:

(1) Cases recognizing shareholders' rights, such as a shareholder requesting the company to prepare and keep at the company a register of shareholders, to sign and issue an investment certificate, or to register a change of shareholding.

(2) Cases concerning shareholders' right to information, such as requests that the company disclose the remuneration of directors, supervisors, and senior managers, and requests by shareholders of public companies to be informed of the company's business activities and of major mergers, acquisitions, and other significant matters.

(3) Cases concerning disputes over the convening of the shareholders' meeting, the board of directors, or the board of supervisors, including irregular convening, negative omission, and cases in which the entitled convener seeks judicial support for convening an interim meeting in accordance with the law.
(4) Cases concerning disputes over the appointment and dismissal of directors, supervisors, and senior managers.

This paper argues that China should learn from western countries' concepts, institutions, and measures of judicial intervention in company governance and build a non-litigation intervention mechanism, paired with China's civil action system, to resolve company disputes. Courts should be entrusted with certain powers to sanction unlawful and unfair conduct by company actors, including the power of judicial injunction (ban), the power of invalidation, the power of judicial appointment, the power of judicial dismissal, and the power to convene a shareholders' meeting judicially. In this way it becomes possible to defuse injustice and unfairness, maintain normal order, and enhance the efficiency of corporate governance.
CSF and Blood Levels of GFAP in Alexander Disease

Abstract
Alexander disease is a rare, progressive, and generally fatal neurological disorder that results from dominant mutations affecting the coding region of GFAP, the gene encoding glial fibrillary acidic protein, the major intermediate filament protein of astrocytes in the CNS. A key step in pathogenesis appears to be the accumulation of GFAP within astrocytes to excessive levels. Studies using mouse models indicate that the severity of the phenotype correlates with the level of expression, and suppression of GFAP expression and/or accumulation is one strategy that is being pursued as a potential treatment. With the goal of identifying biomarkers that indirectly reflect the levels of GFAP in brain parenchyma, we have assayed GFAP levels in two body fluids in humans that are readily accessible as biopsy sites: CSF and blood. We find that GFAP levels are consistently elevated in the CSF of patients with Alexander disease, but only occasionally and modestly elevated in blood. These results provide the foundation for future studies that will explore whether GFAP levels can serve as a convenient means to monitor the progression of disease and the response to treatment.

Introduction
Alexander disease (AxD) is a progressive and generally fatal neurogenetic disorder, with ages of onset ranging from fetal through late adulthood, resulting from heterozygous dominant mutations in the astrocyte intermediate filament protein glial fibrillary acidic protein (GFAP; Brenner et al., 2001; Messing et al., 2012b). The hallmark feature of the pathology, cytoplasmic aggregates known as Rosenthal fibers within astrocytes, are composed of GFAP (Johnson and Bettica, 1989) along with a number of other proteins such as plectin (Tian et al., 2006), and the small stress proteins αB-crystallin and Hsp27 (Iwaki et al., 1989; Iwaki et al., 1993). Studies in mouse models have led to the conclusion that, unlike most of the other intermediate filament disorders, GFAP mutations act in a gain-of-function fashion, and that elevations of total GFAP levels may be a major factor in pathogenesis (Tanaka et al., 2007). Several strategies are now being pursued as potential treatments for AxD, chief among them being pharmacologic approaches for suppressing the expression of GFAP below its toxic threshold or interfering with other downstream effects of GFAP toxicity (Cho et al., 2010; Messing et al., 2010). How one might assess the efficacy of any potential treatment is a major question. Although AxD is genetically homogeneous, the clinical presentations are quite varied, with age of onset ranging from fetal through late adulthood. The most widely used classification system is based simply on age of onset, and divides patients into early, juvenile, and adult categories (Russo et al., 1976). More recently, two different classification systems have been proposed that are more reflective of lesion location and symptoms (Prust et al., 2011; Yoshida et al., 2011). No consensus clinical scoring system exists for evaluating disease severity or monitoring progression, no clear genotype-phenotype relationships have been identified, and no prospective natural history studies have been performed. While existing MRI criteria are highly reliable as diagnostic tools (van der Knaap et al., 2001), they are not suitable for quantifying disease severity or monitoring disease progression.
Estimates of survival are also changing: an early review of 44 patients calculated a median survival time from onset of 3.9 years for the early-onset group, while Prust et al. (2011), based on a much larger cohort of 215 patients, found a median 14-year survival time for the type I patients. Given the wide spectrum of clinical presentations and courses for AxD, there is a clear need to identify and evaluate biomarkers that can serve as surrogate indicators of the potential response to therapy.

One potential biomarker is GFAP itself. GFAP, despite being a cytoplasmic protein, is normally present at low levels in CSF and blood, and the levels in these fluids are increased in the context of a wide variety of injuries and diseases of the CNS (Liem and Petzold, 2015). Following traumatic brain injury, particularly ones that are acute and focal, serum GFAP levels rise, and serum GFAP level was recently adopted as a key biomarker for the TRACK-TBI study on common data elements (Yue et al., 2013). In one study of three AxD patients, Kyllerman et al. (2005) reported that CSF levels of GFAP were elevated in each one. GFAP levels could conceivably serve as a biomarker in AxD in two distinct ways. First is the known association of GFAP release with injury or damage, as cited above; and, indeed, the lesions in AxD range from very subtle, without evident leukodystrophy, up to large cavitating lesions with loss of nearly all tissue elements (Messing and Goldman, 2004). Second is the proposed link between toxic accumulation of GFAP and pathogenesis: GFAP elevation is found in both human patient samples and mouse models (Tang et al., 2008; Walker et al., 2014), with the mice displaying a clear correlation between levels of GFAP expression and severity of phenotype. To determine whether GFAP levels can serve as a useful biomarker for AxD requires replication of the CSF findings from Kyllerman et al. (2005) in a larger cohort of patients, as well as measurement in more conveniently collected biopsy fluids such as blood.

AxD patient samples
The sole inclusion criterion for participation in this study was genetic confirmation of the diagnosis by sequencing of GFAP. Informed consent for studies of CSF was obtained following protocols approved by the institutional review boards (IRBs) at the University of Wisconsin-Madison, the Children's National Medical Center, and the Mayo Clinic. Only leftover samples from a previous clinical use were permitted for study. Informed consent for studies of blood was obtained following protocols approved by IRBs at the University of Wisconsin-Madison, Children's National Medical Center, Massachusetts General Hospital, Washington University in St. Louis, and the Mayo Clinic. Consents were obtained either through in-person interview or by telephone, with written confirmation. For samples collected in The Netherlands, Italy, and Germany, the principles outlined in the Declaration of Helsinki were followed.

Control samples
Controls for CSF studies consisted of 24 deidentified samples collected by lumbar puncture (LP) for various purposes but considered within the range of normal for protein, glucose, and cell counts. The CSF controls were exempted from the requirement for consent. Additional information about the CSF control group (age, sex, and reason for LP) is given in Table 1.
Controls for blood were obtained from apparently healthy adults of both sexes (27 males, 84 females), who were asked to exclude themselves if they had specific conditions (neurologic or psychiatric disorders, head or brain trauma within the past 12 months, type 1 diabetes, inflammatory bowel disease) or were taking specific medications within the past 3 months (clomipramine, amitriptyline, prednisone, dexamethasone, or tamoxifen). The exclusion criteria for plasma control samples were based on a literature review of conditions known or hypothesized to influence GFAP levels in CSF and blood (Liem and Petzold, 2015) and a study specifically aimed at identifying pharmacological modifiers of GFAP expression (Cho et al., 2010).

Plasma preparation
Fresh samples of venous blood were collected into lavender-topped tubes that contained K2-EDTA as an anticoagulant to allow the preparation of plasma. The samples were centrifuged within 60 min of collection at 2500 relative centrifugal force for 15 min at room temperature, and the supernatant was immediately placed in a polypropylene tube and stored either on dry ice for shipping or in a −20°C freezer until shipping could be arranged. Upon arrival at the central laboratory, the samples were then thawed, divided into aliquots, and stored at −80°C until further analysis. Three blood samples were collected as serum rather than plasma, and so were considered nonstandard. Statistical analyses were repeated with or without these samples, and the results were the same.

Quantitation of GFAP
GFAP levels in CSF and blood were quantitated using a sandwich ELISA, as previously described (Jany et al., 2013). The capture antibodies consisted of a cocktail of monoclonal antibodies (SMI-26R, Covance) diluted in PBS (catalog #BP3994, Fisher). Plates were blocked with 5% milk in PBS before the addition of samples or standards diluted in PBS with 0.05% Tween 20 and 1% BSA (catalog #A7030, Sigma-Aldrich), with each sample analyzed in triplicate. Antibody incubation steps were performed in 5% milk-PBS, and washing steps were performed in PBS-Tween 20. Standard curves were generated using bovine GFAP (catalog #RDI-PRO62007, Fitzgerald Industries International), and reaction volumes consisted of 100 µl/well. CSF and plasma samples were initially diluted 1:1 with ELISA buffer, though in some cases higher dilutions were necessary to bring the values within the linear range of the assay. GFAP values were expressed as ng/L. Under these conditions, the lower limit of detection was 11 ng/L, and the biological limit of detection (BLD; after accounting for the minimum 1:1 dilution with reaction buffer) was 46 ng/L.

Table 2 legend: Information regarding each patient who contributed blood and/or CSF samples is shown, including gender, GFAP mutation, age of illness onset, age at sample collection, and age at death (if relevant), and sorted by GFAP mutation. For some patients, the age of illness onset was estimated (*) or the patient was asymptomatic but had a familial history of AxD (NA). All ages are given in years. References to prior publications containing additional clinical details about particular patients are also given, if available. F, female; M, male. *Age of onset was estimated. †Parent-child duos are shown together on consecutive lines (19-20, 24-25, and 44-45). ‡The pathogenicity of the R105 mutation is uncertain. §The D157N mutation is considered a benign variant, but its impact in a compound heterozygote is not known.
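As a rough illustration of how concentrations are read off a sandwich-ELISA standard curve of this kind, the sketch below fits a four-parameter logistic (4PL) curve to standards and back-calculates a sample value, applying the minimum 1:1 (twofold) dilution correction. The absorbance readings, the choice of the 4PL model, and the helper names are illustrative assumptions, not the assay's actual analysis code.

```python
# Minimal sketch of ELISA back-calculation: fit a 4PL standard curve,
# invert it numerically for a sample absorbance, and correct for dilution.
import numpy as np
from scipy.optimize import curve_fit, brentq

def four_pl(x, a, b, c, d):
    """4PL model: a = lower asymptote, d = upper asymptote, c = inflection, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])  # ng/L (hypothetical points)
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.82, 1.21, 1.55])  # hypothetical absorbances

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 300.0, 1.7], maxfev=10000)

def od_to_conc(od, dilution=2.0):
    """Solve four_pl(x) = od within the working range, then undo the 1:1 dilution."""
    conc = brentq(lambda x: four_pl(x, *params) - od, 1e-3, 1e5)
    return conc * dilution

print(f"sample at OD 0.60 -> {od_to_conc(0.60):.0f} ng/L")
```

In practice a plate-reader software package performs this fit, but the sketch shows why values outside the standards' range must be rerun at a higher dilution: the inversion is only trustworthy on the monotonic, well-sampled portion of the curve.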
The intra-assay coefficient of variation, determined using the bovine GFAP standard at 33 ng/L in 10 sets of triplicate wells, was 13%. The interassay coefficient of variation, determined using pooled CSF samples taken from GFAP Tg mice that overexpress wild-type human GFAP, was 11%. CSF and plasma samples from Gfap-null mice gave readings that were below the BLD in this assay (data not shown), thus validating its specificity. In addition, plasma samples from Gfap-null mice were spiked with known concentrations of purified bovine GFAP to verify that the 1:1 dilutions of plasma used here did not interfere with the sensitivity of the assay (data not shown). All animal procedures were approved by the Animal Care and Use Committee for the Graduate School at the University of Wisconsin-Madison.

Statistics
GFAP levels obtained from CSF and blood samples were summarized in terms of medians and ranges. To determine a reference range for GFAP levels in blood samples from healthy control samples, we took into account the BLD of the assay (46 ng/L). Samples yielding values lower than this limit were treated as censored values at 46 ng/L in the analysis. Due to the presence of censoring, semiparametric quantile regression was conducted (Richardson and Ciampi, 2003). Overall, the quantile regression analysis indicated that there was no age or gender effect on the GFAP values in the control subjects. The nonparametric Wilcoxon rank sum test was used to compare GFAP levels between AxD case patients and control subjects. Analogously, within AxD case patients, the nonparametric Wilcoxon rank test was used to conduct comparisons among subgroups. The Sidak correction was applied when performing multiple comparisons. Ninety-five percent confidence intervals (CIs) for the differences in medians between AxD case patients and control subjects were constructed using the nonparametric bootstrap method. All p values were two sided, and p < 0.05 was used to determine statistical significance. Statistical analyses were conducted using SAS version 9.3 software (SAS Institute).

Patient population
Samples were collected from AxD patients with confirmed mutations in GFAP. Those patients for whom leftover clinical CSF samples were available included five females and seven males, ranging in age from 3.7 to 46 years. Those patients from whom blood samples were collected included 26 females and 22 males, ranging in age from <1 to 65 years. Both blood and CSF samples were available for 10 of these patients. Information for each patient regarding gender, specific mutation, age of first symptom, age at collection of samples, and age at death (if relevant) is provided in Table 2.

GFAP levels in CSF
We established a reference range for CSF controls in our assay using a set of 24 samples, and found the mean GFAP level to be 249 ng/L (median, 133 ng/L; range, 46-1386 ng/L, with one value falling below the BLD). The highest values in the control group came from one child with lymphoma (site unspecified) and one child with an "acute life-threatening event" (type unspecified). CSF samples were initially available for 10 AxD patients, and in a single assay all of the AxD patient samples were run alongside 12 samples from the control subjects (the control subjects chosen to span the full range as previously identified; Fig. 1).
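The group comparisons reported below follow the Statistics section above. As a rough sketch of that machinery, the following uses SciPy in place of SAS and hypothetical GFAP values to show a two-sided Wilcoxon rank-sum test, a Sidak-corrected per-test threshold, and a bootstrap confidence interval for the difference in medians, treating values below the BLD as censored at 46 ng/L. None of the numbers below are from the study.

```python
# Sketch of the group comparisons (SciPy stand-in for the SAS analysis).
import numpy as np
from scipy.stats import ranksums

BLD = 46.0  # ng/L; readings below this are left-censored at the limit

axd = np.array([387, 1749, 2721, 4292, 9100, 24272], dtype=float)       # hypothetical
controls = np.array([30, 46, 103, 133, 400, 1386], dtype=float)         # hypothetical
axd, controls = np.maximum(axd, BLD), np.maximum(controls, BLD)         # apply censoring

stat, p = ranksums(axd, controls)          # two-sided Wilcoxon rank-sum test
m = 3                                      # assumed number of comparisons in the family
sidak_alpha = 1 - (1 - 0.05) ** (1 / m)    # Sidak-corrected per-test threshold

rng = np.random.default_rng(0)             # nonparametric bootstrap, 10,000 resamples
diffs = [np.median(rng.choice(axd, axd.size)) -
         np.median(rng.choice(controls, controls.size))
         for _ in range(10000)]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"p={p:.4g}, Sidak alpha={sidak_alpha:.4f}, "
      f"median-difference 95% CI = ({lo:.0f}, {hi:.0f}) ng/L")
```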
Considered as a group, GFAP levels in the AxD patients were significantly elevated compared with the control subjects (patients: median, 4292 ng/L; range, 387-24272 ng/L; control subjects: median, 103 ng/L; range, 46-1386 ng/L; p < 0.001 (a)). Considered as individuals, GFAP levels in 9 of the 10 samples from AxD patients were elevated compared with those from the control population. Because of the paucity of samples, this experiment was replicated once, with similar results. Two additional samples were subsequently received (from patients 35 and 50) and analyzed separately, and yielded values of 2721 and 1749 ng/L, respectively. Individual GFAP values for each AxD patient in relation to age, gender, genotype, and duration of illness are shown in Table 3.

GFAP levels in blood
We established a reference range for plasma controls in our assay using a set of 111 samples obtained from healthy volunteers. Run in their entirety in a single assay, 65 of these samples yielded values in the detectable range, with a median of 61 ng/L and a 95% CI of 46-861 ng/L. This experiment was replicated three times on subsets of the control samples; the GFAP concentrations in the subset of samples from control subjects compared with the full set of 111 plasma samples from control subjects revealed no significant differences (data not shown). Considered as a group, GFAP levels were significantly elevated in the AxD case patients with infantile (p = 0.002 (b)) and juvenile (p = 0.025 (c)) onsets compared with those in the control subjects (Fig. 2). This assay was repeated three times with similar results. Individual GFAP values for each AxD patient are shown in Table 3.

Table 3 legend: GFAP concentrations (in ng/L) in CSF and blood of individual AxD patients. Patient 25 was asymptomatic at the time of collection. Duration of illness is defined as the age at sample collection less the age at illness onset, using the values shown in Table 2. Parent-child duos are shown together (19-20, 24-25, and 44-45), as in Table 2. All blood samples are plasma, except for three (9, 16, and 50), which are serum. N/A, not applicable. *Age at illness onset and duration of illness were estimated. †Parent-child duos are shown together on consecutive lines (19-20, 24-25, and 44-45). ‡The pathogenicity of the R105 mutation is uncertain. §The D157N mutation is considered a benign variant, but its impact in a compound heterozygote is not known.

Subsequent to the experiments described above on the large set of AxD patients, 12 additional samples were received, which were analyzed on three separate occasions (with smaller numbers of control subjects to verify the consistency of the assay). In these latter patients (Tables 2, 3; patients 1, 8, 13, 22, 29, 31, 35-36, 38, 40, 49-50), plasma values ranged from 64 to 1314 ng/L, with only patient 36 being higher than the 95% CI established for control subjects. Both CSF and blood samples were available for 10 AxD patients (though usually collected at different ages; Table 3). For each of these patients, CSF values were consistently higher than the blood values. Although the number of patients was too low for valid statistical analysis, it appears that the blood levels may only exceed the 95% CIs of control subjects when the CSF values are above a certain threshold (Fig. 3).

GFAP levels in relation to clinical features
We analyzed the individual blood and CSF values of GFAP in relation to several aspects of the clinical and genetic histories. Additional clinical information about each AxD patient is given in Table 4.
With respect to specific genotype, the 12 CSF samples came from patients with 10 different mutations, with only 2 each for the R79C and R88C mutations, so no conclusions were possible. For blood samples, five mutations were represented by four to six patients each (R79C, R79H, R88C, R239C, and R416W), and two samples were available for the one genotype that is generally considered severe (R239H); but the values for GFAP were all highly variable in these groups. Hence, there does not appear to be any direct connection between genotype and the levels of GFAP detected in these assays. In addition, when gender was considered as a variable, there was no significant difference between males and females.

The number of CSF samples was so small that we could not establish any particular connection to age of onset of illness, age at sample collection, or duration of illness (i.e., the difference between the two time points). For blood, the GFAP values were significantly correlated with the age of illness onset and age at sample collection (which is likely linked to the age of onset), but not duration of illness. Four blood samples were from patients sampled within months of their deaths (patients 9, 23, 30, and 31), but only one of these had a value that was above the 95% CI.

If the data are analyzed according to the clinical classification scheme of Russo et al. (1976), which is explicitly tied to age of illness onset, then the GFAP values in patient CSF samples were significantly higher than those in control subjects in all three groups (infantile, juvenile, and adult onset of illness), whereas the values in blood were higher in just the infantile and juvenile groups but not in the adult group. The recent classification scheme of Prust et al. (2011) relies more on clinical features (and hence anatomy) rather than age of illness onset, but still shows a marked age difference. While we did not have sufficient information to allow precise classification according to the criteria of Prust et al. (2011), if one uses an age of AxD onset of <4 or >4 years as a surrogate for classifying patients as either type I or type II, respectively, then it is clear that the CSF values for AxD patients are significantly elevated in both groups, whereas the blood values are elevated only in the type I group.

Figure 3 legend: Within-subject comparison of GFAP levels in CSF and blood. GFAP levels in blood are shown as a function of the levels in CSF for the ten patients for whom both types of samples were available. CSF values were consistently higher than blood values. However, as indicated by the ages at collection as given in Table 1, the samples were not contemporaneous. No statistical analysis was performed due to the low number of samples.

Additional information about the statistical tests used throughout and the confidence intervals for each set of comparisons is given in Table 5.

Discussion
In an earlier study of three AxD patients, Kyllerman et al. (2005) found that GFAP levels in CSF were elevated compared with a previously analyzed reference range of control subjects. We here confirm the consistent elevation of GFAP levels in CSF samples in a larger cohort of patients, analyzed at the same time as control subjects. We further report that GFAP levels are also elevated above the control range in blood samples, but only in a subset of patients. Our results provide the necessary foundation for future studies aimed at developing appropriate biomarkers of disease severity, progression, and response to treatments.
Why GFAP levels in CSF (and occasionally blood) rise in AxD patients is not clear. Previous work in mouse models suggests a linear relation between the levels in parenchyma (presumably largely intracellular) and those in the fluids (Jany et al., 2013), and it is known that GFAP levels are elevated in brain lysates from AxD patients (Tang et al., 2008; Walker et al., 2014). That the infantile- and juvenile-onset groups display higher fluid levels than adult-onset patients may reflect underlying differences in their parenchymal levels, but such direct correlations remain to be established. The precise origin and form of extracellular GFAP are also not certain, but presumably result from either astrocyte death or sublethal injury that increases the permeability of the plasma membrane (Liem and Petzold, 2015). GFAP is not known to be secreted, as has been reported for vimentin from macrophages (Mor-Vaknin et al., 2003), though unpublished studies suggest that it might be exported via exosomes (K. Glebov, personal communication). Recent studies of the newly discovered "glymphatic" pathway for solute flow in the brain suggest that GFAP may reach the blood through paravascular channels and olfactory pathways rather than a leaky blood-brain barrier (Plog et al., 2015). However, it is possible that glymphatic clearance itself, which is dependent on the astrocytic water channel AQP4, is also affected in AxD patients.

Individuals with AxD experience several associated conditions that plausibly could influence GFAP expression and levels in biofluids. Seizures are seen in most infantile or type I AxD patients (Prust et al., 2011), and kindling has been shown in rat models to transiently increase the expression of GFAP even in the contralateral hippocampus (Steward et al., 1991). Nevertheless, in our sample we found no obvious connection linking GFAP levels with the co-occurrence or degree of control of seizures. Conditions involving more obvious destruction of brain parenchyma, such as hydrocephalus (Tullberg et al., 1998; Beems et al., 2003; Petzold et al., 2004) or cavitating lesions (Rosengren et al., 1994), might also be expected to contribute to increased spillage of GFAP into the extracellular space, and subsequently into CSF and blood. Nevertheless, GFAP blood levels were elevated in only one of the five patients noted to have hydrocephalus. Whether certain features of MRI findings in individual AxD patients, such as cavitation or simple contrast enhancement (reflecting breakdown of the blood-brain barrier), predict changes in GFAP levels in CSF and blood is a topic for future study.

We recognize a number of limitations to our study. First is the problem of age matching in our samples, particularly with respect to blood. Our IRB protocol limited the collection of blood from healthy control subjects to those >18 years of age, whereas 27 of our 48 AxD patient samples were below this age. However, previous studies of GFAP in CSF and blood found no relation to age or sex (Petzold et al., 2004; Mayer et al., 2013), though with a small sample Rosengren et al. (1992) found that adult CSF values were slightly higher than those in children.

Table 4 legend: Information regarding each patient is shown including age of onset, nature of first symptom, highest cognitive level, highest motor level, major deterioration (if any), and age at loss of unassisted walking (if it occurred). All ages are given in years. ID, intellectual disability; N/A, not applicable; ND, not determined or unknown; F, female; M, male. *Age of onset was estimated. †Parent-child duos are shown together on consecutive lines (19-20, 24-25, and 44-45). ‡The pathogenicity of the R105 mutation is uncertain. §The D157N mutation is considered a benign variant, but its impact in a compound heterozygote is not known.

Second, while the processing of the blood samples in our study was well defined and standardized, for CSF we could access only leftover clinical samples, which may have varied considerably with respect to the interval between collection and freezing. Our assays did not distinguish between full-length protein and forms that are truncated through degradation, but previous studies (Missler et al., 1999; Petzold et al., 2004) have shown that GFAP is stable for several days at 4°C and is not affected by multiple freeze-thaw cycles. In addition, circadian rhythms influence the production rate of CSF (Nilsson et al., 1992), and the levels of at least some biomarkers (e.g., amyloid β) are known to vary depending on the time of collection (Bateman et al., 2007). We note that the quantitation of proteins involved in aggregation disorders may be compromised by the "hook" effect, but the impact would be to underestimate the degree of elevation, particularly when assaying tissues where the aggregates typically reside (Lu et al., 2011; Petzold, 2015). Finally, we have examined only samples taken at one time point during the disease process; for many of the AxD patients who contributed CSF samples, they were taken at different times than the blood samples. Hence, our data cannot be used to establish direct connections between the values in blood and CSF, nor can we reach any conclusions about how GFAP levels might change in an individual during progression of disease.

Ultimately, a panel of biomarkers may prove more informative than focusing on just one. Several candidates for consideration in such a panel could include markers of neuronal or oligodendroglial injury (e.g., neuron-specific enolase, UCH-L1, neurofilament-H, MBP). A recent proteomic study of CSF in one mouse model of AxD (Cunningham et al., 2013) suggested additional potential biomarkers that may be of interest, including cathepsins and creatine kinase M.
Silencing CD36 gene expression results in the inhibition of latent-TGF-β1 activation and suppression of silica-induced lung fibrosis in the rat

Background
The biologically active form of transforming growth factor-β1 (TGF-β1) plays a key role in the development of lung fibrosis. CD36 is involved in the transformation of latent TGF-β1 (L-TGF-β1) to active TGF-β1. To clarify the role of CD36 in the development of silica-induced lung fibrosis, a rat silicosis model was used to observe both the inhibition of L-TGF-β1 activation and the antifibrotic effect obtained by lentiviral vector silencing of CD36 expression.

Methods
The rat silicosis model was induced by intratracheal injection of 10 mg silica per rat, and CD36 expression was silenced by administration of a lentiviral vector (Lv-shCD36). The inhibition of L-TGF-β1 activation was examined using a CCL-64 mink lung epithelial growth inhibition assay, while determination of hydroxyproline content along with pathological and immunohistochemical examinations were used for observation of the inhibition of silica-induced lung fibrosis.

Results
The lentiviral vector (Lv-shCD36) silenced expression of CD36 in alveolar macrophages (AMs) obtained from bronchoalveolar lavage fluid (BALF), and the activation of L-TGF-β1 in the BALF was inhibited by Lv-shCD36. The hydroxyproline content of the silica+Lv-shCD36-treated groups was significantly lower than in the other experimental groups. The degree of fibrosis in the silica+Lv-shCD36-treated groups was less than that observed in the other experimental groups. The expression of collagen I and III in the silica+Lv-shCD36-treated group was significantly lower than in the other experimental groups.

Conclusion
These results indicate that silencing expression of CD36 can result in the inhibition of L-TGF-β1 activation in a rat silicosis model, thus further preventing the development of silica-induced lung fibrosis.

Background
Silicosis is a form of occupational lung disease caused by inhalation of crystalline silica dust. The pathogenesis of silicosis involves alveolar cell injury and activation followed by cytokine signaling and cell recruitment in the areas of silica dust deposition [1,2]. The cytokine transforming growth factor-β1 (TGF-β1) plays a critical role in the progression of lung fibrosis [3-6], and it has been widely studied with respect to its vital role in the development of fibrosis after injury to the lung [7-10]. TGF-β1 is synthesized by virtually all cell types in an inactive form referred to as latent TGF-β1 (L-TGF-β1), consisting of mature TGF-β1 and the latency-associated peptide (LAP). Owing to the noncovalent association of mature TGF-β1 with LAP, L-TGF-β1 cannot be recognized by cell-surface receptors and cannot trigger biological responses [5,7,8]. In fact, one of the primary mechanisms of TGF-β1 regulation is the control of its conversion from a latent precursor to the biologically active form [11]. CD36, as a receptor of thrombospondin-1 (TSP-1), plays an important role in the process of L-TGF-β1 activation. A number of studies have demonstrated that L-TGF-β1 associates with TSP-1 to form the TSP-1/L-TGF-β1 complex via the specific interaction between LAP and TSP-1. The TSP-1/L-TGF-β1 complex associates with CD36 on the cell surface via the specific interaction between the YRVRFLAKENVTQDAEDNC (93-110) sequence of CD36 and the CSVTCG (447-452) sequence of TSP-1.
Then, L-TGF-β1 is held at the cell surface by the TSP-1/CD36 interaction and is processed by plasmin, generated by activated alveolar macrophages, to produce active TGF-β1. The CD36-TSP-1/L-TGF-β1 interaction appears critical to the activation process [12,13]. We presumed that silencing the expression of CD36 could inhibit activation of L-TGF-β1 and thereby prevent the development of lung fibrosis. A lentiviral vector expressing short hairpin RNA (shRNA) specific for rat CD36 (Lv-shCD36) was constructed and shown to suppress expression of CD36 and inhibit the activation of L-TGF-β1 in a rat alveolar macrophage cell line called NR8383 (data not shown and will be presented in another manuscript). In the current study, a rat silicosis model was generated by intratracheal instillation, and the inhibitory effects of Lv-shCD36 on the activation of L-TGF-β1 and the resulting antifibrotic effects were examined.

Experimental animals and design
Equal proportions of male and female Wistar rats at 9 weeks of age, weighing 220-240 g, were obtained from the Center of Experimental Animals, China Medical University (Shenyang, China), with a National Animal Use License number of SCXK-LN 2003-0009. The animals were housed at an environmental temperature of 24 ± 1°C on a 12/12 h light/dark cycle, with free access to food and water. SiO2 was purchased from Sigma (St. Louis, MO, USA). The silica content of the SiO2 was >99%, the dust particle size was 0.5-10 μm, and 80% of the particles were 1-5 μm. Lv-shCD36, a lentiviral vector expressing shRNA specific against rat CD36, was developed for a prior study, in which it suppressed the expression of CD36 (data not shown and will be presented in another manuscript). All experiments and surgical procedures were approved by the Animal Care and Use Committee at the China Medical University, which complies with the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

Animals were divided randomly into the following four experimental groups (n = 24 per group): (1) saline control group: instillation of 0.5 ml sterile physiological saline; (2) silica group: instillation of a suspension of 10 mg silica dust in a total volume of 0.5 ml sterile physiological saline; (3) silica+Lv-shCD36 group: instillation of a mixed suspension of 10 mg silica dust and 5 × 10^8 transducing units (TU) of Lv-shCD36 in a total volume of 0.5 ml sterile physiological saline; (4) silica+Lv-shCD36-NC group: instillation of a mixed suspension of 10 mg silica dust and 5 × 10^8 TU of Lv-shCD36-NC (non-silencing control lentivirus) in a total volume of 0.5 ml sterile physiological saline.

Rats were anesthetized with an intraperitoneal injection of 10 mg/rat pentobarbital sodium. The skin of the neck was opened and blunt dissection exposed the trachea. Either physiological saline, silica in physiological saline, or silica with Lv-shCD36 or Lv-shCD36-NC in physiological saline was instilled into the lungs using a 14-gauge needle inserted into the trachea through the epiglottis of the larynx. The site of surgery was sutured and the rats were allowed to recover until they were sacrificed. At 7, 21, and 28 days post-instillation, eight rats from each group were anesthetized with ether, sacrificed by decapitation, and the lungs were removed. Bronchoalveolar lavage fluid (BALF) was obtained by cannulating the trachea and injecting and retrieving 3 ml aliquots of sterile physiological saline, which were centrifuged at 1000 rpm for 1 min at 4°C.
The cells were incubated in RPMI 1640 medium for 2 h at 37°C in 5% CO₂, and the adherent cells were mostly alveolar macrophages (AMs). After detection of green fluorescent protein (GFP) by fluorescence microscopy, AMs were collected for real-time PCR analysis. The BALF supernatant was centrifuged at 3000 rpm for 10 min at 4°C and stored at -80°C for later determination of TGF-β1.

Quantitative real-time PCR analysis
Total RNA was isolated from AMs using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. The sequences of primers specific for CD36 (sense: 5'-GAAGCACTGAAGAATCTGAAGAG-3'; antisense: 5'-TCCAACACCAAGTAAGACCATC-3') and β-actin (sense: 5'-CGGCATTGTCACCAACTG-3'; antisense: 5'-CGCTCGGTCAGGATCTTC-3') were synthesized by Genechem (China). Real-time quantitative PCR (qRT-PCR) analysis was performed as previously described. Each PCR reaction mixture (20 μl) contained 10 μl of 2× SYBR Green Master Mix (Takara, Japan), 1 μl of forward and reverse primers (5 μmol/μl), 1 μl of cDNA product and water. The PCR reactions were run on an iQ5 (Bio-Rad) using the following program: 95°C for 15 s, followed by 40 cycles of 95°C for 5 s and 60°C for 30 s. Following PCR amplification, the reactions were subjected to a temperature ramp to create a dissociation curve, measured as the change in fluorescence as a function of temperature, which allows detection of non-specific products. qRT-PCR data were analyzed using the two standard curve method, and β-actin was used as an internal control to normalize gene expression levels.

CCL-64 mink lung epithelial growth inhibition assay for TGF-β1
The CCL-64 cell line was grown in Dulbecco's Modified Eagle's Medium (DMEM, Gibco, USA) with 10% fetal bovine serum (FBS, Gibco, USA) at 37°C in 5% CO₂. To quantify TGF-β1 in BALF, CCL-64 cells were plated at 5 × 10³ cells/well in 96-well plates and cultured in FBS-free DMEM at 37°C in 5% CO₂ for 4 h. Ten μl of untreated sample (measuring active TGF-β1) or of sample acidified with HCl and subsequently neutralized with NaOH (measuring the total TGF-β1 of the same sample) was added to the wells. The standard curve contained concentrations ranging from 31.25 to 2000 pg/ml of porcine TGF-β1 (R&D Systems, Minneapolis, USA). The plates were incubated at 37°C in 5% CO₂ for 24 h, after which 10 μl of MTT reagent (5 mg/ml final concentration) was added to each well for 4 h of incubation. DMSO (100 μl) was then added to each well to dissolve the precipitate before analysis at 570 nm using a microplate reader (Bio-Rad 550) [14].

Determination of hydroxyproline content
The lung samples were assayed for hydroxyproline content using a hydroxyproline kit from Nanjing Jian Cheng Institute (China), following the manufacturer's instructions. The results were calculated as micrograms of hydroxyproline per gram of wet lung weight using hydroxyproline standards.

Pathological examination
Following gross inspection of each rat, small pieces of lung tissue from the middle of the lobes, in addition to the hilar lymph nodes, were fixed with 4% paraformaldehyde, embedded in paraffin, and sectioned at 5 μm. The tissue sections were stained with hematoxylin and eosin (HE) and van Gieson's stain (vG) for collagen fibers. Silicotic nodules were graded as follows: cellular nodules as Stage I; fibrotic cellular nodules as Stage II; cellular fibrotic nodules as Stage III; fibrotic nodules as Stage IV.
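As an illustration of how the growth-inhibition readout above can be turned into TGF-β1 concentrations, the following is a minimal sketch, not the authors' code. It assumes a four-parameter logistic standard curve over the stated 31.25-2000 pg/ml porcine TGF-β1 standards; the OD570 values below are hypothetical placeholders. Active TGF-β1 is read from the untreated aliquot and total TGF-β1 from the acid-treated, neutralized aliquot of the same BALF sample, so the percentage of active TGF-β1 reported later is simply active divided by total.

```python
# Minimal sketch (not the authors' code) of interpolating TGF-beta1
# concentrations from a CCL-64 growth-inhibition standard curve.
import numpy as np
from scipy.optimize import curve_fit, brentq

def four_pl(x, a, b, c, d):
    """4PL model: OD570 as a function of TGF-beta1 concentration (pg/ml)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Porcine TGF-beta1 standards, 31.25-2000 pg/ml; OD values are placeholders.
std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])
std_od = np.array([0.95, 0.88, 0.76, 0.60, 0.45, 0.33, 0.26])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[1.0, 1.0, 250.0, 0.2])

def od_to_conc(od):
    """Numerically invert the fitted curve to recover concentration."""
    return brentq(lambda x: four_pl(x, *params) - od, 31.25, 2000)

active = od_to_conc(0.70)   # untreated aliquot -> active TGF-beta1
total = od_to_conc(0.40)    # acidified/neutralized aliquot -> total TGF-beta1
print(f"active: {active:.0f} pg/ml, total: {total:.0f} pg/ml, "
      f"percent active: {100 * active / total:.1f}%")
```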
Immunohistochemical staining
For immunohistochemical examination, all sections were deparaffinized in xylene followed by 100% ethanol and then placed in freshly prepared methanol plus 3% H₂O₂ solution for 30 min to block endogenous peroxidase activity. After overnight incubation at 4°C with rabbit polyclonal anti-collagen I and III antibodies (Santa Cruz Biotechnology, Santa Cruz, CA, USA) diluted 1:100 in phosphate-buffered saline (PBS), antigen-antibody complexes were detected using Streptavidin/Peroxidase (SP) Histostain™-Plus Kits (Beijing Zhongshan Golden Bridge Biotechnology Ltd., China). Peroxidase activity was revealed using a 3,3'-diaminobenzidine tetrahydrochloride Substrate Kit (Beijing Zhongshan Golden Bridge Biotechnology Ltd., China). The sections were counterstained with hematoxylin for 3 min, rinsed, and mounted with glycerin gelatin for histological examination. Brown particles in the cytoplasm or on the cellular membrane were considered a positive reaction. The collagen I and III proteins were analyzed quantitatively using MetaMorph/DP10/BX41-type image analysis software (UIC/OLYMPUS, US/JP). At 10 × 40 magnification, three to five fields were randomly selected for each section. The integrated optical density (IOD) average represented the quantitative expression of collagens I and III.

SPSS 13.0 software was used to conduct statistical analyses. The differences between values were evaluated through one-way analysis of variance (ANOVA) followed by pairwise comparison with the Student-Newman-Keuls test. P < 0.05 was considered statistically significant.

Lv-shCD36 could silence the expression of CD36 in AMs
AMs were obtained from BALF in each experimental group at 7 days after instillation. AMs infected with either Lv-shCD36 or Lv-shCD36-NC expressed GFP, which was detected by fluorescence microscopy [see Additional file 1]. Real-time PCR was performed to determine the silencing of CD36 in AMs in the silica+Lv-shCD36 group. The results demonstrate that expression of CD36 mRNA in the silica+Lv-shCD36 group was significantly lower than in the saline control, silica, and silica+Lv-shCD36-NC groups (P < 0.05) at 7 days (Figure 1).

Inhibition of L-TGF-β1 activation by Lv-shCD36 in BALF
The activation of L-TGF-β1 was determined by measuring the quantity of TGF-β1 in BALF using the CCL-64 growth inhibition assay. The quantities of total TGF-β1 and active TGF-β1 in BALF in the silica group, the silica+Lv-shCD36 group and the silica+Lv-shCD36-NC group were significantly higher than those of the saline control group (P < 0.05) at 7 days after instillation. The quantity of active TGF-β1 in BALF in the silica+Lv-shCD36 group was significantly lower than in the silica group or the silica+Lv-shCD36-NC group (P < 0.05) (Figure 2A-a). The percentage of active TGF-β1 in BALF was calculated for each sample using active TGF-β1 as the numerator and total TGF-β1 as the denominator. The percentage of active TGF-β1 in BALF in the silica+Lv-shCD36 group was significantly lower than in the silica group or the silica+Lv-shCD36-NC group (P < 0.05), and significantly higher than that of the saline control group (P < 0.05) (Figure 2A-b). At 21 days after instillation, the quantities of total TGF-β1 and active TGF-β1 in BALF in the silica group, the silica+Lv-shCD36 group and the silica+Lv-shCD36-NC group were decreased compared with the results at 7 days, but remained significantly higher than those of the saline control group (P < 0.05).
There were no significant differences among the silica group, the silica+Lv-shCD36 group and the silica+Lv-shCD36-NC group (Figure 2B).

Lv-shCD36 could reduce hydroxyproline content in the lung
Hydroxyproline content is an important indicator of lung fibrosis. In this study, no significant differences in the hydroxyproline content of the four treatment groups were observed at 7 days after instillation when measured using a hydroxyproline kit. However, the hydroxyproline contents of the silica group and the silica+Lv-shCD36-NC group were significantly higher than those of the saline control group (P < 0.05) at 21 and 28 days after instillation. The hydroxyproline content of the silica+Lv-shCD36 group was significantly lower than in the silica group and the silica+Lv-shCD36-NC group (P < 0.05), and significantly higher than that of the saline control group (P < 0.05), at 21 and 28 days after instillation (Figure 3).

Lv-shCD36 could inhibit silica-induced lung fibrosis
The lung tissues of rats were observed by light microscopy to monitor pathological changes. No obvious abnormalities were observed in the lungs of rats that received physiological saline. However, in the silica and silica+Lv-shCD36-NC groups at 7 days after instillation, there was a large infiltration of inflammatory cells and alveolar septal thickening in the lung, and occasionally a small number of cellular nodules (Stage I) and fine collagen fibers were observed. There were fewer cellular nodules (Stage I) in the lungs of rats in the silica+Lv-shCD36 group, and the vG stain was only weakly positive for collagen fibers. At 21 days after instillation, primarily cellular nodules and fibrotic cellular nodules (Stages I and II) were observed in the silica and silica+Lv-shCD36-NC groups. Some nodules were closely arranged, and some contained loosely distributed collagen fibers. There were mainly cellular nodules (Stage I) and fine collagen fibers in the lungs of rats in the silica+Lv-shCD36 group. Compared with the silica and silica+Lv-shCD36-NC groups, the nodules in the lungs of rats in the silica+Lv-shCD36 group were fewer and smaller. In the silica and silica+Lv-shCD36-NC groups, fibrotic cellular nodules (Stage II) and loosely distributed collagen fibers were observed at 28 days after instillation. There were still mostly cellular nodules (Stage I) and fine collagen fibers in the lungs of rats from the silica+Lv-shCD36 group, although the number of nodules had increased (Figure 4, Table 1).

Lv-shCD36 could inhibit the expression of collagen I and III in the lung
To further assess the degree of fibrosis, immunohistochemical examination of collagen I and III was performed on the lung tissue. The results showed a weakly positive reaction for scattered collagen I and collagen III in the mesenchymal tissue of the saline control group. The expression of collagen I and collagen III in the silica+Lv-shCD36 group was weaker than in the silica and silica+Lv-shCD36-NC groups [see Additional files 2 and 3]. At all three time points after instillation, the IOD average of collagen I in the silica and silica+Lv-shCD36-NC groups was significantly higher than that of the saline control group (P < 0.05). The IOD average of collagen I in the silica+Lv-shCD36 group was higher than that of the saline control group (P < 0.05), but significantly lower than that of the silica and silica+Lv-shCD36-NC groups (P < 0.05) (Figure 5A). The collagen III results were concordant with those of collagen I (Figure 5B).
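The group comparisons above (one-way ANOVA followed by pairwise post-hoc testing at P < 0.05) can be sketched as follows. This is an illustration only: the hydroxyproline values are placeholders, and Tukey's HSD is used as a stand-in for the Student-Newman-Keuls test run in SPSS, since SNK has no standard SciPy implementation.

```python
# Minimal sketch (not the authors' SPSS workflow) of one-way ANOVA plus
# pairwise post-hoc comparison of hydroxyproline content (ug/g wet lung).
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

saline = np.array([410, 425, 398, 417, 405, 421, 412, 409])      # placeholders
silica = np.array([690, 712, 675, 701, 688, 720, 695, 707])
silica_sh = np.array([545, 560, 530, 552, 540, 565, 548, 556])   # silica+Lv-shCD36
silica_nc = np.array([685, 705, 670, 698, 680, 715, 690, 702])   # silica+Lv-shCD36-NC

f_stat, p_value = f_oneway(saline, silica, silica_sh, silica_nc)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

# Pairwise comparisons; P < 0.05 is the significance criterion used above.
print(tukey_hsd(saline, silica, silica_sh, silica_nc))
```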
Discussion
Lung fibrosis is the most important pathological change in silicosis. An experimental animal model of silicosis was induced by intratracheal administration of silica dust, resulting in varying degrees of fibrotic silicosis [15]. Following silica-induced lung injury, AMs were stimulated and secreted large quantities of biologically active TGF-β1, which plays a critical role in the development of lung fibrosis [16-19]. Recent research suggests that a polyclonal anti-TGF antibody, or the proteoglycan decorin, a TGF-β1 binding protein, could block TGF-β1 and markedly reduce extracellular matrix accumulation [20-22]. We hypothesized that inhibiting the activation of TGF-β1 would likewise be effective in preventing the development of silica-induced lung fibrosis.

Figure 1. CD36 mRNA levels from the AMs of each group were detected by real-time PCR. The expression of CD36 mRNA in the silica+Lv-shCD36 group was significantly lower than in the saline control, silica, or silica+Lv-shCD36-NC groups at 7 days. Each bar represents the mean ± SEM. *P < 0.05, as compared to the saline control group; ΔP < 0.05, as compared to the silica group; #P < 0.05, as compared to the silica+Lv-shCD36-NC group. The experiment was repeated twice (n = 3) and similar results were obtained.

CD36 may be involved in silica-induced lung fibrosis because of its specific combination with TSP-1, which is a critical factor in the activation of L-TGF-β1 [12]. Accordingly, in our previous work RNAi technology was used to construct a recombinant lentiviral vector, Lv-shCD36, which expresses shRNA specific against rat CD36. Lv-shCD36 was demonstrated to inhibit the activation of L-TGF-β1 in vitro using the rat alveolar macrophage cell line NR8383 (data not shown; to be presented in another manuscript). In the current study, Lv-shCD36 was used to test for inhibition of L-TGF-β1 activation and an antifibrotic effect in a rat silicosis experimental model.

Figure 2. Quantity of TGF-β1 and the percentage of active TGF-β1 in BALF. (A-a) The quantities of total and active TGF-β1 in the silica, silica+Lv-shCD36, and silica+Lv-shCD36-NC groups were significantly higher than in the saline control group at 7 days after instillation. The quantity of active TGF-β1 in the silica+Lv-shCD36 group was significantly lower than in the silica and silica+Lv-shCD36-NC groups at 7 days. (A-b) The percentage of active TGF-β1 in the silica+Lv-shCD36 group was significantly lower than in the silica and silica+Lv-shCD36-NC groups, and significantly higher than in the saline control group, at 7 days after instillation. (B-a) The quantities of total and active TGF-β1 in the silica group, the silica+Lv-shCD36 group and the silica+Lv-shCD36-NC group were significantly higher than in the saline control group at 21 days after instillation. (B-b) The percentages of active TGF-β1 in the silica group, the silica+Lv-shCD36 group and the silica+Lv-shCD36-NC group were significantly higher than in the saline control group at 21 days after instillation. Each bar represents the mean ± SEM. **P < 0.05, as compared to the quantity of total TGF-β1 in the saline control group; *P < 0.05, as compared to the quantity of active TGF-β1 in the saline control group; ΔP < 0.05, as compared to the quantity of active TGF-β1 in the silica group; #P < 0.05, as compared to the quantity of active TGF-β1 in the silica+Lv-shCD36-NC group. The data represent the means from experiments done in six rats.

To determine the effect of Lv-shCD36 on CD36 in the lungs of rats, AMs were isolated from BALF seven days after instillation and the expression of CD36 mRNA was detected by real-time PCR. The result suggests that Lv-shCD36 can suppress CD36 mRNA expression in AMs. Examination of total and active TGF-β1 in BALF in the early phase of experimental silicosis demonstrated that Lv-shCD36 could depress the quantity and percentage of active TGF-β1. Therefore, we believe that Lv-shCD36 inhibits the activation of L-TGF-β1 by decreasing the expression of CD36 on the membrane of AMs, thereby reducing the association of CD36 with the TSP-1/L-TGF-β1 complex.

Activated TGF-β1 can bind its receptor on the membrane of lung fibroblasts to regulate collagen synthesis and degradation, ultimately resulting in lung fibrosis [8,17]. Inhibiting the activation of L-TGF-β1 could therefore suppress the development of silica-induced lung fibrosis. This study shows that the hydroxyproline content of the silica+Lv-shCD36 group was significantly lower than that of the silica and silica+Lv-shCD36-NC groups at 21 and 28 days after instillation. The immunohistochemical examination of collagen I and III showed that the IOD averages of both collagens in the silica+Lv-shCD36 group were significantly lower than in the silica and silica+Lv-shCD36-NC groups. Furthermore, the pathological examination revealed a clearly milder degree of fibrosis in the silica+Lv-shCD36 group than in the silica and silica+Lv-shCD36-NC groups. We conclude that Lv-shCD36 could reduce pathological tissue fibrosis and collagen accumulation in the rat model of silicosis and, therefore, could inhibit the development of silicosis.

In the experimental lung fibrosis model, the AMs generate maximal quantities of L-TGF-β1 at 7 days after instillation, in the early phase of lung fibrosis. Once L-TGF-β1 is processed into its active form, active TGF-β1 initiates the occurrence and development of lung fibrosis. In the mid and late phases of the experimental lung fibrosis model, the amount of TGF-β1 released by the AMs declines gradually [23,24]. In the experimental silicosis model, there are primarily inflammatory changes and cellular nodules in the early phase. With the development of fibrosis, there are fibrotic cellular nodules, cellular fibrotic nodules, and even fibrotic nodules in the mid and late phases of experimental silicosis [25]. This study also shows that in the early phase the quantities of L-TGF-β1 were clearly higher than those at 21 days after instillation, and that the quantity and percentage of active TGF-β1 were depressed by Lv-shCD36 at 7 days after instillation. At 21 and 28 days after instillation, the degree of silicosis was markedly inhibited by Lv-shCD36. Accordingly, CD36 may participate in the activation of L-TGF-β1 in the early phase of experimental silicosis. Furthermore, silencing expression of CD36 prevented the development of silicosis by inhibiting the activation of L-TGF-β1.

Figure 3. Hydroxyproline content of rat lungs. There was no significant difference in the hydroxyproline content of the four groups at 7 days after instillation. The hydroxyproline content of the silica, silica+Lv-shCD36, and silica+Lv-shCD36-NC groups was significantly higher than that of the saline control group at 21 and 28 days after instillation. The hydroxyproline content of the silica+Lv-shCD36 group was significantly lower than that of the silica and silica+Lv-shCD36-NC groups at 21 and 28 days after instillation. Each bar represents the mean ± SEM. *P < 0.05, as compared to the saline control group; ΔP < 0.05, as compared to the silica group; #P < 0.05, as compared to the silica+Lv-shCD36-NC group. The data represent the means from experiments done in eight rats.

Silicosis patients have usually been exposed to low doses of silica dust over a long period. Silicosis is a chronic and progressive pathological reaction, and the pathological process of silicosis occurs and develops repeatedly. We therefore presume that CD36 also repeatedly participates in the activation of L-TGF-β1 and in the pathological process of silicosis. These results provide a new molecular basis for understanding the latency and activation of L-TGF-β1, which should aid in the design of novel strategies to suppress silica-induced lung fibrosis by modulating inappropriate levels of TGF-β1 activity.

Conclusion
We have shown that silencing expression of CD36 inhibits activation of L-TGF-β1, which results in reduced hydroxyproline content and collagen synthesis and thereby prevents the development of lung fibrosis. These effects may act through suppression of the association of the TSP-1/L-TGF-β1 complex with CD36. Our data support the view that CD36 may contribute to the control of the activation of L-TGF-β1 and, therefore, that silencing expression of CD36 could inhibit the development of silica-induced lung fibrosis.

Competing interests
The authors declare that they have no competing interests.

Figure 5. (A) The IOD averages of collagen I in the silica, silica+Lv-shCD36 and silica+Lv-shCD36-NC groups were significantly higher than those of the saline control group at all three time points. The IOD averages of collagen I in the silica+Lv-shCD36 group were significantly lower than those of the silica and silica+Lv-shCD36-NC groups. (B) The IOD averages of collagen III in the silica, silica+Lv-shCD36 and silica+Lv-shCD36-NC groups were significantly higher than those of the saline control group at all three time points. The IOD averages of collagen III in the silica+Lv-shCD36 group were significantly lower than those of the silica and silica+Lv-shCD36-NC groups. Each bar represents the mean ± SEM. *P < 0.05, as compared to the saline control group; ΔP < 0.05, as compared to the silica group; #P < 0.05, as compared to the silica+Lv-shCD36-NC group. The data represent the means from experiments done in six rats.
THE INTERACTION BETWEEN TEACHING COMPETENCIES AND SELF-EFFICACY IN FOSTERING ENGAGEMENT AMONGST DISTANCE LEARNERS: A PATH ANALYSIS APPROACH

Purpose - Distance learners are expected to participate actively in online learning settings to improve their cognitive level and promote more meaningful learning. However, without specific teaching skills and competencies on the part of instructors, together with the beliefs and capabilities of the distance learners themselves, engagement in online learning will not be achieved. Few studies have examined the extent to which self-efficacy encourages student engagement in learning, especially within online learning settings. Thus, this study examined self-efficacy as a moderator to test its influence on the relationship between online teaching competencies and student engagement.

Methodology - This quantitative research was conducted using the purposive sampling technique. The study involved 321 distance learners from a Malaysian public university. The questionnaire was created using SurveyMonkey, and the measurement items were adopted from past research with acceptable reliability. The study utilised partial least squares (PLS) 3.0 to test the hypotheses via correlation and path analysis.

Findings - Contrary to expectations, the findings contributed to the literature by showing that online teaching competencies and self-efficacy were not significantly related to student engagement. The association between online teaching competencies and student engagement was, however, moderated by self-efficacy. This finding is aligned with Bandura's (2001) social cognitive theory, which states that personal factors such as self-regulation, self-efficacy, and interest are impacted in the distance education context.

Significance - The results of this study can benefit online course instructors in Malaysian distance educational institutions in developing courses that enhance online learners' self-efficacy. The online teaching competencies employed by online distance learners can be the primary objective when developing faculty development programmes that aim to coach online instructors to be competent in online teaching. Moreover, institutions are encouraged to introduce other online learning platforms and to facilitate training of practitioners in order to accelerate successful online teaching and learning experiences.

INTRODUCTION
There has been significant growth in distance education in recent years, as evidenced by the rise in registration rates (Bigatel et al., 2012). This indicates a necessity to design flexible learning settings that fulfil students' needs in online learning. The growth in this sector has underscored the importance of the online learning experience for learner achievement.
Students' ratings of teaching quality and effectiveness are regarded as a vital indicator for discouraging high attrition rates (Bigatel et al., 2012). Online instructors are encouraged to be proficient in a variety of skills and abilities in order to teach successfully in multifaceted, technology-incorporated settings and to ensure learner achievement. Hence, teaching behaviours must be identified and emphasised so as to inform instructors of the abilities and competencies necessary for effective online teaching.

The online learning setting differs significantly from the traditional learning context. Online learners are not required to attend classrooms physically and are not given opportunities to participate in physical interactions with peers and course instructors. Online learners have to be independent, as they must manage their own learning pace (Lasfeto, 2020). Hence, self-efficacy and the ability to make full use of online learning technology are crucial for online course completion. Among the vital skills are, for instance, the use of e-mail and discussion boards and proficiency in Internet searching and browsers. When students develop an apprehension of computer technologies, they may feel confused, anxious, and frustrated over losing personal control, which can lead them to withdraw from the technology entirely (Broadbent & Poon, 2015).

Student engagement is an essential facet of student learning and satisfaction with online programmes. This aspect has been examined thoroughly in distance online learning research for many years. Student engagement can be described as "the student's psychological investment in an effort directed towards learning, understanding, or mastering the knowledge, skills, or crafts that academic work is intended to promote" (Ahmed et al., 2018). It is vital to maintain student engagement in online learning settings, as online learners are not given as many opportunities to communicate with their institutions as learners in classroom settings. Therefore, there is a need to create numerous opportunities to promote students' online engagement. According to Martin and Bolliger (2018), the necessity for engagement has led to the creation of guidelines for designing successful online courses. The purpose of the engagement technique is to deliver constructive experiences consisting of active learning opportunities that encompass team collaboration, resource sharing, discussions and presentations, case studies and reflection, and completing assignments with hands-on components. Furthermore, engagement is seen as the answer to mitigating worrying issues in online learning such as student isolation, dropout, poor retention, and declining graduation rates (Banna et al., 2015).

Meyer (2014), Banna et al. (2015), and Britt et al. (2015) have highlighted the significance of engagement in successful online learning. These researchers showed that student engagement is effective in increasing the effort learners invest in attaining cognitive development and knowledge construction, contributing to higher achievement scores. Banna et al.
(2015) stated that although content was previously the key emphasis, engagement has a critical function in motivating online learning. Four fundamental engagement techniques for online learning have been identified to encourage student engagement: emotions, skills, participation, and performance. Communication with peers and course instructors, as well as with content, encourages active learning and higher student engagement (Lear et al., 2010). Interactivity and a sense of community generate effective instruction and learning outcomes.

In Malaysia, the focus of studies in distance education has included students' online reading strategies (Jusoh & Abdullah, 2015), adult distance learners' difficulties in improving their English language proficiency (Sai et al., 2013), building a social presence in the online environment (Zaini & Ayub, 2013), the effect of online writing platforms on the scores of learners' narrative writing (Annamalai et al., 2013), and the personalities of adult students in distance education (Mat Zin, 2012). Nevertheless, few studies have examined the interaction between online teaching competencies and self-efficacy in boosting student engagement. The need for self-efficacy to moderate the relationship is critical because a person's level of engagement in a task depends on the learner's belief in his or her capability to excel in the given task (Bandura, 1997). Moreover, self-efficacy is seen to drive behaviour, particularly in determining whether a student will attempt a behavioural task and sustain the effort to continue it, which signals engagement; this is in contrast to self-regulated learning, which comprises the motivational and learning strategies that students employ to attain desired goals (Zimmerman, 1989).

The role of online teaching competency in promoting student engagement is a critical subject, since enrolment has increased gradually over the years. It is therefore worthwhile to examine whether online teaching competencies and self-efficacy have a significant relationship with online student engagement amongst distance learners, and whether self-efficacy moderates the relationship between online teaching competencies and online student engagement. This study makes a significant contribution by guiding Malaysian distance educators in developing courses that can enhance self-efficacy among online distance learners. The online teaching competencies employed by online distance learners can be the main objective in faculty development programmes that aim to coach online instructors to be competent in online teaching. Moreover, institutions should set up other online learning platforms to assist practitioners in achieving successful online teaching and learning experiences.

Research Questions
To investigate this issue further, the following research questions were posed:
1. Do online teaching competencies have a significant relationship with online student engagement?
2. Does self-efficacy have a significant relationship with online student engagement?
3. Does self-efficacy moderate the relationship between online teaching competencies and online student engagement?
Online Teaching Competencies
Numerous sets of online teaching competencies currently describe online teachers' best practices (Albrahim, 2020; Gurley, 2018). Nevertheless, a review of these identified competencies reveals certain inconsistencies among the models focusing on online teachers. This is not surprising, because online learning exists in a significantly different context. Baran et al. (2013) stated that the vital roles and competencies expected of online teachers often conflicted in the literature and depended on the online teaching context. Thus, the unpredictable learning environment necessitates that educators possess various competencies. Thomas and Graham (2017) reported that previous research had assessed different online teaching competencies and identified course design as the most widely emphasised competency factor. Nonetheless, Bigatel et al. (2012) listed online teaching competencies that focused solely on teaching behaviours. Working with evaluators, course developers, online learning instructors, and academicians, these scholars compiled a list of 64 online teaching behaviours, defined as tasks that online instructors performed. A total of 197 respondents rated their agreement with each item on a 7-point Likert scale, indicating the tasks they felt to be most critical in an online teaching course. The study used exploratory factor analysis to cluster the tasks into seven groups of competencies: (1) administration/leadership, (2) active learning, (3) multimedia technology, (4) active teaching/responsiveness, (5) technological competence, (6) policy enforcement, and (7) classroom decorum. Bigatel et al. (2012) thereby introduced a model to explain educators' teaching behaviours during course delivery. The model does not emphasise the factor of course design. Hence, this model became the basis for the current study. The model may have limitations; its accuracy can be determined via validity checks, or ways to improve it can be suggested.

Self-Efficacy
Yavuzalp and Bahcivan (2020) described self-efficacy as a critical competence belief in self-regulatory control processes. Bandura (2001) explained that perceived self-efficacy is an individual's belief in his or her capability to organise a course of action necessary to manage a prospective situation. Hence, self-efficacy is an individual's belief in his or her capability to perform in a particular domain. Furthermore, it affects learning, motivation to learn, and achievement (Van Dinther et al., 2011). This viewpoint stipulates that learners who possess positive self-efficacy towards online learning have a higher chance of achievement and stronger learning motivation. Beyond self-efficacy, mastering online learning technology skills is also essential; for example, a person must be proficient in using e-mail, discussion boards, and Internet searches. Learners with lower proficiency in computer-based tools may experience confusion, anxiety, frustration, loss of personal control, and the intention to give up (Broadbent & Poon, 2015). Nevertheless, the literature has shown mixed results on the link between technology self-efficacy, online programme satisfaction, and students' achievement. Technological self-efficacy has been viewed as a weak predictor of online programme satisfaction and final performance (Kuo et al., 2014; Puzziferro, 2008).
Conversely, technology self-efficacy has also been reported to have a positive connection with online learning performance (Bradley et al., 2017; Olson & Appunn, 2017).

Online Student Engagement
Student engagement is vital to minimising learner withdrawal and reducing dropout rates. It is also an essential facet of retaining online learners and improving graduation rates (Banna et al., 2015). Ahmed et al. (2018) stated that student engagement is "the student's psychological investment in an effort directed towards learning, understanding, or mastering the knowledge, skills, or crafts that academic work is intended to promote." Generally, students interact with instructional content, classmates, and course instructors. Bolliger and Halupa (2018) listed three student engagement domains: cognitive, emotional, and behavioural. The cognitive domain encompasses learners' beliefs and principles about themselves and their styles of learning. The emotional domain includes factors such as feelings and motivation. The behavioural domain concerns habits, for example procrastination, and learning skills such as reading, studying, and writing. Dixson (2010, 2015) constructed the Online Student Engagement (OSE) scale, which involves significantly connected variables: skills, emotions, performance, and participation.

Martin and Bolliger (2018) recorded seven best practices regarding engagement: (1) student/faculty contact, (2) active learning, (3) cooperation, (4) prompt feedback, (5) emphasis of time on task, (6) developing high student expectations, and (7) respecting diversity in online classrooms. Dixson (2015) asserted that, for many students, learning is a social activity; learners regarded engagement, which reduces transactional distance, as the application of learned materials. Dixson claimed that merely reading posts, e-mails, content, and so on would not be adequate for engagement in an online course. Numerous online courses are conducted asynchronously; nonetheless, these courses can be made effective via discussion forums and e-mails. Teaching instructors can encourage respect for diversity and collaboration when a safe learning environment is developed for learners. Instructors who can precisely determine the amount of time their online learners will need to engage with and assimilate content can raise expectations and help guarantee that learners will succeed. Kuh (2009) suggested that these principles must be employed continually in online learning.

Underlying Theory
The theoretical framework of this study draws on Bandura's (2001) social cognitive theory. This research proposes that online teaching competency is an essential predictor of engagement, which emphasises the significance of integrating online activities to promote student engagement. This is linked with social cognitive theory which, as explained by Bandura (2001), views human learning as triadic reciprocal interactions among personal, behavioural, and environmental factors. Personal factors such as self-regulation, self-efficacy, and interest can affect the context of distance education. These factors are influenced by the learners' behaviour and the learning environment through reciprocal interaction. As such, online activities offer a chance to interact in an online environment and improve students' drive and engagement.
Figure 1 shows the research framework, which includes an independent variable (online teaching competencies), a dependent variable (online student engagement) and a moderating variable (self-efficacy).

Figure 1. Research Framework

Based on the research framework, the hypotheses proposed in this study are as follows:
H1: Online teaching competencies have a positive influence on online student engagement.
H2: Self-efficacy has a positive influence on online student engagement.
H3: Self-efficacy moderates the relationship between online teaching competencies and online student engagement.

Research Design
Schwarz and Oyserman (2001) stated that asking questions, or requesting a person to react to statements about their preferences or behaviour, is one of the preferred approaches in the behavioural and psychological fields. Moreover, the approach of asking people directly for information about a construct is well documented in social science research (Schwarz & Oyserman, 2001). Individuals are the best source of information about themselves; thus, the way to investigate their beliefs and feelings is to ask them. Accordingly, the self-report approach was used in this research, and the variables were measured at the individual level.

Sampling Technique
This research was conducted using the purposive sampling technique, as it examined online teaching competencies, self-efficacy, and student engagement amongst distance learners. Sekaran and Bougie (2010) described purposive sampling as selecting participants who are best placed to provide the required information or who meet specified sampling criteria. In this investigation, the researcher chose distance learners who (1) had active student status during the academic session 2018/2019 and (2) had progressed to at least the second year of the programme. The inclusion criteria increased the likelihood that online teaching competencies was a salient matter for the students and improved the accuracy of the student engagement measures. Demographic questions covering gender, age, tenure with organisation, current employment status, current year in the programme, and the number of online courses taken were included to better characterise the sample.

Population and Sample Size
The target population of the study was undergraduate distance learners from a Malaysian public university. A total of 500 students were selected to participate in the online survey, as they were taking a major course taught by the researcher. The guideline detailed by Hair et al. (2010) was used to set the minimum sample size required for data collection. According to this guideline, the recommended minimum number of samples is at least five times the number of measured variable items; a more suitable ratio is 10 participants per item. The total number of items measuring all variables in this investigation was 57 (the arithmetic is sketched below). Therefore, the study set the acceptable minimum sample size at 285 participants (5 × 57).

Data Collection Procedure
The questionnaire was created using SurveyMonkey.com, an online survey tool. Those who declined to participate in this study were not penalised by the lecturer. A pre-test was not needed, as each measurement item used was taken from past empirical research with acceptable reliability.
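As a sketch of the Hair et al. (2010) rule of thumb applied here, using the item counts reported in the Research Instrument section below:

```python
# Minimal sketch of the minimum-sample-size arithmetic: at least 5
# respondents per measured item, with 10 per item being preferable.
items = {"online teaching competencies": 30,
         "self-efficacy": 8,
         "online student engagement": 19}
total_items = sum(items.values())                     # 57 items in total
print("minimum n (5:1 rule):", 5 * total_items)       # 285
print("preferred n (10:1 rule):", 10 * total_items)   # 570
```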
Although a pre-test was unnecessary, it was still important to conduct a pilot test to determine the clarity of the statements in the questionnaire and to facilitate the questionnaire distribution process (Malhotra et al., 2006). A total of 14 sets of questionnaires were e-mailed to the selected students, who were chosen to participate in the pilot test because they took a major course with the researcher. They were nonetheless omitted from the final sample, since they had seen the earlier version of the questionnaire, which could cause them to respond differently in the actual study. The questionnaire was improved and finalised based on the responses gathered in the pilot test. The researcher made minor modifications to the measurement items, providing specific examples of the terminologies used in the questionnaire to fit the academic and student context and to further enhance understanding.

During the actual data collection, a total of 372 questionnaires were e-mailed to students in major courses taught by the researcher. Of these 372 questionnaires, only 321 were usable. A total of 51 questionnaires were removed owing to (1) incomplete data (44 cases) and (2) participants being first-year students in the programme (7 cases). Hence, 321 cases were analysed, more than the minimum of 285 recommended by Hair et al. (2010). The criterion of an acceptable sample size of 5 respondents per variable item was therefore fulfilled.

Research Instrument
The measurement items in this study were sourced from past investigations with acceptable reliability. A 30-item scale (Bigatel et al., 2012) was utilised to assess online teaching competencies. Sample items included "The instructor encourages students to interact with each other by assigning team tasks and projects, where appropriate" and "The instructor monitors students' adherence to academic integrity policies and procedures." In the current research, Cronbach's alpha for this scale was 0.91. Artino and McCoach's (2008) eight-item scale was used for self-efficacy. Sample items included "I believe I will receive an excellent grade in this class" and "Considering the difficulty of this course, the teacher, and my skills, I think I will do well in this class." Cronbach's alpha was 0.86 in the current investigation. The 19-item scale by Dixson (2010, 2015) was used to study student engagement in the context of online learning. Sample items included "Making sure to study on a regular basis" and "Doing well on tests/quizzes." Cronbach's alpha in this research was 0.89. Each item used a 5-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree.

Data Analysis
To inspect the research model, the study used partial least squares (PLS) analysis. The analysis technique was adapted from the two-step approach of Anderson and Gerbing (1988). The first step involved verifying the measurement model (reliability and measurement validation); next, the structural model was verified to examine the hypothesised relationships. The two-step analysis was carried out with SmartPLS M2 version 2.0. The bootstrap approach (500 resamples) was also applied to identify the weights, significance levels of loadings, and path coefficients.
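The scale reliabilities above were reported as Cronbach's alpha. As a minimal sketch (assumed, not the authors' code) of the standard formula, alpha = k/(k-1) × (1 - sum of item variances / variance of the summed scale):

```python
# Minimal sketch of the Cronbach's alpha computation on a respondents x
# items matrix of 5-point Likert responses.
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    k = data.shape[1]                           # number of items in the scale
    item_var = data.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = data.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Hypothetical data shaped like the 8-item self-efficacy scale; random
# (uncorrelated) responses will give alpha near zero, whereas the real
# scales above showed alphas of 0.86-0.91.
data = rng.integers(1, 6, size=(321, 8))
print(f"alpha = {cronbach_alpha(data):.2f}")
```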
Demographic Results
Of the 321 respondents, 62 percent were female and 38 percent were male. The average age of the respondents was 31 years, with a five-year average tenure at their respective organisations. All respondents were employed full-time and were in their second year or beyond in the programme. The average number of courses taken by the respondents was 30. The respondents held a variety of careers at different organisational levels (managerial, supervisory, and operational) and came from several types of organisations: services, manufacturing, and government/non-profit.

Descriptive Statistics of the Latent Constructs
Based on Table 1, the mean values of the three latent variables ranged between 4.33 and 4.78, and the standard deviations ranged between 0.77 and 0.95. These results were generated from a 5-point Likert-type scale, and all mean values surpassed the midpoint value of 2.50. Self-efficacy had the highest mean value (4.78), while online teaching competencies showed the lowest (4.33). In terms of dispersion, the highest standard deviation was observed for online teaching competencies (0.95) and the lowest for student engagement (0.77).

Common Method Variance
To assess the extent of common method bias, this study applied Harman's single factor test. The unrotated principal component factor analysis indicated that the first factor accounted for less than 50 percent of the total variance explained. Following Podsakoff et al. (2003), this suggests the absence of common method bias in this study.

Assessment of the Measurement Model
The measurement model was assessed using convergent and discriminant validity. Convergent validity was examined via indicator loadings, composite reliability (CR), and average variance extracted (AVE). The indicator loadings and CR were above 0.7, while the AVE values were above 0.5, conforming to the recommended thresholds in the literature (Table 2).

Subsequently, discriminant validity was tested. The literature has found the Fornell-Larcker (1981) criterion unsuitable for detecting a lack of discriminant validity in typical research contexts (Henseler et al., 2015). Therefore, the heterotrait-monotrait (HTMT) ratio of correlations, based on the multitrait-multimethod matrix, was deemed a suitable alternative for assessing discriminant validity; Henseler et al. (2015) used Monte Carlo simulation to demonstrate the superiority of this technique. Hence, this study used this newer method (Table 3) to investigate the discriminant validity of the model. There are two current strategies for employing HTMT to determine the presence of discriminant validity: a criterion and a statistical test. Under the criterion strategy, discriminant validity is problematic when the HTMT value surpasses the HTMT.85 threshold of 0.85 (Kline, 2015) or the HTMT.90 threshold of 0.90 (Gold et al., 2001). The second strategy, based on Henseler et al. (2015), tests the null hypothesis (H0: HTMT ≥ 1) against the alternative hypothesis (H1: HTMT < 1); discriminant validity is questionable if the confidence interval contains one (i.e., if H0 holds). Table 3 shows that each value met the HTMT.90 (Gold et al., 2001) and HTMT.85 (Kline, 2015) criteria. Moreover, the HTMT inference established that the confidence interval did not contain 1 for any construct pair, indicating that discriminant validity was established.
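As a sketch of how the HTMT ratio used above is computed (the formula follows Henseler et al., 2015; the correlation matrix below is hypothetical): HTMT is the mean absolute heterotrait-heteromethod correlation divided by the geometric mean of the two constructs' average monotrait-heteromethod correlations.

```python
# Minimal sketch of the HTMT ratio from an item-level correlation matrix.
import numpy as np

def htmt(corr: np.ndarray, items_a: list, items_b: list) -> float:
    # Mean absolute correlation between items of construct A and B.
    hetero = np.abs(corr[np.ix_(items_a, items_b)]).mean()
    def mono(items):
        # Mean absolute correlation among items of the same construct
        # (off-diagonal entries only).
        block = np.abs(corr[np.ix_(items, items)])
        iu = np.triu_indices(len(items), k=1)
        return block[iu].mean()
    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Hypothetical 4-item correlation matrix: items 0-1 form construct A,
# items 2-3 form construct B.
corr = np.array([[1.0, 0.7, 0.3, 0.2],
                 [0.7, 1.0, 0.25, 0.3],
                 [0.3, 0.25, 1.0, 0.65],
                 [0.2, 0.3, 0.65, 1.0]])
print(f"HTMT = {htmt(corr, [0, 1], [2, 3]):.2f}")  # below 0.85 here
```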
Assessment of the Structural Model
The findings revealed that only one of the three hypotheses was supported: self-efficacy moderated the relationship between online teaching competencies and student engagement (β = 0.328, p < .01). This moderation result, and how such an effect is tested, are illustrated after this section.

DISCUSSION
This paper set out to test the associations between self-efficacy, teaching competencies, and student engagement, and to measure the effect of self-efficacy as a moderator of the association between online teaching competencies and student engagement. Contrary to the proposed hypothesis, the research revealed that online teaching competencies were not significantly related to student engagement; that is, online teaching competencies did not directly affect student engagement. The non-significant relationship between the teaching competencies construct and the student engagement construct could be attributed to the profile of the respondents, who were experienced distance learners: 69 percent of the respondents were in the third year of their programme. This finding implies that distance learners are competent in using the e-learning portal as a practical learning tool and are adept at online communication (Rahim, 2020). Additionally, since distance education is pursued via the e-learning portal, the distance learners have regularly posted updates in their discussion forums and actively participated in small group discussions via other media of communication such as WhatsApp (Tsai et al., 2021). Hence, in line with Rajabalee and Santally (2021), even though they may be overwhelmed by an overload of knowledge and information, their experience in the online programme and their determination to succeed may mean that extensive reading and numerous information-dense modules are not a problem for them. This outcome enriches the literature by indicating that online teaching competencies do not, by themselves, influence student engagement.

Next, the findings also demonstrated that self-efficacy was not significantly linked to student engagement; in other words, self-efficacy did not directly affect student engagement. The insignificant association between self-efficacy and student engagement may likewise be traced to the respondents of this study, the distance learners. All the respondents were working adults; as supported by Rahim (2019), this finding implies that distance learners find the distance education programme attractive because they can combine their work experience with newly learnt knowledge to do an excellent job on assignments and to achieve excellent grades. Also, as the distance education programme is related to their current work, this finding is in line with Landrum (2020), which indicated that such a programme gives learners a practical understanding of their work, besides strengthening their motivation to enhance their current career and professional direction once they have completed their studies. Thus, as supported by Prifti (2020), even though they may struggle with bandwidth and connectivity limitations, which can produce annoyance and apathy amongst distance learners and affect the ease of learning, their high level of self-motivation may allow them to migrate calmly from the conventional learning approach to the newer e-learning approach. Therefore, these findings add to the current body of literature by showing that self-efficacy alone does not impact student engagement.
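As a generic illustration of how a moderation effect such as H3 can be probed, here is a sketch using ordinary least squares with a mean-centred product term on simulated data. This is not the authors' SmartPLS procedure or data; the coefficients in the simulation are arbitrary values chosen only to echo the reported interaction, and a significant product-term coefficient indicates moderation.

```python
# Minimal sketch: testing moderation with an OLS interaction term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 321
otc = rng.normal(0, 1, n)      # online teaching competencies (mean-centred)
se = rng.normal(0, 1, n)       # self-efficacy, the moderator (mean-centred)
# Simulated outcome: weak main effects, sizeable interaction (arbitrary).
engage = 0.05 * otc + 0.05 * se + 0.33 * otc * se + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([otc, se, otc * se]))
model = sm.OLS(engage, X).fit()
print(model.summary(xname=["const", "OTC", "SE", "OTC x SE"]))
```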
The moderation analysis revealed a compelling finding: self-efficacy moderated the relationship between online teaching competencies and student engagement. This result suggests that distance learners can control and manage their online learning environment, participate actively in online activities, and persistently complete online tasks. The finding concurs with earlier research: Ong et al. (2019) indicated that distance learners with self-efficacy achieve better engagement in their learning, and Sökmen (2021) found that self-efficacy may facilitate learner engagement. Moreover, self-efficacy in online learning technology is positively associated with student motivation to use online learning technology (Wang et al., 2013). This finding is aligned with Bandura's (2001) social cognitive theory, which states that personal factors such as self-regulation, self-efficacy, and interest are of relevance in the distance education context. Therefore, with 69 percent of respondents in the third year of their programme, it is plausible that, with greater experience of online learning, distance learners are more motivated and confident during online learning and thus more enthusiastic about their online courses. Consequently, they are highly engaged in online learning.

CONCLUSION
The findings of this study contribute to the literature through the moderating effect of self-efficacy on the relationship between online teaching competencies and student engagement. These findings differ from past studies, as this study was conducted amongst distance learners within the Malaysian context. Furthermore, the results carry a vital implication for improving distance learners' engagement, namely the development of self-efficacy as an essential personal psychological resource. One implication is that distance learners with high self-efficacy can engage in challenging online activities in the online learning environment, whereas distance learners with low self-efficacy may become socially isolated and eventually disengage. Therefore, distance learners are encouraged to employ the strategies mentioned for managing online courses, as compared to traditional classes; for example, assigning a particular venue or time to focus on assigned tasks and learning materials when engaging in the learning process.

Furthermore, this study suggests that online instructors develop courses that encourage high self-efficacy amongst distance learners. It is thus critical for instructors to acquaint themselves with the learning environment and platform in order to assist distance learners. Instructors can offer introductory workshops that cover the knowledge students need during the initial stages of online classes and can offer constructive feedback. Based on this study, it can be stated that online learning still has its shortcomings, especially in terms of student engagement. Therefore, the recommendations based on these findings should be conveyed to online instructors.
Moreover, instructors are encouraged to upgrade their online teaching competencies by: allowing students to contribute their knowledge and expertise to the learning community; delivering supplementary resources that inspire students to make more meaningful connections with the course content; practising respect for students during communication; creating visibility during grading to help students keep track of their own progress; giving prompt, constructive feedback on assignments and exams that enhances learning; showing consideration and concern to ensure students learn the course content, such as helping to sort out issues that arise during teamwork or group assignments; and efficiently managing course communication through appropriate, acceptable conduct.

Distance education institutions are also vital, as they integrate learner support, learner activities, and learning resources in online learning settings. They need to develop strategies to ensure that students are prepared to optimise instruction-related technologies, and to identify how introductory exercises can be constructed to improve learning efficacy and mitigate anxiety. Furthermore, these institutions are encouraged to offer conducive and convenient online learning platforms to enhance learners' intention and self-efficacy to participate in online learning programmes. Training should also be conducted regularly to improve the familiarity of learners and instructors with the available online learning platforms.

This research has several limitations. First, this study utilised self-reported data, which required Harman's single factor test to assess the potential risk to result interpretation. Second, as this research used a cross-sectional method, the outcomes may differ from those of a longitudinal design. Third, a relatively small sample was utilised. It would therefore be beneficial for future research to utilise comparative research designs to investigate the relationships between teaching competencies, self-efficacy, and student engagement across various online distance learning institutions in Malaysia. Interviews and focus groups with distance learners are recommended in future work to identify other elements that could improve teaching competencies, self-efficacy, and student engagement in Malaysian private higher educational institutions. In terms of research design, longitudinal studies provide more solid inferences and better indications and would thus be more beneficial. Future studies could also compare private and public higher educational institutions to identify dominant cultures in online learning. In addition, future research could examine the generalisability of the results of this study to other settings in Malaysia.
Gas Accretion in Star-Forming Galaxies

Cold-mode gas accretion onto galaxies is a direct prediction of ΛCDM simulations and provides galaxies with fuel that allows them to continue to form stars over the lifetime of the Universe. Given its dramatic influence on a galaxy's gas reservoir, gas accretion must be largely responsible for how galaxies form and evolve. Given the importance of gas accretion, it is therefore necessary to observe and quantify how these gas flows affect galaxy evolution. However, observational data have yet to conclusively show that gas accretion ubiquitously occurs at any epoch. Directly detecting gas accretion is a challenging endeavor, although we have now obtained a significant amount of observational evidence to support it. This chapter reviews the current observational evidence of gas accretion onto star-forming galaxies.

Introduction

Cosmological simulations unequivocally predict that cold accretion is the primary growth mechanism of galaxies. The accretion of cold gas occurs via cosmic filaments that transport metal-poor gas onto galaxies, providing their fuel to form stars. Observationally, it is quite clear that galaxy gas-consumption timescales are short compared to the age of the Universe; therefore, galaxies must acquire gas from their surroundings to continue to form stars. However, observational data have yet to conclusively show that gas accretion flows are ubiquitously occurring in and around galaxies at any epoch. Since the first discoveries of circumgalactic gas around star-forming galaxies (Boksenberg & Sargent, 1978; Kunth & Bergeron, 1984; Bergeron, 1986), we have wondered where this gas comes from and whether, and how, it drives galaxy growth and evolution.

This chapter reviews the current observational evidence and signatures of cold accretion onto star-forming galaxies. In the following sections, we review the data that suggest accretion is occurring and cover the main topics of circumgalactic gas spatial distribution, kinematics, and metallicity. In a few cases, the combination of all of the above provides the most tantalizing evidence of cold accretion to date. This chapter primarily focuses on observations using background quasars or galaxies as probes of the circumgalactic medium around intervening foreground galaxies.

The Spatial Distribution of the Circumgalactic Medium

In order for galaxies to continuously form stars throughout the age of the Universe, they must acquire a sufficient amount of gas from their surroundings. In fact, roughly 50% of a galaxy's gas mass is in the circumgalactic medium (Zheng et al., 2015; Wolfe et al., 2005). Using background quasars as probes of gas surrounding foreground galaxies, we have discovered that galaxies have an abundance of multiphased circumgalactic gas.
Circumgalactic Gas Radial Distribution

The quantity and extent of the circumgalactic medium has been traced using a range of absorption features such as Ly α (Tripp et al., 1998; Chen et al., 2001b; Wakker & Savage, 2009; Chen & Mulchaey, 2009; Steidel et al., 2010; Stocke et al., 2013; Richter et al., 2016), Mg II (Steidel et al., 1994; Steidel, 1995; Guillemin & Bergeron, 1997; Zibetti et al., 2007; Kacprzak et al., 2008; Chen et al., 2010; Nielsen et al., 2013a), C IV (Chen et al., 2001a; Adelberger et al., 2005; Steidel et al., 2010; Bordoloi et al., 2014a; Liang & Chen, 2014; Burchett et al., 2015; Richter et al., 2016), and O VI (Savage et al., 2003; Sembach et al., 2004; Stocke et al., 2006; Danforth & Shull, 2008; Tripp et al., 2008; Wakker & Savage, 2009; Prochaska et al., 2011; Tumlinson et al., 2011; Johnson et al., 2013; Stocke et al., 2013; Johnson et al., 2015; Kacprzak et al., 2015). These studies have all shown that, regardless of redshift (at least between z = 0–3), galaxies typically have hydrogen gas detected out to ∼500 kpc, with metal-enriched gas within 100–200 kpc. Furthermore, the data show an anti-correlation between equivalent width and impact parameter, with the covering fraction being unity close to the galaxy and declining with increasing distance. This is demonstrated in Figure 1, which shows the ∼200 kpc extent of Mg II absorbing gas around typical galaxies from the MAGIICAT catalog (Nielsen et al., 2013b), along with the well-fit anti-correlation between equivalent width and impact parameter (log W_r(2796) = −0.015 × D + 0.27). Note also that the gas covering fraction, for an equivalent width limit of 0.3 Å, is roughly unity near the galaxy and decreases to about 20% beyond 100 kpc.

Fig. 1 Top: the rest-frame Mg II equivalent width as a function of impact parameter for the MAGIICAT catalog of "isolated" galaxies (Nielsen et al., 2013a,b). The closed symbols are detections while open symbols are 3σ upper limits. The fit and 1σ confidence levels are shown (log W_r(2796) = [−0.015 ± 0.002] × D + [0.27 ± 0.11]). Note the large extent of gas surrounding galaxies, which raises the question of the origin of this gas and whether it is some combination of gas outflows and accretion. Bottom: the radial decline of the gas covering fraction as a function of impact parameter, for an equivalent width detection limit of 0.3 Å. Image courtesy of Nikole Nielsen.

Interestingly, Richter (2012) demonstrated, using the distribution of high-velocity clouds around the Milky Way and M31, that high-velocity clouds could give rise to the majority of the absorption systems seen around other galaxies. The accretion rate of high-velocity gas at z = 0 is almost equivalent to the star formation density of the local Universe, and thus, at least at low redshifts, high-velocity clouds could provide a significant fraction of the gas mass accreted onto galaxies (see the chapter by Philipp Richter for further discussion of gas accretion onto the Milky Way).

Simulations predict that gas accretion should occur via a "hot" or "cold" mode, depending on whether a galaxy is above or below a critical halo mass ranging between log(M_h) = 11–12 (Birnboim & Dekel, 2003; Kereš et al., 2005; Dekel & Birnboim, 2006; Ocvirk et al., 2008; Brooks et al., 2009; Dekel et al., 2009; Kereš et al., 2009; Stewart et al., 2011; van de Voort et al., 2011). A repercussion of these models is that the covering fraction of cool accreting gas should drop significantly, to almost zero, for massive galaxies (Stewart et al., 2011).
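To make the MAGIICAT fit quoted above concrete, the minimal Python sketch below evaluates the relation at a few impact parameters (with the negative slope that the anti-correlation requires); the 0.3 Å crossing point is illustrative only, since the fit describes a median trend rather than the covering fraction itself.

```python
import numpy as np

# MAGIICAT anti-correlation between Mg II rest-frame equivalent width and
# impact parameter (Nielsen et al. 2013): log W_r(2796) = -0.015 D + 0.27,
# with W_r in Angstroms and D in kpc.
SLOPE, INTERCEPT = -0.015, 0.27

def wr_2796(d_kpc):
    """Median Mg II W_r(2796) in Angstroms at impact parameter d_kpc."""
    return 10.0 ** (SLOPE * np.asarray(d_kpc, dtype=float) + INTERCEPT)

for d in (25, 50, 100, 200):
    print(f"D = {d:3d} kpc  ->  W_r(2796) ~ {wr_2796(d):.3f} A")

# The fitted median drops below the 0.3 A detection limit near D ~ 53 kpc,
# qualitatively consistent with the covering fraction falling from ~unity
# near the galaxy to ~20% beyond 100 kpc.
d_cross = (np.log10(0.3) - INTERCEPT) / SLOPE
print(f"median relation crosses 0.3 A at D ~ {d_cross:.0f} kpc")
```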
However, observational evidence shows that the covering fraction of cold gas is constant over a large range of halo masses, 10.7 ≤ log(M_h) ≤ 13.9, within a given impact parameter (or impact parameter normalized by the virial radius), and that gaseous galaxy halos are self-similar (Churchill et al., 2013a,b). This suggests that either outflows and/or other substructures contribute to absorption in high-mass halos, such that low- and high-mass gas halos are observationally indistinguishable, or the data indicate that predictions of a mass-dependent shutdown of cold-mode accretion may require revision. This area needs to be examined further in order to address the discrepancy between observations and simulations.

Although the average radial gas profiles around star-forming galaxies are well quantified, they do not provide much insight into the nature of the circumgalactic gas. Thus, it is critical to determine the geometric distribution of gas relative to its host galaxy to help improve our understanding of its origins.

Circumgalactic Gas Spatial Distribution

In the mid-90s, the exploration of circumgalactic medium geometry started with Steidel (1995), who acquired a large sample of 51 galaxy–quasar pairs. The data were suggestive that, independent of galaxy spectroscopic/morphological type, Mg II gas resided within a spherical halo with unity gas covering fraction. The data fit well to a Holmberg-like luminosity scaling between a characteristic halo radius and galaxy K-band luminosity. Even Steidel noted that spherical halos were likely a tremendous over-simplification of the true situation; however, the data did not disprove it. Using simple geometric models, Charlton & Churchill (1996) determined that both spherical halos and extended monolithic thick-disk models could be made consistent with the data available at the time. They suggested that the kinematic structure of the absorption profiles could be used to further constrain the gas geometry, which we discuss further in Section 3.

Cosmological simulations commonly show that gas accretion should occur along filaments that are co-planar to the galaxy disk, whereas gas outflows are expected to be expelled along the galaxy projected minor axis (e.g., Shen et al., 2012; Stewart et al., 2013). Reminiscent of Charlton & Churchill (1996), Kacprzak et al. (2011b) reported that the Mg II equivalent width measured from high resolution quasar spectra was dependent on galaxy inclination, suggesting that the circumgalactic medium has a co-planar geometry that is coupled to the galaxy inclination. It was noted, however, that the absorbing gas could arise from tidal streams, satellites, filaments, etc., which could also have somewhat co-planar distributions. By stacking over 5000 background galaxies to probe over 4000 foreground galaxies, Bordoloi et al. (2011) found a strong azimuthal dependence of the Mg II absorption within 50 kpc of inclined disk-dominated galaxies (also see Lan et al., 2014). They found elevated equivalent width along the galaxy minor axis and lower equivalent width along the major axis. Their data are indicative of bipolar outflows, with possible flows along the major axis. Later, Bordoloi et al. (2014b) presented models of the circumgalactic medium azimuthal angle distribution by using joint constraints from the integrated Mg II absorption from stacked background galaxy spectra (Bordoloi et al., 2011) and Mg II absorption from individual galaxies as seen in background quasar spectra (Kacprzak et al., 2011b).
They determined that either composite models consisting of a bipolar outflow component plus a spherical or disk component, or a single highly softened bipolar distribution, could well represent the data within 40 kpc. Bouché et al. (2012), using 10 galaxies, first showed that the azimuthal angle distribution of absorbing gas traced by Mg II appeared to be bimodal, with half of the Mg II sight-lines showing a co-planar geometry. Kacprzak et al. (2012a) further confirmed the bimodality in the azimuthal angle distribution of gas around galaxies, whereby cool dense circumgalactic gas prefers to exist along the projected galaxy major and minor axes, where the gas covering fraction is enhanced by 20%–30%, as shown in Figure 2. Also shown in Figure 2 is that blue star-forming galaxies drive the bimodality, while red passive galaxies may contain gas along their projected major axis. The lower equivalent width detected along the projected major axis is suggestive that accretion would likely contain metal-poor gas with moderate velocity width profiles.

The aforementioned results provide a geometric picture that is consistent with galaxy evolution scenarios in which star-forming galaxies accrete co-planar gas within narrow streams with opening angles of about 40 degrees, providing fuel for new stars that produce metal-enriched galactic-scale outflows with wide opening angles of 100 degrees, while red galaxies exist passively due to reduced gas reservoirs. These conclusions are based on Mg II observations; however, both infalling and outflowing gas are expected to contain multi-phased gas. Mathes et al. (2014) first attempted to address the azimuthal angle dependence for highly ionized gas traced by O VI and found it to have a spatially uniform distribution out to 300 kpc. Using a larger sample of O VI absorption-selected galaxies, Kacprzak et al. (2015) reported a bimodality in the azimuthal angle distribution of gas around galaxies within 200 kpc. Similar to Mg II, they found that O VI is commonly detected within opening angles of 20–40 degrees of the galaxy projected major axis and within opening angles of at least 60 degrees along the projected minor axis. Again similar to Mg II, weaker equivalent width systems tend to reside along the projected major axis. This would be expected for gas with lower column density, lower kinematic dispersion, or lower metallicity (or a combination thereof) accreting towards the galaxy major axis. Different from the Mg II results, non-detections of O VI exist almost exclusively between 20–60 degrees, suggesting that O VI is not mixed throughout the circumgalactic medium and remains confined within the accretion filaments and the gas outflows.

Further supporting this bimodal accretion/outflow picture is recent work using H I 21-cm absorption to probe the circumgalactic medium within impact parameters of < 35 kpc around z < 0.4 galaxies: Dutta et al. (2016) found that the majority of their absorbers (nine) exist along the projected major axis and a few (three) exist along the projected minor axis. The data are supportive of a high column density, co-planar, thick H I disk around these galaxies. In addition, the three minor axis absorption systems all reside within 15 kpc; therefore, they conclude that these low impact parameter minor axis systems could originate from warps in these thick and extended H I disks.
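As a toy illustration of this geometric picture, the sketch below classifies a sight line by its projected azimuthal angle using hard cuts derived from the opening angles quoted above (a ~40 degree total opening angle about the major axis and a wider cone about the minor axis); real analyses model full probability distributions rather than sharp boundaries.

```python
def classify_sightline(phi_deg):
    """Toy geometric classifier for a quasar sight line at projected
    azimuthal angle phi_deg (0 = major axis, 90 = minor axis).
    Cuts follow the opening angles quoted above: accretion streams within
    a ~40 deg total opening angle of the major axis (|phi| < 20) and wide
    outflow cones about the minor axis (phi > 40); the 20-40 deg gap is
    where the O VI non-detections cluster."""
    if phi_deg < 20:
        return "major axis: accretion / co-rotating disk candidate"
    if phi_deg > 40:
        return "minor axis: outflow candidate"
    return "intermediate: low covering fraction expected"

for phi in (5, 30, 75):
    print(phi, "->", classify_sightline(phi))
```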
Although gas geometry is highly suggestive of (and consistent with) our expectation of gas accretion onto star-forming galaxies, it alone is not sufficient to determine whether gas is in fact accreting onto galaxies. Gas kinematics, and relative gas–galaxy kinematics, can provide additional data that can be used to address whether we are detecting gas accretion or not.

Fig. 2 Binned azimuthal angle mean probability distribution function (PDF) for Mg II absorbing galaxies (solid line). The binned PDFs are normalized such that the total area is equal to unity, yielding an observed frequency in each azimuthal bin. Absorption is detected with increased frequency toward the major (Φ = 0 degrees) and minor (Φ = 90 degrees) axes. Also shown is the galaxy color dependence of the distribution, split by B − R ≤ 1.1 representing late-type galaxies (dashed blue line) and B − R > 1.1 representing early-type galaxies (dotted red line). The data are consistent with star-forming galaxies accreting gas and producing large-scale outflows, while quiescent galaxies have much less gas activity.

Circumgalactic Gas Kinematics

Absorption systems produced by the circumgalactic medium hold key kinematic signatures for unlocking the behavior of gas around galaxies. High resolution spectroscopy of the background quasars is critical to resolving the velocity substructures within these complex absorption systems. These data can be used to differentiate between scenarios of gas accretion, disk rotation, and outflows.

Internal or Intrinsic Gas Kinematics

For the most part, metal-line absorption systems are not composed of a uniform velocity distribution of "clouds", but tend to exist in groupings closer together in velocity, with occasional higher velocity clouds offset from the groupings. For a handful of high-resolution Mg II absorption profiles, Lanzetta & Bowen (1992) inferred that their velocity structure was dominated by coherent motions as opposed to random ones. They further showed that the absorption profiles are consistent with a rotating ensemble of clouds similar to a co-rotating disk. With a larger sample of high-resolution absorption profiles, Charlton & Churchill (1998) applied statistical tests for a variety of kinematic models and concluded that pure disk rotation and pure accretion models are likely ruled out. However, models with contributions from both a rotating disk and infall/halo can reproduce velocities that are nearly consistent with the observed kinematics. In similar work focusing on the low-ion transitions (such as Si II) associated with damped Ly α systems, Prochaska & Wolfe (1997) examined whether a range of models, such as rotating cold disks, slowly rotating hot disks, massive isothermal halos, and hydrodynamic spherical accretion models, could explain the observed absorption kinematics. They determined that thick, rapidly rotating disks are the only model consistent with the data at high confidence levels. Their tests suggest that disk rotation speeds of around 225 km s^-1 are preferred, which is typical for Milky Way-like galaxy rotation speeds. Furthermore, the gas is likely to be cold, since the ratio of the gas velocity dispersion to the disk rotation speed must be less than 0.1. These data are suggestive of thick, cold, and possibly accreting disks surrounding galaxies.

All these studies are based on absorption-line data alone, and therefore it is important to quantify how the absorption kinematics changes with galaxy properties. Using primarily Lyman limit systems, Borthakur et al. (2015) found that Ly α absorption from the circumgalactic medium has velocity spreads similar to those of the host galaxy's interstellar medium as observed via H I emission. The correlation between the galaxy gas fraction and the impact-parameter-corrected Ly α equivalent width is consistent with the idea that the H I disk is fed by circumgalactic gas accretion (Borthakur et al., 2015). Furthermore, they find a correlation between impact-parameter-corrected Ly α equivalent width and the galaxy specific star formation rate, suggesting a link between gas accretion and star formation (Borthakur et al., 2016).

Fig. 3 For all azimuthal angles, the TPCFs are shown for blue and red face-on galaxies (left) and edge-on galaxies (right). Note the dramatic differences between the absorber velocity dispersions for blue and red galaxies for face-on orientations (left). The larger velocity spread is suggestive of high velocity outflows being ejected from the galaxy, while red galaxies are less active. On the other hand, there is no difference between blue and red for edge-on orientations (right). The velocity spread is comparable to the rotation speed of galaxies, which is consistent with gas accretion and/or gaseous disk co-rotation around the galaxies.

Nielsen et al. (2015) quantified the Mg II absorber velocity profiles using pixel-velocity two-point correlation functions (TPCFs) and determined how absorption kinematics vary as a function of galaxy orientation and other physical properties, as shown in Figure 3. While they find that absorption profiles with the largest velocity dispersion are associated with blue, face-on galaxies probed along the projected minor axis, which is suggestive of outflows, they find something different for edge-on galaxies. For edge-on galaxies probed along the major axis, they find large Mg II absorber velocity dispersions and large column density clouds at low velocity regardless of galaxy color, which is used as a tracer of star formation rate. It is suggested that the large absorber velocity dispersions seen for edge-on galaxies (Figure 3, right) may be caused by gas rotation/accretion, where the line-of-sight velocity is maximized for edge-on galaxy inclinations. Furthermore, the large cloud column densities may indicate that co-rotating or accreting gas is fairly coherent along the line-of-sight.
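A minimal sketch of a pixel-velocity two-point correlation function in the spirit of Nielsen et al. (2015) follows; the binning and normalization choices here are illustrative assumptions, not necessarily those of the published analysis.

```python
import numpy as np

def pixel_velocity_tpcf(pixel_velocities, bin_width=10.0, v_max=400.0):
    """Pixel-velocity two-point correlation function: pooled distribution
    of absolute velocity separations between all unique pairs of absorbing
    pixels, one velocity array (km/s) per absorption system."""
    seps = []
    for v in pixel_velocities:
        v = np.asarray(v, dtype=float)
        dv = np.abs(v[:, None] - v[None, :])
        seps.append(dv[np.triu_indices(v.size, k=1)])  # unique pairs only
    seps = np.concatenate(seps)
    bins = np.arange(0.0, v_max + bin_width, bin_width)
    counts, edges = np.histogram(seps, bins=bins)
    tpcf = counts / counts.sum()  # unit sum, comparable across subsamples
    return 0.5 * (edges[1:] + edges[:-1]), tpcf

# e.g. two systems: a narrow single cloud and a broad multi-cloud profile
centers, tpcf = pixel_velocity_tpcf([np.arange(-20, 25, 5),
                                     np.arange(-150, 160, 10)])
```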
The only way to test this scenario is to compare the absorption velocities to the rotation velocities of the host galaxies.

In addition, some evolution in the circumgalactic medium has also been observed. Blue galaxies do not show an evolution in velocity dispersions and cloud column densities with redshift (between 0.3 ≤ z ≤ 1), while red galaxies have a circumgalactic medium that becomes more kinematically quiescent with time (Nielsen et al., 2016). This suggests that the gas cycle in blue star-forming galaxies is active, be it via accretion or outflows, while red galaxies exhibit little-to-no gas activity. This is consistent with the little-to-no O VI found around z ∼ 0.25 quiescent galaxies from the COS-Halos survey.

Relative Gas-Galaxy Kinematics

It is interesting that the internal velocity structure of absorption systems is reflective of host galaxy type, orientation, and redshift; however, the question then arises of how, and whether, the circumgalactic medium is kinematically connected to its host galaxy. The most direct measure of gas accretion is observing it down-the-barrel, by using the host galaxy as the background source.
This method is ideal since there are no degeneracies in the line-of-sight direction/velocity. The method does have its difficulties, however, since metal-enriched outflows and the interstellar medium will tend to dominate the observed absorption over the metal-poor accreting gas. These down-the-barrel gas accretion events have been observed in a few cases, with absorption velocity shifts relative to the galaxy of 80–200 km s^-1 (Martin et al., 2012; Rubin et al., 2012). Down-the-barrel gas accretion observations are discussed in detail in the chapter by Kate Rubin. It is still unknown whether these observations are signatures of cold accretion or of recycled material falling back onto galaxies, yet they are the most direct measure of accreting gas to date.

Using quasar sight-lines, we find that the distribution of velocity separations between Mg II absorption and the host galaxies tends to be Gaussian, with a mean offset of 16 km s^-1 and a dispersion of about 140 km s^-1 (Chen et al., 2010). Although there are much higher velocity extremes, typically expected for outflows, it is interesting that the velocity range is more typical of the rotation speeds of galaxies with masses close to or less than that of the Milky Way. When galaxy masses are known, one can compare the relative galaxy and absorption velocities to the escape velocities of their halos. Figure 4 shows escape velocities, computed for a spherically symmetric Navarro–Frenk–White (NFW) dark matter halo profile (Navarro et al., 1996), as a function of halo mass. It can be seen that very few absorption velocity centroids exceed the estimated galaxy halo escape velocities. This is surprisingly true for a full range of galaxy masses, from dwarf galaxies probed by C IV absorption (Bordoloi et al., 2014a) to more massive galaxies probed by O VI. Consistently, it has been found that the vast majority of absorption systems that reside within one galaxy virial radius are bound to the dark matter halos of their hosts (Mathes et al., 2014; Tumlinson et al., 2011; Bordoloi et al., 2014a; Ho et al., 2016). It is worth noting that some of the velocity ranges covered by the absorption profiles are comparable to the escape velocities, but these are typically the far wings of the profiles, where the gas column densities are the lowest. Therefore, two possible scenarios, or a combination thereof, can be drawn: 1) gas that is traced by absorption can be driven into the halo by star-formation-driven outflows and eventually fall back onto the galaxy (known as recycled winds), and/or 2) the gas is new material accreting from the intergalactic medium. With these data alone, we likely cannot distinguish between these scenarios.

Fig. 4 C IV (Bordoloi et al., 2014a) and O VI absorption velocity centroids with respect to the systemic redshift of their host galaxies, as a function of the inferred dark matter halo mass, for star-forming (blue squares) and passive (red diamonds) galaxies. The range bars indicate the maximum projected kinematic extent of each absorption system. The histogram represents the distribution of individual component velocities. The dashed lines show the mass-dependent escape velocities at R = 50, 100, and 150 kpc, respectively. Note that all absorption-line systems appear to be bound to their halos and have velocities (and velocity ranges) comparable to galaxy circular velocities. This means that both outflowing and accreting gas could give rise to the observed kinematics. Image courtesy of Jason Tumlinson, Rongmon Bordoloi, and the COS-Halos team.
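The escape-velocity curves in Figure 4 can be approximated with a short calculation for an NFW halo; the sketch below assumes a fixed concentration c = 10 and the z = 0 critical density, whereas the published figure uses its own concentration–mass relation and cosmology.

```python
import numpy as np

G = 4.30091e-6        # gravitational constant, kpc (km/s)^2 / Msun
RHO_CRIT = 136.0      # z = 0 critical density in Msun / kpc^3 (H0 = 70)

def nfw_escape_velocity(m200, r_kpc, c=10.0):
    """Escape velocity (km/s) at radius r_kpc for an NFW halo of mass
    m200 (Msun). The concentration c = 10 is an illustrative assumption."""
    r200 = (3.0 * m200 / (4.0 * np.pi * 200.0 * RHO_CRIT)) ** (1.0 / 3.0)
    rs = r200 / c
    mu = np.log(1.0 + c) - c / (1.0 + c)          # NFW mass normalization
    rho_s = m200 / (4.0 * np.pi * rs**3 * mu)
    phi = -4.0 * np.pi * G * rho_s * rs**3 * np.log(1.0 + r_kpc / rs) / r_kpc
    return np.sqrt(2.0 * np.abs(phi))

for logm in (11.0, 12.0, 13.0):
    v = [nfw_escape_velocity(10**logm, r) for r in (50, 100, 150)]
    print(f"log M_h = {logm}: v_esc(50, 100, 150 kpc) = "
          + ", ".join(f"{x:.0f}" for x in v) + " km/s")
```

For a Milky Way-like halo (log M_h = 12) this gives roughly 320 km s^-1 at 100 kpc, comfortably above the typical ~140 km s^-1 absorber velocity dispersion quoted above, which is why nearly all systems appear bound.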
To test how the circumgalactic gas is kinematically coupled to its galaxy hosts, Steidel et al. (2002) presented the first rotation curves of five intermediate-redshift Mg II selected absorbing galaxies. Interestingly, they found that in four of the five cases, the absorption velocities lie entirely to one side of the galaxy systemic redshift, consistent with the side expected for rotation. Using simple thick disk-halo models, they concluded that the bulk of the Mg II gas velocities could be explained by an extension of disk rotation with some velocity lag (Steidel et al., 2002). This was further confirmed by Kacprzak et al. (2010a), who also showed that infalling gas or lagging rotation is required to explain the gas kinematics. Using cosmological hydrodynamical galaxy simulations to replicate their data allowed them to conclude that coherently rotating accreting gas is likely responsible for the observed kinematic offset.

There have now been over 50 galaxy–absorber pairs compared this way, and the vast majority exhibit disk-like and/or accretion kinematics (Steidel et al., 2002; Kacprzak et al., 2010a, 2011a; Bouché et al., 2013; Burchett et al., 2013; Keeney et al., 2013; Jorgenson & Wolfe, 2014; Diamond-Stanic et al., 2016; Bouché et al., 2016; Ho et al., 2016), while some show outflowing wind signatures (Ellison et al., 2003; Kacprzak et al., 2010a; Bouché et al., 2012; Schroetter et al., 2015; Muzahid et al., 2016; Schroetter et al., 2016) and some exhibit group dynamics (Lehner et al., 2009; Kacprzak et al., 2010b; Bielby et al., 2016; Péroux et al., 2016). Even systems with multiple quasar sight-lines (Bowen et al., 2016), or multiply-lensed quasars near known foreground galaxies, provide the same kinematic evidence that a co-rotating disk with either some lagging rotation or accretion is required to reproduce the observed absorption kinematics.

One caveat is that the above works cover a range of galaxy inclinations and quasar sight-line azimuthal angles, which could complicate the conclusions drawn. It is likely best to select galaxies where the quasar is located along the projected major axis, where accreted gas is expected to be found. Ho et al. (2016) designed such an experiment, selecting Mg II absorbers associated with highly inclined (i > 43 degrees) star-forming galaxies with quasar sight-lines passing within 30 degrees of the projected major axis. Presented in Figure 5 are the rotation curves and the velocity spread of the absorption as a function of distance within the galaxy virial radius. It is clear that there is a strong correlation between the Mg II absorption velocities and the galaxy rotation velocities. The majority of the Mg II equivalent width is detected at velocities less than the actual rotation speed of the dark matter halo (blue squares), while the Keplerian falloff from the measured rotation curve provides a lower limit on the rotation speed of the circumgalactic medium (dashed black line). The cyan curves illustrate constant R·v_rot(R) and show that the infalling gas would have specific angular momentum at least as large as that in the galactic disk, for which some of the gas has comparable specific angular momentum. The Mg II absorption-line velocity widths cannot be generated with circular disk-like orbits alone, and a simple disk model with radial inflowing accretion reproduced the data quite well (Ho et al., 2016).

Fig. 5 Galaxy rotation curves of 10 galaxies, normalized by the rotation speed, as a fraction of the halo virial radius, compared to the kinematics of their circumgalactic gas. The Mg II absorption velocities are deprojected such that the velocity shown represents the tangential motion in the disk plane that would give rise to the observed sight-line velocities. The measured and intrinsic velocity range of each Mg II absorption system after deprojection is indicated by the green and orange bars, with the Mg II absorption velocity along the quasar sight-line shown as green circles. Note that the absorption systems align with the expected side for extended disk rotation. Also shown are dark matter halo rotation speed models (blue squares) and the Keplerian falloff from the galaxy rotation curves, which sets a lower limit on the rotation speed in the circumgalactic medium (dashed black line). The cyan curves illustrate constant R·v_rot(R) and indicate that the infalling gas would have specific angular momentum at least as large as that in the galactic disk.
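A simplified thin-disk deprojection, in the spirit of such analyses, is sketched below; the geometry here (a flat rotation term with no modeling of position along the sight line through an extended disk) is an illustrative assumption, not the actual Ho et al. (2016) model.

```python
import numpy as np

def deproject_vlos(v_los, incl_deg, phi_deg):
    """Convert a line-of-sight absorption velocity (km/s) into the
    tangential disk-plane velocity that would produce it, for a thin
    rotating disk. phi_deg is the azimuthal angle of the sight line
    from the projected major axis; incl_deg is the disk inclination."""
    i = np.radians(incl_deg)
    phi = np.radians(phi_deg)
    # azimuth in the disk plane corresponding to the projected azimuth
    theta = np.arctan2(np.tan(phi), np.cos(i))
    return v_los / (np.sin(i) * np.cos(theta))

# e.g. an absorber at +110 km/s toward a 60-degree inclined galaxy,
# 20 degrees off the projected major axis:
print(f"{deproject_vlos(110, 60, 20):.0f} km/s in the disk plane")
```

The sin(i) factor makes clear why edge-on, major-axis sight lines maximize the rotation/accretion signal: the same disk-plane speed projects to the largest observable line-of-sight velocity.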
We have shown that lagging or infalling gas appears to be a common kinematic signature of the circumgalactic gas near star-forming galaxies from 0.1 ≤ z ≤ 2.5 and is consistently seen for a range of galaxy inclinations and position angles. These observations are consistent with current simulations, which show large co-rotating gaseous structures in the halo of the galaxy that are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web (Stewart et al., 2011; Danovich et al., 2012; Stewart et al., 2016; Danovich et al., 2015; Stewart et al., 2013). The predictions and results from simulations are discussed further in the chapters by Kyle Stewart and Claude-André Faucher-Giguère. Stewart et al. (2013) demonstrated that there is a qualitative agreement among the majority of cosmological simulations and that the buildup of high angular momentum halo gas and the formation of cold flow disks are likely a robust prediction of ΛCDM. These simulations naturally predict that accreted gas can be observationally distinguished from outflowing gas by its kinematic signature of large one-sided velocity offsets. Thus, it is plausible that we have already observed gas accretion through the kinematic velocity offsets mentioned above.

It is also important to note that the vast majority of systems discussed above have column densities typical of Lyman limit systems (N(H I) > 10^17.2 cm^-2), which are exclusively associated with galaxies. At the lowest column densities of N(H I) < 10^14 cm^-2, Ly α absorption was found not to mimic the rotation of, and/or accretion onto, galaxies derived from H I observations (Côté et al., 2005). Maybe this is not unexpected, since this low column density gas is not likely to be associated with galaxies and is instead likely associated with the Ly α forest (gas associated with galaxies has N(H I) > 10^14.5 cm^-2; Rudie et al., 2012).

Circumgalactic and Galaxy Gas-Phase Metallicities

It is possible that metallicity can be used to determine the origins of absorbing gas observed around galaxies, since outflows are expected to be metal-enriched while accreted gas should have lower metallicity (e.g., Shen et al., 2012). Gas accretion is expected to be metal-poor but not purely pristine, given that the first generations of Population III stars have likely enriched the gas to 10^-4 Z⊙ by a redshift of 15–20 (e.g., Yoshida et al., 2004).
In fact, there is a metallicity floor, whereby it is rare to find absorption systems with metallicities much lower than 10^-3 Z⊙ even out to z ∼ 5 (Prochaska et al., 2003; Penprase et al., 2010; Cooke et al., 2011; Battisti et al., 2012; Rafelski et al., 2012; Jorgenson et al., 2013; Cooke et al., 2015; Cooper et al., 2015; Fumagalli et al., 2016a; Lehner et al., 2016; Quiret et al., 2016). A few systems do have metallicities < 10^-3 Z⊙ (Fumagalli et al., 2011b; Cooke et al., 2011, 2015; Lehner et al., 2016) and possibly have Population III abundance patterns. Cosmological simulations predict gas accretion metallicities between 10^-3 and 10^-0.5 Z⊙, depending on redshift and halo mass (Fumagalli et al., 2011a; Oppenheimer et al., 2012; van de Voort & Schaye, 2012; Shen et al., 2013; Kacprzak et al., 2016); however, this metallicity range does have some overlap with the metallicities of recycled outflowing gas.

There has been an abundance of studies that have identified galaxies with circumgalactic gas metallicity measurements in an effort to determine the source of the absorption. The general consensus is that absorption systems near galaxies are either metal-poor, with metallicities between −2 < [X/H] < −1 (Tripp et al., 2005; Cooksey et al., 2008; Kacprzak et al., 2010b; Ribaudo et al., 2011; Thom et al., 2011; Churchill et al., 2012; Bouché et al., 2013; Crighton et al., 2013; Stocke et al., 2013; Kacprzak et al., 2014; Crighton et al., 2015; Muzahid et al., 2015; Bouché et al., 2016; Fumagalli et al., 2016b; Rahmani et al., 2016), or metal-enriched, with metallicities of [X/H] > −0.7 (Lehner et al., 2009; Péroux et al., 2011; Bregman et al., 2013; Krogager et al., 2013; Meiring et al., 2013; Stocke et al., 2013; Crighton et al., 2015; Muzahid et al., 2015, 2016; Péroux et al., 2016; Rahmani et al., 2016). Determining the fraction of metal-rich and metal-poor systems is complicated, since all the aforementioned studies have a range of observational biases due to the way the targets were selected, some by the presence of metal lines. Tracing the circumgalactic gas using metal lines as tracers may clearly bias samples towards metal-enriched systems. Ideally, the best way to avoid such biases is to select absorption systems by hydrogen only.

Selecting only by hydrogen (but not necessarily with known galaxy hosts) has shown that the metallicity distribution of Lyman limit systems below z < 1 appears to be bimodal (Lehner et al., 2013; Wotta et al., 2016). The shape of the Lyman limit system (17.2 < log N(H I) < 17.7 in their study) metallicity distribution could be explained by outflows producing the high-metallicity peak ([X/H] ∼ −0.3), while accreting/recycled gas could produce the low-metallicity peak ([X/H] ∼ −1.9) (Wotta et al., 2016). Interestingly, between 2 < z < 3.5 the metallicity distribution of Lyman limit systems is unimodal, peaking at [X/H] = −2, in contrast to the bimodal distribution seen at z < 1 (Fumagalli et al., 2011b; Lehner et al., 2016). Therefore, it is likely that there exists a vast reservoir of metal-poor cool gas that can accrete onto galaxies at high redshift, with outflows building up the circumgalactic medium at a later time. These results are discussed in detail in the chapter by Nicolas Lehner. The overall knowledge of the metallicity distribution of the circumgalactic medium provides critical clues to the physics of the gas cycles of galaxies.
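The bimodality lends itself to a simple two-component description; the sketch below uses the two quoted peaks ([X/H] ≈ −1.9 and −0.3) but assumes illustrative widths and a 50/50 mixing fraction, which are not values from the cited studies.

```python
import numpy as np

# Two-component model for the z < 1 Lyman limit system metallicity
# bimodality. Peaks are as quoted in the text (Wotta et al. 2016);
# the widths and equal mixing fraction are illustrative assumptions.
MU = np.array([-1.9, -0.3])      # metal-poor (accretion-like), metal-rich
SIGMA = np.array([0.45, 0.45])
WEIGHT = np.array([0.5, 0.5])

def p_metal_poor(xh):
    """P(metal-poor branch | [X/H]) under the assumed Gaussian mixture."""
    pdf = WEIGHT * np.exp(-0.5 * ((xh - MU) / SIGMA) ** 2) / SIGMA
    return pdf[0] / pdf.sum()

for xh in (-2.0, -1.1, -0.5):
    print(f"[X/H] = {xh:+.1f}: P(accretion-like branch) = {p_metal_poor(xh):.2f}")
```

Under these assumptions the two branches are equally likely near [X/H] ≈ −1.1, the midpoint between the two peaks, which is one way to motivate a metal-poor/metal-rich dividing line.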
The bimodal metallicity distribution is suggestive that we are observing both outflows and accretion, but our assumptions rely strongly on predictions from simulations, which still have some issues with modeling the circumgalactic medium, since it is extremely feedback dependent. One way to determine whether we are observing outflows and gas accretion is to combine our expectations for gas geometry and metallicity: accretion should be along the major axis and metal-poor, while outflows should be metal-enriched and ejected along the minor axis. Preliminary work by Péroux et al. (2016), using nine galaxies, indicates that there is a very weak anti-correlation between metallicity and azimuthal angle. Figure 6 shows the relative metallicity of the absorbing gas with respect to the host galaxy metallicity as a function of azimuthal angle, which are two independent indicators of gas flow origins. Note that in the figure a positive difference in metallicity indicates a circumgalactic medium metallicity lower than that of the host galaxy's H II regions (expected for metal-poor accreting gas), while a negative value indicates a higher circumgalactic medium metallicity than the host galaxy (expected for metal-enriched outflows). For the few objects shown in Figure 6, there is no clear correlation as expected under simple geometric and metallicity assumptions. Note that, contrary to expectations, there does not appear to be any high metallicity gas at high azimuthal angles. At low azimuthal angles, there is a range of metallicity differences including negative values, which is unexpected for accreting gas. Given the small number of systems and only a weak anti-correlation, more systems are required to understand whether there is a relation between the spatial location of the circumgalactic medium and its metallicity. This is an active area of research and may be the most promising avenue to pursue in the future.

Putting it all Together

The previous sections describe the individual geometric, kinematic, and metallicity indicators as evidence for cold-mode accretion. On their own, they are quite suggestive that we have detected signatures of gas accretion; however, combining all of these accretion indicators together can provide quite compelling evidence. A few such examples exist, where some point to a circumgalactic medium originating from metal-enriched outflows along the galaxy minor axis (Muzahid et al., 2015), or a circumgalactic medium that is kinematically consistent with gas arising from tidal streams or interacting galaxy groups (Kacprzak et al., 2011b; Muzahid et al., 2016; Péroux et al., 2016), or even enriched gas that is being recycled along the galaxy major axis.

Fig. 6 Metallicity difference between the host galaxy and absorber as a function of azimuthal angle. In this plot, accreted gas is expected to reside in the upper-left corner (low/co-planar azimuthal angle and high metallicity difference), while outflowing gas should reside in the lower-right corner (high/minor-axis azimuthal angle and metallicity similar to the host galaxy), as indicated by the red arrows. Note that outflowing gas appears to be metal-poor, while accreting gas exhibits a range in metallicity. Additional observations are necessary to better relate metallicity and geometry in gas flows, as only a minor anti-correlation is currently measured. Image from Péroux et al. (2016).
Bouché et al. (2013) presented a nice example of a moderately inclined z = 2.3 star-forming galaxy where the quasar sight-line is within 20 degrees of the galaxy's projected major axis. They derived the galaxy rotation field using IFU observations and found that the kinematics of the circumgalactic medium 26 kpc away could be reproduced by a combination of an extended rotating disk and radial gas accretion. The metallicity of the circumgalactic medium is −0.72, which is typical of gas accretion metallicities in cosmological simulations at these redshifts (van de Voort & Schaye, 2012; Kacprzak et al., 2016). The mass inflow rate was estimated to be between 30–60 M⊙ yr^-1, which is similar to the galaxy star-formation rate of 33 M⊙ yr^-1, suggesting a balance between gas accretion and star-formation activity. This particular system is described in detail in the chapter by Nicolas Bouché.

In a slightly different case, a star-forming galaxy at z = 0.66 was examined where the quasar sight-line is within 3 degrees of the minor axis at a distance of 104 kpc (Kacprzak et al., 2012b). Contrary to expectations, they identified a cool gas phase with metallicity ∼ −1.7 that has kinematics consistent with an accretion or lagging-halo model. Furthermore, they also identified a warm collisionally ionized phase that also has low metallicity (∼ −2.2). The warm gas phase is kinematically consistent with both radial outflows and radial accretion. Given the metallicities and kinematics, they concluded that the gas is accreting onto the galaxy; however, this is contrary to the previously discussed interpretation that absorption found along the projected minor axis is typically associated with outflows. This could be a case where there is a misalignment between the accreting filaments and the disk, or an example of the three filaments typically predicted by simulations (e.g., Dekel et al., 2009; Danovich et al., 2012), thus making it unlikely that all accreting gas is co-planar.

Detailed studies like these are one of the best ways of constraining the origins of the circumgalactic medium and helping us understand how gas accretion works. Larger samples are required to build up a statistical sample of systems and mitigate cosmic variance. Large IFU instruments like the Multi Unit Spectroscopic Explorer (MUSE) will provide ample full-field imaging/spectral datasets and help us build up these larger samples with less observational time. Gas accretion studies using IFUs are further discussed in the chapter by Nicolas Bouché.

Direct Imaging of Gas Accretion

All of the aforementioned efforts are possibly direct observations of gas accretion, yet the scientific community is not satisfied, since it is difficult to prove conclusively that we have detected cold gas accretion onto galaxies. The only way to conclusively do so is by direct spectral imaging of cosmic flows onto galaxies. Obviously this is quite difficult due to the faintness and low gas column densities of these cold-flow filaments. However, we may be on the verge of being able to detect these cold gas-flows using the new generation of ultra-sensitive instruments such as the Keck Cosmic Web Imager.
It is also possible that we have already observed cosmic accretion onto two separate quasar hosts, with one example shown in Figure 7. The figure shows a narrow-band image obtained with the Palomar Cosmic Web Imager. The extended nebula and filaments are likely illuminated by the nearby quasar. The streams, as indicated in the figure, extend out to ∼160–230 kpc. It was determined that this nebula/disk and filaments are well fit by a rotating disk model with a dark matter halo mass of ∼10^12.5 M⊙ and a circular velocity of ∼350 km s^-1 at the virial radius of 125 kpc. They further found that adding gas accretion with velocities of 80–100 km s^-1 improved their kinematic model fit. Furthermore, they estimate that the baryonic spin parameter is 3 times higher than that of the dark matter halo and that the gas has an orbital period of 1.9 Gyr. The high angular momentum and the well-fitted stable inflowing disk are consistent with the predictions of cold accretion from cosmological simulations (Stewart et al., 2011; Danovich et al., 2012; Stewart et al., 2016; Danovich et al., 2015; Stewart et al., 2013).

Fig. 7 The data were obtained using the Palomar Cosmic Web Imager; the disk and filamentary candidates are indicated. b) A pseudo-slit is placed over the narrow-band image covering the disk and filaments 1 and 3, indicated by the curved white lines, with tick marks indicating the distance in arcseconds along the slit. c) The narrow-band image produced using the sheared velocity window slit, indicating the distance along the slit as in panel b). d) The mean velocity (green) and velocity dispersion (red) from the above panels. Note the observed rotation with high velocity dispersions seen along the filament; this is also seen for the remaining filaments (not shown here). This is potentially one of the first direct images of cold gas accretion. Image courtesy of Chris Martin and modified from Martin et al. (2016).

It is possible that we have direct evidence of cold accretion already, but it has yet to be observed and quantified for typical star-forming galaxies (which is much harder). These types of ultra-sensitive instruments will potentially allow us to directly image cosmic accretion in the near future.
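As a rough consistency check on the disk-model numbers quoted above, a point-mass circular velocity v_circ = sqrt(GM/R), evaluated at the quoted virial radius, lands close to the reported ∼350 km s^-1; the point-mass approximation is the only assumption here.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant, kpc (km/s)^2 / Msun

# Quoted disk-model parameters: halo mass ~10^12.5 Msun, virial radius
# ~125 kpc, circular velocity ~350 km/s at the virial radius.
m_halo, r_vir = 10**12.5, 125.0
v_circ = np.sqrt(G * m_halo / r_vir)
print(f"v_circ(R_vir) = {v_circ:.0f} km/s")
# ~330 km/s, consistent with the quoted ~350 km/s to within the details
# of the halo mass profile.
```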
Summary

Although we only have candidate detections of gas accretion, the community has provided a body of evidence indicating that cosmic accretion exists and is occurring over a large range of redshifts. Individual studies on their own are only suggestive that this accretion is occurring, but taking all the aforementioned aspects together does paint a nice picture of low metallicity gas accreting along co-planar, co-rotating disks/filaments. The picture seems nice and simple, but understanding how previously ejected gas is recycled back onto the galaxy is also important, since recycling can possibly mimic gas accretion signatures (Oppenheimer et al., 2012; Ford et al., 2014). Simulations predict the cross-section of cold flows to be as low as 5% (Faucher-Giguère & Kereš, 2011; Fumagalli et al., 2011b; Kimm et al., 2011; Goerdt et al., 2012) and that the signatures of metal-poor cold-flow streams should be overwhelmed by the metal-rich outflow signatures detected in absorption spectra. However, signatures of intergalactic cold gas accretion seem to be quite frequent. Thus, either the simulations under-produce gas accretion cross-sections for various reasons, such as dust, resolution effects, self-shielding, and/or magnetic fields, or the observations are selecting more than just outflowing and accreting gas. Addressing how recycled winds fit into the picture, and understanding how we can differentiate them from gas accretion, will possibly aid in resolving this issue.

It is great to see, however, how simulations and observations have been working together in our community to try to understand the gas cycles of galaxies. Aside from observing accretion directly, the way forward now is to combine geometry, kinematics, and metallicity at a range of epochs to try to understand how gas accretion occurs. Hopefully in the near future we will be able to directly image gas accretion, putting to rest one of the most debated issues in the circumgalactic field.
Effect of combined aquatic and cognitive training on quality of life, fall self-efficacy, and motor performance in aged with varying cognitive status: a proof-of-concept study

With the increasing number of aged individuals, research pertaining to their cognitive functions and physical-motor performance has become increasingly imperative. The purpose of the study was to investigate the effect of combined aquatic and cognitive training on quality of life (QoL), fall self-efficacy, and motor performance (static and dynamic balance) in aged with varying cognitive status levels. Thirty participants were assigned to a high cognitive status group (n=10), low cognitive status group (n=10), or nonintervention control group (n=10). Participants completed a 6-week motor-cognitive training regime with increasing intensity. QoL, fall self-efficacy, static balance, and dynamic balance were assessed. Preliminary results suggest proof-of-concept: significant (P<0.05) improvements were found in both the high and low cognitive status groups for static and dynamic balance and fall self-efficacy. However, QoL was only found to be significantly improved in the low cognitive status group. Aquatic training along with cognitive training can effectively be used to prevent falls in the elderly and to improve their physical-motor performance. However, when attempting to improve QoL, the cognitive status of the individual should be considered.

INTRODUCTION

Aging is a process that occurs over time and leads to deleterious structural and functional deviations in the body. Physical and cognitive impairment represent two of the most acute of these changes among the elderly because they can lead to physical dependence and social isolation. According to United Nations statistical estimates, about 2.5% of the population over the age of 65 is added annually, and by 2025 one out of every seven people in the world is expected to be over 60 years old (Mangine et al., 2014). Iran is no exception to this phenomenon: according to the latest population and housing census of 2016, the population over 60 years exceeds 7,400,000 people, which makes up about 10% of the total population (Fuladvandi et al., 2017). Life expectancy in Iran has been reported to be 73 years (Delavari et al., 2016). One-third of the elderly have cognitive disabilities, and more than 60% of them need help with daily activities. Taken together, the interconnected nature of physical and cognitive function emphasizes the importance of cognition when examining physical function in the elderly (Aliberti et al., 2019). Specifically, dysfunction in executive functions (such as working memory, inhibition, and shifting attention) and psycho-verbal abilities in the elderly is accompanied by many declining changes in health and physical function and is manifested by an increased risk of falls (Goudarzian et al., 2017; Schoene et al., 2015). Falling is the third leading cause of chronic disability in the world (Desjardins-Crépeau et al., 2016). In fact, the incidence of falls in the elderly with cognitive deficits is approximately 60% higher than in individuals with normal cognitive status. Cognitive problems in the elderly include memory loss and difficulties in recognizing time, place, etc. (Petersen et al., 2014). These develop with varying severity and, depending on the severity, lead to behavioral disturbances in the elderly and may have other negative social consequences (Lachs et al., 1992).
In the field of exercise science, practitioners are always searching for nonpharmacological interventions. This is because exercise has been shown to affect both the psychological and physiological factors that are associated with decreased quality of life due to aging (Sanders et al., 2019; T O'Dwyer et al., 2007). However, while exercise is seen as a promising intervention to prevent or delay cognitive decline in individuals aged 50 years and older, the evidence is not conclusive (Northey et al., 2018). In addition, while some clinical trials of exercise interventions demonstrate positive effects of exercise on cognitive performance, other trials show minimal to no effect (Kirk-Sanchez and McGough, 2014). This may be because exercise programs may need to be structured, individualized (to cognitive status), and multicomponent in design to show promise for preserving cognitive and physical performance in older adults (Kirk-Sanchez and McGough, 2014). Problematically, studies investigating the impact of aquatic exercise on cognition are scarce (Ayán et al., 2017), and no studies have performed a subgroup analysis of the effects of exercise interventions on different cognitive domains of the elderly (Zhou et al., 2018). In addition, since water-based exercise provides the same physiological benefits as land-based exercise with reduced risk of acute injury (Fedor et al., 2015), especially from falls, the present study novelly attempted to determine whether a multimodal therapy of aquatic exercise and cognitive training could improve motor performance and quality of life in the aged. Further, the present study uniquely attempted to determine whether this multimodal therapy is more effective in those with low or high cognitive status, in order to individualize and target program design.

Study design

This study made use of a quasi-experimental proof-of-concept design for this potential application of combining aquatic and cognitive training to improve motor performance and quality of life in the aged with varying cognitive status. The study protocol was approved by the University of Tehran, Iran (ID: IR.SSRI.REC.1398.624). In order to determine feasibility via proof-of-concept, rather than to deliver a full subgroup analysis of the effects of exercise interventions on different cognitive domains of the elderly (Zhou et al., 2018), 30 previously sedentary participants aged 60 years and above (Northey et al., 2018; Resende-Neto et al., 2019) were randomly assigned to a low cognitive status group (n=10), high cognitive status group (n=10), or a control group (n=10), whose participants continued with their normal daily activities. Participants were community-dwelling female volunteers (aged from 60 to 70 years; average, 63.57±3.31 years). Interested individuals were screened to determine whether they met the inclusion and exclusion criteria. For inclusion, participants had to be older than 60 years; be fluent in Persian; sign the informed consent; have the ability to respond to questionnaires, including the Mini-Mental State Examination (MMSE); have no absolute or relative contraindications to exercise; and be willing to use the intervention for six weeks (Ansari et al., 2010; Shariat et al., 2018).
Exclusion criteria included seniors with diagnosed Alzheimer disease, dementia, recent head injury, or unstable chronic diseases (e.g., stroke, diabetes), rapidly progressing or terminal illnesses (Eggenberger et al., 2015), or an MMSE score not classified as low or high cognitive status.

Measurements (primary outcomes)

The MMSE was used to assess cognitive status (MacKenzie et al., 1996), with the validity of the Persian version of this questionnaire having been established previously (Ansari et al., 2010). The World Health Organization quality of life questionnaire was used to measure quality of life. The questionnaire consists of 26 questions in four domains: physical health, psychological, social relationships, and environmental (Juniper et al., 1994). The international fall self-efficacy inventory questionnaire was used to collect fall self-efficacy data. Static balance was evaluated by the adjusted Romberg test (McIlroy and Maki, 1997). Dynamic balance was measured using the Timed Up & Go test, in which the participants were required to rise from a chair, walk three meters, turn around, walk back to the chair, and sit down (Hakakzadeh et al., 2019).

Interventions

After the pretest measurements, the participants in the intervention protocols followed an exercise regime based on that of Jung et al. (2014) and Means and O'Sullivan (2000) for 6 weeks, 3 times weekly. The pool was 7×6 m and 110 cm in depth. The water temperature was kept at 30°C-33°C for the aquatic exercises. Each session commenced with a 15-min warm-up, followed by 40 min of aquatic training, and concluded with a 5-min cool-down. The water-based program consisted of incremental bodily exercises. In order to implement obstacles and stepping-stones in water, the IGYM system (ISOPA, Hwasung, Korea) was used together with the pool step. After installing the round towers, they were connected by bars at the height of the holes to make an obstacle. The obstacle training consisted of three subparts: stepping over the IGYM system, going up and down stairs, and crossing over a step. The obstacle training was as follows. The warm-up included upper extremity and lower extremity stretching and range of motion exercises for flexibility (15 min). The main exercise included stepping over the IGYM (a height of 10 cm), stepping over the IGYM (a height of 20 cm), going up and down stairs (a height of 19 cm), crossing over a step (a height of 14 cm), and turning around a target and returning along the obstacle course, for 20 min. Participants combined these exercises with other tasks intended to stimulate cognitive functions through movement, using BrainGym therapy, for an additional 20 min (Dennison and Dennison, 1989; Morgenstern et al., 2017). At first only simple motor tasks were performed; then, gradually, simple cognitive tasks were added to the motor tasks; in turn, both motor and cognitive tasks became more complex. The cool-down included upper extremity and lower extremity stretching and range of motion exercises for flexibility (5 min) (Jung et al., 2014; Means and O'Sullivan, 2000). All sessions were supervised by the primary investigator. The control group was not assigned any intervention and was required to continue with their normal daily activities.

Statistical analyses

After the posttest, the data were screened and checked for normality. A Shapiro-Wilk test, together with skewness and kurtosis indices, was applied, and data normality was further assessed with Q-Q plots.
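A minimal Python sketch of this screening, and of the group-by-time comparison described next, is given below; it assumes a long-format table, and the column names, file name, and the pingouin package are illustrative choices rather than the authors' actual tooling (analyses of this kind are often run in SPSS).

```python
import pandas as pd
import pingouin as pg
from scipy import stats

# Hypothetical long-format data: one row per subject per time point, with
# columns 'subject', 'group' (high/low/control), 'time' (pre/post), and
# one column per outcome, e.g. 'qol' for quality of life.
df = pd.read_csv("outcomes_long.csv")  # hypothetical data file

# 1) Normality screening per group/time cell (Shapiro-Wilk), to be
#    complemented by skewness/kurtosis indices and Q-Q plots.
for (g, t), cell in df.groupby(["group", "time"]):
    w, p = stats.shapiro(cell["qol"])
    print(f"{g}/{t}: Shapiro W = {w:.3f}, p = {p:.3f}")

# 2) Mixed-design ANOVA: between factor = group, within factor = time.
aov = pg.mixed_anova(data=df, dv="qol", within="time",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])

# 3) Paired t-tests for within-group pre/post changes.
for g, sub in df.groupby("group"):
    wide = sub.pivot(index="subject", columns="time", values="qol")
    t, p = stats.ttest_rel(wide["pre"], wide["post"])
    print(f"{g}: paired t = {t:.2f}, p = {p:.3f}")
```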
The data were then analyzed with a 2×3 mixed-design ANOVA (two measurement points: pretest and posttest; three groups: high cognitive status, low cognitive status, and control) with repeated measures on the time factor, and paired t-tests were used to examine within-group effects.

RESULTS

Demographic characteristics of the participants are shown in Table 1. Statistical analysis showed that the three groups were homogeneous for age, quality of life, and fall self-efficacy, but heterogeneous (P≤0.05) for static and dynamic balance (Table 2). The results obtained following the 6-week intervention indicated that significant improvements were found in both the high and low cognitive status groups for fall self-efficacy and for static and dynamic balance. However, quality of life was only found to be significantly improved in the low cognitive status group. The results of the mixed ANOVA, performed to investigate the effects of the intervention and between-group effects after confirming homogeneity of covariance with Box's M test, are presented in Table 3. The findings show that all variables had a significant main effect of time, i.e., the interventions led to a meaningful effect on the dependent variables (P≤0.05). For static balance, dynamic balance, and fall self-efficacy, the group main effect and the interaction effect were not significant, whereas for quality of life both the group main effect and the interaction effect were significant. As such, the changes in the high and low cognitive status groups were the same for static balance, dynamic balance, and fall self-efficacy, while a significant difference was found between the high and low cognitive status groups for quality of life.

DISCUSSION

The purpose of the study was to investigate the effect of combined aquatic and cognitive training on quality of life, fall self-efficacy, and motor performance (static and dynamic balance) in aged with varying cognitive status levels. Although the three groups were homogeneous for age, quality of life, and fall self-efficacy, they were heterogeneous for static and dynamic balance. This is to be expected, since cognitive problems in the elderly lead to behavioral disturbances and may have other negative social consequences, such as increased sedentary behavior (Lachs et al., 1992; Petersen et al., 2014). The findings of this study are unique in that studies investigating the impact of aquatic exercise on cognition are scarce (Ayán et al., 2017), and no studies have performed a subgroup analysis of the effects of exercise interventions on different cognitive domains of the elderly (Zhou et al., 2018). While aquatic exercise has been demonstrated to improve cognitive function in the elderly (Ayán et al., 2017; Sato et al., 2015), the present study found that combined aquatic and cognitive training improved quality of life only in elderly individuals with a low cognitive status. While further research is needed to confirm or disprove this unique finding, it may again highlight the importance of personalized prescription of water-based exercise for elderly adults to improve cognitive function. The finding that both the low and high cognitive status groups improved their fall self-efficacy is an important one.
This is because fear of falling is an important barrier to many activities, including participation in structured exercise programs, since it is associated with distress, increased use of medication, decreased physical function, increased risk of falls, reduced quality of life, activity restrictions, fractures, and admission to institutional care (Dewan and MacDermid, 2014). It must also be noted that the use of water-based exercises in the aged is especially important not only to reduce falling but also to mitigate the distress associated with low fall self-efficacy. Balance is an important part of the functional evaluation of older adults when screening for falls (Cuevas-Trisan, 2017). While land-based exercise is generally regarded to improve balance, and specifically dynamic balance, more than water-based exercise in several populations (Eyvaz et al., 2018; Silva et al., 2008), it must again be noted that the use of water-based exercises in the aged is not only important for those elderly with osteoarthritis, but also to reduce actual falling and to allay the distress associated with a fear of falling. While previous water-based studies have demonstrated improvements in static balance (Jung et al., 2014), the present study's unique finding that the 6-week water-based intervention improved both static and dynamic balance in both the high and low cognitive status groups is particularly noteworthy. This may be related not only to improvements in motor control, but also to the improved cognitive function associated with cognitive training as utilized in the present study (van Het Reve and de Bruin, 2014). The finding that exercise can improve balance in elderly groups with high and low cognitive status has previously been demonstrated, albeit following land-based strength-balance-cognitive training (van Het Reve and de Bruin, 2014). The present study has several limitations that could affect its ecological validity. In this regard, the small sample size, while similar to that of numerous previous studies (Fedor et al., 2015; Jung et al., 2014; Means and O'Sullivan, 2000; Sato et al., 2015), means the study serves as proof-of-concept evidence for the effect of combined aquatic and cognitive training on quality of life, fall self-efficacy, and motor performance (static and dynamic balance) in aged individuals with varying cognitive status levels. In addition, several extraneous variables could affect quality of life, fall self-efficacy, and motor performance (static and dynamic balance). In this regard, alterations in various factors that were not measured in the present study, such as socioeconomic status, income, and marital status, could have affected quality of life (Zhang et al., 2019). Aquatic exercise is a safe and effective intervention in the elderly with cognitive deficits and/or at risk of falling. Based on the findings of the present study, it seems that by combining aquatic and cognitive training, a new pathway could be utilized to improve the elderly's quality of life and physical abilities (i.e., static and dynamic balance) to prevent falls. This study also revealed the necessity of considering an elderly individual's cognitive status prior to prescribing exercise in order to optimize the benefits of exercise.
Antifungal Susceptibilities of Bloodstream Isolates of Candida Species from Nine Hospitals in Korea: Application of New Antifungal Breakpoints and Relationship to Antifungal Usage

We applied the new clinical breakpoints (CBPs) of the Clinical and Laboratory Standards Institute (CLSI) in a multicenter study to determine the antifungal susceptibility of bloodstream infection (BSI) isolates of Candida species in Korea, and determined the relationship between the frequency of antifungal-resistant Candida BSI isolates and antifungal use at hospitals. Four hundred and fifty BSI isolates of Candida species were collected over a 1-year period in 2011 from nine hospitals. The susceptibilities of the isolates to four antifungal agents were determined using the CLSI M27 broth microdilution method. By applying the species-specific CBPs, non-susceptibility to fluconazole was found in 16.4% (70/428) of isolates, comprising 2.6% resistant and 13.8% susceptible-dose dependent isolates. However, non-susceptibility to voriconazole, caspofungin, or micafungin was found in 0% (0/370), 0% (0/437), or 0.5% (2/437) of the Candida BSI isolates, respectively. Of the 450 isolates, 72 (16.0%) showed decreased susceptibility to fluconazole [minimum inhibitory concentration (MIC) ≥4 μg/ml]. The total usage of systemic antifungals varied considerably among the hospitals, ranging from 7.7 to 190.0 defined daily doses per 1,000 patient days, and fluconazole was the most commonly prescribed agent (46.3%). By Spearman's correlation analysis, fluconazole usage did not show a significant correlation with the percentage of fluconazole-resistant isolates at hospitals. However, fluconazole usage was significantly correlated with the percentage of fluconazole non-susceptible isolates (r = 0.733; P = 0.025) and with the percentage of isolates with decreased susceptibility to fluconazole (MIC ≥4 μg/ml) (r = 0.700; P = 0.036) at hospitals. Our work represents the first South Korean multicenter study demonstrating an association between antifungal use and antifungal resistance among BSI isolates of Candida at hospitals using the new CBPs of the CLSI.

Introduction

The incidence of Candida bloodstream infections (BSIs) has increased over the past several decades, and antifungal use has increased drastically worldwide [1,2]. Although Candida albicans remains the leading Candida species causing BSI, a shift in the epidemiology of Candida BSI toward greater isolation of non-albicans Candida species has been a global concern over the past two decades [1,3]. Several studies have evaluated the relationship between antifungal drug use and changes in the epidemiology of candidemia [4-6]; however, to date, few multicenter surveillance studies have been conducted on the relationship between antifungal drug use and the frequency of antifungal resistance of Candida BSI isolates at hospitals. Moreover, several worldwide surveillance programs on Candida isolates responsible for BSI have not reported alterations in the patterns of susceptibility to the azoles or echinocandins over time [1,3,5,6]. This may reflect the rarity of acquired antifungal resistance among Candida BSI isolates, or antifungal susceptibility tests and breakpoints that are non-optimal for the detection of resistance in Candida isolates [7]. The Clinical and Laboratory Standards Institute (CLSI) recently developed new Candida species-specific clinical breakpoints (CBPs) for fluconazole, voriconazole, and the echinocandins [7,8].
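To make the species-specific interpretation concrete, the sketch below classifies fluconazole MICs using the revised breakpoint values discussed in this study (S ≤2, SDD 4, R ≥8 μg/ml for C. albicans, C. parapsilosis, and C. tropicalis; SDD ≤32 and R ≥64 μg/ml for C. glabrata; C. krusei treated as intrinsically resistant). It is a minimal illustration, not a clinical tool, and the isolate list is hypothetical.

```python
# Hedged sketch: fluconazole MIC interpretation with the revised
# species-specific CLSI breakpoints discussed in this study.
def classify_fluconazole(species: str, mic: float) -> str:
    if species == "C. krusei":
        return "R"                              # intrinsically resistant, any MIC
    if species in ("C. albicans", "C. parapsilosis", "C. tropicalis"):
        if mic <= 2:
            return "S"                          # susceptible
        return "SDD" if mic == 4 else "R"       # SDD at 4 ug/ml, R at >= 8 ug/ml
    if species == "C. glabrata":
        return "SDD" if mic <= 32 else "R"      # no susceptible category
    return "no CBP"                             # no revised breakpoint available

# Hypothetical isolates, not the study's data:
for sp, mic in [("C. albicans", 0.5), ("C. glabrata", 16.0),
                ("C. krusei", 32.0), ("C. tropicalis", 8.0)]:
    print(f"{sp:<16} MIC {mic:>5} ug/ml -> {classify_fluconazole(sp, mic)}")
```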
A recent report has shown that rates of resistance to the azoles and echinocandins among Candida species may be higher when the new CLSI CBPs are applied [9]. Therefore, the new CBPs of the CLSI may be applied to antifungal surveillance studies as sensitive tools for detecting emerging resistance in Candida BSI isolates. In the present study, we applied the new species-specific CLSI CBPs in a multicenter study to determine the antifungal susceptibility of BSI isolates of Candida species in Korea, and investigated the relationship between the frequency of antifungal resistance of Candida BSI isolates and antifungal use at nine hospitals.

Materials and Methods

A surveillance study was conducted at nine university hospitals (A-I) located throughout Korea to determine the frequency of antifungal resistance of Candida BSI isolates and its relationship with antifungal use. The participant hospitals were as follows: Ajou University School of Medicine, Suwon; The Catholic University College of Medicine, Seoul; Chonnam National University Hwasun Hospital, Hwasun; Chonnam National University Hospital, Gwangju; Chonbuk National University of Medicine, Jeonju; Chung-Ang University College of Medicine, Seoul; Pusan National University Yangsan Hospital, Yangsan; Yonsei University College of Medicine, Seoul; and Yonsei University Wonju College of Medicine, Wonju. Among the nine hospitals, six had more than 1,000 beds, while the other three (B, D, and I) had 500-1,000 beds. During the 1-year period from January to December 2011, each participant hospital was required to collect blood isolates (one isolate per patient) of Candida species. The annual usage of systemic antifungal agents for patients admitted to each hospital between January and December 2011 was determined by calculating the number of defined daily doses per 1,000 patient days (DDD/1,000 PD), as specified by the WHO ATC/DDD system (www.whocc.no/atcddd/) and the DDD measurement methodology [10]. In addition, the incidence of candidemia was defined as the number of cases of candidemia per 10,000 patient days (PD). Species identification and antifungal susceptibility testing were performed at Chonnam National University Hospital for all 450 isolates. This study was approved by the institutional review board of Chonnam National University Hospital (IRB CNUH-2011-026). A waiver of consent was granted given the observational nature of the project. The study involved only the results of the species identification and antifungal susceptibility testing of Candida species isolated from routine cultures in the mycology laboratory, and no information was used that could lead to patient identification. Species identification was based on colony morphology using CHROMagar Candida, commercial systems (API 20C and Vitek 2 YST; bioMérieux, Marcy L'Étoile, France), or sequencing [11]. Susceptibility to fluconazole, voriconazole, caspofungin, and micafungin was assessed by the CLSI broth microdilution (BMD) method M27-A3, read after 24 h, using the newly developed CLSI CBPs [8,12]. Revised CLSI CBPs for both caspofungin and micafungin were applied to 437 isolates of six Candida species, namely Candida albicans, Candida parapsilosis, Candida tropicalis, Candida glabrata, Candida guilliermondii, and Candida krusei, while the revised CLSI CBPs for voriconazole were applied to 370 isolates of four common species (C. albicans, C. parapsilosis, C. tropicalis, and C. krusei). Revised CLSI CBPs for fluconazole were applied to the four common Candida species (C.
albicans, C. parapsilosis, C. tropicalis, and C. glabrata), and all C. krusei isolates were considered resistant to fluconazole irrespective of the minimum inhibitory concentration (MIC) [8]. Two reference strains, C. parapsilosis ATCC 22019 and C. krusei ATCC 6258, were included in each test as quality control isolates. The relationship between antifungal (total or individual) usage and the incidence of candidemia or antifungal susceptibility was determined using Spearman's rank correlation coefficient (rho, r) and its corresponding P value. A P value less than 0.05 was deemed to indicate significance in both analyses.

Results and Discussion

Table 1 summarizes the in vitro susceptibility of the 450 Candida isolates to the azoles (fluconazole, voriconazole) and echinocandins (caspofungin and micafungin). Of the 450 total BSI isolates from nine hospitals, 16.0% (72/450) had fluconazole MICs ≥4 μg/ml. By applying the species-specific new CBPs, resistance and susceptible-dose dependence to fluconazole were found in 2.6% (11/428) and 13.8% (59/428), respectively, of the four common Candida species and C. krusei. However, no voriconazole, caspofungin, or micafungin resistance was detected in any species, with the exception of only two C. parapsilosis isolates, which exhibited intermediate susceptibility (MIC, 4 μg/ml) to micafungin. Overall, Candida BSI isolates remained largely susceptible to these three antifungals, with susceptibility rates of 100%, 100%, and 99.5% for voriconazole, caspofungin, and micafungin, respectively. These findings indicate that the current rate of resistance to voriconazole and the two echinocandins among Candida species is low in Korea; this trend is similar to our previous report [13]. However, susceptibility to fluconazole was found in 83.6% (358/428) of the isolates by the new CBPs, which is significantly lower than in our previous report (96.4% susceptible, P < 0.005) based on the original CBPs [13]. Several global surveillance programs have demonstrated that most BSIs caused by the three common Candida species (i.e., C. albicans, C. tropicalis, and C. parapsilosis) are susceptible to fluconazole using the original CLSI CBPs [3,13]. During the past few years, the CLSI has adjusted the CBPs for fluconazole for these three common Candida species by lowering them to the same MIC values (S, ≤2 μg/ml; susceptible dose dependent [SDD], 4 μg/ml; R, ≥8 μg/ml) as those of the European Committee on Antimicrobial Susceptibility Testing (EUCAST) [14]. Using these species-specific CBPs, trends toward increased resistance to fluconazole in these three species were noted [9,15]. Similarly, 2.2% (8/366) of the isolates of the three common Candida species in the present study were categorized as fluconazole non-susceptible (resistant or SDD) based on the revised CLSI CBPs, while none were categorized as fluconazole non-susceptible based on the original CBPs (fluconazole MIC ≥16 μg/ml). Additionally, only 34.5% (20/58) of C. glabrata isolates were categorized as fluconazole non-susceptible (10.3% resistant and 24.1% SDD) based on the original CBPs, but all 58 isolates of C. glabrata were categorized as fluconazole non-susceptible (10.3% resistant and 89.7% SDD) based on the revised CBPs, suggesting that C. glabrata isolates are no longer considered susceptible to fluconazole under the revised CBPs [7,8,14]. In 2013, the EUCAST also defined fluconazole CBPs for C.
glabrata (intermediate, 0.002 to 32 μg/ml; resistant, >32 μg/ml; [http://www.eucast.org/clinical_breakpoints/; version 7.0]), suggesting that harmonization of the CBPs between the CLSI and the EUCAST for fluconazole has been achieved for the four most common Candida species [14,16]. Overall, these data show that detected non-susceptibility to fluconazole among Candida species may increase when the new CLSI CBPs are used; the new CBPs may thus be a sensitive measure for detecting the emergence of Candida strains with decreased fluconazole susceptibility, a finding that is consistent with other reports [7,9]. The incidence of candidemia, antifungal agent consumption, and the percentage of fluconazole non-susceptible Candida BSI isolates at the nine Korean hospitals are shown in Table 2. The average incidence of candidemia at the nine university hospitals was 2.01 (range, 1.14 to 2.70) per 10,000 PD. The most prevalent species was C. albicans (average: 0.79 episodes/10,000 PD; 40.4%), followed by C. parapsilosis (19.9%), C. tropicalis (17.6%), and C. glabrata (14.2%). There was considerable variation in total antifungal use at the nine hospitals, ranging from 7.7 to 190.0 DDD/1,000 PD. Fluconazole was the most commonly used agent (average: 29.3 DDD/1,000 PD; 46.3%), followed by itraconazole (20.5%), amphotericin B (13.6%), voriconazole (7.4%), lipid formulations of amphotericin B (7.3%), and the echinocandins (4.9%). Oral fluconazole represented 77.5% of the total fluconazole usage at the nine hospitals (36.0% of the total antifungal usage). The mean total usage of antifungals was 63.2 DDD/1,000 PD, which was higher than that in 2005 (39.2 DDD/1,000 PD) [17]. In addition, the mean usage of fluconazole more than doubled from 2005 (11.7 DDD/1,000 PD) to 2011 (29.3 DDD/1,000 PD) [17]. Accordingly, the rate of fluconazole resistance (MIC ≥64 μg/ml) of C. glabrata increased to 10.3% (6/58) in the present study (2011), compared with 6% (2/72) in our previous multicenter study (2006-2007) [13]. The use of fluconazole has been suggested to be associated with an increased risk of candidemia caused by non-albicans Candida species [4,18]. However, serial follow-up studies demonstrated that although antifungal usage increased significantly, the incidence of non-albicans candidemia remained stable [5,6]. In the present study, no relationship was found between antifungal use and the incidence of candidemia caused by all non-albicans Candida species or by all Candida species. However, the incidence of candidemia caused by all Candida species other than the three common species showed positive correlations with the usage of total (oral and intravenous) fluconazole (r = 0.681, P = 0.044) and of oral fluconazole (r = 0.773, P = 0.015). In addition, oral fluconazole usage showed a significant positive correlation with the incidence of candidemia caused by C. glabrata (r = 0.832, P = 0.005). This is supported by the report that most C. albicans, C. tropicalis, and C. parapsilosis isolates are highly susceptible to fluconazole, whereas many of the less common Candida species, including C. glabrata, exhibit decreased susceptibility to fluconazole [19]. When the species-specific new CBPs were applied, the percentage of fluconazole non-susceptible Candida BSI isolates varied among the nine hospitals from 5.3% to 33.3% (Table 2). Because revised species-specific CBPs are not at present available for the less common Candida species, we also assessed the percentage of isolates with decreased susceptibility to fluconazole (MIC ≥4 μg/ml) among all 450 BSI isolates, which also varied among the hospitals, from 5.3% to 26.5%.
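The hospital-level associations reported next rest on Spearman's rank correlation between antifungal usage (in DDD/1,000 PD) and the frequency of reduced fluconazole susceptibility. A minimal Python sketch follows; the nine values in each array are hypothetical placeholders standing in for hospitals A-I, not the study's data.

```python
# Hedged sketch of the hospital-level rank correlation. The nine values per
# array are hypothetical placeholders standing in for hospitals A-I.
from scipy.stats import spearmanr

fluconazole_use = [95.0, 52.3, 40.1, 33.7, 29.3, 21.5, 14.8, 12.9, 7.7]   # DDD/1,000 PD
pct_nonsusceptible = [33.3, 22.0, 19.2, 16.0, 14.8, 11.1, 9.5, 6.7, 5.3]  # % of isolates

rho, p = spearmanr(fluconazole_use, pct_nonsusceptible)
print(f"Spearman rho = {rho:.3f}, P = {p:.3f}")   # P < 0.05 deemed significant
```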
By Spearman's correlation analysis, total fluconazole usage showed a positive correlation with the percentage of isolates with decreased susceptibility to fluconazole (MIC ≥4 μg/ml) at the hospitals (r = 0.700, P = 0.036) (Fig. 1). Moreover, the incidence of candidemia caused by fluconazole non-susceptible Candida isolates showed positive correlations with the usage of total fluconazole (r = 0.733, P = 0.025) and of oral fluconazole (r = 0.8, P = 0.01). In the present study, we evaluated the annual usage of systemic antifungal agents for patients admitted to each hospital. However, given that many hematologic patients are exposed to heavy antifungal use as outpatients, oral fluconazole use by outpatients may also influence the frequency of antifungal-resistant Candida BSI isolates at hospitals. Among the nine hospitals, higher use of antifungal agents, including fluconazole, was found in two hospitals (A and B), which reported larger populations of immunocompromised patients (mainly with hematological malignancies). The percentages of fluconazole-resistant isolates, as expected, were higher in these two hospitals than in the other hospitals (11.8% vs. 0.8%, P = 0.001). However, Spearman's analysis indicated no significant relationship between antifungal usage (total or individual) and the presence of fluconazole-resistant Candida species among the nine hospitals. Two recent studies have shown that patients with candidemia due to fluconazole non-susceptible Candida species are more likely to have received prior or recent fluconazole therapy [20,21]. In addition, suboptimal initial dosing of prior fluconazole therapy is associated with candidemia caused by fluconazole non-susceptible Candida species [21]. Although the present study found an association, not causality, we describe here for the first time that hospitals with a higher rate of fluconazole use exhibited a higher incidence of candidemia due to fluconazole non-susceptible Candida isolates, as judged by the revised CLSI CBPs. We also provide evidence of a significant relationship between fluconazole use and the incidence of candidemia caused by all Candida species other than the three common species. To our knowledge, this work represents the first multicenter study to demonstrate an association between antifungal use and the frequency of antifungal resistance of Candida BSI isolates at hospitals. Changing trends in the epidemiology of candidemia and the emergence of antifungal resistance have affected the selection of appropriate antifungal agents [1,7]. A recent single-hospital study showed that an increase in the use of echinocandins was associated with a decrease in the incidence of C. parapsilosis and C. guilliermondii candidemia and an increase in the incidence of C. tropicalis candidemia [4]. However, our data suggested no relationship between the usage of echinocandins and the incidence of candidemia; this might be due to the limited use of echinocandins in Korean hospitals. At present, fluconazole, as the most commonly used antifungal agent, is suggested to contribute to the changing epidemiology of candidemia at Korean hospitals. However, given the increasing use of antifungal drugs, including echinocandins, continuous national surveillance programs using the new antifungal CBPs are needed to identify changes in the antifungal susceptibility patterns of Candida BSI isolates.
The spectrum of measles in the COVID-19 pandemic: An observational study in children

Objective: To assess the spectrum of measles during the COVID-19 era among children. Study Design: Cohort study. Setting: Department of Pediatrics, Dr. Ziauddin University Hospital, Kemari Karachi. Period: February 2019 to February 2021. Material & Methods: Clinically diagnosed measles children of either gender, aged between 3 months and 5 years, admitted to our tertiary care center during the 2-year study period were analyzed. Measles was labeled as the presence of high-grade fever (>104°F) and maculopapular rash. Medical history was noted and clinical examination was performed in all children. Necessary laboratory investigations, such as complete blood count and chest X-rays (CXR), were evaluated. All patients were treated as per standard institutional protocols. Outcomes were noted in terms of successful discharge or expiry. Results: In a total of 88 children, 55 (62.5%) were male. The mean age was 1.61±1.12 years, and 49 (55.7%) children were aged ≤1 year. Twenty-four (27.3%) children were fully vaccinated appropriately for their age. Fever and rash were found in all children (100%), while respiratory distress and coryza were reported in 82 (93.2%) and 51 (58.0%) children, respectively. The overall duration of hospitalization was 7.02±2.32 days. Mortality was reported in 11 (12.5%) children, while 76 (86.4%) children were successfully discharged. Development of acute respiratory distress (p=0.0011) and shock (p<0.0001) proved to have a significant association with mortality. Conclusion: Mortality was relatively high among children with measles during the COVID-19 pandemic era. During hospitalization, the most frequent complications were pneumonia and eye- and/or mouth-related complications. Development of acute respiratory distress and shock proved to have a significant association with mortality.

INTRODUCTION

Measles is a highly contagious and infectious respiratory disease caused by Morbillivirus, a member of the Paramyxoviridae family. 1 Clinically, measles is characterized by high fever, cough, rhinitis, conjunctivitis, Koplik's spots, and a maculopapular rash. 2 The infection is acquired via the respiratory tract and mostly affects infants and children under the age of five, with an estimated approximately 100,000 deaths in children aged less than five in 2008. 3 The measles virus has a reproduction number of 12 to 18 (the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection), compared with COVID, which has a reproduction number of 2.5 to 3.5; measles is therefore considered the most infectious virus on the planet. 4 The virus incubation period is usually 2 weeks; individuals usually recover in 3 weeks if they do not develop any complications. Immune suppression caused by the virus can last even after recovery, thereby increasing susceptibility to secondary infections. Pneumonia is therefore the most common fatal complication of the disease, with 50% of cases being due to a bacterial superinfection. 5,6 Measles continues to be a burden on the global health system despite the availability of a safe vaccine for the last 5 decades. The estimated total of global measles deaths in 2016 was around 90,000, which drastically increased to approximately 207,000. 7-10
Between 2000 and 2018, there was a 73% decrease in measles mortality worldwide through vaccination, but the disease is still prevalent in developing nations of Asia and Africa with poor health infrastructure. 11 Pakistan is among those countries where measles is still common. In 2017, there were 6,494 confirmed measles cases in Pakistan, according to WHO; this figure accounted for more than 65% of the total number of cases reported in WHO's Eastern Mediterranean Region, which comprises 22 countries. 12 WHO initiated its immunization program against vaccine-preventable diseases in 1974, and Pakistan started it in 1978. 13 Two doses of the measles vaccine have been recommended by WHO, the first conferring 85% immunity against the disease and the second making children 95% immune to the disease. 8 The Expanded Program of Immunization (EPI) schedule of Pakistan includes two shots of measles vaccine, at 9 months and 15 months. 14 The global target of the program is to immunize over 95% of infants. 6 A survey conducted in Pakistan in 2017 and 2018 indicated nationwide coverage of the first and second doses of measles vaccine of 73% and 67%, respectively, which is well below the target of 95%. 15 During the COVID pandemic, we observed a rapid increase in hospitalized measles patients, supported by preliminary data of 1,550 cases of measles reported at the beginning of 2021, according to the National Institute of Health Islamabad Weekly Field Epidemiology Report. Given that the recent COVID-19 pandemic is over, a more objective understanding of its impact on the prevalence of measles is required. This study aimed to assess the spectrum of measles among children during the COVID-19 era.

MATERIAL & METHODS

This cohort study was conducted in the Pediatric Department (Pediatric Ward, PICU) at Dr. Ziauddin University Hospital, Kemari Karachi, from February 2019 to February 2021. Ethical approval was acquired from the Institutional Review Board and Ethical Committee (4631221PBPED). Informed written consent was acquired from the parents/caregivers of all the children studied. A nonprobability convenience sampling technique was used. Clinically diagnosed measles children of either gender, aged between 3 months and 5 years, admitted to our tertiary care center during the 2-year study period were analyzed. Children whose parents/caregivers did not want them to be part of this study, or those with any chronic illness or underlying immunodeficiency syndrome, were excluded. Measles was labeled as the presence of high-grade fever (>104°F) and maculopapular rash. Medical history was noted and clinical examination was performed in all children. Necessary laboratory investigations, such as complete blood count and chest X-rays (CXR), were evaluated. All the investigations were performed and reported by Ziauddin Hospital. Nutritional status was evaluated using weight-for-height Z-scores: severe acute malnutrition (SAM) was defined as scores <-3Z, while moderate acute malnutrition was labeled as weight-for-height Z-scores between -1Z and -3Z. All patients were treated as per standard institutional protocols. Outcomes were noted in terms of successful discharge or expiry. For statistical analysis, the Statistical Package for the Social Sciences (SPSS), version 26.0, was used. Qualitative data were expressed as numbers and percentages, while numeric data were shown as means and standard deviations.
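The chi-square comparison described in the next sentence can be illustrated with a 2×2 contingency table. In the sketch below, the cell counts are hypothetical placeholders chosen only to mirror the reported direction of the complication-mortality association, not the study's actual cross-tabulation.

```python
# Hedged sketch: chi-square test of association between a complication
# (e.g., acute respiratory distress) and mortality. Hypothetical counts.
from scipy.stats import chi2_contingency

#         expired  discharged
table = [[9,       20],        # complication present
         [2,       57]]        # complication absent

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p < 0.05 -> significant
```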
The chi-square test was used to compare outcomes between study variables, considering p<0.05 as significant.

RESULTS

In a total of 88 children, 55 (62.5%) were male and 33 (37.5%) were female, representing a male-to-female ratio of 1.7:1. Fever and rash were found in all children (100%), while respiratory distress and coryza were reported in 82 (93.2%) and 51 (58.0%) children, respectively. Figure 1 shows the details of the presenting signs and symptoms.

DISCUSSION

The present study shows that measles continued to be an important public health problem, especially during the pandemic, and caused significant morbidity and complications. In the present study, 62.5% of the measles children were male and 55.7% were aged ≤1 year. In the meta-analysis by Green et al., the incidence rates of clinical measles were 7%, 10%, 3%, and 5% significantly higher in males in infancy and at ages 1-4, 5-9, and 10-14, respectively. 16 Wang et al. reported from China that measles had a significantly higher incidence in males (57.6%), which is quite close to what we found. 17 Sex differences in measles incidence rates may be related to an imbalance in the expression of genes encoded on the X and Y chromosomes of the host. The phenomenon of X chromosome inheritance and expression is a cause of the immune disadvantage of males and of the enhanced survival of females following immunological challenges. 18,19 In the present study, all (100%) children had fever and rash. Hisada et al. from Indonesia reported that 100% of measles patients had fever as a symptom. 20 Cherry et al. studied patients in California and reported that 95.3% of measles patients presented with fever. 21 Data from Serbia showed that 84.1% of measles children had fever. 22 In a study by Jelena et al., 89.7% of measles patients from Serbia had cough. 22 Cherry et al. reported that 86.6% of measles patients presented with cough. 21 In our study population, 93.2% had respiratory distress. Data from Israel showed that 87.2% of measles patients had cough. 23 The present study found that development of acute respiratory distress syndrome (p=0.0011) was significantly linked with mortality. This could be explained by the fact that during the COVID-19 pandemic era, COVID infection could have been a major concern in the studied children. We were unable to rule out COVID-19 infection in the present set of cases, but it was noted that 97.7% of the children with measles developed pneumonia during hospitalization. One of the most commonly reported complications in the existing literature among patients with measles is pneumonia, and our findings were consistent with what has been reported previously. 24-28 A study from Pakistan reported that 63% of measles patients had pneumonia. 29 We found relatively high rates of complication development among the current set of children with measles. This could have been because, during the COVID pandemic, people were hesitant to go to hospitals, which could have delayed the management of measles and allowed more patients to develop complications before reaching a healthcare facility. Moreover, strict vigilance of lung-related problems during the pandemic could have created a reporting bias. The present study noted a mortality rate of 12.5% among admitted children with measles. Mortality due to measles is reported to be nearly negligible in most studies. 30
The relatively high rate of mortality among children with measles suggests that there was a delay in the management of measles due to the closure of outpatient departments during the pandemic. We were unable to verify the presence of COVID-19 infection in the studied children, which was one of the limitations of this study. Being a single-center study, our findings cannot be generalized, but the current study documented very important insights about the clinical presentation and the spectrum of complication development among children with measles, which adds to its worth.

CONCLUSION

Mortality was relatively high among children with measles during the COVID-19 pandemic era. During hospitalization, the most frequent complications were pneumonia and eye- and/or mouth-related complications. Development of acute respiratory distress and shock proved to have a significant association with mortality.
Organic Private Labels as Sources of Competitive Advantage — The Case of International Retailers Operating on the Polish Market

The main aim of this study was to determine how chains of modern international retailers can achieve a competitive advantage (CA) by introducing private labels (PLs) in the organic category and can, in turn, stimulate the consumption of food produced with respect to sustainability principles. The research was conducted with the use of a qualitative approach and involved two steps. First, in order to select retailers with organic private labels (OPLs) and producers delivering products under OPLs, in-depth semistructured interviews were conducted with representatives of the management boards of 17 enterprises. Second, in order to analyze the assortment-based competitive advantage of the OPLs in depth, 8 enterprises were analyzed. In order to explore the price-related competitive advantage, three products offered under PLs, organic PLs, producer brands, and imported brands were selected for subsequent analysis. For retail chains, it was found that the introduction of OPLs is a source of CA via six contributors, namely price, range of assortment, type of PLs, image of the retailer, sustainability, and specific process- and product-related attributes of organic food. Extending offers with organic private labels makes it easier for consumers to buy organic food at more affordable prices and to follow the principles of proper nutrition and a sustainable diet with low environmental impact. At the same time, international retailers can position themselves as chains contributing to more sustainable consumption.

The importance of PLs in global retail trade is increasing [10,20,25-29]. According to the Nielsen Report, a new retail revolution connected with the development of private label products is currently taking place. Retailers are innovating in order to meet consumer expectations and are stretching PLs from premium to budget segments. The largest markets for PL products are to be found in Western Europe [25]. According to the Private Label Manufacturers Association's (PLMA's) 2017 International Private Label Yearbook, the market share of PLs was equal to 40% or above in 7 out of 20 European countries, namely the United Kingdom, Germany, Austria, Belgium, Switzerland, Spain, and Portugal. More countries recorded a share of 30% or above, namely Denmark, Sweden, Norway, France, Poland, Hungary, Italy, the Czech Republic, and Slovakia. The largest increases in market share were noted in Austria (up by 2.8 p.p., to 43%), Germany (up by 2.1 p.p., to 45%), and Poland (up by 1.4 p.p., to 31%) [30].

The increase in PL shares is related to the ongoing concentration in retail trade and the growing share of large international retail chains [31]. Large international retailers are perceived as trendsetters in relation to lifestyle, consumer behavior, consumer experience, digital preferences, etc.
[32,33]. In Poland, the share of large-scale formats in the structure of retail trade has been increasing since 1998. In 2017, the total share by value of large-format stores in fast-moving consumer goods was approximately 57%. The detailed structure was as follows: discount stores (31.7%), supermarkets (10.5%), and hypermarkets (14.9%) [34]. In 2008, for comparison, the total share of large-scale formats was equal to approximately 34% [35]. Poland is perceived as a leader of PL development among Central and Eastern European countries [30], and the number of PLs has been on the rise in this country for the last 20 years. Initially, PLs were introduced as economy brands characterized by prices ca. 40-50% lower than those of producer brands and by small assortment diversity [36]. The dynamic development of PLs resulted in a rise in market shares and strengthened the competitive position of individual retail chains. This influenced retailers to introduce PLs in new product categories, i.e., those with more added value [37]. Currently, PLs are available in nearly every product category; thus, they constitute an important element of competition. Changing consumer preferences and growing interest in organic food have become factors determining the development of PLs [37,38].

PLs are available in a wide range of product categories, from food to cosmetics and other shopping products such as textiles, household appliances, and electronic devices [39]. Recently, retailers have begun offering PLs in such categories as organic, health, and wellness products. Some studies have pointed out that consumers are interested in products produced in a health-, environment-, and socially friendly manner [40-46]. This is linked with growing consumer concern in relation to product origin, environmental impact, and nutritional value; for example, products such as coffee, tea, cocoa, and palm oil that are available under PLs must deal with important environmental issues, e.g., waste management, water saving, soil erosion, and crop protection [47].

Over the years, an evolution in how PLs are perceived has been observed via the price-quality strategy, product types, and differentiation. Trade researchers distinguish three types of PLs: economy PLs (low-quality tier or generics), standard PLs (mid-quality tier), and premium PLs (top-quality tier) [21,48]. Some retailers introduce standard and economy PLs to complement the standard line of private label products (SPLs) with a special "economy" private label (EPL) [8,19]. Economy PLs, positioned as budget, low-priced, or low-value private labels, comprise elementary products in basic types or varieties with little marketing support [19]. Standard PLs are perceived at a higher quality level than economy PLs [8].

A very important stage in PL development was the introduction of premium PLs, which are positioned in higher price-quality segments. Some retailers distinguish between premium private labels (PPLs) and value private labels (VPLs) [48], and between premium and economy PLs [23]. Some retailers use brand positioning strategies to distinguish premium PLBs from classic PLBs [9] and standard PLs [49]. By contrast, manufacturers often refuse to produce economy PLs, as they want to supply only premium labels that will satisfy consumer needs and maintain a positive image [17].
The introduction and development of PLs brings a broad range of advantages for retailers [3]. PLs have higher profit margins than other products, which results in greater profits and leads to maximizing total retailer profitability [13]; yet retailing companies introduce PLs not only to gain profits [13,50]. In recent years, PLs have been analyzed as a tool to create and achieve a competitive advantage [12]. PLs can build store loyalty [51,52] and influence product usage [22]. This can be associated with opportunities for companies to achieve differentiation [2,51,53] and positioning [2]. PLs can also build and strengthen strong consumer relationships [53]. This is the result of adequate relationship management that leads to consumer satisfaction, trust, and commitment. Successful PLs can personalize a customer's shopping experience through greater acceptance, which can, in turn, create higher customer loyalty [39,53]. PLs are also perceived as a means of competing for the rational consumer versus the value innovator [13].

Organic Food Sector in Poland

The organic farming sector in Poland, just as in many other Central and Eastern European countries, has experienced tremendous growth in production area over the last decade due to policy-driven incentives for conversion to organic production. In 2016, the number of farms was 22,369, with 536,579 ha under organic production (approximately 4.0% of the utilized agricultural area). Despite this significant growth of organically managed land in Poland, the domestic organic market is still considered to be immature [54]. Even though there were 705 organic processors representing various branches in 2016, the supply of domestic organic processed products continues to remain low [55].

Organic food is commonly described as the product of a farming system which excludes the use of chemical fertilizers, pesticides, growth regulators, and synthetic feed additives for livestock. Irradiation and the use of genetically modified organisms (GMOs) or products produced from or by GMOs are generally prohibited in organic production. Use of the term "organic food" refers to Council Regulation (EC) No 834/2007 of 28 June 2007 on organic production and labeling of organic products and repealing Regulation (EEC) No 2092/91, as it sets out the principles of organic production and defines how organic products should be labeled.

As a result, the range of products offered in organic quality is limited. A serious impediment to creating a diverse assortment offer is the reluctance to cooperate and the lack of a professional organization dealing with organic logistics and sales, which negatively impacts the potential for creating a common offer, particularly in relation to small organic farms. Another issue that appears to be a crucial factor for further development of the organic food market in Poland is the structure of sales channels and the prices of organic products.
Wier and Calverley [56], in their multifaceted analysis of factors determining the development of organic food markets in Europe in the late 1990s, argued that the conditions for developing organic food markets are related to a high level of supermarket sales. However, sales through supermarkets pose various challenges to the organic sector in countries with a low supply, because supermarkets demand large quantities of organic products of a homogeneous quality, delivered to schedule and supported by professional promotion [57]. These conditions, despite many policy-related initiatives and financial support, are still difficult for the organic sector to fulfill. Poland has three main sales channels for organic products: specialized organic food shops, hypermarket and supermarket chains, and direct sales from farmers and processors. The online sale of organic food is developing very quickly, particularly when it comes to processed organic products and non-food items, and there are also many initiatives that directly link farmers and consumers via online delivery systems. In addition, there has been a growing presence of organic products in mainstream channels such as discount stores and hypermarkets, which was one of the key drivers of the growth in value of 10% recorded in 2016 [58]. Data collected in a recently completed project on the organic food market in Poland show an increased retail focus on organic products, with PLs gaining share in branded organic sales. The main driver of organic sales growth is expected to be the introduction of new product types and the extension of already-established private labels and branded product lines to also include organic products [58].

In the past, organic packaged food was only available in a very limited number of specialized outlets. An improving economic climate and a growing focus on health are the main reasons for increased interest in the organic category, with 19.4% of Poles stating that they buy organic food at least once a week [59]. Most often, organic food is bought in specialized stores and discount stores. According to the consumer survey cited above, the top categories of purchased products are eggs, vegetables, and cereal products.

Organic food consumers in Poland tend to be better educated and well off. The most important motive for buying organic food is health, but environmental aspects are gaining more and more importance. The consumer segment with the highest spending on organic food is more open to novelties, but is also looking for the best value for money and expects organic food to be convenient [59].

The forecast for organic food market development in Poland is optimistic. According to the Euromonitor Report on Organic Packaged Food in Poland, sales are expected to increase by 24% in 2016-2021. In the forecast period, competition should increase due to the development of private labels and new entrepreneurs entering the market [58].

Competitive Advantage

In recent years, PLs have been perceived as a tool for retailers to gain a competitive advantage [5,6,12,37,53,60-66]. The development of PLs is associated with processes of concentration in trade, through which retailers have strengthened their positions in distribution channels. This has enabled integration processes, increasing the bargaining power and competitiveness of retailers. In these respects, PLs should be analyzed as outstanding resources connected with sustainable competitive advantage (SCA) [31].
Competitive advantage (CA) is defined as "an advantage of higher ability of competition, [and] it is the core of capacity of economic and business activities in markets where competition exists" [67]. It is described as the factor that differentiates a given enterprise from others and keeps it "alive and growing" [68]. There are two basic types of CA, which lead to three generic strategies: cost leadership, differentiation, and focus. The last strategy exists in two variants, namely the cost focus and differentiation focus strategies. The sources of cost CA depend on economies of scale, proprietary technology, preferential access to raw materials, and other factors [67], e.g., process engineering skills, products designed for ease of manufacture, sustained access to inexpensive capital, close supervision of labor, tight cost control, and an efficient distribution channel [69,70]. A company pursuing cost leadership aims to become the lowest-cost producer. This can be achieved mainly through high capacity utilization, high productivity, and effective use of technology [68]. The strategy is associated with a business offering standard products with relatively little differentiation, achieved by standardizing processes [70].

The differentiation strategy allows the enterprise to be unique on the market in relation to one or more attributes that are valued by consumers. It is associated with premium positioning resulting in higher quality and higher prices. This strategy can be achieved by using (1) branding strategies that lead to consumer loyalty towards brands, high brand equity, and strong consumer relationships; (2) unique products and superior product quality; and (3) wide distribution and promotional support. The focus strategy has two variants related to the target segments. In the cost focus strategy, the company uses a cost advantage in its target segment based on low prices. In the differentiation focus strategy, differentiation in the target segment is the main business activity, addressing the special needs and preferences of the consumers [67]. When analyzing the term "quality", the type of business (production, service, or retail) should be taken into account [67,69,71-74].

Barney [75] distinguishes between CA and SCA, in that "[a] firm is said to have a CA when it is implementing a value-creating strategy not simultaneously being implemented by any current or potential competitors. A firm is said to have an SCA when it is implementing a value-creating strategy not simultaneously being implemented by any current or potential competitors and when these other firms are unable to duplicate the benefits of this strategy". The resource-based theory (RBT) indicates that a CA can be created by a firm's strategic resource, which must be valuable, rare, difficult to imitate, and nonsubstitutable [75]. Some studies have analyzed the implementation of the RBT in strategic management [76].

Many authors have analyzed the ways of creating CA and the drivers of CA [75,77-81]; e.g., Kitson et al.
[82] suggested that the basis for a regional CA is as follows: human capital (the quality and skills of the workforce), social and institutional capital (the extent, depth, and orientation of social networks and institutional forms), cultural capital (the extent and quality of cultural facilities and assets), knowledge and creative capital (the presence of an innovative and creative class), infrastructural capital (the size and quality of public infrastructure), and productive capital (an efficient productive base for the regional economy) [82].

A very important way of achieving a CA is the implementation of a sustainable strategy aiming at sustainable development, including environmental protection and the reduction of poverty and global warming. Detailed activities include eco-efficiency, the development of environmentally friendly products, and reputation-building [83].

The two sources of CA presented above, corresponding to a market-oriented and a resource-based approach, can be extended by measuring the outcome of competitive efforts through firm performance (e.g., profitability) or market share stability [77]. Some authors have indicated that market orientation has a positive effect on business performance in both the short and long run, in order to achieve a sustained advantage [84]. Conversely, the resource-based approach focuses on the internal environment, which is considered to be crucial. It can be extended to a resource- and competence-based view [85].

Therefore, the aim of this study was to determine how chains of modern international retailers can achieve a CA by introducing OPLs. Six contributors of CA were distinguished and identified, including price, range of assortment, type of PLs, image of the retailer, sustainability, and specific process- and product-related attributes of organic food.

Aim of the Study

The main aim of the study was to determine how modern international retailer chains can achieve a CA by introducing OPLs. The scope of the research covered:
- an analysis of factors determining a given retailer's decision to introduce OPLs;
- an analysis of the contributors of the CA, including type of PLs, assortment, specific attributes of organic food, sustainability, prices, and retailer-based advantages;
- identification and analysis of the drivers of the CA in relation to each contributor;
- an assortment and price analysis of OPLs offered by retail operators on the Polish market;
- an analysis of the OPLs' branding elements.

Based on the above, the hypothesis proposed in this study was as follows:

Hypothesis 1. The competitive advantage of OPLs is a compilation of six contributors related to price, range of assortment, type of PLs, image of the retailer, sustainability, and the specific process- and product-related attributes of organic food.

This study attempts to fill an important gap in the literature related to OPLs and their importance as a tool to create and maintain a retailer's CA. There is a lack of research in the area of OPLs analyzed from the producer's and retailer's points of view in the context of CA. A considerable amount of literature has been published on factors related to consumer behavior and those determining the food and non-food PL decision-making process [2-7,11,24,39,51,52,86].

Research Methods

The research was conducted using a qualitative approach, i.e., in-depth semistructured interviews and structured observation.
The semistructured interview is a qualitative data collection strategy in which the researcher asks the respondents a series of predetermined but open-ended questions [87]. It is used to gather focused, qualitative textual data. This method offers a balance between the flexibility of an open-ended interview and the focus of a structured one. It can also ensure that data on the experiences of the participants are gathered. Such information can help develop the investigation process from general topics (domains) to more specific insights (factors and variables) [88,89]. Interviewers using the semistructured interview approach generally follow a document called an interview guide or interview schedule that includes the following: an introduction, a list of topics and questions, suggested probes and prompts, and closing comments [90-92]. In semistructured interviews, the researcher has more control over the topics of the interview than in unstructured interviews [87,89,93,94]; thus, we decided to use this approach as the most appropriate one for the purpose of this study. The in-depth semistructured interview questions were developed based on an analysis of the literature, including qualitative studies related to organic food products, corporate reports, and an interview with two stakeholders, namely, a producer and a distributor. The detailed structure of the interview guide used in this study is provided in Table 1.

We used structured in-store observation for the assortment and price analyses [95], which entails the collection of data according to a set of predefined topics and rules. The structure of the observation as well as the predefined variables were developed in line with the main purpose of the study [96].

Data Collection Process

Data were collected from August to November 2017 within a research project titled "Marketing, promotion and market analysis of organic production in Poland, including opportunities and barriers of development", financed by the Ministry of Agriculture and Rural Development. The objective of this part of the study was to examine the development of organic private labels with the aim of obtaining more insight into the CA of retailing companies.

The research was carried out in accordance with ethical principles related to an enterprise, understood in accordance with article 55¹ of the Civil Code [97]. For the purposes of this project, we identified retail and production enterprises from the organic food sector. The owners or representatives of the enterprises' boards were asked about the possibility of conducting an interview. All of the participants in the study were provided with anonymity.
We conducted 17 in-depth semistructured interviews with specialists or experts (purchasing managers and/or sales managers) of food retailers and with board directors supported, where relevant, by category managers and, in the case of organic food producers, by marketing specialists. A detailed description of the data collection process is provided in Figure 1. The sample of surveyed enterprises consisted of 9 retailers of organic food, 7 organic food producers, and one producer who was also a distributor. This group was divided into two subgroups, taking into account their OPL offer (in the case of the retailers) and the production of organic food under a retail chain's OPL (in the case of the producers). The first subgroup consisted of 6 enterprises, including 2 producers delivering products under OPLs, 3 retailers owning OPLs, and one enterprise that was both a producer and a distributor. During the in-depth semistructured interviews conducted in this group of enterprises, the issue of OPLs was discussed in greater depth. The second subgroup comprised 11 enterprises, including 6 retailers that did not have OPLs in their offer and 5 producers not delivering organic products under OPLs on behalf of a retailer. An additional question for these retailers during the in-depth interview concerned their reasons for the lack of OPLs, whereas the producers were asked why they did not produce organic products under OPLs.

All of the respondents gave formal consent to participate in the interviews and designated contact persons to ensure smooth communication. We also asked whether we could record the interviews, and there were no refusals. The interviews took place at each enterprise's headquarters; in one case, the representative of the management board visited our university. The interviews were conducted by the researchers directly involved in the project, who are also the authors of this study. Each interview lasted from 120 to 150 min and was conducted by two or three experts.

Interviews were transcribed into a spreadsheet-based matrix to facilitate a qualitative comparative analysis in accordance with the list of issues presented in Table 1. The collected data were discussed during 9 sessions involving the whole research team in order to summarize and synthesize all of the relevant aspects, in line with qualitative analysis practice.

We selected three retail enterprises that had participated in the semistructured interviews, namely Carrefour, Auchan, and Jeronimo Martins, for the assortment and price analysis. Additionally, we covered Tesco and Lidl in the analysis, since both are important players on the Polish retail market.

The analysis of the assortment was made by observing each store and making a list of its assortment items. The brand name, brand type, origin of the product, product name, size of packaging, and price per package were recorded, e.g., brand name: Carrefour Bio; brand type: organic private label; origin of the product: imported product; product name: sunflower oil; size of packaging: 750 mL; price per package: 13.99 PLN.

In order to present the perspective of domestic entities, the largest Polish producer and distributor of organic products (Bio Planet) and the largest Polish producer of organic products under its own brand and for OPLs (Symbio) were selected. A producer (Wytwórnia Makaronu BioBabalski) delivering products under OPLs was also included in the group of surveyed production enterprises.
The assortment analysis covered 23 product categories (Table 2). For the purpose of this study, the categories of organic products appearing in the commercial offer of the production companies and retail chains were adopted; the condition for inclusion of a product category was the presence of OPLs in it. The analysis covered five aspects: the number of assortment items available under the PLs, the number of product categories with organic products, the number of organic products available under the PLs, a comparative price analysis, and an evaluation of environmental issues. For the comparative price analysis, three organic products (coffee, oil, and rice) under PLs were selected due to their availability in all of the retail chains that were surveyed. In the case of some products available under the PLs, such as coffee, tea, cocoa, and palm oil, important environmental issues, e.g., waste management, water saving, soil erosion, and crop protection, were also evaluated [47].

The organic product items were divided (when possible) into seven groups, namely, organic imported brands, organic producer brands, organic private labels (for Carrefour, Tesco, and Auchan), organic discount private labels (for Jeronimo Martins and Lidl), nonorganic premium private labels, nonorganic economy private labels, and nonorganic discount private labels. The average prices were calculated for each group. In order to present the differences between prices, the average price for organic private labels was set at 100%.

Participants

The largest retail chains operating on the Polish market were included for the purpose of this study; an additional criterion was having PLs in the assortment offer. Thus, the following chains were taken into consideration: Biedronka, Lidl, Tesco, Auchan, and Carrefour.

In Poland, the share of large-scale formats equaled approx. 57% in 2017 [34]. The largest retail chains are Jeronimo Martins (the Biedronka chain), Schwarz Gruppe (the Lidl and Kaufland chains), Tesco, Auchan, Eurocash, and Carrefour [98,99]. In 2011-2016, the combined market share of the 10 largest retail chains grew by more than 16 p.p. and amounted to 58%. The highest share belonged to Jeronimo Martins (19% for the Biedronka chain), whose share grew by more than 8 p.p. [99]. All of the largest retail chains have introduced PLs. The share of these chains in the sales structure of PLs exceeded 80% [100], including the share of the Biedronka chain, which was equal to 50% [98].

Participant No. 1 (Carrefour) is one of the largest multinational retailing chains in the world, operating in more than 30 countries in Europe, the Americas, Asia, and Africa. The French group's stores offer a variety of formats and channels, including hypermarkets, supermarkets, convenience stores, Cash&Carry stores, and e-commerce. It has three operational models: integrated, franchised, and partnership stores. The group is interested in products sourced from local or national suppliers, which in 2016 represented close to 74% of all brand food products sold. In addition to brand-name products, Carrefour offers own-brand food products, namely Carrefour, Carrefour Bio, Carrefour Selection, No Gluten, Carrefour Baby and Carrefour Kids, Carrefour Ecoplanet, Reflets de France, Terre d'Italia, De Nuestra Tierra, Viver, Bon App', Veggie, etc. [101].
Participant No. 2 (Auchan) is a multiformat retailer operating in 18 countries through city center superstores, out-of-town hypermarkets, online shopping, and drive outlets. Auchan Holding brings together three independent companies with complementary businesses in retail, banking services, and commercial real estate services. Auchan Holding offers own-brand food products, mainly Les Produits Auchan (Auchan products), Le Moins Cher (the least expensive products), Rik et Rok (products designed for children), Les Produits Régionaux (regional products), and Mieux Vivre Bio/Sans Gluten (Live Better Organic/Without Gluten), which are Auchan products that are either organic or gluten-free [102].

Participant No. 3 (Tesco PLC) is a British multinational retailer with leading supermarkets, hypermarkets, and superstores under such brands as Tesco Extra, Tesco Superstores, Tesco Metro, One Stop, and Tesco.com-only. Tesco is diversified geographically and into various areas of business activity, such as the retailing of books, furniture, toys, clothing, electronics, gas, and software, as well as financial and Internet services. It has over 6550 stores in 12 countries across Europe and Asia offering products ranging from "Tesco Value" items to the "Tesco Finest" range.

Participant No. 4 (Jeronimo Martins) is a Portuguese corporate group operating in the food distribution sector in Portugal, Poland, and Colombia. In Poland, Biedronka is the food retail sales leader, operating 2722 stores across the country. Biedronka offers a wide assortment of PLs under individual or category brands [103].

Participant No. 5 (Lidl Stiftung & Co., Neckarsulm, Germany) is a German global discount supermarket chain that belongs to Schwarz Gruppe, a private family-owned German retail group operating over 10,000 stores (ca. 9900 Lidl stores, of which 3200 are in Germany, and ca. 1190 Kaufland hypermarkets, of which 640 are in Germany).

Participant No. 6 (Bio Planet S.A.) is a Polish market leader in the production and distribution of organic food. Since 2006, the company has been offering certified organic products catering to the needs of consumers interested in bio food. The company's assortment portfolio consists of over 4000 products, including 350 products under the Bio Planet brand (a constant offer of ca. 250 packed products). Bio Planet promotes the idea of organic nutrition with the mission "Bio Planet wants to face the challenges of the developing world by spreading environmental awareness among its consumers and promoting natural dietary components". Bio Planet delivers organic products under the Bio Planet brand name and distributes the products of well-known manufacturers [104].

Participant No. 7 (Symbio Polska S.A., Lublin, Poland) is one of the largest producers of certified organic food in Poland, manufacturing over 300 products under the Symbio brand. The company's main areas of activity are the production and export of food intermediates used in food production as well as the production and distribution of retail products for the domestic market and for export. Its mission is formulated as "Pure nature. Nature products coming 100% from nature". In the structure of total sales revenue in 2016, 75.1% was generated by domestic sales [105].

Participant No. 8 (BioBabalski Pasta Manufacturer; in Polish: Wytwórnia Makaronu BioBabalski) has been operating in the sector of organic food producers since 1991. The company specializes in the production of wholegrain pasta, cereal flakes, and many types of flour and groats available under its producer brand and under the OPLs of retailers. This enterprise's mission is to popularize old cereal varieties.

Organic PLs-General Approach

Decisions by international retailers to introduce PLs are made at the highest levels of the corporate hierarchy. The reasons for these decisions vary and are determined by both external and internal factors. The external conditions relate to market position, macroeconomic trends, changes in consumer behavior, and country-specific implementation of corporate strategies towards PLs. Growing consumer interest in health, environmental, and social aspects of the food production process has also been observed [40-46]. The internal factors cover enterprise management, the product portfolio, and the product category. On the Polish organic food market, as compared to other markets [46], increasing interest of both producers and retailers in environmentally friendly practices and social responsibility in food production is being observed. This is connected with the growth and spread of sustainability certification of PLs and other branded products, since sustainability certification makes it possible to gain a premium price and a premium image [47].
Among the 9 retailers, 6 had not introduced any OPLs. These included retailers that had PLs in their offer but not in the category of organic food. Three smaller retailers with Polish capital did not own any PLs at all. The reason for not owning PLs was their perception of such products as cheaper and of inferior quality. The issue of PLs competing with producer brands through lower prices was emphasized, and these retailers also perceived PLs as a threat.

Among the international retailers surveyed here, OPLs had been available in their assortment for periods ranging from a few years to more than a decade, and they were either created specifically for the Polish market or transferred from a parent company operating on a foreign market. The strategy of introducing OPLs as used by the retailers mimics the strategies of their parent companies and focuses on the gradual expansion of products to the markets of other countries. The decision to introduce OPLs in the case of international retailers is also driven by the supply of organic products. The timing of this process results from the implementation of strategic goals adopted at the corporate level.

An important issue related to the decision to introduce OPLs is the availability of suppliers of organic products. Among the 7 producers participating in the first stage of the research, 5 did not deliver products under OPLs; only the largest enterprises were engaged in the supply of organic products for PLs. Smaller organic producers had not considered production under PLs due to competition with their own brands, and economic aspects referring to cost structure were also pointed out. The necessity of using unified packaging and of conducting consumer research were among the other reasons for not supplying products under PLs.

R2: "If the supplier is a big player, then expanding the PLs will not negatively affect its portfolio. If it is a local supplier, it cannot produce PLs because there is no such capacity. Introducing a PL with someone else may be a threat to that company. The emergence of the PL may also be a chance to cooperate."

Our results revealed that the decision of a given retailer to introduce OPLs was determined by factors related to the suppliers' capacities. Several strategic options can be distinguished:

• foreign OPLs and a foreign supplier,
• foreign OPLs and a domestic supplier,
• OPLs created for the Polish market and a Polish supplier,
• OPLs created for the Polish market and a foreign supplier.

R1: "When we started, in the first years there were mainly imported products, and from the very beginning of opening we had PL products on the market, only our PL is both local and international."

The choice of a specific strategy is determined by market conditions related to ensuring the continuity of supplies in the appropriate quantity and quality for: 1. deliveries of organic food, and 2. deliveries for the PLs.

In the first option, it is possible to have several suppliers for one assortment item to ensure the product's continuous presence on store shelves. This approach allows the elimination of fluctuations in deliveries from individual suppliers.
The second option requires a comprehensive approach. It is necessary to acquire, for a specific product, usually one supplier who will ensure continuous supply in the correct quantity and quality. This is determined by the fact that the products will be available under the brand of the retail network, which thus constitutes an additional mark guaranteeing quality. The different approaches to domestic and foreign suppliers result from the requirements posed by the internal quality assurance systems adopted by the retailers.

R1: "It is difficult to say whether the requirements are different for products, whether domestic or foreign. If we take a product from a local supplier, at the outset we determine what the quality level must be, what requirements the product must meet. Those that are imported do not assume our requirements here; this happens at the level of the country that the product comes from. Each country introducing a product takes into account the market to which the product is introduced. The products that we have on sale are made either only for our parent market or for other countries."

Competitive Advantage of OPLs

The CA resulting from the introduction of PLs in the category of organic food is a multifaceted issue. Six contributors to CA can be distinguished; these are summarized in Table 3 and further discussed in the following sections.

PLs-Based Competitive Advantage

The decision processes of retailers introducing OPLs were driven by the type of PL (new or existing PL, sub-brand or single brand, product or corporate brand) and the positioning strategy of the OPLs (economy OPLs or premium OPLs). In the literature, these types of PLs are described as eco-brands or sustainability-oriented brands [46].

In the case of large international retailers, the corporate brand was most often adopted as an umbrella brand for all of the assortment items (Table 4). This resulted from the business strategy at the corporate level. The presence of the corporate brand in the OPL's name is a supporting element and simultaneously provides a specific quality guarantee. The second element of the name (bio or organic) supports both the individualization and the identification of the organic category.

This approach is crucial for the further development of PLs in the retail chains, with the PLs in the basic product categories [8,19]. It makes it possible to extend green product lines [46] and to distinguish premium PLs from economy PLs [9,49], since it is related to positioning in the higher quality and price segment [48] and is aimed at satisfying consumer needs [17]. Our study revealed that the introduction of OPLs into the portfolio of international retailers was connected with the decision to change the brand architecture in order to specify brand roles and the nature of the relationships between brands. The extension of PLs in the organic food category by international retail chains resulted in a transition from a branded house strategy (e.g., Carrefour, Auchan, Tesco) towards a sub-brand strategy, e.g., in the Carrefour chain to Carrefour and Carrefour Bio, in the Auchan chain to Auchanbio and Auchan economy, and in the Tesco chain to Tesco Organic, Tesco Finest, and Tesco Value. The discounters, however, adopted a house of brands strategy, e.g., goBio, Melly, Prawdziwe, and Delikate at Jeronimo Martins and Bio Organic, Pikok, Pilos, Cien, and Fin Carre at Lidl.
An important element shaping the CA of OPLs is their image, which depends on the type of PL and the image of the retail chain. These factors strengthen the image of organic food perceived as one product category. For this reason, it is understandable that sub-brands are introduced as OPLs to ensure more accurate identification. This allows a distinction between PLs in organic food and PLs in conventional food in the process of communication with consumers. It is also a tool for facilitating merchandise-related activities, e.g., by helping to arrange products on shelves and to introduce uniform visual identification. The growing role of a brand in positioning organic products is shown in the literature: brand communication can play an important role in triggering higher levels of consumer awareness and can reinforce consumer confidence, trust, and safety of usage [41].

Assortment-Based Competitive Advantage

The existing structure of the retail trade in Poland determines the strategic objectives of the retailers covered by the research, which focus on the provision of a comprehensive assortment of products. The assortment range of organic products is enlarged by expanding both the premium and economy segments as well as by developing the PLs. Such a decision can be perceived as a strategy involving sustainability goals aimed at the development of new environmentally safe products sold at a premium price [83]. This is a consequence of the ongoing competition between retail chains, which reflects, on the one hand, the different formats of retail outlets and, on the other, changing consumer attitudes. The growing global demand for organic products drives the decisions of retail chains' management boards to introduce such product items under PLs.

R3: "There is more room to maneuver (...), a comprehensive approach, trend forecasting, and client searches for such products, so we must have them in our offer..."

R1: "The variety of products is our strategy, we want to meet the expectations of those consumers who are looking for gluten-free, healthy, ecological, local, national and other products."

The number of food products available under PLs in the analyzed product categories ranged from 193 (Auchan) to 401 (Tesco). This diversification results from different business strategies, including the introduction of new products and new categories under the PLs (Table 5). The number of assortment items of organic products under PLs in the hyper- and supermarket chains ranges from 22 in Auchan to 30 in Carrefour, while in the discount stores it is under 30. Few comparable organic products are available in the offers of all the surveyed retailers and discounters. A detailed overview of the assortment in private labels, organic private labels, and producer brands is presented in Tables 6 and 7. We observed different approaches to the introduction of PLs between the super- and hypermarket chains on the one hand and the discount stores on the other: discounters develop PLs and OPLs with no, or only a small, share of products available under producer brands.

When comparing the number of products introduced in the discount chains, it should be pointed out that there are more items of organic food, including those available under PLs, in the Lidl store chain than in the other retailers. This is a consequence of the strategy adopted towards the development of the category of organic food and food perceived as "natural and healthy". Most of the assortment items offered as organic are imported and available in all countries where Lidl operates.
Different approaches to the category of organic food are used among the analyzed hyper- and supermarket chains (Table 6). The largest number of organic products available under producer brands (274) and OPLs (30) was observed for Carrefour. Tesco had the smallest number of organic products in the analyzed product categories (105). In Auchan there were 193 organic products, but the number of products available under OPLs was low.

Our research revealed that all of the analyzed retailers, when they started introducing OPLs, tended to offer organic products that differed from what their competitors were offering. Exclusiveness is an important competitive advantage for particular retail chains; e.g., organic pasta is available only through the Tesco and Jeronimo Martins (Biedronka, Kostrzyn, Poland) chains, while organic sugar is offered only through the Carrefour chain (Table 7). This indicates the willingness to stand out and to introduce OPLs that are different from those in competing chains.

This differentiation in the assortment of OPLs fits into treating the own brand as an important tool for distinguishing a retailer from its competitors [2,51,53]. It also ensures an appropriate market position due to the adopted positioning strategy [2]. This is in accordance with the description of competitive advantage as that which differentiates an enterprise from other enterprises [68].

A different strategy design can be observed in the analyzed Polish producers (Bio Planet, Symbio, BioBabalski Pasta Manufacturer), i.e., a larger number of product categories and of assortment items. They have implemented a differentiation focus strategy with organic food as the main target of their business activity. This is in accordance with the description of competitive advantage as a higher ability to compete [67] and as something that keeps the enterprise "alive and growing" [68].

In the flour category, the following items are available in addition to the standard items: wheat bread flour, rye graham flour, whole rye flour, rye bread flour, buckwheat flour, oat white flour, whole oat flour, amaranth flour, millet flour, corn flour, spelt white flour, spelt bread flour, rice flour, soy flour, cassava flour, chestnut flour, quinoa flour, coconut flour, and almond flour. Only basic types of flour are available in the foreign retailers' product offer.

Assortment-based competitive advantage is also reflected in the introduction of such categories of organic food under PLs as superfoods, including baobab powder, hemp protein powder, camu camu powder, chlorella powder, guarana powder, acai powder (freeze-dried), kale powder, lucuma powder, psyllium seeds, and spirulina powder.

Organic-Attributes-Based Competitive Advantage

A CA resulting from the introduction of OPLs is "healthiness", which is a key attribute of organic products and the main driver of organic food consumption. It can also be analyzed as an example of the healthy lifestyle trend.

R3: "...for the retailer, OPLs mean 'promoting health'"

P1: "in Poland, the egoistic approach-the health benefit-dominates"

P2: "...a safe product, devoid of herbicides or pesticides, it is not genetically modified. (...) it is in one dimension, such as health..."

Another attribute of primary importance is that organic production is controlled, with a third party involved in the certification process.
Our research revealed that products available as OPLs were checked in a multistage process. First, the documents (of the producer and the products) are analyzed and the certificates are verified. Then the product and its compliance with the specification are checked. The next stage consists of laboratory tests, including sensory ones. This approach is related to the fact that the retailer guarantees the product's quality; noncompliance with quality requirements by the supplier is treated as a risk for the retailer.

R1: "For us, the risk is one-failure to maintain the quality of the product. [If] we sell as organic, and then after the results of research it turns out that the non-ecological product must be withdrawn, a media crisis is emerging. The issue of disloyalty and falsification of the product-here is the risk. This is mainly about our brand, where we sign our logo."

Sustainability-Based Competitive Advantage

The approach to OPLs is determined by how organic food is perceived in the context of sustainable development and environmental protection. This is an additional dimension and an additional attribute that allows the CA to be shaped. Organic products are also related to benefits in terms of animal welfare.

P1: "It is pesticide-free food."

P1: "...crops are less harmful to the environment than conventional ones."

P2: "[In] other dimensions, such as environmental protection, sustainable development and animal welfare (...) organic food is not only healthy, but also the philosophy of life, which means caring about environmental protection, sustainable development and animal welfare, which is very important."

R1: "...ecological products also bring with them issues of tradition and the environmental protection that is connected with it, they have a lot of advantages related to limiting the level, processing and complexity of the product."

Some researchers have indicated that enterprises can use brands to promote sustainability to their consumers, industrial customers, and other stakeholders [84] as well as to improve the sustainability of food production [47]. This is connected with consumer perception and interest as well as with the enterprises' strategies [106-108].

Our research revealed that an important aspect determining the introduction of PLs in the category of organic food was social responsibility. It is understood as the introduction of products produced via methods that are less harmful to the environment and as a response to the needs of ever-larger groups of society.

P3: "Each retail network will build PLs, social responsibility is important."

Another issue is sustainable consumption [43,109] and the role of OPLs in assuring more sustainable levels of consumption. This was observed in the Romanian retail system [109] as well as on the German [47], Italian [44,47], and Belgian markets [42]. Therefore, it is also very important to the retailers operating on the Polish market of organic food.

Our study revealed that the retailers achieved a CA by building trust-based relationships with suppliers of organic products. This aspect is important for cooperation with current and future suppliers. It can also be used, as mentioned in the literature, to build the company's reputation by increasing environmental efficiency [83].

R1: "...our goal is to maintain cooperation with suppliers as long as possible."
Price-Related Competitive Advantage

The price competitive advantage of OPLs is determined by two factors. The first is the price proposed by the suppliers, which takes into account the costs of organic production; the second results from the price strategy adopted by the retailer in relation to a PL positioned in either the lower or the higher price segment.

R1: "(...) our weakness, (...) the barrier is price. Perhaps now this barrier is smaller, but until recently the prices proposed by the suppliers were very high, [and] we have a rule that our own brand must be available to the customer in a given price range."

The price competitive advantage of OPLs should be analyzed in three areas:

• the difference between organic products sold under PLs and under the producer brand,
• the difference between organic products sold under discount OPLs and under non-discount OPLs,
• the difference between products sold as OPLs and as PLs.

The price analysis for three products (rice, coffee, and oils) allowed us to determine the price differences (Tables 8-10). Organic products available under producer brands were offered in the higher price segment. The highest level of retail prices was found for imported products, and lower prices were observed for organic products available under PLs as the OPLs of retail chains (Carrefour BIO, AuchanBio, Tesco Organic). Discount stores were characterized by a lower price level for such PLs as goBio and Bio Organic. The non-organic equivalents of organic products offered under the organic PLs of super- and hypermarkets as well as discounters were positioned in the lowest price segment.

R1: "Organic food under private labels will be cheaper (...) It seems to me that if someone is looking for an organic product, that person does not look whether it is a private label or a national brand."

On markets perceived to be more advanced in terms of promoting food sustainability, the prices of many products with sustainability certification were not higher than the prices of corresponding products without the environmental label [47]. This difference can be explained by the degree of development of the organic sector and the particularly high share of large retailers in organic sales [56]. Price competitive advantage results from actual price differences; it is also determined by the consumers' perception of prices, that is, by how the price of organic products is perceived in comparison to that of PLs. Organic food is perceived by consumers as more expensive than conventional food. Organic products, due to their price level, are also perceived as premium food. At the same time, PLs are perceived as belonging to the lower price segments.

R2: "The price can be a strong point, showing that organic products are not only exclusive products, intended only for a selected social group that can afford such food. We increase the availability of this product group."

R1: "The price difference between organic and conventional food depends on the product, but these are the upper limits; a few years ago it was 45%, and not 20."

In the literature, there is substantial evidence showing how the prices of PLs influence perceived value and brand equity; e.g., price discounts and the brand's perceived quality exert a significant influence on perceived value [52]. Some studies have indicated that information on both the environment and health that is placed on organic labels can influence consumers' willingness to pay for labeled organic products [42].
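The price comparisons in Tables 8-10 rest on the indexing rule described in the methods, where the average price of organic private labels is set at 100%. The following Python snippet is a minimal sketch of that arithmetic; the product groups mirror those used in the study, but the prices are illustrative placeholders rather than the study's data.

# Minimal sketch of the price-indexing step described above.
# Prices are illustrative placeholders, not the study's data (PLN per package).
from statistics import mean

# Observed prices grouped by brand type for one product (e.g., rice)
prices = {
    "organic producer brand":    [15.99, 17.49, 16.29],
    "organic private label":     [11.99, 12.49],
    "organic discount PL":       [9.99, 10.49],
    "non-organic private label": [6.99, 7.49, 6.49],
}

base = mean(prices["organic private label"])  # OPL average taken as 100%

for group, values in prices.items():
    avg = mean(values)
    index = avg / base * 100  # price index relative to the OPL average
    print(f"{group:28s} avg = {avg:6.2f} PLN  index = {index:6.1f}%")

With this convention, an index above 100% marks a price premium over the OPL average, and an index below 100% marks a discount relative to it.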
On the other hand, the perception of a given PL's quality is driven by the complexity, price level, average inter-purchase time, and quality variance of the product category [16]. Such price perception is particularly important for organic products available under PLs, given the image of organic products as a premium food and, in this context, their lower price.

Retailer-Based Competitive Advantage

An important group of attributes shaping the CA based on OPLs is related to the given retail network. One of them is the perception of added value resulting from the image of retailer PLs as a guarantee of quality, since retailers adopt additional quality standards for the development of PLs.

For consumers, the labeling of organic products with the retailer's PL is an additional element that guarantees product quality and a proper production process. PLs also allow retailers to be in full control of the product and to take independent initiatives.

P2: "Here, value can be added to the overall values attributed to eco-friendly products if the retailer is convinced that these products under its PLs are subject to special supervision and have been specially selected. This is a double value. This is a sign of the quality of this retailer, we take responsibility for it, we have checked it. We guarantee that although this product is certified and safe, we have checked and verified it. If the retailer convinces its consumers [about] organic products under the PL of that retailer, they will be competitive with organic products under the producer brand, because this is a lower price and a double certificate."

The introduction of OPLs is part of the implementation of the adopted positioning strategies. It allows the retailer's market position on the total food market to be strengthened and the retailer to be perceived as a "trendsetter" in the organic food category.

P1: "Owning the PL builds one's market position, one is not anonymous on the market."

P1: "The consumers of organic food are more interested in the origin of the product, its path from the field to the table; owning a PL helps to give credence to the fact that they are familiar with the production of such food."

Our research revealed that strengthening the market position also applied to the consumer-retailer relationship. Consumer loyalty towards the retailer and its PLs is the result, on the one hand, of the competition strategies that have been adopted and, on the other, of the consumers' conviction regarding product quality, the reliability of the retailer, and the way business is done. This enhances the retailer's positive image on a competitive market. An analysis of the literature shows the importance of PLs in the context of building and strengthening relationships with consumers [53], since greater loyalty results from the satisfaction that is built [39,53]. The "social quality" of PLs plays an important role in perceived quality and in improving consumers' loyalty intentions [111].
Another aspect of the retailer-based competitive advantage is cooperation between the retailer and the producer based on relationship management. Long-term cooperation contracts are based on mutual benefits connected with trust and partner relations and eliminate frequent supplier rotation. It has been emphasized in the literature that environmental certification determines the reorganization of supply chain relationships, which is aimed at higher bilateral dependency among supply chain players. Additional aspects can be analyzed in terms of a reduction in product uncertainty and an increased degree of vertical coordination [47].

R1: "It is rather constant cooperation. We have many examples of suppliers that we have been with since the very first day, for nearly 20 years."

R1: "...just as every brand builds its trust, the case is the same for the domestic supplier. Every supplier builds his/her trust, a known one will be perceived better, and a less known one worse, or vice versa-it's better because it's from my region and I know the supplier."

Conclusions

The past several years have been a period of dynamic development of PLs in both Poland and Europe. The quantitative increase in the range available under PLs has been accompanied by changes related to the launching of new product categories. Organic food is an example of such a category, and OPLs have become a source of CA for large international retail chains. The research conducted here confirmed the hypothesis that the competitive advantage of OPLs is a compilation of six contributors related to the type of PLs, the assortment range, sustainability, the attributes of organic food, price, and retailer-based issues. The contributors identified here result from the product specificity of organic food and are related to the introduction of OPLs by international retailers. The OPLs can also be approached via RBT as outstanding resources allowing a retailer to achieve an SCA.

The following conclusions can be drawn when summing up the contributors to the CA of OPLs:

• The competitive advantage of OPLs that results from the type of PLs is due to brand architecture and the adoption of either a house of brands or a branded house strategy. New PLs have been introduced for organic food with elements supporting the individualization and identification of the organic category.

• The assortment-based competitive advantage can be described by the number and types of categories, specific products, and innovations. Retailers exclusively offer certain organic products that are not available in competing chains. This gives them the opportunity to underline the uniqueness of the product offer and to attract consumers who are looking for products with a specific added value at affordable prices.

• The organic-attributes-based competitive advantage is associated with health and naturalness, organic quality, and a certified process of production. The introduction of OPLs allows international retailers to position themselves as chains promoting a healthy lifestyle.

• The sustainability-based competitive advantage is determined by organic food being perceived as a product category that respects sustainable development and the natural environment and enhances a shift towards more ethical and responsible food consumption, also in terms of other credence attributes such as animal welfare.
• Another important area of the CA of retailers as derived from OPLs is the management of relations with suppliers. Long-term cooperation based on trust helps to achieve mutual benefits and to reduce the risk of inadequate quality. At the same time, the perception of PLs related to the image of the retailer constitutes an additional element that guarantees, in the minds of the consumers, the quality of the end product and becomes an important contributor to the CA of retailers.

• Price positioning places the PLs of organic food below the producer brands and above the PLs for non-organic products. A favorable price combined with the attributes of organic food determines the growing interest of consumers in PLs. At the same time, it separates the sub-brands created for the PLs of organic food by highlighting the various attributes of organic food. This supports the use of positioning strategies that place the OPLs in the higher quality and price segments.

The OPLs' competitive advantage as presented above relates to large international retailers with large suppliers. It results from economies of scale and is also an opportunity to implement the principles of sustainable development and to promote sustainable production and consumption.

Our study also has some limitations. The analysis process focused on qualitative data in relation to the OPLs. This research did not address the wide range of other food products in order to provide more insight. Additional research should investigate other food and non-food products in order to evaluate retailers' complex strategies towards PLs.

Future studies should also cover the management of supplier relations, which is important in relation to organic products. Identifying the factors that determine cooperation is a critical element in ensuring the continuity of supply and the proper quality of end products. It also constitutes the basis for making decisions about expanding the range of food products available under the PLs of retail chains.

Table 1. Structure of the semistructured interview guide (topic categories and the scope of topics covered). Source: own data.

Table 2. Range of the assortment and price analysis.

Table 3. Contributors to the competitive advantage of OPLs.

Table 4. Description of the retailers' OPLs.
Table 5. Overview of PLs and organic products, including the number of categories and total product items.

Table 6. Overview of product items offered as OPLs, PLs, and producer brands. PLs: food products under private labels; OPLs: organic products under PLs; O: organic products under producer brands. Source: own data.

Table 7. Organic products available under PLs. Source: own data.

Table 8. Prices of rice under organic PLs, PLs, and producer brands (in PLN per kg).

Table 9. Prices of coffee under organic PLs, PLs, and producer brands (in PLN per kg).

Table 10. Prices of oils under organic PLs, PLs, and producer brands (in PLN per liter).
Surface-modified nanoparticles as anti-biofilm filler for dental polymers

The objective of the study was to synthesize silica nanoparticles modified with (i) a tertiary amine bearing two t-cinnamaldehyde substituents or (ii) dimethyl-octyl ammonium, alongside the well-studied quaternary ammonium polyethyleneimine nanoparticles. These were evaluated for their chemical and mechanical properties as well as for antibacterial and antibiofilm activity. Samples were incorporated in a commercial dental resin material, and the degree of monomer conversion, mechanical strength, and water contact angle were tested to characterize the effect of the nanoparticles on the resin material. Antibacterial activity was evaluated with the direct contact test and the biofilm inhibition test against Streptococcus mutans. The addition of cinnamaldehyde-modified particles preserved the degree of conversion and compressive strength of the base material and increased surface hydrophobicity. Quaternary ammonium functional groups led to a decrease in the degree of conversion and to low compressive strength, without altering the hydrophilic nature of the base material. In the direct contact test and the anti-biofilm test, the polyethyleneimine particles exhibited the strongest antibacterial effect. The cinnamaldehyde-modified particles displayed antibiofilm activity, whereas the silica particles with quaternary ammonium were ineffective. Immobilization of t-cinnamaldehyde onto a solid surface via amine linkers provided a better alternative to the well-known quaternary ammonium bactericides.

Introduction

Contact-acting, non-leachable antibacterial compounds have drawn increased attention in recent years. They may be regarded as a promising solution for the contamination of medical device surfaces, including dental restorations. Several approaches are used to incorporate antibacterial agents such as quaternary ammonium salts (QAS) in dental resin-based materials. For example, the acrylic monomer 12-methacryloyloxydodecylpyridinium bromide (MDPB) [1,2] has a methacrylate group available for co-polymerization with the host resin and a pyridinium cation linked to a long alkyl chain spacer. Similar acrylic monomers, modified with

mL absolute ethanol. This procedure was repeated five times to wash out all unreacted TEOS and ammonia, and the particles were then dried under vacuum. Further surface modifications were performed to convert the obtained NPs into an antibacterial filler. This was done by decorating them with a tertiary amine bearing two t-cial groups (Fig 1a) and, in another variant, with a quaternary ammonium group having one 8-carbon-long chain and two methyl groups (Fig 1b).

Functionalization of silica-core NPs with t-cial

A total of 10 mL (0.043 mol) of 3-aminopropyl triethoxysilane (APTES) (Merck) was dissolved in 50 mL toluene (Merck) in a 100 mL RBF. Then 2.2 molar equivalents (12 mL) of t-cial (Sigma-Aldrich, St. Louis, MO, USA) were added, followed by three drops of 98% sulphuric acid (Merck) to promote imine formation. After stirring for 1 hr with a magnetic stirrer at room temperature, dry NaBH4 (Acros Organics, Geel, Belgium) was added (4 molar equivalents), immediately followed by the addition of catalytic iodine (Merck). The reaction was allowed to proceed for 24 hrs at 120˚C under azeotropic distillation in a Dean-Stark apparatus. Next, the salts formed and the unreacted NaBH4 were removed by gravitational filtration, and the solvent was evaporated under vacuum for 72 hrs.
The chemical structure of APTES with t-cial (t-cial-APTES) was confirmed by mass spectrometry (Orbitrap Fusion Lumos Tribrid, Thermo Fisher Scientific, Waltham, MA), nuclear magnetic resonance (1H-NMR) (Varian 300 MHz, Varian Inc., Palo Alto, CA) in DMSO-d6 (Sigma-Aldrich), and infrared spectroscopy (FT-IR) analysis (Nicolet i-10, Thermo Fisher Scientific, Waltham, MA) with an attenuated total reflectance (ATR) device.

The surface of the silica NPs was modified with t-cial-APTES by suspending 1.0 g NPs in 10 mL toluene in the presence of 1 mL t-cial-APTES and a catalytic amount of glacial acetic acid. The suspension was stirred at room temperature for 24 hrs, then the particles were recovered by centrifugation/ethanol rinsing and stored under vacuum in the presence of dry silica gel. The degree of functionalization was determined with an elemental analyzer (2400/II CHN, Perkin Elmer, Waltham, MA) as the percentage of nitrogen (%N) and carbon (%C).

Synthesis of silica-core NPs functionalized with N-octyl, N,N-dimethyl ammonium (QASi)

A total of 10 mL (0.043 mol) of APTES was dissolved in 50 mL anhydrous tetrahydrofuran (Sigma-Aldrich) in a 100 mL RBF. Then, to convert all the APTES primary amines into the N,N-dimethylamino form, 12.9 g (10 molar equivalents) of paraformaldehyde (Sigma-Aldrich) were added, immediately followed by the addition of 10 mL formic acid (96%) (Sigma-Aldrich). The reductive amination reaction was conducted at room temperature under continuous stirring for 24 hrs. Then the unreacted paraformaldehyde was removed by gravitational filtration and the solvent was evaporated under vacuum at 37˚C for 72 hrs. Mass analysis was performed to confirm complete conversion of the primary amines into tertiary amines.

N-octylation was performed with the Menshutkin reaction. A total of 1.1 molar equivalents (8.5 mL) of 1-iodooctane (Sigma-Aldrich) in 100 mL absolute ethanol was added to the N,N-dimethylamine-APTES. Alkylation was allowed to proceed at 60˚C for 72 hrs. After most of the solvent had evaporated under vacuum, the quaternary ammonium product was isolated by crystallization from diethyl ether.

Synthesis of quaternary ammonium polyethyleneimine NPs (QPEI)

Polyethyleneimine was crosslinked with 1,4-diiodopentane (Sigma-Aldrich) under reflux for 24 hrs. Next, 1 molar equivalent of 1-iodooctane (Sigma-Aldrich) was added, followed by neutralization of the generated acid with NaHCO3 (Merck). Methylation was performed by the addition of 2.25 molar equivalents of iodomethane (Sigma-Aldrich) at 40˚C for 72 hrs. The generated acid was neutralized with 1 molar equivalent of NaHCO3, and the obtained NPs were recovered by the addition of double-distilled water (DDW), resulting in particle precipitation. The QPEI NPs were rinsed again with DDW, lyophilized, and pulverized to a fine powder. Infrared spectroscopy was used to benchmark the prepared QPEI NPs against those reported previously in the literature [11] (Fig 1c).

Sample preparation

An acrylic resin-based, powder-liquid dental material, Unifast Trad (GC Europe, Leuven, Belgium), served as the host polymer to which the test NPs were added at 8% (wt/wt). We previously found that when QPEI nanoparticles are mixed with Unifast at 2, 4, 6, 8, and 16% (wt/wt) and screened with the direct contact test (DCT) against various bacteria, the optimal concentration is 8% (wt/wt) (data not shown). The NPs were first blended with the solid portion of the acrylic material, followed by the addition of the liquid portion according to the manufacturer's instructions.
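As a sanity check on the molar-equivalent quantities quoted in the syntheses above, the arithmetic can be scripted. The Python sketch below uses the 0.043 mol APTES basis given in the text together with standard handbook molar masses and densities; it is an illustrative consistency check, not part of the published protocol.

# Consistency check for the molar-equivalent amounts quoted in the syntheses
# above, on the 0.043 mol APTES basis given in the text. Molar masses (g/mol)
# and densities (g/mL) are standard handbook values.
APTES_MOL = 0.043

reagents = {
    # name: (equivalents vs. APTES, molar mass g/mol, density g/mL or None)
    "t-cinnamaldehyde (2.2 eq)": (2.2, 132.16, 1.05),
    "NaBH4 (4 eq)":              (4.0, 37.83, None),
    "1-iodooctane (1.1 eq)":     (1.1, 240.13, 1.33),
}

for name, (equiv, mw, density) in reagents.items():
    mol = equiv * APTES_MOL
    grams = mol * mw
    line = f"{name}: {mol:.4f} mol = {grams:.1f} g"
    if density:
        line += f" = {grams / density:.1f} mL"  # convert mass to volume for liquids
    print(line)

The computed volumes (about 12 mL of t-cinnamaldehyde and 8.5 mL of 1-iodooctane) agree with the amounts stated in the protocol, which supports reading "equimolars" in the original text as molar equivalents relative to APTES.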
Compressive strength

Cylindrical specimens of 0.4 mm diameter and 10 mm length, prepared in a polypropylene tube, were allowed to polymerize at room temperature for 1 hr and then stored at 37˚C for 24 hrs before testing. Each test group consisted of 10 specimens of polymerized resin material with 8% (wt/wt) NPs; specimens without NPs served as the control. Compressive strength was determined with a universal testing machine (3366, Instron, Canton, MA) operated at a displacement speed of 1 mm/min. The data were analyzed immediately with Merlin software to calculate the compressive strength and the Young's modulus.

Degree of monomer conversion

Three samples of each test formulation, mixed according to the manufacturer's specifications, were placed on the ATR surface of the FT-IR and allowed to polymerize. Infrared spectra were taken every 60 sec from the start of the reaction, and the characteristic infrared absorption peaks of the carbon-carbon double bonds (1638 cm-1) and the reference peaks of the carbon-oxygen double bonds (1720 cm-1) were recorded. The degree of monomer conversion (DC) was calculated according to the following equation:

DC (%) = [1 - (A1638/A1720)polymerized / (A1638/A1720)unpolymerized] x 100

where A is the light absorption at the defined wavenumber (a worked numeric sketch of this calculation is given at the end of this section).

Contact angle

Disc-shaped specimens (d = 5 mm, h = 2 mm) of Unifast material with 8% (wt/wt) NPs were prepared in a polypropylene mold, polymerized for 1 hr, and then stored in DDW at 37˚C for 24 hrs. Non-modified Unifast material discs served as the control. All specimens were polished with No. 320 abrasive paper. The contact angle between deionized water and the test samples was measured with a LAUDA tensiometer (FirsTen Angstroms, Portsmouth, VA). A 2 μL drop of DDW was applied 10 times to each surface. The contact angle was recorded as the average of 10 readings for each droplet, i.e., a total of 100 contact angle readings for each material sample.

Direct contact test (DCT)

Samples of resin material containing 8% (wt/wt) NPs were allowed to polymerize on the sidewalls of a polystyrene flat-bottom 96-well plate (Nunclon, Nunc, Copenhagen, Denmark). After 7 days at 37˚C, the samples were rinsed several times with sterile phosphate-buffered saline (PBS) to remove excess polymerized material, and the plates were sterilized overnight under UV radiation. A flowchart of the DCT procedure is summarized in Fig 2. A total of 10 μL of bacterial suspension was placed on each sample surface, and the plates were placed vertically and incubated at 37˚C for ~40 min until all the liquid had evaporated, to ensure physical contact between the tested surface and the bacteria. Then 220 μL BHI were added to each well and the plates were placed in a spectrophotometer (VERSAmax, Molecular Devices Corporation, Menlo Oaks Corporate Centre, Menlo Park, CA). OD650 measurements were taken every 20 min at 37˚C during 24 hrs and plotted as a function of time, representing bacterial growth curves, as previously described [27].

Antibiofilm test under dynamic conditions

Biofilm formation was studied in a modified drip-flow reactor (MDFR), as described by Hahnel et al. [28] and Ionescu et al. [29], and comprised three microbiological models: (1) a monospecific S. mutans biofilm grown for 48 hrs; (2) the same as above, but with prior simulation of salivary pellicle formation for 24 hrs; (3) an oral microcosm mixed-plaque biofilm, freshly pooled from human volunteers, grown for 48 hrs with prior simulation of salivary pellicle formation for 24 hrs.
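The DC formula given above reduces to simple ratio arithmetic on the two band intensities. Below is a minimal Python sketch of that calculation; the absorbance values are illustrative placeholders, not measurements from the study.

# Sketch of the degree-of-conversion (DC) calculation from the ATR/FT-IR peaks
# described above: aliphatic C=C at 1638 cm^-1 normalized against the C=O
# reference at 1720 cm^-1. Absorbance values are illustrative placeholders.

def degree_of_conversion(a_cc_cured, a_co_cured, a_cc_uncured, a_co_uncured):
    """DC (%) = [1 - (A1638/A1720)_cured / (A1638/A1720)_uncured] * 100."""
    ratio_cured = a_cc_cured / a_co_cured
    ratio_uncured = a_cc_uncured / a_co_uncured
    return (1.0 - ratio_cured / ratio_uncured) * 100.0

# Example: the C=C band shrinks on curing while the reference band is stable.
print(f"DC = {degree_of_conversion(0.12, 0.50, 0.40, 0.50):.1f} %")  # -> 70.0 %

Because the aliphatic C=C band at 1638 cm-1 shrinks as monomer is consumed while the 1720 cm-1 carbonyl band is essentially unchanged, the normalized ratio falls and the computed DC rises toward 100%.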
All the materials and reagents for the microbiological procedures were purchased from Sigma-Aldrich, and all the culture media were purchased from Becton-Dickinson (BD Diagnostics-Difco, Franklin Lakes, NJ, USA). Fig 3 shows a schematic presentation of the antibiofilm tests.

Specimens subjected to the antibiofilm test were stored in 24-well plates under light-proof conditions, and 1000 μL of sterile PBS was added to each well. The plates were stored at room temperature for an additional 7 days to allow unreacted monomers or loosely attached antibacterial compounds to leach out of the specimens. To remove these compounds, each well was rinsed twice daily with 1000 μL sterile PBS.

Saliva collection

Paraffin-stimulated whole saliva was collected by expectoration from three healthy donors (with their informed consent), in accordance with the protocol published by Guggenheim et al. [30]. Briefly, saliva was collected in chilled test tubes, pooled, heated at 60˚C for 30 min to inactivate endogenous enzymes, and then centrifuged (12,000 x g) for 15 min at 4˚C. The supernatant was transferred to sterile test tubes, stored at -20˚C, and thawed at 37˚C 1 hr before the experiments.

Modified drip-flow reactor

Biofilm formation was simulated under continuous flow conditions in a modified drip-flow reactor for 48 hrs [31]. Briefly, for simulating S. mutans biofilm formation under dynamic conditions, a modification of a commercially available drip-flow reactor (DFR 110, BioSurface Technologies; Bozeman, MT, USA) was used (MDFR). The modified design allows the placement of customized sample carriers on the bottom of the flow cells, ensuring complete immersion of the specimens' surfaces in the flow medium. The specimens were mounted on PTFE carriers, which were subsequently fixed on the bottom of each MDFR flow cell. Ten specimens for each test material were used, and the experiment was repeated three times. To minimize the risk of microbial contamination of the MDFR, all tubing and trays were sterilized in a low-temperature hydrogen peroxide gas plasma chemiclave (Sterrad, ASP; Irvine, CA) before assembly. The entire MDFR was then placed in a sterile hood.

In microbiological model 1, each cell was inoculated with 10 mL of S. mutans suspension in early log phase to allow bacterial adhesion. After 4 hrs, a constant flow of sterile nutrient broth was provided with a multi-channel computer-controlled peristaltic pump (RP-1, Rainin; Emeryville, CA, USA). The broth was enriched with 10.0 g/L sucrose [32] and consisted of 2.5 g/L mucin (type II, porcine gastric), 2.0 g/L bacteriological peptone, 2.0 g/L tryptone, 1.0 g/L yeast extract, 0.35 g/L NaCl, 0.2 g/L KCl, 0.2 g/L CaCl2, 0.1 g/L cysteine hydrochloride, 0.001 g/L hemin, and 0.0002 g/L vitamin K1; the flow rate was set at 9.6 mL/hr and the temperature was maintained at 37˚C [29].

In microbiological model 2, before inoculation the specimens' surfaces were completely covered with thawed sterile saliva and maintained at 37˚C for 24 hrs. Then, after carefully aspirating and decanting the supernatant, 10 mL of S. mutans suspension in early log phase were inserted into each flow cell to allow bacterial adhesion. After 4 hrs, a constant flow of nutrient broth was provided, as described above, and the MDFR continued to operate for 48 hrs.

In microbiological model 3, the specimens' surfaces were completely covered with thawed sterile saliva and maintained at 37˚C for 24 hrs to allow the formation of a salivary pellicle.
Human whole saliva was collected by expectoration from at least three healthy volunteers who gave their informed consent to participate. The donors had refrained from oral hygiene for 24 hrs, did not have any active dental disease, and had not consumed any antibiotics for at least the 3 months preceding the experiments. The saliva was collected within 30 min during a single session, and the samples were pooled before further processing. Then the supernatant was gently discarded and 10 mL of pooled saliva were inoculated into each flow cell. After 4 hrs, to allow bacterial adhesion, a constant flow of nutrient broth was provided, and the MDFR continued to operate for 48 hrs, as described above.

Viable biomass assessment

The viable biomass adhering to the specimen surfaces was assessed with a tetrazolium salt (MTT) assay, as previously described [28]. Briefly, an MTT stock solution was prepared by dissolving 5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide in sterile PBS; a phenazinium salt (PMS) stock solution was prepared by dissolving 0.3 mg/mL N-methylphenazinium methyl sulphate in sterile PBS. The solutions were stored at 2˚C in light-proof vials until the day of the experiment, when a fresh measurement solution (FMS) was prepared by diluting 1:10 v/v MTT stock solution and 1:10 v/v PMS stock solution in sterile PBS. A lysing solution (LS) was prepared by dissolving 10% v/v sodium dodecyl sulphate and 50% v/v dimethylformamide in distilled water. After 48 hrs, the nutrient broth flow to the selected flow cells of each microbiological model was discontinued. The specimen carriers were removed from the flow chambers and immediately placed in petri dishes containing sterile PBS at 37˚C. Specimens were carefully detached from the tray with a pair of sterile tweezers, transferred to a plate containing sterile PBS at 37˚C in order to remove non-adherent streptococci, and placed in the wells of a sterile 48-well plate. A total of 300 μL FMS was added to each well, and the plates were incubated for 3 hrs at 37˚C under light-proof conditions. During the incubation, electron transport across the microbial plasma membrane and, to a lesser extent, microbial redox systems converted the yellow MTT salt to insoluble purple formazan; the conversion was facilitated by the intermediate electron acceptor (PMS). The unreacted FMS was gently aspirated from the wells, the formazan crystals were then dissolved by adding 100 μL LS to each well, and the plates were further incubated for 1 hr at room temperature under light-proof conditions. Subsequently, 90 μL of the solution were transferred into the wells of 96-well plates. The absorbance of the supernatant was measured with a spectrophotometer (Genesis 10-S, Thermo Spectronic, Rochester, NY) at a wavelength of 650 nm. The results, in OD650 units, were displayed graphically.

Statistical procedures

Statistical analyses were performed with JMP 10.0 software (SAS Institute, Cary, NC, USA). Because of the low number of samples in some of the analyses, the normality of the distributions was first checked and verified with the Shapiro-Wilk test, and the homogeneity of variances was checked and verified with Levene's test. One-way ANOVA and Tukey's HSD post hoc test were used to highlight significant differences between experimental groups. The level of significance (α) was set at 0.05.
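For readers who want to reproduce this pipeline outside JMP, the sketch below runs the same sequence of tests (Shapiro-Wilk, Levene, one-way ANOVA, Tukey's HSD) in Python with SciPy and statsmodels. The OD650 values are illustrative placeholders, not the study's data.

# Sketch of the statistical pipeline described above (Shapiro-Wilk, Levene,
# one-way ANOVA, Tukey HSD), using SciPy/statsmodels instead of JMP. The OD650
# values below are illustrative placeholders, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control": [0.82, 0.79, 0.85, 0.80, 0.83],
    "QPEI":    [0.60, 0.58, 0.63, 0.61, 0.59],
    "SiCial":  [0.33, 0.36, 0.31, 0.34, 0.35],
    "QASi":    [0.80, 0.78, 0.83, 0.81, 0.79],
}

# Assumption checks, as in the study
for name, vals in groups.items():
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(vals).pvalue, 3))
print("Levene p =", round(stats.levene(*groups.values()).pvalue, 3))

# One-way ANOVA followed by Tukey's HSD post hoc test (alpha = 0.05)
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4g}")

endog = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(endog, labels, alpha=0.05))

Running the checks before the ANOVA mirrors the study's rationale: with small group sizes, the parametric tests are only justified once normality and homoscedasticity have been verified.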
Compressive strength

Compressive strength and Young's modulus are shown in Fig 4. A significant decrease in mechanical strength was measured for Unifast samples with 8% (wt/wt) QASi, whereas the addition of the same amount of QPEI resulted in complete softening of the host material. The addition of 8% (wt/wt) SiCial caused only a slight decrease in compressive strength.

Degree of monomer conversion (DC)

The DC values of the three test groups and of the control group are shown in Fig 5. Only a slight decrease in monomer conversion occurred in samples with incorporated SiCial NPs, whereas quaternary ammonium functionality resulted in severe inhibition of polymerization. Moreover, the QPEI NPs terminated the polymerization at such an early stage that the final material remained in a soft solid phase. Fig 6 shows representative FT-IR spectra of the unmodified resin material before and after curing.

Discoloration

Fig 7 shows the significant discoloration of resin samples with incorporated nanoparticles containing quaternary ammonium iodide salts (QPEI and QASi) vs. the unmodified resin material (control). A relatively mild change in color is evident in samples with SiCial particles.

Direct contact test

S. mutans growth curves during 48 hrs on acrylic resin material with the three antibacterial nanoparticles are shown in Fig 9. Samples containing 8% QPEI led to material disruption due to poor polymerization; this can be seen in the wavy nature of the related curve, resulting from fluctuations in optical density caused by material rupture and dissemination in the course of the experiment. All three test formulations resulted in some inhibition of bacterial growth, whereas the unmodified polymer had no effect. The QPEI samples showed the strongest antibacterial activity, followed by QASi; the lowest activity was observed for SiCial.

Antibiofilm test

As the results of the statistical tests verified the assumptions of normality and homoscedasticity for all the microbiological data, it was possible to perform ANOVA and post hoc tests. The biofilm data are depicted graphically in Fig 10; the statistical significance is shown in Table 1. The monospecific S. mutans microbiological model without simulation of a salivary pellicle showed significant antibacterial activity upon QPEI and SiCial incorporation: biofilm formation was reduced by about 22% (p = 0.017) and 60% (p < 0.001), respectively. No significant differences in biofilm formation were found upon the addition of QASi. Pre-treatment of the specimen surfaces by simulating salivary pellicle formation for 24 hrs before allowing S. mutans biofilm formation resulted in a lack of antibacterial activity of all the experimental materials (Table 2, Fig 10). The microbiological model based on a mixed-plaque oral microcosm biofilm grown after salivary pellicle formation for 24 hrs showed significant antibacterial activity of QPEI (31% reduction in biofilm formation, p = 0.027) and a borderline non-significant biofilm reduction by SiCial (27% reduction, p = 0.059).

Discussion

The present study, which proposes two novel silica-core-based NPs as possible surface-active antibiofilm fillers for resin-based dental materials, showed that SiCial was the more effective of the two. In comparison with the well-studied QPEI NPs, the SiCial NPs showed a weaker effect in the direct contact test, but they did not inhibit resin polymerization, and the acrylic material preserved its original mechanical properties.
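The percent reductions quoted in the antibiofilm results above follow directly from the MTT readings: reduction (%) = (OD_control - OD_test) / OD_control x 100. The snippet below illustrates the arithmetic; the OD650 values are invented solely to reproduce figures of the same magnitude as those reported, and are not the study's measurements.

# How the percent reductions relate to the raw MTT readings:
# reduction (%) = (OD_control - OD_test) / OD_control * 100.
# OD650 values are illustrative, chosen only to reproduce ~22% and ~60%.
def reduction_pct(od_control, od_test):
    return (od_control - od_test) / od_control * 100.0

od_control = 0.50
print(f"QPEI:   {reduction_pct(od_control, 0.39):.0f} %")   # ~22 %
print(f"SiCial: {reduction_pct(od_control, 0.20):.0f} %")   # ~60 %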
QPEI are well-studied NPs that consist of a polyethyleneimine core and dimethyl-octyl ammonium iodide functional groups. Iodide anions are highly active in free-radical polymerization systems, acting in particular as peroxide activators in the generation of free radicals [12,33]. As shown in Fig 5, this phenomenon was strongly reflected in the % DC measured for resin material with QPEI and QASi particles, both of which contain ammonium iodide. In contrast, resins containing SiCial NPs did not show a decrease in monomer conversion. The hydrophobicity of resin samples modified with NPs vs. that of the unmodified material clearly demonstrates that the incorporation of 8% (wt/wt) of both polyethyleneimine-based and silica-based particles with dimethyl-octyl ammonium iodide functionality did not change the water contact angle (Fig 7). In contrast, di-cinnamyl amine increased the contact angle in a manner that changed the nature of the material's surface from hydrophilic to hydrophobic. These findings may account for the reduced biofilm when tested without a salivary pellicle, i.e., fewer bacteria came in contact with the surface. The DCT growth curves of S. mutans for test samples vs. unmodified resin material showed only a partial antibacterial effect for silica NPs but strong inhibition for QPEI (Fig 8). This is probably due to the lower loading degree of functional units attached to silica-core NPs. It is conceivable that the poor condition of the host resin material with the QPEI NPs resulted in material degradation, diffusion into the medium, and thus an irregular curve. Both the silica-based QASi and the SiCial NPs showed similarly mild results; the cinnamyl-modified nanoparticles were slightly more active than the quaternary ammonium ones. This leads to the conclusion that t-cinnamaldehyde preserves its antibacterial potency after immobilization onto silica particles. Recreating the complex oral environment in the lab is extremely challenging. The results are sound when interpreted within their known limits, and allow for the isolation of several parameters under controlled conditions. Different microbiological models lead, therefore, to different results. It is clear that the salivary pellicle plays a prominent role as a moderator of the antibacterial activity of contact-active materials. The contact-active killing capacity of QPEI was previously described [3,4,7-10]. These particles, and the even more effective SiCial, were able to reduce S. mutans biofilm formation after 48 hrs in a continuous-flow environment. However, the activity was hindered by the salivary pellicle, which probably acted as a separation layer between the bacterial cells and the contact-active surface, inactivating the materials' antibacterial properties. It is conceivable, therefore, that the contact-killing feature of a dental material may be important during the first steps of adhesion and colonization, rather than after salivary pellicle formation, at least for early-colonizing cocci such as S. mutans. The choice of incubation time in the present study allowed the formation of mature biofilm structures, such as those taking place in sites where cleaning by brushing is difficult to achieve. Our results suggest that, under the test conditions, the addition of QPEI nanoparticles reduces biofilm formation after 48 hrs even in the presence of a salivary pellicle.
QPEI nanoparticles, being 10 times smaller than SiCial, may have enhanced activity due to their smaller dimension and the consequent increase in surface-to-volume ratio. Indeed, the QPEI nanoparticles, when incorporated in the carrier material, resulted in a much less hydrophobic surface than the SiCial ones. A previous study demonstrated that the resin with the higher hydrophobicity caused an increase in biofilm formation in the oral microcosm model vs. the less hydrophobic one [29]. Our results indicate that the SiCial adduct yields a hydrophobic surface, whereas QPEI and QASi NPs produce a slightly hydrophilic surface. For this reason, SiCial nanoparticles may be less efficient than QPEI as contact-killing agents on an oral microcosm biofilm. Further studies may help to assess whether the relative dimensions of the two nanoparticles or the resulting hydrophobicity of the material alone can account for their different antibacterial behavior. It bears mentioning that QASi NPs did not show any anti-biofilm effect in the three microbiological biofilm models used. In view of all the above, it may be concluded that SiCial NPs, although less efficient antibacterial agents than QPEI NPs, are more effective in practice because they do not destroy the resin material.
Aqueous Dehydration, Hydrogenation, and Hydrodeoxygenation Reactions of Bio-Based Mucic Acid over Ni, NiMo, Pt, Rh, and Ru on Neutral or Acidic Catalyst Supports

Hydrotreatment of mucic acid (also known as galactaric acid, a diastereomer of glucaric acid), one of the most promising bio-based platform chemicals, was systematically investigated in aqueous media over alumina-, silica-, or carbon-supported transition (nickel and nickel-molybdenum) or noble (platinum, ruthenium, and rhodium) metals. Mucic acid was only converted into mucic-1,4-lactone under non-catalytic reaction conditions in an N2 atmosphere, while the addition of 5 MPa gaseous H2 triggers hydrogenation in the bulk phase, resulting in the formation of galacturonic and galactonic acid. However, dehydroxylation, hydrogenation, decarbonylation, decarboxylation, and cyclization occurred during catalytic hydrotreatment, forming various partially and completely deoxygenated products with a chain length of 3-6 C atoms. Characterization results of the tested catalysts were correlated with their activity and selectivity. The insufficient pore diameter of microporous supports completely hindered the mass transfer of reactants to the active sites, resulting in negligible conversion of mucic acid. A comprehensive reaction pathway network was proposed and several industrially interesting compounds were formed, including levulinic acid, furoic acid, and adipic acid. However, the selectivity towards adipic acid, a bio-based nylon 6,6 precursor, was low (up to 5 mol%) in aqueous media and at elevated temperatures.

Introduction

A huge amount of interest is devoted to the search for new sources and processes for the production of industrially important chemicals, especially monomers that can substitute petroleum-based precursors. To avoid competition between food and chemical or fuel production, only non-edible biomass should be utilized in the chemical industry. The missed opportunities for converting biomass into chemicals lie in agricultural wastes, wood residues, and other similar lignocellulosic materials [1-3]. In general, lignocellulosic biomass is composed of three main polymer structures: lignin, cellulose, and hemicellulose. Each of them represents its own pool of potential chemical feedstocks [4]. Glucose, as a cellulose monomer, has high potential for biopolymer production. Oxidation of glucose at both terminal C-atoms yields a group of dicarboxylic sugar acids named aldaric acids (saccharic acids), which are poorly soluble in polar and nonpolar solvents [5].
The main disadvantage of utilizing cellulose as a feedstock for transportation fuels is its high oxygen/carbon ratio compared to petroleum-based feedstocks. Oxygen-rich functional groups in cellulose, however, represent huge potential for the production of various chemicals. For that reason, selective removal or transformation of certain functional groups by the hydrodeoxygenation (HDO) process is adopted to obtain the desired products. Few researchers have devoted their work to the HDO of aldaric acids over transition or noble metals. Only a few patents and nearly no scientific papers have been published on the chemocatalytic conversion of aldaric acids (glucaric or mucic acid), most often towards adipic acid. Rennovia recently patented a process of glucose oxidation into glucaric acid over heterogeneous catalysts with air, in which a progressive oxidation of the terminal functional groups (at the C1 and C6 positions) proceeds. Two main routes are considered, and they both result in the same final product, glucaric acid [6]. Glucaric acid was converted further into adipic acid by a catalytic HDO process over different metal catalysts in an acidic environment, where acetic acid with additional HBr was used as a solvent. The highest yield of adipic acid (89%) was reached over a RhPt/silica catalyst. Asikainen et al. [7] presented and patented a process for the selective dehydroxylation of aldaric acid into muconic acid or furanic chemicals. The homogeneous catalytic process was performed in different short-chain alcohols over methylrhenium trioxide. The selectivity to linear or cyclic products depends on the reaction temperature. Higher temperatures (>150 °C) resulted in cyclic products, while experiments performed at lower temperatures (<150 °C) resulted in higher yields of linear products, esters of (di)carboxylic acids. Since the alcohol was present in abundant quantity, esterification proceeded between the alcohol solvent and the carboxylic groups. Esters act as a protective group, even though the carboxylic group is more temperature-stable than the hydroxyl group. When the carboxylic groups are protected, selective removal of hydroxyl groups can proceed. A comprehensive review of the catalytic hydrodeoxygenation of biomass-derived polyfunctionalized substrates was published by Yoshinao et al. [8]. Aqueous-phase hydrodeoxygenation of low-molecular-weight biomass oxygenates over heterogeneous Ni catalysts was studied by Jahromi and Agblevor [9,10].

However, if aqueous conditions are applied, aldaric acids form an equilibrium between the acid and the lactone. The carboxyl group lactonizes by condensation with the C4 or C5 hydroxyl group, yielding the 1,4- or 1,5-lactone. Lactone formation is accelerated by heating and in the presence of strong acids. It was reported that the 1,4-lactone is more stable than the 1,5-lactone, which is readily hydrolyzed in aqueous solution [5]. Lactone formation could be avoided by avoiding aqueous conditions or by protecting the free carboxylic acids by esterification. Selective removal of hydroxyl groups can proceed when the carboxylic groups are protected. If aldaric acids are converted into esters or lactones, both forms of aldaric acids could be used in condensation polymerization to produce hydroxylated nylons, as presented by Chen [11].
Selective hydrodeoxygenation of C-X bonds (where X represents an O, S, or N atom) is a highly desirable reaction, not only in biomass conversion [12,13] but also in the petroleum industry, for producing platform chemicals and industrially important chemicals or for reducing sulphur content [14]. Comprehensive studies on the catalytic hydrodeoxygenation of aliphatic and aromatic compounds have been carried out by many researchers [15,16]. However, the selective removal of particular functional groups is still a challenging process, especially because of the varying reactivity of each functional group and their mutual influence [17]. As presented by Boussie et al. [17], the reactivity of -OH groups differs depending on their α- or β-position on the aldaric acid. The β-OH groups, located at the C3 and C4 positions of glucaric acid, are expected to have higher reactivity than the α-OH groups at the C2 and C5 positions. For that reason, hydrohalic acids are added for the catalytic activation of all C-OH bonds, transforming them into more reactive alkyl-halide intermediates [17]. Removing all -OH groups results in adipic acid. The highest yield of adipic acid was reached when HBr was added to the reaction mixture and acetic acid was used as a solvent.

Conversion, selectivity, and yield are strongly dependent on the catalyst choice and reaction conditions. The most typical approach for HDO is a combination of an acid and a noble or transition metal catalyst. The presence of Ru, Ir, or Rh metals can easily cause C-C dissociation, while Pd and Pt are reported to be quite inactive for C-C cleavage, even at high temperatures [8,18]. Normally, a combination of a noble metal and a group 6 or 7 metal is used, since they are responsible for H2 dissociation and C-O cleavage, respectively [8]. The metals are usually finely dispersed on acidic supports, which promote C-O bond cleavage.

In our previous studies, the HDO of various functional groups on the C6 chain was studied [19] in order to establish a detailed microkinetic model. Based on those studies, the reaction conditions for the present research were estimated. The HDO of a C6 aldaric acid (specifically, mucic acid) over 10 different metal catalysts on neutral or acidic supports was performed in this work in order to pinpoint the most active catalyst for the selective conversion of mucic acid. Initially, commercially available NiMo catalysts on an Al2O3 support were tested, as NiMo is the most well-known and commonly used hydrogenation and hydrotreatment catalyst in industrial applications. The NiMo/γ-Al2O3 catalyst had been thoroughly tested for its activity in removing oxygen functional groups from the terminal and secondary positions of a linear C6 backbone [19,20] and in the lactonisation of levulinic acid [21]. The NiMo catalyst was selected due to its proven HDO activity and low price, although the alumina support can transform into an oxy-hydroxide (AlOOH) or trihydroxide (Al(OH)3) under aqueous conditions [22]. However, the aim of the present study is to investigate the influence of monometallic noble and transition metals on different supports on the HDO of mucic acid, a bio-based precursor for several important chemicals for the polymer industry.

Firstly, two experiments were performed with the selected catalyst at temperatures of 200 °C and 225 °C.
Furthermore, conventional HDO catalysts with noble and transition metals (Ru, Pt, Rh, Ni, NiMo) on various supports (C, SiO2, Al2O3) were selected for testing. In addition, the detailed reaction pathway of mucic acid HDO was developed based on the detected intermediates and products. The influence of catalyst type and reaction temperature on the conversion and product distribution was studied.

Catalyst Characterization

N2 physisorption analyses by the BET (Brunauer-Emmett-Teller) method were performed for all fresh catalysts in order to determine the specific surface area (A_BET), pore volume (V_p), and average pore width (D_p). The results are collected in Table 1. The metal loading was 5 wt% for all catalysts except for NiMo on alumina, where the metal loadings for Ni and Mo were 3 and 15 wt%, respectively. The highest surface area being found when C was used as a support is in line with expectations. Among all the catalysts, Pt/C showed the highest specific surface area and Pt/γ-Al2O3 the lowest. All metal catalysts on the SiO2 support have similar surface areas (137-154 m^2 g^-1). The same trend was observed for the pore volume: the highest value (twice as high as the others) was determined for Pt/C and the lowest for Pt/γ-Al2O3, which was comparable to NiMo/γ-Al2O3. The largest pore width was determined for the silica-supported catalysts (17.1 ± 1.6 nm), followed by the alumina-supported materials (around 9.5 nm), and the smallest was found for the catalysts on the C support (3.2 ± 0.9 nm). TEM images of two catalysts are reported in the Supplementary Materials (Figures S11 and S12), while the other SEM and TEM micrographs were already reported and discussed in detail elsewhere [23]. In all cases, the results show that the commercial catalysts contain well-dispersed metal clusters below 5 nm in size.

CO and CO2 TPD (temperature-programmed desorption) analyses were done for the 4 catalysts for which the highest activities were expected based on the literature (Ru/C, Rh/C, Ru/SiO2, Rh/SiO2). In Figure 1a, the CO responses from CO-TPD are shown. The samples containing carbon as a support resulted in a large peak at 700 °C, while the silica-based samples did not. The Rh samples showed several peaks below 350 °C, with mean desorption temperatures of 196 °C and 207 °C for Rh/C and Rh/SiO2, respectively. The peak area associated with Rh is 3.1 times higher in the case of Rh/C in comparison with Rh/SiO2, while the specific surface area is 7 times higher for Rh/C. The results in this work agree with the literature, where CO was reported to desorb from Rh on graphene oxide at 80 °C [24] and at 247 °C from Rh(111) [25].

Desorption of CO was not observed in significant amounts in the case of the Ru samples at temperatures below 350 °C.
The large peaks should not be associated with CO desorption from Ru metal (previously observed at much lower temperatures of 117 °C and 167 °C), but rather with decarboxylation of the carbonaceous support [26,27]. However, we also observed desorption of CO2 in the CO-TPD experiment in the case of the Ru catalysts at temperatures below 400 °C (Figure 1b). A comparison between the CO2 response from CO-TPD and the CO2 response from CO2-TPD showed higher desorbed amounts of CO2 in the case of CO-TPD at several temperatures. This can be explained by the oxidation of CO with H2O impurities on the Ru surface, since supported Ru is a WGS (water gas shift) catalyst with a TOF (turnover frequency) roughly 10 times higher than that of the supported Rh catalyst [28]. The sulphided NiMo/γ-Al2O3 catalyst was extensively characterized and tested on other HDO systems in our previous works [19,21,29]. The data corresponding to the surface density of metallic sites (N_AS) showed no significant difference between the different metals on the supports. The obtained concentrations of active sites were in the relatively narrow range between 2.5 × 10^-5 and 1.1 × 10^-4 mol g^-1 for all commercial catalysts used in this work; therefore, their availability and relatively similar dispersion are not expected to be the most critical or deterministic criteria of catalyst performance. However, this information is crucial for the upcoming micro-kinetic modelling. The concentration of acidic sites, determined by NH3-TPD analysis, showed similar values as for the metal sites, with concentrations in the range between 4.1 × 10^-5 and 1.2 × 10^-3 mol g^-1. The highest concentration of acidic active sites was determined on the alumina support, as can be seen in Figure S13. Detailed characterization results are available in the Supplementary Materials (Tables S1-S2).

Reaction Pathway Development: Bulk Phase Reactions

The proposed reaction pathway network of mucic acid hydrotreatment is presented in Figure 2. The reaction pathway is subdivided into the bulk (non-catalytic) and heterogeneously catalyzed parts. The upper part of the reaction scheme presents the non-catalytic transformations in the bulk liquid phase, while the scheme of heterogeneous transformations is extensive due to the series of parallel and consecutive reactions and the several intermediates and products observed during the HDO. Intermediates and products are arranged by increasing degree of deoxygenation (from top to bottom) and by decreasing chain length (from left to right). Furanic compounds are marked in orange and completely deoxygenated products in blue. Further oligomerization (regarding the detected products, please see Supplementary Materials Tables S3-S5) is not presented in the scheme, to retain its clarity. Different colors denote the lumping groups in which the compounds are grouped for the determination of their selectivity.

In the absence of a catalyst, mucic-1,4-lactone (2) is formed, which is common in aqueous media [30]. An equilibrium between mucic acid (1) and its lactone (2) is established, since lactonization of the carboxyl group of mucic acid with the -OH group at the C4 or C5 position can form the 1,4- or 1,5-lactone, respectively. It is well known from the literature that the 1,4-lactone is more stable than the 1,5-lactone, which was reported to readily re-hydrolyze [30]. The 1,4-lactone was detected already at room temperature, which proves the immediate formation of the lactone from mucic acid in the presence of water. The established equilibrium between mucic acid and its 1,4-lactone is marked with a blue arrow in Figure 2.
The formation of the lactone was confirmed by LC-MS (liquid chromatography-mass spectrometry) analysis, where its specific mass fragments (parent ion with a molar mass of 196 g mol^-1) were detected (Figure S2). Under an inert atmosphere, only the above-mentioned lactone was detected besides mucic acid, while under high H2 pressure (>5 MPa) the formation of galactonic acid (4) was promoted, marked with an orange arrow. Minor formation of galacturonic acid (3) was noticed at the highest temperature without a catalyst, 175 °C. Galactonic and galacturonic acid were both confirmed by LC-MS (Figures S2 and S4).

Before the reaction temperature reached the set value, mucic acid was not completely dissolved in water. The solubility of mucic acid and the 1,4-lactone increases with temperature, resulting in a gradual concentration increase in the liquid phase. As presented by Brown et al. [31], glucaric acid can form 2 different monolactones, which can both further convert into the corresponding (1,4:6,3)-dilactone.

In pH-neutral media, the 1,4-lactone forms more slowly than the 1,5-lactone. Elevated temperature and neutral media were reported to favor the formation of the (1,4:6,3)-dilactone, which was not detected at lower temperatures. Even though a higher temperature was applied, the dilactone was not detected in our experiments. The homogeneous transformation was confirmed using LC-MS analysis, where specific fragmentation patterns confirmed lactone, galacturonic, and galactonic acid formation according to their specific molar masses. Detailed LC-MS results, including chromatograms and MS spectra, are presented in the Supplementary Materials (Figures S2-S4).

Reaction Pathway Development: Catalytic Reactions

The HDO of mucic acid was catalyzed by different metal catalysts on neutral or acidic supports. Partially or completely deoxygenated products were detected by GC-MS (gas chromatography-mass spectrometry). The proposed reaction scheme is presented in Figure 2. H2O was eliminated in the first step either directly from mucic acid (1) or from galacturonic (3) and galactonic acid (4), resulting in C6 diols with one or two terminal carboxylic groups (compounds marked as 5 and 6 in Figure 2). A pair of water molecules can be eliminated from mucic acid, specifically from the C2-C3 or C3-C4 position [17], which determines the further formation of intermediates. Compounds (5) and (6) are further deoxygenated and hydrogenated, resulting in adipic acid, detected by GC-MS and HPLC (high-performance liquid chromatography) analysis. Adipic acid was gradually deoxygenated to 1,6-hexanediol and further to the final alkane, hexane. If HDO proceeds on only one terminal carboxylic group, hexanoic acid is formed. However, gradual HDO of each terminal carboxylic group leads to the same final deoxygenated alkane, hexane. During the elimination of a water molecule from an oxygenated compound (-H2O), a double bond is usually formed, which is readily hydrogenated (+H2) to a single bond. For clarity of the reaction pathway, this step is sometimes omitted (e.g., from 1,6-hexanediol to hexane, from pentanol to pentane, etc.).
However, the formation of shorter-chain products is linked to the decarboxylation or decarbonylation of a carboxylic functional group. In the case where two molecules of water were eliminated from the C2-C3 position (of 6) and further decarboxylation (of 25) proceeded with HDO, levulinic acid (26) was formed. Double bonds were hydrogenated simultaneously. Further hydrogenation of the carbonyl group (C=O) resulted in hydroxypentanoic acid (21), detected by GC-MS. An additional deoxygenation led to pentanoic acid, subsequently deoxygenated to pentane through pentanal and 1-pentanol formation.

As mentioned before, water elimination from mucic acid (or galacturonic acid and galactonic acid) usually proceeds at the C3 and C4 positions due to the higher reactivity of the β-OH groups [17], resulting in the products (5), (7), and (15). Further dehydroxylation of (5) led to the same product as water elimination from galacturonic acid (7). Furanic compounds, such as furancarboxylic acid and tetrahydrofurfuryl alcohol, were formed by water elimination with subsequent cyclization. Upon hydrogenation of the furanic compounds, the double bonds were saturated first (16→17→18); subsequently, the furan ring opens, resulting in 2-hydroxypentanoic acid (27). The 2-hydroxypentanoic acid was deoxygenated by a known route, in which the -OH group at the C2 position is first removed, giving different forms of pentenoic acid with a shifting double bond (20). The double bond was then hydrogenated, resulting in pentanoic acid (21). All further HDO steps of pentanoic acid have already been mentioned above. When complete HDO was reached, only alkanes were formed (pentane, butane). The gradual deoxygenation follows a similar pathway as previously presented in the linear model compound HDO studies [19,20]. Partially deoxygenated C3 compounds were detected in small amounts. At high temperatures, C-C bond cleavage was observed, which resulted in partially deoxygenated compounds of C3-C5 chain length. Furan-2,5-dicarboxylic acid is a possible intermediate mentioned in the literature [32]; however, it was not detected in our experiments, and its formation is therefore marked with a dashed arrow in the reaction scheme. The first publication regarding the conversion of mucic acid into furandicarboxylic acid was published in 1876 by Fittig and Heinzelmann. Many of the detected compounds are of special interest to the chemical industry, especially in polymer production as bio-based monomers. The most important detected chemicals are adipic acid and the furan-based products. Adipic acid is one of the most important platform chemicals, mostly used for nylon 6,6 production. Furoic acid is known as a flavoring agent in the food industry, and it can also be used in optical technologies. Other carboxylic acids and polyols are important chemicals in the polymer industry [33].

The Influence of Catalyst Type and Reaction Conditions on the HDO Selectivity of Mucic Acid

Ten different commercially available materials consisting of noble or transition metals on C, Al2O3, or SiO2 supports (of various acidity) were tested in this catalyst screening study. Noble metals are known for the activation of oxygenated compounds on metal sites [8,34].
The complexity of the system demands a combination of different analytical methods to detect all the formed products presented in Figure 2 and collected in Table 2. The final products of hydrodeoxygenation are alkanes, which are insoluble in water and can form a separate (organic) liquid phase. Therefore, the final reaction mixture was always extracted with diethyl ether, derivatized (converted into trimethylsilyl esters), and analyzed by GC-MS. From the combined results of both analytical methods (Table 3), a table of the products detected by HPLC and GC-MS was compiled. The catalysts affected the formation rate of the products and their distribution.

Complete conversion of mucic acid was reached over NiMo/γ-Al2O3 at 200 °C or higher, and no lactones were detected in the liquid phase. Not all deoxygenated products with low polarity remain in the aqueous phase; they most probably form a separate (organic) liquid phase, and were therefore only identified after extraction of the final products into diethyl ether. Many short-chain products were detected after derivatization and GC-MS analysis (Figure S1), revealing that C-C bond cleavage proceeded alongside HDO. Mucic acid and products (2-4) could not be detected by GC-MS due to the geometrically hindered derivatization and the high boiling points of the corresponding products (above 350 °C).

Table 3 presents the distribution of intermediates and products in the aqueous phase and the diethyl ether extract, and Figure 3 shows the conversion of mucic acid over the NiMo/γ-Al2O3 catalyst at 200 and 225 °C. In the latter, the highest concentrations were detected for tetrahydrofurfuryl alcohol (36.0% at 200 °C) and adipic acid (32.3% at 225 °C). Notable amounts of 2-hydroxypentanoic acid (5.6-7.4%), levulinic acid (5.0-5.5%), furoic acid (6.3-25.4%), 2-pentenoic acid (1.8-3.6%), and 3-methyl-2-hydroxypentanoic acid (3.0-5.8%) were detected as well. Other products were present in lower concentrations (below 3%). Some of the compounds were detected both in the aqueous phase (by HPLC) and in the extract (by GC-MS). Based on the HPLC results, the selectivity towards adipic acid in the final reaction mixture was 4.25% at 200 °C and 4.33% at 225 °C. The selectivity to THF-alcohol (tetrahydro-2-furfuryl alcohol) was 10.1% at 200 °C, but due to its very low UV absorbance it could not be detected in the reaction mixture at 225 °C. The ratio of products with a C6 chain length to those with a chain length of C5 or shorter increases with reaction temperature. The reason for the observed behavior could be a temperature-dependent selectivity, which is, however, opposite to the results obtained in the literature, where higher temperatures led to furanic compound formation over a Rh catalyst in MeOH [7]. Even though the conversion of mucic acid was complete and no lactone was detected, the main product was galactonic acid. Besides galactonic acid, many partially deoxygenated products were formed. From the aspect of selectivity towards C6 dicarboxylic acids and diols, C-C bond cleavage (i.e., decarboxylation) is not desired; therefore, the catalysts with higher activity were chosen and temperatures below 200 °C were applied to steer the selectivity towards the desired products. Dicarboxylic acids and polyols are desirable products, especially for the polymer industry. Successful catalyst reusability tests for NiMo/γ-Al2O3 are described in the supplementary data (Figures S14 and S15).
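For clarity, the mole-based selectivities quoted above can be computed as in the following minimal Python sketch; the concentration values are hypothetical placeholders rather than measurements from this study, and the molar-mass bookkeeping is a generic assumption, not the authors' exact procedure.

```python
# Minimal sketch of a mole-based conversion/selectivity calculation
# from HPLC concentrations. All concentration values are hypothetical.

M_MUCIC = 210.14   # g/mol, mucic (galactaric) acid
M_ADIPIC = 146.14  # g/mol, adipic acid

def conversion(c0_reactant, c_reactant):
    """Fractional conversion of the reactant."""
    return (c0_reactant - c_reactant) / c0_reactant

def selectivity(c_product, m_product, c0_reactant, c_reactant, m_reactant):
    """Moles of product formed per mole of reactant converted."""
    n_product = c_product / m_product
    n_converted = (c0_reactant - c_reactant) / m_reactant
    return n_product / n_converted

# Hypothetical HPLC concentrations in g/L:
c0_mucic, c_mucic, c_adipic = 1.70, 0.00, 0.05

x = conversion(c0_mucic, c_mucic)
s = selectivity(c_adipic, M_ADIPIC, c0_mucic, c_mucic, M_MUCIC)
print(f"conversion = {x:.1%}, selectivity to adipic acid = {s:.2%}")
```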
Besides NiMo on alumina, a catalyst often used for the HDO of biomass compounds is Pt/Al2O3 [12,35,36]. Since sulphided NiMo on alumina showed sufficient stability under aqueous conditions for short reactions, we decided to test the Pt/Al2O3 catalyst for mucic acid hydrotreatment. Pt on alumina showed lower mucic acid conversion at 175 °C compared to NiMo on alumina, but similar or higher conversion at lower temperatures (125-150 °C). A 70% mucic acid conversion was achieved over Pt/Al2O3, while the cumulative conversion of mucic acid and its lactones (the latter formed by non-catalytic dehydration) was 60%. The results showed that Pt has lower HDO activity than NiMo at temperatures >175 °C and higher activity at temperatures <150 °C, while the cumulative conversions of mucic acid and lactone are alike. There is no significant change in conversion between the lowest and the highest tested temperature over Pt/Al2O3, whereas NiMo/γ-Al2O3 showed a strong temperature dependence. Regarding the products detected by HPLC, 2-hydroxyhexanoic acid was detected besides the lactone at all temperatures, at approximately the same yield of 4.9 mol%. An acidic support (alumina) promotes the formation of two different lactones (the 1,4- and 3,6-lactone), which is in accordance with the literature [30,31,37].

However, the ratio between the lactone and mucic acid is slightly affected by temperature; lactone formation is faster than mucic acid hydrodeoxygenation at higher temperatures. The same trend was noticed under non-catalytic conditions in an inert atmosphere. As mentioned before, under aqueous conditions the γ-Al2O3 support could be transformed into an alumina oxy-hydroxide, but a favorable combination of NiMo or Pt metals on the Al2O3 support could have beneficial effects on the supported catalyst's activity and stability in an aqueous environment [38]. Nevertheless, the deactivation of the alumina-supported catalyst was tested, with the recycled NiMo/γ-Al2O3 catalyst used in a second run. The results (Figures S14 and S15) showed negligible differences in the conversion of mucic acid and the product distribution. Considering the BET results, the alumina-supported catalysts showed the lowest surface area per gram of catalyst. The lowest surface area and pore volume were determined for the Pt catalyst, while the pore width was larger for the alumina-supported catalysts compared to the carbon-supported ones. Our results are in line with the literature, where many authors have shown that the pore size diameter affects mass transfer in the catalyst pores, and therefore the catalyst activity and product selectivity [39,40].

The influence of catalyst type on the time-resolved conversion of mucic acid is presented in Figure 4, while Figure 5 shows the cumulative conversion of mucic acid and its lactone. Conversions for all other experiments are collected in the Supplementary Materials (Figures S5 and S6). The cumulative conversion (X_ML) of mucic acid and its lactone was calculated according to Equation (1):

X_ML = (C_0M - C_M - C_L) / C_0M (1)

where C_0M represents the initial concentration of mucic acid, while C_M and C_L represent the concentrations of mucic acid and of its lactone, respectively, at every given time. Ru, Rh, Ni, and Pt on the SiO2 support were tested for the HDO of mucic acid, and the results showed the following order of metal activity: Rh > Ru > Ni > Pt. Rh and Ru are the most active metals, since the conversions of both mucic acid and its lactone were higher than 80%. A slightly lower conversion was reached with Ru at 125 °C.
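A minimal numerical sketch of Equation (1) follows; the concentration series are hypothetical placeholders used only to illustrate the bookkeeping.

```python
# Minimal sketch of the cumulative conversion of Equation (1):
# X_ML = (C_0M - C_M - C_L) / C_0M
# The concentration series below are hypothetical placeholders.

def cumulative_conversion(c0_mucic, c_mucic, c_lactone):
    """Cumulative conversion of mucic acid and its lactone."""
    return (c0_mucic - c_mucic - c_lactone) / c0_mucic

c0 = 8.1e-3  # mol/L, initial mucic acid concentration (placeholder)

# (time in min, C_M in mol/L, C_L in mol/L) sampled during a run:
samples = [
    (0,   8.1e-3, 0.0),
    (30,  5.2e-3, 1.9e-3),
    (60,  2.8e-3, 1.4e-3),
    (180, 0.4e-3, 0.3e-3),
]

for t, c_m, c_l in samples:
    x_ml = cumulative_conversion(c0, c_m, c_l)
    print(f"t = {t:4d} min  X_ML = {x_ml:.2f}")
```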
The results are quite expected and in accordance with the literature, where the high activity of Ru and Rh for C-O and C-C bond cleavage has been reported [12,41]. Levulinic acid was detected at 150 °C over Ru/SiO2 in minor concentrations (a yield of 1 mol%). The yield of 2-hydroxyhexanoic acid decreased with increasing temperature (from 8.0 mol% to 4.5 mol%). The same trend of decreasing yield of 2-hydroxyhexanoic acid was noticed when Rh/SiO2 was used. At the highest temperature (175 °C) the conversion over both Ru and Rh was around 100%, but almost no products were detected in the final reaction mixture by HPLC. A possible explanation is the formation of a second (nonpolar) phase, in which all the products with low polarity dissolved. The water was removed from the final product by vacuum distillation, and the solid residue was derivatized by esterification and analyzed by GC-MS. The components reported in Tables S3-S5 and their main groups are presented in Figure 6. Among all the tested catalysts, Rh showed the highest selectivity to furanic and cyclic compounds at the lowest tested temperature. It is worth mentioning that the highest concentration (38.6%, in the esterified mixture of products, from which water was removed by rotary evaporation) of tetrahydro-2H-pyran-4-ol was formed at a reaction temperature of 125 °C over the Rh/C catalyst. Its amount decreased from 38.6% to 3.9% as the reaction temperature increased from 125 °C to 175 °C. However, the conversion of mucic acid over Rh/C was very low. In contrast, the amount of tetrahydrofurfuryl alcohol increased with increasing temperature, from 21.7% at 125 °C to 31.0% at 175 °C. A similar trend was noticed for tetrahydro-2H-pyran-4-ol, but at considerably lower concentrations than with the Rh/SiO2 catalyst. However, the concentration of furfuryl alcohol showed no significant difference between the lowest and the highest temperature when silica-supported Rh was used.
Ni and Pt on the SiO2 support resulted in significantly lower conversions, specifically between 20% and 60% for both metals. The cumulative conversion of mucic acid and lactone was even lower, with Pt/SiO2 showing only a minor conversion. In general, over Pt/SiO2, mucic acid only reached equilibrium with its own lactone, with a ratio similar to that under non-catalytic conditions. 2-Hydroxyhexanoic acid was determined by HPLC to be the product with the highest yield (7.7 mol% with Ni/SiO2). The established equilibrium between mucic acid and the lactone can shift during the reaction, resulting in an apparent drop in the conversion, especially for the less active catalysts (e.g., Pt/SiO2). Some unknown compounds were detected in the product mixtures analyzed by HPLC; these were expected to be the compounds found in the extraction mixtures of NiMo/γ-Al2O3, and this was confirmed by further GC-MS analysis of the solid products after water removal by vacuum distillation and subsequent derivatization (Figure 6 and Table S5). Carbon- and silica-supported Pt showed very similar results. At the lowest temperatures, mainly shorter, partially deoxygenated compounds and furanic compounds were formed, while at the highest temperature the concentration of oligomers increased. The highest formation of oligomers was noticed over alumina-supported Pt; however, the conversion was in this case lower compared to the more active metals (Rh, Ru). Comparing the two alumina-supported catalysts, NiMo showed higher selectivity to partially deoxygenated linear products with chain lengths < C6 and much lower selectivity to oligomers than the Pt catalyst.

The Ru, Rh, Ni, and Pt metal screening over the neutral C support again showed high activity for the Ru catalyst, as the conversion exceeded 85% at all tested temperatures, while the other metals never reached 80% conversion. Rh was nearly inactive on C, although it was the most active on the SiO2 support. In contrast, Ni showed better activity on C than on the SiO2 support. A significantly higher conversion (near 100%) was reached at a temperature of 175 °C. However, the cumulative conversions of mucic acid and lactone reached only 60% for the Rh catalyst and 85% for the Ni catalyst on the C support. The Pt catalyst showed better activity on the C support than on the SiO2 support, especially in terms of the cumulative conversion of mucic acid and its lactone. Even though the mucic acid conversion over Rh/C was quite high, the cumulative conversion of mucic acid and lactone was very low, especially at low temperatures, which is quite unexpected. The lack of acidic active sites on Rh/C can lead to a low yield of HDO products. Interactions between noble metals and an acidic support could give the catalyst better HDO activity [42]. However, Ru/C has 1.7 times wider pores than Rh/C. Wider pores are more accessible for molecules to adsorb in. Considering the BET results, the wider pores of Ru/C could lead to a higher conversion of mucic acid and catalytic transformation of the oxygenated molecules due to better mass transfer [39]. Similar to the results obtained with SiO2, 2-hydroxyhexanoic acid was detected when noble metals on the C support were used. Moreover, tetrahydro-2-furoic acid was also detected. Selectivity to shorter-chain products is enhanced by using catalysts with narrow pore widths [39]. The BET results (Table 1) showed that all carbon-supported catalysts have pores with diameters <4 nm. Among all the tested metals, Ni showed the highest selectivity to adipic acid. Nevertheless, the
yield of adipic acid did not exceed 5 mol% under any tested reaction conditions.

In general, from the obtained results, the most active catalysts for the HDO of mucic acid in the water phase at low temperatures were Ru/C, Ru/SiO2, and Rh/SiO2. As a support, SiO2 showed the best performance at the tested reaction conditions (T and P), and although it is known to be stable in the acidic aqueous phase, some studies have reported a decrease in activity after testing in boiling water for 24 h [43]. Carbon-supported Ni enhanced the oligomerization of the lower-chain products, and the results at a temperature of 135 °C were very similar to those obtained at 175 °C with the carbon-supported Pt catalyst. The formation of oligomeric compounds, especially completely deoxygenated ones, indicates a high deoxygenation activity of carbon-supported Ni and Pt. The results obtained for silica-supported Ru showed no significant difference in selectivity between the lowest and the highest temperature. Moreover, almost the same selectivity to each group of products was obtained for carbon- and silica-supported Ru at 135 °C (7.8%, 73.0%, and 19.1% over carbon-supported Ru and 10.6%, 71.8%, and 17.7% over silica-supported Ru for lower-chain products, furans, and oligomers, respectively).

Catalysts doped with a transition metal presented better hydrothermal stability than the undoped support [44]. The presence of acidic active sites (Al2O3) or highly active noble metals was crucial for HDO to proceed. Moreover, NiMo/γ-Al2O3 showed good activity and stability at higher temperatures, even in the aqueous phase above 175 °C for a short treatment (less than 5 h) [44,45]. The reusability test was performed over the alumina-supported NiMo catalyst under the presented reaction conditions at 175 °C. The fresh catalyst was used for the first experiment and recycled for the second experiment. The results were very similar, with conversions of 89% and 90% for the fresh and recycled catalyst, respectively. Detailed results are collected in the Supplementary Materials (Figures S14 and S15). The lowest activity for the selected catalytic system was reached with all three tested Pt catalysts (on alumina, silica, and carbon supports), showing low HDO but high hydrogenation activity, in accordance with the literature [43]. Experiments performed over the supports alone showed results similar to those obtained in the non-catalytic system (Figures S8-S10); only the formation of different lactones was detected.

Catalyst Characterization

Each of the commercially available catalysts (Table 1) was milled into particles with diameters under 100 µm before catalytic activity testing. The catalyst composition is given according to the data provided by the suppliers (Sigma Aldrich, St. Louis, MO, USA, and Riogen, NJ, USA).

The Brunauer-Emmett-Teller (BET) method was used for the evaluation of the specific surface area (A_BET), using an ASAP 2020 instrument for N2 physisorption (Micromeritics, Norcross, GA, USA). Metallic surface site densities were measured using a Micromeritics AutoChem II Chemisorption Analyzer (Micromeritics, Norcross, GA, USA) by the CO-TPD experiment. Analyses were carried out in the temperature range of 10-900 °C.
The desorbing molecules were monitored online by a quadrupole mass spectrometer, ThermoStar GSD 301 T (Pfeiffer Vacuum GmbH, Aßlar, Germany). Prior to analysis, each catalyst (about 100 mg) was reduced by heating it up to 300 °C in a flow of 5% H2/Ar with a heating rate of 10 °C min^-1 and keeping this temperature constant for 10 min. Subsequently, the sample was cooled down in a flow of inert gas (helium; 50 mL min^-1) to 10 °C and saturated in a flow of a gas mixture containing 5 vol% CO in He. The sample was then purged in He for 2 h to remove weakly adsorbed species. Desorption was carried out in a flow of He (20 mL min^-1) with a linear heating rate of 8.9 °C min^-1 from 10 °C to 900 °C, and the temperature was kept at 900 °C for 30 min. The total time of the TPD analysis was 150 min. During CO-TPD, concentration profiles were collected for CO and CO2 by following the m/z = 12 and 44 mass-to-charge signal ratios, respectively. The mass spectrometer was calibrated daily with calibration mixtures.

Hydrotreatment Experiments

Catalytic hydrotreatment experiments were performed in a six-parallel-batch high-pressure autoclave system (Amar Equipment Pvt. Ltd., Mumbai, India), each autoclave consisting of a vessel with a 250 mL volume, an inner diameter of 67 mm, and a height of 80 mm. Each autoclave was equipped with a magnetically driven Rushton turbine impeller, 30 mm in diameter, placed 14 mm above the autoclave bottom. Before the reaction, the autoclave was filled with 120 g of solvent (distilled water), 0.17 wt% of the reactant (mucic acid, 97 wt%, Sigma Aldrich), and 0. Reactants, solvents, and catalysts were weighed into the autoclave vessel, which was then placed in the housing and sealed. The heating belt was attached to the autoclave. Before pressurizing the reactor with hydrogen (5.0, Messer, Bad Soden am Taunus, Germany), each autoclave was flushed twice with inert gas (N2; 5.0, Messer, Bad Soden am Taunus, Germany). All experiments were performed in a batch regime with an agitation speed of 600 min^-1, which ensures gas-phase aspiration and complete dispersion of the catalyst particles. The temperature profile started at room temperature and then increased to the desired temperature with a heating ramp of 5 K min^-1, with the reaction kept at the plateau temperature for three hours. Liquid samples were collected during the reaction time. After three hours at the plateau temperature, the autoclave was rapidly cooled down, the gas phase was released, and the headspace was purged with inert gas (N2) before opening the autoclave. All data about the reaction conditions (time, set temperature, temperature in the autoclave, heater temperature, and pressure) were recorded automatically by a SCADA system. The experimental conditions are summarized in Table 4. Mucic acid was used in all experiments as the reactant, dissolved in water as the solvent, with hydrogen as a co-reactant in the gaseous phase, at an initial pressure of 5 MPa set at room temperature.
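Since the 5 MPa hydrogen charge is set at room temperature, the headspace pressure at the reaction temperature is higher; the following is a rough ideal-gas sketch of that effect (illustrative only: it neglects water vapor pressure, H2 solubility, and H2 consumption, none of which are quantified in the text).

```python
# Rough ideal-gas estimate of the hydrogen headspace pressure at the
# reaction temperature, given a 5 MPa charge at room temperature.
# Illustrative only: neglects water vapor pressure, H2 solubility in
# the liquid, and H2 consumption by the reaction.

def isochoric_pressure(p1_mpa, t1_c, t2_c):
    """Gay-Lussac's law at constant volume: P2 = P1 * T2/T1 (in kelvin)."""
    return p1_mpa * (t2_c + 273.15) / (t1_c + 273.15)

p_charge, t_room = 5.0, 25.0  # MPa, degrees Celsius

for t_reaction in (125, 150, 175, 200, 225):
    p = isochoric_pressure(p_charge, t_room, t_reaction)
    print(f"{t_reaction} C -> ~{p:.1f} MPa H2 (ideal-gas estimate)")
```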
Analytical Methods

Liquid samples were collected from the reactor vessel during the reaction. The initial sample was taken at room temperature right before the heater was turned on, the second at half of the heating-ramp temperature, the third at the final temperature, then at 15 min intervals within the first hour, and thereafter at 30 min intervals. Eleven liquid samples of 1 mL were taken during the experimental time. The samples were then prepared for analysis by different methods, namely high-performance liquid chromatography (HPLC) or gas chromatography coupled with mass spectrometry (GC-MS). The analytical methods and sample preparations are reported in Figure 7 and Figure S7.

The collected liquid samples were directly analyzed by liquid chromatography (HPLC, Agilent 1100 Series, Agilent, Santa Clara, CA, USA), using an Acclaim™ Organic Acid LC Column (OA, 5 µm, 12 nm, 4.0 × 250 mm, Thermo Fisher, Waltham, MA, USA). The injection volume was 20 µL. A gradient method was applied for the mobile phase, starting with the water phase (2.5 mM H2SO4 (98.0%, Sigma Aldrich, St. Louis, MO, USA) in H2O, pH of 2.4) for 4 min. The organic phase (acetonitrile, anhydrous, 99.8%, Sigma Aldrich, St. Louis, MO, USA) was then gradually added up to a final 9:1 volume ratio of organic to water phase, reached in 10 min and then kept constant for 11 min; the mobile phase was then changed back to the pure water phase over 2 min and kept constant for a further 3 min. A constant mobile phase flow of 1 mL min^-1 was regulated by two binary pumps. The column temperature was kept at 30 °C. Reactants, intermediates, and products were detected by a UV detector in the range from 200 to 400 nm. Quantification data were collected at a reference wavelength of 210 nm using a narrow slit. For unknown compounds, LC-MS analysis (molecular mass spectrometer from Applied Biosystems, model 4000 Q TRAP MS/MS, coupled with liquid chromatography; HPLC-DAD, Agilent 1100 Series, Agilent, Santa Clara, CA, USA) was performed using the same column (Acclaim™ Organic Acid LC Column, Thermo Fisher, Waltham, MA, USA) with the same method; however, the 2.5 mM H2SO4 mobile phase was substituted with 0.1 v/v% trifluoroacetic acid (TFA, ≥99.0%, Sigma Aldrich, St. Louis, MO, USA) because of MS analysis restrictions. The final product from the reactor vessel was furthermore treated by two different protocols for GC-MS analysis, in order to remove water and convert the components into more volatile derivatives. Besides the increase in volatility, the peak shape and separation are also improved by derivatization, especially for (di)carboxylic acids. A silylating reagent (BSTFA + TMCS, 99:1, Sigma Aldrich, St. Louis, MO, USA) and an esterification reagent (MeOH + HCl, Sigma Aldrich, St.
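The gradient program described above can be written down compactly as a breakpoint table; the sketch below encodes those breakpoints and interpolates the organic fraction at any time (the breakpoint times are taken from the text, while the linear interpolation between them is an assumption).

```python
# The HPLC gradient described above, encoded as (time_min, organic_%)
# breakpoints. Times come from the text; linear interpolation between
# breakpoints is an assumption.

GRADIENT = [
    (0.0,  0.0),   # pure water phase (2.5 mM H2SO4)
    (4.0,  0.0),   # hold water phase for 4 min
    (14.0, 90.0),  # ramp to 9:1 organic:water over 10 min
    (25.0, 90.0),  # hold for 11 min
    (27.0, 0.0),   # return to pure water phase over 2 min
    (30.0, 0.0),   # final hold for 3 min
]

def organic_fraction(t_min):
    """Percent acetonitrile at time t_min, linearly interpolated."""
    for (t0, f0), (t1, f1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return f0 + (f1 - f0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 30 min method")

for t in (2, 9, 20, 26, 29):
    print(f"t = {t:4.1f} min -> {organic_fraction(t):5.1f}% acetonitrile")
```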
Louis, MO, USA) were used for derivatization. Two different approaches were applied for GC-MS sample preparation. In the first, extraction from the aqueous solution into diethyl ether was used (in a 1:1 v/v ratio). The diethyl ether phase was separated and the derivatization agent was added in excess with respect to the stoichiometry of the compounds in the reaction mixture. The second approach included water removal by rotary evaporator, addition of the derivatization agent, and addition of diethyl ether to dissolve the derivatized compounds. The extracted and derivatized compounds were analyzed by GC-MS, equipped with a nonpolar column (Zebron™ ZB-5MS, length 60 m, diameter 0.25 mm, film thickness 0.25 µm) and an FID (flame ionization detector, Shimadzu, Kyoto, Japan). A temperature-programmed method was used for the analyses of the derivatized samples: the column oven was initially kept at 313 K for 7.2 min, then increased to 383 K at a heating rate of 25 K min^-1, heated to 498 K at a rate of 18 K min^-1, and finally to 553 K at a rate of 40 K min^-1, where the temperature was kept constant for an additional 13 min. The injector and detector were maintained at 568 K; the injection volume was 1 µL and the split ratio was set to 15. Mass spectrometry (MS) was used for the identification of intermediates and products: each compound was sent through the ion source and the fragments were separated and scanned in the range from 35 to 500 m/z and compared to the FFNSC (Flavour and Fragrance Natural and Synthetic Compounds) and NIST14 (National Institute of Standards and Technology) libraries.

Conclusions

Hydrodeoxygenation of mucic acid, one of the most promising bio-based platform chemicals for value-added products, showed good potential. Using the cheaper NiMo catalyst at slightly higher temperatures delivered better results than the more active noble metal catalysts at lower temperatures. Even though the NiMo/γ-Al2O3 catalyst is a well-known industrial catalyst for HDO reactions, many authors have pointed out that its support (alumina) is not stable in a water environment. For short (t_total = 3.5 h) experiments it worked very well without any deactivation. Under aqueous conditions, mucic acid readily forms an equilibrium with mucic-1,4-lactone, which is subsequently hydrogenated into galacturonic and galactonic acid in an H2 atmosphere. The main products detected in the catalytic HDO process were tetrahydro-2-furfuryl alcohol (max. yield of 10.1%), adipic acid (max. yield of 4.3%), 2-hydroxypentanoic acid, levulinic acid, 2-furoic acid, 2-pentenoic acid, and 2-hydroxyhexanoic acid. Many of them represent important platform chemicals, especially for the polymer industry. Noble metals (especially Rh and Ru) proved to be very active for the conversion of mucic acid into hydrocarbons. It turned out that the pore diameter played an important role in the conversion due to mass transfer limitations. Nevertheless, the highest yield of adipic acid did not exceed 5 mol%. However, we are the first to report the formation of adipic acid in an aqueous system with heterogeneous catalysts.
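As a quick sanity check on the GC oven program above, the segment durations and total runtime can be computed directly; the sketch below uses only the ramp rates and setpoints quoted in the text.

```python
# Segment timing for the GC oven program described above:
# 313 K held 7.2 min, ramps to 383, 498, and 553 K at 25, 18, and
# 40 K/min, with a final 13 min hold at 553 K.

SEGMENTS = [
    ("hold 313 K", None, 7.2),   # (label, ramp K/min, hold min)
    ("313 -> 383 K", 25.0, 0.0),
    ("383 -> 498 K", 18.0, 0.0),
    ("498 -> 553 K", 40.0, 13.0),
]
TEMPS = [313.0, 383.0, 498.0, 553.0]

total = 0.0
prev_t = TEMPS[0]
for (label, rate, hold), target in zip(SEGMENTS, TEMPS):
    ramp_time = 0.0 if rate is None else (target - prev_t) / rate
    total += ramp_time + hold
    print(f"{label:14s} ramp {ramp_time:5.2f} min, hold {hold:4.1f} min")
    prev_t = target

print(f"total oven program: {total:.1f} min")  # ~30.8 min
```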
The main issue with using oxidized cellulose-based feedstocks, such as mucic and glucaric acid, is their solubility, which is very low (or negligible) in water, alcohols, and organic solvents. The aqueous environment is eco-friendly, but on the other hand it could degrade the Al2O3 structure of the catalyst support, resulting in lower overall activity. From an environmental point of view, the developed process is eco-friendly, since it uses water as a solvent. It offers a good starting point for producing important bio-based chemicals from non-edible lignocellulosic biomass. Lignocellulosic biomass or bio-waste is a sustainable feedstock that can replace oil and could be included in the circular economy system with a minor environmental impact.

Figure 1. CO (a) and CO2 (b) desorption rates of the supported Ru- and Rh-based catalysts during CO-TPD.
Figure 2. Proposed reaction pathway development based on HPLC, LC-MS, and GC-MS analyses.
Figure 7. A parallel reaction system of 6 batch reactors and analytic methods with additional sample (pre)treatments.
Figure S2. Chromatogram of mucic acid standard and MS spectrum of the detected peaks.
Figure S3. Adipic acid chromatogram and MS spectrum.
Figure S4. Chromatogram and MS spectrum of liquid sample.
Figure S5. Conversions of mucic acid over noble and transition metals in the temperature range 125-175 °C.
Figure S6. Summarized conversions of mucic acid and its lactone over noble and transition metals in the temperature range 125-175 °C.
Figure S7. (a) HPLC mobile phase flow as a combination of the water phase (2.5 mM H2SO4) and the organic phase (100% acetonitrile) and (b) GC temperature program with different heating ramps.
Figure S8. HPLC chromatogram of the final product over the neutral C support.
Figure S9. HPLC chromatogram of the final product over the acidic alumina support.
Figure S10. Concentrations of mucic acid, lactones, galacturonic and galactonic acid over different supports.
Figure S12. TEM image of Rh/C.
Figure S13. NH3-TPD of catalysts on different supports.
Figure S14. Chromatograms of
Table 1. Results from BET analyses.
Table 2. List of compounds with molecular formula, molar mass, and retention times according to the HPLC and GC-MS analyses.
Table 3. Product distribution in the aqueous and extracted phases over the NiMo/γ-Al2O3 catalyst.
Table 4. List of experiments using mucic acid as a reactant, dissolved in distilled water. Experiments were performed under high pressure of inert gas (N2) or hydrogen (H2).
Cyclotide Evolution: Insights from the Analyses of Their Precursor Sequences, Structures and Distribution in Violets (Viola)

Cyclotides are a family of plant proteins that are characterized by a cyclic backbone and a knotted disulfide topology. Their cyclic cystine knot (CCK) motif makes them exceptionally resistant to thermal, chemical, and enzymatic degradation. By disrupting cell membranes, cyclotides function as host defense peptides, exhibiting insecticidal, anthelmintic, antifouling, and molluscicidal activities. In this work, we provide the first insight into the evolution of this family of plant proteins by studying the Violaceae, in particular species of the genus Viola. We discovered 157 novel precursor sequences by transcriptomic analysis of six Viola species: V. albida var. takahashii, V. mandshurica, V. orientalis, V. verecunda, V. acuminata, and V. canadensis. By combining these precursor sequences with the phylogenetic classification of Viola, we infer the distribution of cyclotides across 63% of the species in the genus (i.e., ~380 species). Using full precursor sequences from transcriptomes, we show an evolutionary link to the structural diversity of the cyclotides, and further classify the cyclotides by sequence signatures from the non-cyclotide domain. Also, the transcriptomes were compared to cyclotide expression at the peptide level, determined using liquid chromatography-mass spectrometry. Furthermore, the novel cyclotides discovered were associated with the emergence of new biological functions.

INTRODUCTION

Cyclotides are proteins of ∼30 amino acid residues that are characterized by the cyclic cystine knot (CCK) motif (Craik et al., 1999; Burman et al., 2014). The CCK motif consists of six conserved cysteines that form three disulfide bonds and a head-to-tail cyclic backbone (Figure 1A). The cyclotides have been classified into two main subfamilies, the Möbius and the bracelets, based on a single structural trait: the presence or absence of a conceptual 180° twist in the cyclic backbone caused by a conserved cis-Pro residue in loop 5 (Craik et al., 1999; Figure 1B). Aside from the CCK, two loops (defined as sequences between adjacent cysteines) have high sequence similarity between subfamilies (loops 1 and 4), whereas loops 2 and 3 are conserved only within individual subfamilies. The discovery of other cyclotides has created a need for a more versatile classification system (Ireland et al., 2006; Nguyen et al., 2013; Ravipati et al., 2015).

Figure 1 (caption fragment): ...(CTR). Precursor architecture varies with both plant species and structural subfamily. Architecture types are labeled with Greek letters in (D), and their distribution is shown in (C). Oak1 (α) and Vok1 (β), precursors of kB1, are found in the Rubiaceae and Violaceae, respectively. Vok1 contains repeated domains. VocA (δ) and panitide pL1 (ε) are precursors of linear cyclotides and are found in the Violaceae and Poaceae, respectively. Precursors of these linear cyclotides do not contain a CTPP domain. TI subfamily precursors (ζ) contain both linear and cyclic cyclotides in the same sequence. Several architectures are often found in a single plant species; for example, Viola species contain VoK1 (α), VoC1 (β), and VocA (δ). In the precursor of Cter M, the cyclotide domain replaces the albumin-1b chain.
These varieties include so-called hybrid cyclotides that exhibit sequence characteristics of both the Möbius and bracelet subfamilies, as well as a minor subfamily, known as the trypsin inhibitors, originating from gourd plants (Hernandez et al., 2000). They contain the CCK motif but do not otherwise exhibit any sequence similarity with the other subfamilies. In addition, linear cyclotide derivatives that exhibit sequence similarity with conventional cyclotides but lack their cyclic backbone have been reported (Ireland et al., 2006; Nguyen et al., 2013). The high sequence diversity of the cyclotides appears to be due to natural selection in angiosperms, the flowering plants, but little is known about the evolutionary mechanisms underpinning the corresponding selection processes or the evolutionary background of cyclotide diversity.

Cyclotides and the CCK motif have only been found in angiosperms, but proteins having one of their two defining structural motifs, i.e., cyclic peptides/proteins without the cystine knot (Trabi and Craik, 2002; Arnison et al., 2013) or linear proteins with a cystine knot (Zhu et al., 2003), are found in a wide range of organisms across all kingdoms of life. In angiosperms, the occurrences of cyclotides differ between "basal" angiosperms, monocots, and eudicots (Figure 1C): "linear cyclotides", i.e., peptides that exhibit sequence similarity with cyclotides but lack their head-to-tail cyclic structure, are prevalent in both monocots and eudicots, but true cyclotides have been found only in eudicots (Mulvenna et al., 2006; Zhang et al., 2015a). However, neither linear nor true cyclotides have yet been found in the "basal" angiosperms. It has therefore been proposed that linear cyclotides are ancestral (more primitive) to the true cyclic cyclotides (Mulvenna et al., 2006; Gruber et al., 2008).

Many of the cyclotides' host defense activities appear to be due to their ability to interact with and disrupt biological membranes (Colgrave et al., 2008b; Simonsen et al., 2008; Burman et al., 2011; Henriques et al., 2011). The membrane disruption is mediated by physicochemical interactions between cyclotides and the lipid membrane, and is governed by the distribution of lipophilic and electrostatic properties over the molecular surfaces of the cyclotides. We recently developed a quantitative structure-activity relationship (QSAR) model for these interactions (Park et al., 2014). However, the relationships between cyclotide sequence diversity, evolutionary selection, and the functions of the cyclotides in planta remain unknown.

Cyclotides are expressed as precursor proteins, which undergo post-translational processing including enzymatic cleavage and subsequent cyclization (Jennings et al., 2001; Harris et al., 2015). The multi-domain architecture of these precursor proteins varies slightly between different types of cyclotides and plant families, but in sequential order from the N- to the C-terminus, they generally feature the following domains: an endoplasmic reticulum (ER) targeting signal, an N-terminal propeptide (NTPP), an N-terminal repeat (NTR), the cyclotide domain (CD), and finally a C-terminal tail (CTR) (Figure 1D). In some cases, the modular domains NTR, CD, and CTR are repeated more than once. The cyclotides have been suggested to co-evolve with asparaginyl endopeptidase (AEP) because of its suggested role in cyclization.
Moreover, the divergent evolution of cyclotides from ancestral albumin domains was suggested based on the architecture of cyclotide precursors found in the Fabaceae plant family (Nguyen et al., 2011; Poth et al., 2011b). However, the relationship between the precursor proteins' architecture and sequences and the evolutionary selection of cyclotides is still unknown.

To date, cyclotide and precursor sequences have been most extensively explored in the family Violaceae Batsch. (Malpighiales), and especially in the genus Viola L. (Burman et al., 2015). The Violaceae are a medium-sized family including ∼1,100 species worldwide. The phylogeny of the Violaceae has recently been inferred from chloroplast and nuclear markers (Tokuoka, 2008; Wahlert et al., 2014), and its systematics has been revised accordingly; currently ∼30 genera are accepted in nomenclature (Wahlert et al., 2014). Viola is the largest genus in the family, with 580-620 species, representing over 50% of all known species (Ballard et al., 1998; Yockteng et al., 2003; Marcussen et al., 2012; Wahlert et al., 2014). Viola is distributed all around the world in temperate regions and at high-elevation habitats in the tropics. The genus is old (∼30 million years) and comprises at least 16 extant lineages, referred to as sections, with a complex, reticulate phylogenetic history owing to allopolyploidy (Marcussen et al., 2012, 2015). The species included in this study belong to four north-temperate sections: the diploid sect. Chamaemelanium Ging. (V. canadensis L., V. orientalis W.Becker) and the three allotetraploid sections Melanium Ging. (V. tricolor L.), Plagiostigma Godr. (V. albida Palibin. var. takahashii (Nakai) Kitag., V. mandshurica W.Becker, V. verecunda A.Gray), and Viola (V. acuminata Ledeb.).

In the current study, we explore cyclotide evolution using an integrated approach, exploiting transcriptomics and peptidomics to analyze the sequences of cyclotide precursors and the expression of cyclotides in Viola in light of the phylogeny of the genus. In particular, full precursor sequences are used to obtain insights into the evolutionary history of the cyclotides, and we connect the evolution of new mature cyclotides with the emergence of new functions.

Collection of Violets
Violets were collected at the sites indicated in Table 1; these sites are the species' natural habitats. When collected, the plant individuals were in the adult stage; however, their exact ages were not determined. The plant vouchers were deposited at the Kangwon National University herbarium: Viola albida var. takahashii (KWNU93021), V. mandshurica (KWNU93022), V. orientalis (KWNU93023), V. verecunda (KWNU93024), and V. acuminata (KWNU93025). All plant material was collected on September 23, 2014.

Sample Collection, RNA Isolation, and RNA Sequencing
For the five Viola species (V. albida var. takahashii, V. mandshurica, V. orientalis, V. verecunda, and V. acuminata), total RNA was sequenced by next-generation sequencing (NGS), outsourced to Macrogen Inc. (Seoul, South Korea). For each Viola species, the RNA sample was prepared from one plant individual, and the plant tissues were pooled from all of the plant's major organs, i.e., the roots, stems, flowers, and leaves. The collected tissues were immediately frozen in liquid nitrogen and directly extracted with the RNeasy Plant Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol.
Quality and quantity of RNA were measured using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) with an RNA Integrity Number (RIN) and rRNA ratio. The measured RIN values (rRNA ratios) include 6.9 (1.2). Briefly, the poly-A-containing mRNAs were purified using poly-T oligo-attached magnetic beads. The purified mRNAs were fragmented into short sequences by use of divalent cations. Using these short sequences as templates, the first-strand cDNA was synthesized using random hexamers. A second-strand cDNA was then synthesized using DNA polymerase I and RNase H. The synthesized cDNA went through an end-repair process, the addition of a single "A" base, and then ligation of the adapters. PCR was performed to enrich the selected DNA sequences, which were then sequenced on an Illumina HiSeq 2000 Sequencing System (Illumina, USA) generating paired-end reads of 2 × 100 base pairs (bp).

Sequencing Data Analysis and Assembly
FASTQC (version 0.11.3) was used to determine the quality of the RNA sequencing data (www.bioinformatics.babraham.ac.uk/projects/fastqc/). Reads were cleaned using Trimmomatic (v. 0.32; Bolger et al., 2014), and sequences with a Phred score ≥33 and a minimum length of 36 bp were retained for assembly. De novo assembly of these processed reads was performed using the Trinity RNA-Seq assembler (release 17.07.2014; https://sourceforge.net/projects/trinityrnaseq/) with the default setup, which allows the identification of cyclotide-coding transcripts larger than 200 bp. The reads were assembled separately for the individual species. Detailed information on the de novo transcriptome assemblies of the individual species is summarized in Supplementary Table 1.

Identification of Precursor Sequences from the Transcriptome
The assembled transcriptomes were searched for sequences similar to the cyclotides in CyBase (cutoff date: 3.02.2014; http://www.cybase.org.au/) using the standalone NCBI-blast+ service (2.2.28) (tblastn, E-value cutoff: 50) in the UGENE software package (Okonechnikov et al., 2012). In parallel, a motif search was done using PROSITE for cyclotide precursor sequences [motifs PS60009 and PS60008 for Möbius and bracelet cyclotides, respectively (Sigrist et al., 2002)]. To assist sequence identification, the fuzzpro service of EMBOSS (v. 5.0.0) (Rice et al., 2000) was used. The combined PROSITE and blast results were filtered to remove duplicates. The result file was further processed by manual inspection after Clustal Omega (v. 1.2.0) alignment. In the manual inspection, we assumed that a sequence is a cyclotide precursor if it contains the six conserved cysteines aligned with previously known cyclotides, and if the conserved N-terminal domain shows sequence similarity to known cyclotide precursors.
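As a rough illustration of the screening logic just described, the six-cysteine check can be expressed as a pattern scan over translated transcripts. The sketch below is a loose, hypothetical simplification: the loop-length bounds in the regular expression are illustrative stand-ins for the actual PROSITE profiles (PS60008/PS60009), not a reproduction of them.

```python
import re

# Loose six-cysteine, cyclotide-like pattern with bounded loop lengths.
# The bounds are illustrative assumptions, not the PROSITE profile definitions.
CCK_LIKE = re.compile(r"C.{2,8}C.{3,7}C.{3,14}C.{1,3}C.{3,8}C")

def find_cck_candidates(protein_seq: str):
    """Return (start, segment) pairs for cyclotide-like stretches that
    contain exactly the six conserved cysteines."""
    hits = []
    for match in CCK_LIKE.finditer(protein_seq):
        segment = match.group(0)
        if segment.count("C") == 6:  # the manual-inspection rule from the text
            hits.append((match.start(), segment))
    return hits

# Example: the kalata B1 cyclotide domain in linear representation.
print(find_cck_candidates("GLPVCGETCVGGTCNTPGCTCSWPVCTRN"))
# -> [(4, 'CGETCVGGTCNTPGCTCSWPVC')]
```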
Collection of Precursor Sequences
In total, 312 (= 157 + 155) precursor sequences were utilised in this study: (i) 157 (= 138 + 19) precursor sequences were identified and collected from the transcriptome analyses conducted in the current study. Among them, 138 sequences were collected from the five Viola species (i.e., Viola albida var. takahashii, V. mandshurica, V. orientalis, V. verecunda, and V. acuminata) by RNA sequencing in this study. The remaining 19 sequences were collected from the transcriptomic data of V. canadensis obtained from the 1kp project (www.onekp.com). (ii) Another 155 (= 126 + 29) precursor sequences were collected from recently published studies. Among them, 126 sequences were collected from seven other Viola species (i.e., V. baoshanensis, V. odorata, V. uliginosa, V. adunca, V. tricolor, V. biflora, and V. pinetorum), and the remaining 29 sequences from other Violaceae genera, i.e., Gloeospermum, Melicytus, and Pigea. The Viola species from which the precursor sequences were collected, and the corresponding references, are listed in Table 2; the sequence alignment of the 312 precursors is recorded in Supplementary Data 1.

[Table 2 footnote fragment: (Zhang et al., 2009, 2015b); c V. odorata (Dutton et al., 2004; Ireland et al., 2006); d V. uliginosa (Slazak et al., 2015); e V. pinetorum and V. adunca (Kaas and Craik, 2010); f V. tricolor (Mulvenna et al., 2005; Hellinger et al., 2015); g V. biflora (Herrmann et al., 2008); h Transcriptome data for V. canadensis were obtained from the 1kp project (www.onekp.com), and the cyclotide precursor sequences were determined in this work; i G. blakeanum and G. pauciflorum; j M. ramiflorus (Trabi et al., 2009); k P. floribunda (as H. floribundus) (Simonsen et al., 2005).]

Nomenclature of Cyclotides and Precursors
Each cyclotide sequence was assigned a tripartite name. The first part is derived from the Latin binomial of the plant in which the corresponding precursor was found, the third part specifies the molecular species of the cyclotide, and the second part specifies the rank of the cyclotide among all the cyclotides of that molecular species identified in that particular plant species (Table 2). The Latin binomials of the violet species considered in this work are abbreviated Viola albida var. takahashii (valta), Viola mandshurica (viman), Viola orientalis (vorie), Viola verecunda (viver), and Viola acuminata (vacum). Thus, the cyclotide named vacum2-HS4 is the second cyclotide of the HS4 molecular species derived from V. acuminata. The molecular species are named after their NTR signature sequences, i.e., the identities of the two residues at the consensus positions 11 and 12. The precursor sequences were also assigned tripartite names in a similar way to the cyclotides: the first part of the name is derived from the Latin binomial of the species in which the sequence was discovered, the second specifies the numerical rank of the sequence (which is independent of that assigned to the cyclotides), and the third specifies the molecular species of the precursor. Any cyclotide or precursor that had been named in an earlier publication was assigned the same name in this work. However, for previously unnamed precursors, we added the prefix "prc" to the tripartite name to help readers distinguish between precursor and cyclotide sequences. Thus, prc-viul A is the name of the precursor sequence of Viul A from Viola uliginosa Bess.
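The tripartite scheme is mechanical enough to capture in a few lines. A minimal sketch (the helper itself is hypothetical, not part of the published workflow; the first two example names are documented above, the third is an invented combination):

```python
def tripartite_name(species_tag: str, rank: int, mol_species: str,
                    precursor: bool = False) -> str:
    """Build a tripartite cyclotide or precursor name, e.g. vacum2-HS4:
    species abbreviation + rank within the molecular species + NTR signature."""
    name = f"{species_tag}{rank}-{mol_species}"
    # Previously unnamed precursors carry the "prc" prefix to distinguish
    # them from mature cyclotide names.
    return f"prc-{name}" if precursor else name

print(tripartite_name("vacum", 2, "HS4"))                  # vacum2-HS4
print(tripartite_name("viman", 1, "RS2"))                  # viman1-RS2
print(tripartite_name("vorie", 1, "QD1", precursor=True))  # prc-vorie1-QD1 (hypothetical)
```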
Sequence Alignment for Phylogenetic Analysis
The cyclotide precursor sequences from the transcriptomes were aligned before the phylogenetic analyses, i.e., the construction of the Bayesian phylogenetic tree, maximum parsimony, and splits networks. A total of 92 precursor sequences (two sequences from each of the 46 molecular species) were prepared as DNA sequences for the alignment; the sequences include only three domains of the precursor, i.e., the NTPP, NTR, and cyclotide domains. To guide the alignment of the DNA sequences of the precursors, they were translated into protein sequences and aligned independently for each molecular species using Clustal in BioEdit v. 7.2.5 (Hall, 1999), and those alignments were in turn combined and realigned (i.e., keeping the indel positions from the alignment of each of the molecular species). The resulting protein-guided alignment of nucleotide sequences was then subjected to manual adjustments within reading frames. The aligned DNA sequences are shown in Supplementary Data 2.

The Construction of the Bayesian Phylogenetic Tree
The nucleotide substitution model was selected based on the AICc criterion using jModelTest v. 2.1.10 (Guindon and Gascuel, 2003; Darriba et al., 2012). A Bayesian phylogenetic analysis was conducted in BEAST v. 1.7.4 (Drummond and Rambaut, 2007; Drummond et al., 2012). The analysis file was set up in BEAUti (part of the BEAST package) with the following priors: GTR+G as the substitution model with empirical frequencies and four gamma categories, a lognormal relaxed clock prior with rate set to 1.0, and a Yule tree prior (birth-only process). Two MCMC chains were run for 50 million generations each, using a BEAGLE library, and parameters were logged every 10,000 generations. We checked the two chains for proper mixing and convergence (i.e., ESS > 500) in Tracer v. 1.6.0 (http://tree.bio.ed.ac.uk/software/tracer/), removed a visually determined burn-in of 1 million generations from each, merged the two chains in LogCombiner v. 1.7.4 (part of the BEAST package), and summarized the data in a maximum credibility tree with mean node heights using TreeAnnotator (part of the BEAST package).

The Construction of Splits Networks
In order to visualize the data, neighbor splits networks of uncorrected P distance were produced, both for the nucleotide alignment and for the translated alignment, using SplitsTree (Huson and Bryant, 2006).
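The uncorrected P distance behind these networks is simply the fraction of mismatched sites between two aligned sequences. A minimal sketch (excluding sites where either sequence has a gap is one common convention, assumed here rather than taken from the paper):

```python
def p_distance(seq_a: str, seq_b: str, gap: str = "-") -> float:
    """Uncorrected p-distance: fraction of differing sites over all aligned
    positions where neither sequence has a gap."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != gap and b != gap]
    if not pairs:
        return float("nan")
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

print(p_distance("GCTGA-TC", "GCAGACTC"))  # 1 difference over 7 sites ~ 0.143
```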
Extraction of Plant Material
Between 250 and 500 mg of dried plant material was homogenized and incubated overnight in 6 ml of 60% acetonitrile in water containing 0.05% trifluoroacetic acid (TFA) to extract cyclotides. Extracts were lyophilized and then dissolved in 2.5 ml of solvent A (Milli-Q H2O with 0.05% TFA). Redissolved extracts were then subjected to gel filtration using PD-10 columns (GE Healthcare) according to the manufacturer's instructions to remove small molecules. The high-molecular-weight fractions were collected, lyophilized, and then dissolved in solvent A to a concentration proportional to the original amount of extracted material (2 µl/1 mg) for LC-MS analysis.

LC-MS
The samples were analyzed using ultra-performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (nanoAcquity UPLC/QTof Micro; Waters, Milford, MA). Samples were eluted using a gradient of acetonitrile (1 to 90% over 50 min) containing 0.1% formic acid. A nanoLC column (Waters BEH, 75 µm (i.d.) × 150 mm) operated at a 0.3 µl/min flow rate was used. The capillary temperature was set at 220 °C and the spray voltage at 4 kV. The mass-to-charge (m/z) range was set from 1,000 to 2,000. LC-MS chromatograms and MS spectra were analyzed with the help of MassLynx V4.1 (Waters, Milford, MA).

Reduction and Alkylation
Dried peptide extracts were reduced in a buffer containing 0.05 M Tris-HCl, pH 8.3, 4.2 M guanidine-HCl, and 8 mM DTT. The extraction solutions were incubated at 37 °C for 2 h in the dark, after removal of O2 with nitrogen gas. The reduced peptides were then further alkylated in a buffer containing 0.2 M Tris-HCl, pH 8.3, and 200 mM iodoacetamide for 1 h.

Identification of Cyclotides from the Plant Extracts
On the original LC-MS chromatogram, all chromatographic peaks were manually investigated to determine whether they included a cyclotide-like mass spectrum. We assumed that peaks stemmed from cyclotide-like substances if their masses fell within the range of 2,700-3,300 Da, as deconvoluted from their doubly- and triply-charged ions. Then, the presence of three disulfide bonds was used to support their identification as cyclotides: if peaks in the extract showed an increase in mass of 348.18 Da after reduction and alkylation, we considered that the peak stemmed from a true cyclotide (Figure 2).

FIGURE 2 | (fragment) Mass spectra of cyclotide-like substances were identified using the mass distances between their isotopic peaks. The triple-charged monoisotopic mass of cyO2 (marked as [cyO2]3+ in the box) is observed at 1047.1 Da, and the double-charged monoisotopic mass ([cyO2]2+) at 1570.2 Da. In each mass spectrum, the distances between isotopic peaks are ∼0.3 Da for [cyO2]3+ and 0.5 Da for [cyO2]2+. (A) The monoisotopic mass of cyO2 is 3138.4 Da (calculated) and 3138.3 Da (observed); the mass difference is 0.1 Da. (B) The monoisotopic mass of cyO2 after the alkylation reaction is 3486.6 Da (calculated) and 3486.9 Da (observed); the mass difference is 0.3 Da. The observed mass derives from the triple-charged monoisotopic mass of alkylated cyO2 (1163.3 Da), whose mass spectrum is shown in the box marked [cyO2-Alk]3+.

Matching Transcriptomic Data to Cyclotides on the Protein Level
The presence of possible cyclotides identified in precursor sequences from the transcriptome analyses was determined as follows: putative cyclotide sequences were listed and their monoisotopic masses were calculated. The listed cyclotide sequences comprised N-terminal residues ranging over [−3, 2] relative to the consensus position of their precursor sequences. The C-terminal residues differ between cyclic and linear cyclotides, i.e., the highly conserved N/D residue located in loop 6 for cyclic cyclotides, and the residue located next to the stop codon in the mRNA sequence for linear cyclotides. These calculated monoisotopic masses were then compared to the observed monoisotopic masses of the cyclotides identified in the plant extracts using LC-MS. We regarded the transcriptomic cyclotides as expressed at the protein level if the difference in monoisotopic mass was <0.40 Da.

Determination of Cyclotides' Abundance Levels in the Proteome
For each cyclotide-like substance, the chromatographic peaks were identified together with their retention times from the original LC-MS chromatogram. These chromatographic peaks were further investigated in relation to their mass spectral peaks to estimate the signal intensity (SI) of the cyclotide-like substance. The SI was estimated as the summed signal intensity of the triple-charged mass spectral peaks. According to the SI, we assigned cyclotide abundance to three levels: low if SI < 250, high if SI > 1,000, and medium if 250 < SI < 1,000.
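The numerical rules above (the 2,700-3,300 Da window, the +348.18 Da reduction/alkylation shift, the <0.40 Da transcript match, and the SI thresholds) can be collected into one small sketch. The tolerance on the alkylation shift is an assumption made for illustration; everything else follows the stated criteria.

```python
from typing import Optional

PROTON = 1.00728  # Da, monoisotopic proton mass

def neutral_mass(mz: float, z: int) -> float:
    """Monoisotopic neutral mass deconvoluted from an [M+zH]z+ ion."""
    return z * (mz - PROTON)

def looks_like_cyclotide(mass: float,
                         alkylated_mass: Optional[float] = None) -> bool:
    """Mass window 2,700-3,300 Da; if an alkylated measurement exists,
    require a ~+348.18 Da shift (3 disulfides reduced: +6 H; 6 Cys
    carbamidomethylated: +6 x 57.02 Da). The 0.5 Da tolerance on the
    shift is an illustrative assumption."""
    in_window = 2700.0 <= mass <= 3300.0
    if alkylated_mass is None:
        return in_window
    return in_window and abs((alkylated_mass - mass) - 348.18) < 0.5

def matches_transcript(observed: float, calculated: float) -> bool:
    """A transcript-derived cyclotide counts as expressed if observed and
    calculated monoisotopic masses differ by less than 0.40 Da."""
    return abs(observed - calculated) < 0.40

def abundance_level(si: float) -> str:
    """Classify the summed triple-charged signal intensity."""
    return "low" if si < 250 else "high" if si > 1000 else "medium"

# cyO2 from Figure 2: [cyO2]3+ at m/z 1047.1; alkylated [cyO2-Alk]3+ at 1163.3
native = neutral_mass(1047.1, 3)   # ~3138.3 Da
alkyl = neutral_mass(1163.3, 3)    # ~3486.9 Da
print(looks_like_cyclotide(native, alkyl))  # True
```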
The Calculation of Molecular Descriptors
The physicochemical properties of selected cyclotides (Park et al., 2014), i.e., the total lipophilicity and the exposure ratio, were calculated using the scientific vector language (SVL), implemented in MOE 2012 (Chemical Computing Group Inc., Montreal, Canada).

Accession Numbers

RESULTS AND DISCUSSION
The family Violaceae and the genus Viola are an excellent system for studying the evolution of cyclotides. All investigated species express large numbers of these proteins (Burman et al., 2015) as well as their precursor sequences. Furthermore, the species-level phylogeny of Viola (Marcussen et al., 2012, 2015) and the chloroplast phylogeny of the Violaceae (Wahlert et al., 2014) provide a phylogenetic framework for interpreting the cyclotide data.

The Sequence Signature of the Prodomain Can Be Used for the Classification of Cyclotides
The structural classification of cyclotides into Möbius cyclotides, bracelets, and hybrids thereof, based on the structure and sequence of the mature cyclotide, is here replaced by a classification based on the sequences of the cyclotide precursors, including both the prodomain (i.e., the NTPP and NTR domains) and the cyclotide domain. We focused on these three domains (i.e., the NTPP, NTR, and cyclotide domains) because they are present in all of the precursors that have been completely sequenced. The ER domain was not included in the analysis because ER domain sequences often vary strongly with the quality of the sequencing data. Other domains (i.e., the CTR domain and repeats of the NTR and cyclotide domains) are not present in all precursors and were therefore also excluded.

Precursors were classified by their sequence signatures, i.e., the patterns of insertions and deletions (indels) and conserved sequences in the prodomains (NTPP and NTR). The N-terminal cleavage site of the cyclotide domain was defined as position 0, and a large indel region was detected upstream in the NTPP at positions [−56, −38] (Figure 3). Within this indel, the insertion region features sequence variations with minor gaps, and the deletion region has definite sequence gaps [−56, −54], [−50, −38]. Interestingly, the insertions coincide with cyclotide domains of archetypical bracelet cyclotides (e.g., cycloviolacin O2), while deletions coincide with cyclotide domains of archetypical Möbius cyclotides (e.g., kalata B1). Based on these observations, we suggest that these indels in the NTPP domain can be used as a criterion for classifying precursors into the Möbius and bracelet lineages. The term lineage is used instead of subfamily in order to avoid confusion with the structural classification of the cyclotides. On the basis of these findings, i.e., the combined sequence signatures of the NTPP and NTR domains and the conservation of residues with similar physicochemical properties in the cyclotide domain, two new classification orders were defined and used to classify the members of each lineage. These new orders were termed the molecular species and molecular series. Precursors with the same sequence signatures in both the NTPP and NTR domains are assigned to the same molecular species, while precursors that only share signature sequences in the NTR domain are assigned to the same molecular series. Of 283 precursor sequences, 80 were classified into the Möbius lineage and 181 into the bracelet lineage (Figure 3; see Supplementary Figure 1). Of these 261 sequences, 249 were classified into 13 molecular series and 46 molecular species. Thus, 78 sequences representing the Möbius lineage were classified into five molecular series and 14 molecular species, while 171 sequences representing the bracelet lineage were classified into 8 molecular series and 32 molecular species.
The remaining precursor sequences (34 of the initially examined 283, or 12.0% of the total) could not be grouped with any other sequences and could not be classified into molecular species (Supplementary Figure 2). Most of these sequences (31/283, or 10.9% of the total) were either partial sequences or lone unique sequences. Only three of the precursor sequences (3/283) did not exhibit features enabling their classification into a given lineage on the basis of their NTPP domain sequences. Informal hierarchical ranks were then assigned to this system for classifying precursors, with lineage being the highest-ranking classification, followed by molecular series and then molecular species.

We further investigated the evolutionary relevance of this classification system by performing a phylogenetic analysis using the full DNA precursor sequences (Figure 4). We assumed that if the sequence signatures are highly conserved in the course of evolution, the phylogenetic relationships of the precursor sequences should recover the hierarchical ranks defined by the sequence signatures. In this evaluation, we adopted a Bayesian phylogenetic approach using BEAST, because substitution-model-based methods are in most cases better than maximum parsimony or distance-based methods at handling homoplasy, which is prominent among the cyclotide precursor sequences (splits network shown in Supplementary Figure 3), and because BEAST sets a prior on the branch lengths that is particularly suitable for the analysis of short DNA sequences such as cyclotide precursors. The two lower hierarchical ranks (i.e., molecular species and series) are consistently recovered in the phylogeny, but not the upper hierarchical rank (i.e., molecular lineage), as indicated by the low posterior support values at the base of the phylogeny (Figure 4). Most of the precursor sequences belonging to the same molecular species are grouped as one monophyletic clade (80%, 74/92) or fall into different clades but cluster more closely than precursor sequences derived from different molecular series. Among the 13 molecular series, the molecular species belonging to four molecular series (i.e., HF, YA, FA, and HS) are not monophyletic. Only one molecular species (DI1) is exceptionally grouped into the Möbius lineage in the phylogenetic tree, even though DI1 is classified in the bracelet lineage based on its sequence signature. For the phylogenetic analysis, we selected a total of 92 precursor sequences by randomly choosing two sequences from each of the 46 molecular species. We assumed that this selection would be sufficient to provide phylogenetic support for the signature-based classification system, because the randomly selected sequences were mostly paired or grouped into monophyletic clades in accordance with the classification at the level of molecular species and series. A sketch of the signature-based lineage assignment is given below.
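Read as a decision rule, the lineage assignment reduces to inspecting the NTPP indel window. The sketch below is a deliberately simplified, hypothetical rendering: it treats a majority of gap columns in the aligned [−56, −38] window as the Möbius deletion signature, whereas the actual classification also weighs the NTR signature and conserved residues.

```python
def classify_lineage(ntpp_indel_window: str) -> str:
    """Assign a precursor to a lineage from the 19-column alignment slice
    covering NTPP consensus positions [-56, -38] ('-' marks gaps).
    Majority-gap = Moebius-style deletion; majority-occupied =
    bracelet-style insertion. A simplified, hypothetical rule."""
    occupied = sum(1 for column in ntpp_indel_window if column != "-")
    return "bracelet" if occupied > len(ntpp_indel_window) / 2 else "Möbius"

# Hypothetical alignment slices, for illustration only:
print(classify_lineage("-" * 19))               # Möbius (fully deleted window)
print(classify_lineage("SLKNTE-AHLSVKQLLPEW"))  # bracelet (mostly occupied)
```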
The Distribution of Cyclotide Precursors Reflects the Phylogeny of the Genus Viola
Precursor sequences were compared to the established phylogenetic relationships between the four infrageneric sections of Viola (i.e., sects. Melanium, Plagiostigma, Viola, and Chamaemelanium), and between the genus Viola and the other Violaceae genera. The presence of a molecular species in different Viola taxa, such as species or sections, indicates that the molecular species evolved prior to the most recent common ancestor of these taxa. However, such inferences in Viola are complicated by the network-like phylogeny of the genus, owing to repeated ancient events of allopolyploidy (Marcussen et al., 2015). Hence, of the four sections in the current analysis, three (i.e., sects. Melanium, Plagiostigma, and Viola) are allotetraploids that originated by independent hybridizations between the same two parental lineages, MELVIO and CHAM, around 15 million years ago. The last section, sect. Chamaemelanium, is diploid and descends from the CHAM lineage. Because the parental MELVIO lineage is now extinct, it is impossible to infer with certainty the distribution of the molecular species in the common ancestor of these parental lineages, i.e., the CHAM and MELVIO lineages, especially without diploid outgroups from sister sections (e.g., Rubellium and Andinium). Gene flow by introgression potentially occurs between closely related species only, i.e., within the same subsection, and can be ruled out for this dataset (e.g., Marcussen et al., 2015).

The current analysis revealed several points about the distribution of cyclotide precursors within the genus Viola. Firstly, the classification into infrageneric sections is reflected in the distribution of the cyclotide precursors. Some molecular species occurred sporadically or commonly across the four infrageneric sections: 2% of molecular species were found across all sections, and 5% of the studied precursor sequences belong to those molecular species (Figure 5; see Supplementary Figure 4). Also, 17% of molecular species (34% of precursor sequences) were found across three sections, and 45% of molecular species (41% of precursor sequences) were found in at least two sections. This indicates that the molecular species likely originated both from the common ancestor of the Viola sections and from the hybridization between the parental lineages. There are also some molecular species that could have arisen from genetic changes during Viola speciation.

FIGURE 4 | Phylogenetic tree of cyclotide precursors. The tree contains 92 precursor sequences in total, with two representative sequences from each of the 46 molecular species. Except for nine molecular species (i.e., GA2, GP1, NS2, NS5, NL2, HS4, PN1, QD1, and YY1), all other 37 molecular species (80%, 37/46) are grouped as one monophyletic clade (posterior probability (pp) ≥ 0.75). Also, except for four molecular series (i.e., FA, HF, HS, and YA), the molecular species belonging to the other nine molecular series are monophyletic (pp ≥ 0.97). In the phylogenetic tree, DI1 is a single exception from the lineage classification based on sequence signature: DI1 is grouped into the Möbius lineage (pp ≥ 0.70), even though it is classified as bracelet lineage based on its sequence signature.

It should be emphasized that the degree of concurrence is likely higher at the genomic level than at the transcriptomic level estimated in the current study, because not all genomic occurrences of the molecular species can be captured by the transcriptome. Some molecular species might not be expressed, and the RNA sequences of some molecular species might be degraded during the transcriptomic assay.
FIGURE 5 | (fragment) Sections are abbreviated Plagiostigma as PLA, Viola as VIO, Melanium as MEL, Chamaemelanium as CHA, Rubellium as RUB, and Andinium as AND. Dotted lines indicate the complex ancestries of the three allotetraploid sections Melanium, Plagiostigma, and Viola, all derived from hybridization between the CHAM and MELVIO lineages 15-20 Ma ago. Genera and sections studied using transcriptomic methods are indicated with asterisks (*). The total number of species within Viola is estimated to be 580-620, most of which (61-65%) belong to these four sections. The first ancestor of the genus (α) is dated to 31 Mya, and the common ancestor of the four studied sections (β) is dated to 24 Mya (Marcussen et al., 2015). The phylogeny of Viola is based on the work of Marcussen et al. (2015) and that of the Violaceae on the work of Wahlert et al. (2014).

Secondly, signature sequences are conserved between Viola and other Violaceae genera, i.e., Gloeospermum, Melicytus, and Pigea (Supplementary Figure 5). In particular, the sequence signature of the NTR [−9, −8] region is largely conserved in the Möbius and bracelet lineages: Y−9-A−8 is found in the Möbius lineages of Viola and Melicytus, G−9-A−8 in the bracelet lineages of Viola and Gloeospermum, and H−9-S−8 in the bracelet lineages of Viola and Pigea. In addition, the sequence signature of the NTPP [−56, −38] is conserved in the bracelet lineage: that region shows high sequence similarity between some precursors, i.e., the molecular species NS2 and HS3 from the genus Viola and precursors from Melicytus and Pigea, respectively. Interestingly, the protein sequence of Gpc3, a precursor belonging to the bracelet lineage, is identical in both Viola and Gloeospermum. These observations indicate that these sequence signatures have been conserved at least since the divergence of their most recent common ancestor some 50 million years ago (Marcussen, Wahlert et al., in prep.).

Thirdly, the cyclotide sequence diversity of Viola is expected to depend on the differentiation into sections. It is estimated that 67-88% of the Viola sections are allopolyploids (Marcussen et al., 2015) that combine genomes from different diploid lineages. An increase in ploidy has also been shown experimentally to increase the sequence diversity of cyclotides by mutation, e.g., in Oldenlandia affinis (Seydel et al., 2007). However, the cyclotide sequence diversity is not only greatly expanded by sequence variation within molecular species, but also constrained by the sharing of the same molecular species across different sections; the diversity can be large because conservative substitutions are tolerated within the same molecular species. Under the current classification (Marcussen et al., 2015), Viola comprises at least 16 sections and some 600 species, of which 10 sections (∼454 species, 76%) possess at least one CHAM genome, either alone or, as a result of allopolyploidization, in combination with other CHAM genomes or MELVIO genomes. These 10 sections include the four (∼380 species, 63%) inspected in the current study, and the current in-depth analysis reveals that most of the cyclotide precursor sequences across different sections can be grouped into the molecular species. The precursors identified in the current transcriptomes and those reported previously all exhibit striking sequence similarity (Zhang et al., 2009, 2015b; Hellinger et al., 2015; Slazak et al., 2015).

Expression Profiles of Cyclotides and Their Structural Diversity
The expression profiles of the cyclotides were analyzed at both the transcriptomic and peptidomic levels. At the transcriptomic level, the expression of the precursor RNAs was evaluated by their FPKM (fragments per kilobase of transcript per million mapped fragments) values.
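FPKM normalizes a transcript's fragment count by transcript length and by sequencing depth. A minimal sketch of this standard definition (the example numbers are invented):

```python
def fpkm(fragments: int, transcript_len_bp: int, total_mapped: int) -> float:
    """Fragments Per Kilobase of transcript per Million mapped fragments,
    the expression measure used for the precursor transcripts."""
    return fragments / ((transcript_len_bp / 1e3) * (total_mapped / 1e6))

# e.g., 500 fragments on a 750 bp precursor transcript in a library of
# 20 million mapped fragments:
print(round(fpkm(500, 750, 20_000_000), 2))  # 33.33
```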
The expression of those cyclotide sequences was then assayed at the peptidomic level by LC-MS. At the peptidomic level, between 19 and 44 cyclotides were detected in each of the Viola species. This is similar to the number of precursor sequences found at the transcriptomic level (23 to 37). Only a few of the cyclotides (4-26%) were detected at both levels (Figure 6; see Supplementary Table 5). Most of those cyclotides were archetypical Möbius (kB1 and kS) and archetypical bracelet (cyO2, cyO8, cyO13, mram8, and viba12) cyclotides. Only two of the novel cyclotides were found at both levels, and both eluted at very early retention times and at low protein-level abundance. Cyclotides found at the peptidomic level only were identified by matching their calculated MW, assuming that they could belong to any type, e.g., hybrid, archetypical Möbius, or bracelet. A discrepancy between expression at the transcriptomic and peptidomic levels has been reported previously (Hellinger et al., 2015). In those studies, the cyclotides expressed as peptides were all hybrids or archetypical Möbius or bracelet cyclotides. The failure to detect the novel cyclotides using LC-MS could stem either from the high hydrophilicity of those novel cyclotides (Supplementary Figure 6) or from their low expression at the peptidomic level. Interestingly, kS (varv A) and kB1 were not found in the transcriptomes of two Viola species (V. orientalis and V. albida var. takahashii), whereas these particular cyclotides were found at the protein level in all five Viola species. Such expression differences could be related to molecular stability and regulation mechanisms. In any case, it is clear that cyclotides accumulate after their expression in plant tissues.

Major Sequence Changes Are Linked to Neofunctionality
In some cases, the differences between the precursor sequences of closely related molecular species were small (Figure 7A), while in others they were quite large (Figures 7B-D). In this context, a large difference is defined as an indel of more than five residues. With the exception of these indel regions, the precursor sequences are homologous, and the homologous sequence regions contain unique sequence traits found only in those molecular species. This implies that despite their large sequence differences, those molecular species are closely related. Such an outcome is analogous to that predicted by the theory of punctuated equilibrium (Eldredge and Gould, 1997), which describes evolutionary trends and speciation at the organism level. Another possible explanation for the observations is that mutations gradually accumulate in cyclotide precursors, leading to the loss or inactivation of intermediate sequences.

Some of the cyclotide domains exhibited unusual sequences, which fell into three main groups (Figure 7E). The first such group contained sequences whose cyclotide domains have an uneven number of cysteines (five or seven). Having an uneven number of cysteines makes the formation of a cystine knot impossible, and can create the possibility of forming disulfide bridges not associated with the cystine knot, as well as the potential for dimerization. The second group consists of sequences that lack a residue required for AEP-mediated cyclisation, i.e., an N or D residue in loop 6. Abnormal precursor sequences of this sort are presumably the result of mutations in the genes encoding cyclic cyclotide precursors, rather than being inherited from an ancestral linear cyclotide, because they exhibit strong sequence similarity with the former class of precursors in both loop 6 and the C-terminal tail.
FIGURE 6 | Cyclotide expression at the peptidomic level. LC-MS chromatograms of the five Viola species used for the transcriptome assays in the current study (left panel). The abundance of the cyclotides is linked to their signal intensity (right panel). For each Viola species, the LC-MS chromatogram (base peak ion) and the mass distribution of the cyclotides are shown with retention times and abundance levels (high/medium/low). The symbols + and • denote cyclotides that were not found and were found in the transcriptome, respectively. The cyclotides kB1, kS, cyO2, cyO8, and cyO12 were confirmed by comparing retention times and isotopic masses against reference LC-MS chromatograms of V. tricolor and V. odorata.

Conversely, typical linear cyclotide precursors have short loop 6 sequences, lack a C-terminal tail, and are quite abundant. The third group of abnormal precursor sequences included sequences with one or more unusually short or long loops. Only five such sequences were identified (among a total of 283), but the three types of abnormality often co-occurred in the same precursor. For example, valt1-FS3U contains one additional cysteine in loop 2 and lacks an N/D residue in loop 6. All of these precursors are only weakly expressed.

The cyclic backbone seems to have emerged by molecular speciation. Comparing sequences from linear and cyclic molecular species, i.e., the cyclic YY1 and linear YY2 in the Möbius lineage, and the cyclic PS1 and HS4 versus the linear PN1 in the bracelet lineage (Figures 7F,G), suggests that linearity is likely to be the primitive (ancestral) trait, rather than linear cyclotides having evolved from cyclic ancestors via the mutational loss of the N/D residue in loop 6 that is needed for cyclisation. Although the cyclotide precursor sequences are too variable at the nucleotide level to resolve the relationships between the linear and bracelet forms (Huson and Bryant, 2006), linearity being ancestral is supported by three facts. First, linear cyclotides that have undergone such mutational losses normally have elongated loop 6 sequences with mutations in the C-terminal tail. Second, the prodomain sequences of these molecular species are well defined (all of them share the same sequence signature). Third, the molecular species corresponding to these putative primitive linear cyclotides are found in all four studied sections of Viola and also in other genera (Mra13).

Some precursor sequences could also be grouped as proline-rich (Figures 7H,I). The common structure-based criterion used to identify mature Möbius cyclotides is the presence of a cis-Pro in loop 5. Classification based on this approach is not generally consistent with the evolutionary classification into lineages: many precursor sequences assigned to the Möbius lineage lack this Pro residue in loop 5 (P32) of the cyclotide domain. Examples include the molecular species YA1, HF1, and YS1-3, which exhibit appreciable sequence similarity with archetypical Möbius cyclotides in loops 2 and 3. These sequence intermediates, which have characteristics of both bracelet and Möbius cyclotides, have been classified as the hybrid subfamily, and it has been suggested that they are genetic chimeras of bracelets and Möbius cyclotides. Our analysis suggests that this is not the case, at least in the Violaceae.
Instead, these structural hybrids and archetypical Möbius cyclotides seem to originate from the molecular species YY1 of the Möbius lineage, which in turn is closely related to the linear ancestor YY2.

A key question is whether the emergence of the cyclic backbone changed the properties of the membrane-interacting surfaces of the linear cyclotide sequences. To answer this question, we analyzed the structural traits (i.e., the exposure ratios and physicochemical properties) of the residues that differ between linear and cyclic cyclotides within the Möbius and bracelet lineages. The lineages were considered separately, so the cyclic Möbius cyclotide kalata B1 was compared to the linear cyclotide violacin A from the Möbius lineage. Similarly, the cyclic bracelet cyclotide cyO2 was compared to the linear verec1-PN1 from the bracelet lineage. In both cases, the linear cyclotide appeared to be ancestral to its cyclic counterpart. Linear and cyclic cyclotides from the same lineage exhibit substantial sequence similarity, with extensive conservation of residues' physicochemical properties at key positions. This is illustrated by the example of the residues in loop 6 (Figure 8B). In the linear cyclotides, loop 6 is conformationally flexible, and the charged groups of the termini are largely exposed on the molecular surface (Figures 8A,B). The exposure ratios and physicochemical properties of the corresponding residues in the cyclic cyclotides are very similar to those in their linear counterparts, implying that the structural framework favoring membrane interaction emerged before the cyclic backbone.
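The exposure ratio defined in Figure 8 (per-residue SASA in the protein divided by the SASA of the same residue type in an extended Gly-X-Gly tripeptide) can be approximated outside MOE as well. Below is a sketch using Biopython's Shrake-Rupley implementation; the two GXG reference values are placeholders, not the ψ = φ = 180° values used in the paper, and the workflow is an unofficial re-expression of the descriptor, not the published SVL code.

```python
from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

# Placeholder reference SASA values (A^2) for extended Gly-X-Gly tripeptides;
# a real analysis would compute these for all 20 residue types.
GXG_REF = {"GLY": 104.0, "ALA": 129.0}

def exposure_ratios(pdb_path: str) -> dict:
    """Per-residue exposure ratio: SASA in the folded protein divided by
    the Gly-X-Gly reference SASA for that residue type."""
    structure = PDBParser(QUIET=True).get_structure("cyclotide", pdb_path)
    ShrakeRupley().compute(structure, level="R")  # per-residue SASA
    ratios = {}
    for residue in structure.get_residues():
        reference = GXG_REF.get(residue.get_resname())
        if reference:
            ratios[residue.get_id()[1]] = residue.sasa / reference
    return ratios
```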
FIGURE 8 | Membrane-interactive properties of linear and cyclic cyclotides within the Möbius and bracelet lineages. (A) Structures of kalata B1 and violacin A from the Möbius lineage, and cyO2 and verec1-PN1 from the bracelet lineage. Surface-exposed side chains are shown in blue if positively charged, red if negatively charged, and green if hydrophobic. Notably, the N-terminal residues (S2) of the linear cyclotides always give rise to an exposed positive charge on the molecular surface. The numbering of residues follows the consensus sequence of the cyclotide domain in Figure 3. (B) Exposure ratios of the residues of linear and cyclic cyclotides within each lineage. The exposure ratio compares the solvent-exposed surface area (SASA) of each residue in a protein to its SASA in the tripeptide Gly-X-Gly (ψ = φ = 180°); the more exposed the residue, the greater the value. With the exception of the residues in loop 6, residues of cyclic and linear cyclotides have similar exposure ratios at corresponding positions. For each residue, the boxes show the range between the first and third quartiles, and the upper and lower error bars represent the maximum and minimum exposure ratios, respectively.

FIGURE 9 | A potential link between precursor architecture and neofunctionality. (A) Suggested modes of duplication in precursor genes. In Viola, the precursor architecture differs between lineages. Precursors of the bracelet lineage appear to contain only one modular domain (i.e., a concatenated NTR and a cyclotide domain), whereas precursors of the Möbius lineage may contain one to three such domains. The repeats in the Möbius lineage probably originated from internal gene duplication events. All precursors sharing the same architecture, in both the Möbius and bracelet lineages, probably originated from external gene duplication. (B) Sequence alignment between new types of cyclotides and previously identified archetypes. In the Möbius lineage, sequences vary between repeated cyclotide domains within the same precursor and between cyclotide domains of different precursors. There are high levels of sequence conservation within repeated cyclotide domains, as exemplified by kB1 and kS within Vok1. However, the sequences of cyclotide domains in precursors containing multiple cyclotide domains are in general frequently very different, as exemplified by kalata S and viul F within prc-Viul F[C]-FS4U, and viba 19 and viba 21 within VbCP14. Of the precursor sequences shown in the figure, the molecular species YS7, which belongs to the Möbius lineage, has a sequence similar to archetypical bracelet cyclotides. For example, valta3-YS4 lacks a proline residue in loop 5 and has an elongated loop 3. Conversely, the molecular species FA1 contains lipophilic residues in loops 2 and 5, while loop 3 consists mainly of negatively charged residues. (C) Comparison of cyclotides' surfaces and electrostatic potentials. New types of cyclotides (viman-FA1, viman1-RS2, valta3-YS4, and viul F) are shown with their electrostatic potential surfaces. The distributions of their electrostatic potentials are generally dissimilar to those seen in typical bracelet and Möbius cyclotides. Residues with high exposure ratios are indicated by solid and dashed lines on the molecular surfaces. Residues highlighted with solid lines have physicochemical properties dissimilar to those found in the corresponding positions of archetypical cyclotides; residues whose physicochemical properties match those of the archetypical cyclotides are indicated by dotted lines. Residues are numbered in accordance with their sequence positions in (B). Negatively charged surface regions are colored red, positively charged regions blue, and hydrophobic regions green. The numbering of residues follows the consensus sequence of the cyclotide domain in Figure 3.

Some of the cyclotide domains exhibit very high levels of sequence diversity between different molecular species. Some molecular species (e.g., YS1-4, RS2, and FA1) contain cyclotide domain sequences that are highly dissimilar to archetypical cyclotide sequences (Figure 9). We found that these molecular species occur across Viola sections, which means that they must have evolved before the differentiation of these sections some 15 million years ago. Although the biological function of all cyclotide forms is currently unclear, their maintenance within the genome suggests that they fulfill vital and divergent functions in their host plants. It is likely that some cyclotides may have biological functions based on mechanisms other than membrane binding and disruption (Burman et al., 2011). For example, the FA1 cyclotides might adopt different membrane orientations compared to archetypical cyclotides, even those from the same lineage. They have lipophilic residues in loops 2 and 5 with large exposed areas on the molecular surface. The lipophilicity of these loops differs from that of archetypical cyclotides: the archetypal Möbius cyclotides do not have lipophilic residues in loop 2, and the archetypal bracelets do not have lipophilic residues in loop 5. In addition, loop 3 of the FA1 cyclotides is rich in negatively charged residues, making them unique among the molecular species found in the genus Viola and suggesting that their functions may be unrelated to membrane binding.
Contradicting previous hypotheses, it appears that the cyclotides of the Möbius lineage exhibit a somewhat greater diversity of sequence traits than those of the bracelet lineage. Such large sequence diversity in the Möbius lineage may be correlated with the combined occurrence of internal and external gene duplications in the cyclotide domain. Precursor architectures with such duplication of the cyclotide domain are illustrated in Figure 9A. A large proportion of the precursors (72%; 56/78) belonging to the Möbius lineage contain multiple cyclotide domains that appear to have originated from internal duplication events. Conversely, no bracelet precursors with repeated cyclotide domains have yet been identified in the Violaceae. There are large sequence differences in certain precursors' cyclotide domains at positions close to the N-terminal prodomains, as can be seen by comparing the sequences of the FA1 and YS4 molecular species to the archetypal Möbius cyclotides varv A and kalata S (Figures 9B,C). Pronounced sequence differences also exist between repeated cyclotide domains in some individual precursors, as in the case of the precursor of Viul F. Together with the rest of our findings, this implies that combined internal and external duplications can synergistically produce large changes in cyclotides' sequences and structures, giving rise to new biological functions: neofunctionality.

AUTHOR CONTRIBUTIONS
SP, TM, KJR, ID, AB, and UG designed the experiments. SP carried out the interpretation of precursor sequences, modeling, and mass spectrometry. UG supervised the experiments. K-OY collected the plant material for the transcriptome sequencing. EJ identified cyclotide genes from the transcriptomic data. TM and SP performed the phylogenetic analyses. All authors contributed to the writing and revision of the manuscript.

FUNDING
This research was supported by the Swedish Research Council (#2012-5063). KJR is an ARC Future Fellow (FT130100890).
Biogenic calcium carbonate as evidence for life

The history of Earth is a story of co-evolution of minerals and microbes: not only have numerous rocks arisen from life, but life itself may have formed from rocks. To understand the strong association between microbes and inorganic substrates, we investigated moonmilk, a calcium carbonate deposit of microbial origin, occurring in the Iron Age Etruscan Necropolis of Tarquinia, in Italy. These tombs provide a unique environment, where the hypogeal walls of the tombs are covered by this speleothem. To study moonmilk formation, we investigated the bacterial community in the rock in which the tombs were carved: calcarenite and hybrid sandstone. We present the first evidence that moonmilk precipitation is driven by microbes within the rocks and not only on the rock surfaces. We also describe how the moonmilk produced within the rocks contributes to rock formation and evolution. The microbial communities of the calcarenite and hybrid sandstone displayed, at the phylum level, the same microbial pattern as the moonmilk sampled from the walls of the hypogeal tombs, suggesting that the moonmilk originates from the metabolism of an endolithic bacterial community. The calcite moonmilk is the only known carbonate speleothem on Earth with an undoubted biogenic origin, thus representing a robust and credible biosignature of life. Its presence in the inner parts of rocks adds to its characteristics as a biosignature.

Introduction
Whether other planets witnessed life like what is seen on Earth remains a complete mystery. The search for traces of extra-terrestrial life suffers from the lack of durable and credible biosignatures. Some breakthroughs may happen soon from the current planetary explorations. The NASA Perseverance rover successfully landed on planet Mars in the Jezero Crater in 2021 and has been collecting many specimens. Future missions are planned to retrieve those specimens, although not before 2031, hence opening the possibility to search for evidence of life in the first-ever samples returned from Mars. The European Space Agency (ESA) ExoMars programme is also planning to address the question of whether life has ever existed on Mars.

Earth has evolved through a long process of co-evolution between minerals and microbes (Cosmidis and Benzerara, 2022; Grosch et al., 2015; Cuadros, 2017; Hazen et al., 2008), and terrestrial rocks constitute an ideal system for the investigation of durable signs of life. Carbonates are good candidates as host rocks because, on Earth, they are largely of biogenic origin. Nevertheless, they have not received the attention needed because of limited and sporadic evidence on Mars: lithologies revealed the presence of carbonates in the Nili Fossae region (Ehlmann et al., 2008), at the Mars Phoenix landing site (Boynton et al., 2009), in the Columbia Hills of Gusev Crater (Morris et al., 2010), and in deep rocks exposed by meteor impacts (Michalski et al., 2010). Recently, the analysis of weathering profiles revealed the widespread distribution on Mars of carbonates associated with hydrated minerals, providing evidence of the past presence of liquid water (Bultel et al., 2019). Carbonates were also detected in asteroids and meteorites, contributing to the understanding of the formation and evolution of our solar system (Lee et al., 2014; Kaplan et al., 2020; Pilorget et al., 2021; Voosen, 2020).
Carbonate rocks on Earth are of abiogenic or biogenic origin and have arisen since the early Archean Eon (> 3 gigaannum, Ga), when hydrothermal systems were ubiquitous. At that time, the carbonate rocks originated by massive carbonatisation, silicification, and potassium (K) (± sodium) metasomatism of intermediate to ultramafic silicate precursors (Veizer et al., 1989). In contrast, lower Archean marine carbonates are rare; they occur as very thin, discontinuous, and extensively mineralised beds generally replaced by chert as a result of intense microbial iron (Fe) cycling (Pomar, 2020). The 3.4 Ga old stromatolite structures (sensu Riding, 2011) are widely regarded as among the oldest biogenic carbonate production associated with carbon dioxide (CO2) sequestration (Allwood et al., 2006). During the Proterozoic Eon (< 2.4 Ga), shallow-water carbonate production expanded, favouring the development of carbonate platforms where abiogenic and biogenic carbonate precipitation took place (Grotzinger and James, 2000). The seawater was supersaturated with both calcite and aragonite, as evidenced by the well-preserved pseudomorphs of "abiogenic" aragonite and calcite (Pomar, 2020). According to Sumner and Grotzinger (1996), the rise in oxygen concentration at 2.2-1.9 Ga led to the removal of Fe2+, a strong calcite-precipitation inhibitor, from seawater, and resulted in a shift from Archean carbonates to Proterozoic carbonates dominated by microbial activity (Pomar, 2020). Starting from the Cambrian Period, the benthic microbialites (sensu Riding, 2011), the prime carbonate factory since the Late Archean, were still important but progressively decreased, while biologically controlled carbonates appeared and expanded. The calcification of sessile, mostly colonial, metazoans and algae promoted the accumulation of biogenic carbonate sediments and the appearance and expansion of reefs (Pomar, 2020).

Carbonates are common constituents of the near-surface Earth crust, although carbonate phases may also occur deep in the mantle. They are compounds formed by the anionic carbonate complex ((CO3)2−) combined with metal ions such as calcium (Ca), magnesium (Mg), iron, manganese, sodium, barium, aluminium, zinc, copper, lead, uranium, or rare-earth elements. Uncommonly, carbonate phases are hydrated, contain hydroxyl or halogen ions, or may include silicate, sulfate, or phosphate radicals. Due to the high availability of Ca and Mg in crustal reservoirs (Hartmann et al., 2012), the CaCO3 polymorphs calcite and aragonite, together with dolomite (CaMg(CO3)2), are the most widespread carbonate minerals, whose formation in Earth near-surface environments is widely related to biogenic or bio-mediated processes (Görgen et al., 2021).
Carbonate biomineralisation or organomineralisation (sensu Dupraz et al., 2009) results in the formation of several mineral phases, the most common of which are the anhydrous CaCO3 polymorphs calcite, aragonite, and vaterite, the last being a metastable transitional phase; the hydrated forms monohydrocalcite (CaCO3·H2O) and ikaite (CaCO3·6H2O); and various amorphous calcium carbonate phases (ACC). Moreover, in specific environments (saline lakes, coastal lagoons) (Diloreto et al., 2021; Kaczmarek et al., 2017), microbial activity may promote the formation of dolomite (the ordered phase CaMg(CO3)2) by passing through the precursor phases of high-Mg calcite (disordered, 4-36 mol % MgCO3), disordered dolomite (disordered, > 36 mol % MgCO3), and proto-dolomite (weakly ordered, > 36 mol % MgCO3). Carbonate mineral formation seems to proceed from amorphous or disordered phases towards the more stable and ordered forms (Asta et al., 2020); crystal growth and morphologies are controlled by the medium composition, the microbial extracellular polymeric substances (EPS), the Mg/Ca ratio, and the presence of other ions. Thus, the mineral phases resulting from abiogenic or biogenic activity are indistinguishable, and the identification of irrefutable biosignatures, evidence of past or present life, is still lacking (Changela et al., 2021; Javaux, 2019).
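To keep the compositional boundaries above straight, the following minimal sketch (ours, not from the cited studies) encodes them as a simple classifier. The threshold values come from the text; the function name and the low-Mg case below 4 mol % are illustrative assumptions, and in practice a real phase assignment also requires XRD evidence of cation ordering.

```python
# Sketch of the Mg-carbonate phase boundaries quoted in the text above.

def classify_mg_carbonate(mol_pct_mgco3: float, ordering: str) -> str:
    """Classify a Ca-Mg carbonate from its MgCO3 content (mol %) and its
    degree of cation ordering ('disordered', 'weak', or 'ordered')."""
    if mol_pct_mgco3 < 4:
        return "low-Mg calcite"  # assumption: compositions below the 4 mol % bound
    if mol_pct_mgco3 <= 36:
        return "high-Mg calcite (disordered)"
    # > 36 mol % MgCO3: dolomite-like compositions, distinguished by ordering
    return {
        "disordered": "disordered dolomite",
        "weak": "proto-dolomite (weakly ordered)",
        "ordered": "dolomite (ordered CaMg(CO3)2)",
    }.get(ordering, "unclassified")

if __name__ == "__main__":
    print(classify_mg_carbonate(20, "disordered"))  # high-Mg calcite (disordered)
    print(classify_mg_carbonate(45, "weak"))        # proto-dolomite (weakly ordered)
    print(classify_mg_carbonate(50, "ordered"))     # dolomite (ordered CaMg(CO3)2)
```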
Carbonate rocks, being mostly of biogenic origin, could be considered in this perspective as a possible biosignature; but even if specific analyses can discriminate between biogenic and abiogenic carbonate rocks (Blanco et al., 2013), attempts to unequivocally distinguish between carbonates of biogenic and abiogenic origin remain inconclusive, especially in the search for evidence of life on other planets. So far, the presence of organic materials in carbonate has not been an incontrovertible indicator of biogenicity (Berg et al., 2014).
Support for calcium carbonate minerals as possible biosignatures comes from the study of a secondary calcite deposit, called moonmilk, formed by nanofibres of calcite and commonly found on karst cave surfaces. Calcium carbonate deposits consisting of thin crystal fibres have been observed in various vadose environments (soils, karstic caves, and other hypogeal spaces), and an exhaustive classification of calcite fibre morphology is presented in Cailleau et al. (2009). Although widely studied, the origin of such deposits is still a debated matter, being attributed to physicochemical processes (Kolesàr and Čurlik, 2015; Jones and Peng, 2014) or to various biogenic processes (Cañaveras et al., 2006; Millière et al., 2019). In the past, inorganic precipitation mechanisms were thought to be either controlled by climatic variations, as in the case of the Caverne de l'Ours (Lacelle et al., 2004) in Canada, or driven by the heterogeneous nucleation of calcite from supersaturated fluids, as in the Staloti cave (Borsato et al., 2000). More recently, many authors have discussed the biogenicity of moonmilk (Baskar et al., 2011; Braissant et al., 2012; Cacchio et al., 2014; Kondratyeva et al., 2020; Maciejewska et al., 2017; Portillo and Gonzalez, 2011), reporting that moonmilk precipitation is promoted by the metabolic activities of a microbial community living in calcium-rich environments (Banks et al., 2010; Cailleau et al., 2009; Cirigliano et al., 2018; Portillo et al., 2011; Compière et al., 2017). Nevertheless, direct evidence that bacteria promote the precipitation of nanofibres of calcite is still lacking. This evidence is essential to define moonmilk as a bona fide biosignature.
Recently, the moonmilk speleothem was discovered in the hypogeal ancient Etruscan tombs of the Monterozzi necropolis (Tarquinia, Central Italy) (Cirigliano et al., 2018; Tomassetti et al., 2017). This finding provides a unique opportunity to compare moonmilk collected from the walls and ceilings of 12 tombs excavated in two types of rock, calcarenite and hybrid sandstone. We previously provided insight into the formation of moonmilk, which can occur rapidly (i.e. between 10 and 50 years), and reported that this speleothem originates from, and harbours, a microbial community able to induce carbonate precipitation (Cirigliano et al., 2021a, b). Here, we propose that the nanofibre calcite deposit (moonmilk) developing inside the rocks (calcarenite and sandstone) is promoted by a microbial community and by the physicochemical features of the host rock. We present an example of an ongoing symbiotic co-evolution between rocks and microorganisms: the moonmilk contributes to the evolution of the calcium-carbonate-rich rock, while the physicochemical features of the host rock shape the resident microbial community, which induces the moonmilk deposition.
Site description and sampling
Samples representative of the bedrock were collected from the ancient Etruscan necropolis of Tarquinia, a UNESCO World Heritage Site (Viterbo, Italy), in which more than 200 painted hypogeal tombs (dated from the 7th to the 2nd century BC) have been discovered. The tombs were excavated in a sedimentary bedrock belonging to a middle- to upper-Pliocene formation known as macco, consisting of yellowish bioclastic calcarenites interbedded with hybrid sandstones (Fig. S1 in the Supplement shows the map of the necropolis in Tarquinia and the sampling locations). From each location, rock samples were collected, kept in plastic bags on ice, and transported to the laboratory for analysis. Using a sterile hammer and chisel, the surface material of outdoor and indoor rocks was first removed to a depth of 3 to 5 cm. The interior of the rock samples was processed for DNA extraction or geological experiments.
Media and growth condition for calcium carbonate organomineralisation
To study organomineralisation in the laboratory, bacterial strain cultures were maintained in LB (Luria-Bertani) liquid medium (1 % Bacto Tryptone, 0.5 % yeast extract, 0.5 % NaCl, and 0.1 % of 1 N NaOH). The carbonatogenic activity of bacterial strains was assessed on plates of the solid complete medium YPDuc, containing 1 % Bacto Peptone, 1 % yeast extract, 2 % glucose, 4 % urea, 2.5 % CaCl2, and 2 % agar, adjusted to pH 8.0. To assess the carbonatogenic activity of the microbial community, 0.1 g of crushed calcarenite was inoculated in BPuc medium (0.72 % Bacto Peptone, 4 % urea, and 2.5 % CaCl2, pH 8.0) for 1 week at 28 °C with shaking at 160 rpm. The experiments were performed in three technical and three biological replicates. These media induce the ureolytic metabolism of the microorganisms, resulting in fast organomineralisation on plates or in liquid medium.
DNA extraction procedures and sequencing of rock and moonmilk samples
For each sample, rock material was collected aseptically using a sterile rock hammer or chisel and stored in sterile collection bags. All samples intended for DNA extraction were collected by discarding the top 3-5 cm layer and then crushing the inner part with a sterile rock hammer and further reducing it to a powder by grinding with a sterile mortar and pestle. Genomic DNA extraction was performed using the DNeasy PowerMax Soil Kit (QIAGEN) following the manufacturer's protocol, using about 10 g of collected material. Spectrophotometric quantification was performed using a Thermo Scientific NanoDrop spectrophotometer (Thermo Scientific), and DNA purity was assessed through the evaluation of the 260/280 and 260/230 absorbance ratios. PCR amplification was performed on about 50 ng of DNA from each sample, as described by Grottoli et al. (2020).
Petrophysical and geochemical characterisation of rock samples
Density and porosity were measured using an Ultrapyc 5000 helium pycnometer from Anton Paar, with an accuracy of 0.02 % and a repeatability of 0.01 %. Bulk density was obtained by dividing the dry mass of the sample by its total volume. Grain density resulted from the mass/measured-volume ratio of the pulverised matrix. Both total and effective (open) porosity were measured. The latter was obtained by dividing the difference between the geometric volume and the volume measured by the pycnometer by the geometric volume of the sample (Ruggieri and Trippetta, 2020; Trippetta et al., 2020). All laboratory measurements were made in the Earthquake Physics Laboratory at the Sapienza Earth Sciences Department. Bulk-rock major- and trace-element compositions were obtained by lithium metaborate-tetraborate fusion followed by inductively coupled plasma-atomic emission spectroscopy (ICP-AES) and ICP-mass spectrometry (ICP-MS) at Activation Laboratories Inc. (Ontario, Canada), according to the 4Litho code package, on solutions prepared by lithium metaborate fusion. Loss on ignition (LOI) was measured according to standard gravimetric procedures. Details on the precision and accuracy of the analyses are reported at https://actlabs.com/ (last access: 22 November 2022). The calcium carbonate content was assessed by gasometric measurements on 1 g of bulk sediment using a Dietrich-Frühling calcimeter, following Siesser et al. (1971).
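As a worked illustration of the density and porosity definitions above, here is a minimal sketch in Python. Only the formulas follow the text; the variable names and the numerical values in the example are invented for illustration.

```python
# Density and porosity calculations as defined in the methods above.

def bulk_density(dry_mass_g: float, total_volume_cm3: float) -> float:
    # Bulk density = dry mass of the sample / its total (geometric) volume.
    return dry_mass_g / total_volume_cm3

def grain_density(mass_g: float, measured_volume_cm3: float) -> float:
    # Grain density = mass / pycnometer-measured volume of the pulverised matrix.
    return mass_g / measured_volume_cm3

def total_porosity(rho_bulk: float, rho_grain: float) -> float:
    # Fraction of the total volume not occupied by solid grains.
    return 1.0 - rho_bulk / rho_grain

def effective_porosity(geometric_volume_cm3: float, pycnometer_volume_cm3: float) -> float:
    # Open (connected) pore space, accessible to the helium gas.
    return (geometric_volume_cm3 - pycnometer_volume_cm3) / geometric_volume_cm3

if __name__ == "__main__":
    rho_b = bulk_density(dry_mass_g=154.0, total_volume_cm3=100.0)  # 1.54 g/cm3
    rho_g = grain_density(mass_g=27.1, measured_volume_cm3=10.0)    # 2.71 g/cm3 (calcite-like)
    print(f"total porosity: {total_porosity(rho_b, rho_g):.2%}")    # ~43 %, as reported below
    print(f"effective porosity: {effective_porosity(100.0, 57.0):.2%}")
```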
Scanning electron microscopy analysis
Scanning electron microscopy (SEM) was performed on the moonmilk samples and on the rock thin sections using a field emission scanning electron microscope (FESEM, Zeiss Auriga 405), with a chamber pressure of about 10^-5 to 10^-6 mbar. Before mounting the samples inside the microscope, the specimens were coated with 20 nm of chromium using a Quorum Q150T sputter coater. Chromium has a high X-ray Kα value (5.145 keV), so it does not interfere with lighter elements during energy-dispersive X-ray (EDX) analysis. EDX spectra were obtained using a Bruker QUANTAX detector in point mode for 30 s, with the electron microscope acceleration voltage set at 10 kV and a working distance of 6 mm, to optimise the number of incoming X-ray signals.
Results and discussion
Lithology, mineralogy, and geochemical characterisation of the bedrock
The area of the Monterozzi necropolis offers a favourable chance to investigate the role of the bedrock in the genesis of the moonmilk. To this purpose, we analysed samples of bedrock taken from different areas of the necropolis, with special focus on the sites where the Tomba dei Vasi Dipinti, Tomba Maggi 2, Tomba delle Pantere, and Tomba degli Scudi were carved (Fig. S1). The hypogeal tombs of the Monterozzi necropolis, located on a flat elevated area, are excavated within a sedimentary substrate of middle- to upper-Pliocene age known as the macco formation, consisting of two main lithofacies showing lateral and vertical heterotopic relationships. The macco s.s. lithofacies is a bioclastic calcarenite represented by packstone to rudstone and floatstone composed of a small volume (< 5 %-10 %) of micrite matrix, coralline algal branches, bryozoans, bivalves (pectinids and oysters), echinoids, benthic foraminifers, and skeletal debris. Rare non-carbonate grains are present. On the inner walls of the intergranular voids, microsparite cement precipitation and/or recrystallisation often occurred in a phreatic marine environment. The second lithofacies is represented by a poorly cemented, crudely stratified, hybrid sandstone. It is fine- to medium-grained and grain-supported, with a small amount of micrite matrix (< 10 % vol), and nearly devoid of carbonate cement. This lithofacies is characterised by abundant bioturbation; the skeletal assemblage is dominated by small benthic foraminifers, echinoids, serpulids, and bivalves (mainly oysters). Planktonic foraminifers are common. The terrigenous fraction mainly consists of monocrystalline grains of quartz, sedimentary lithoclasts, and subordinate detrital micas and feldspar, along with rare glaucony grains and opaques. Calcimetric analyses of the two lithofacies revealed that calcium carbonate contents range from 90 % to 98 % in the macco s.s. calcarenite and from 49 % to 59 % in the hybrid sandstone.
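The gasometric principle behind these calcimetric values can be illustrated with a back-of-the-envelope sketch. The acid-digestion stoichiometry (one mole of CO2 evolved per mole of CaCO3) and the ideal gas law are standard; the temperature, pressure, and CO2 volumes below are assumed example values, not measurements from this study.

```python
# Converting a calcimeter CO2 volume into a CaCO3 weight fraction.

R = 8.314          # gas constant, J mol-1 K-1
M_CACO3 = 100.09   # molar mass of CaCO3, g mol-1

def caco3_weight_percent(co2_volume_ml: float, sample_mass_g: float,
                         temperature_k: float = 298.15,
                         pressure_pa: float = 101325.0) -> float:
    """CaCO3 + 2 HCl -> CaCl2 + H2O + CO2: one mole of CO2 per mole of CaCO3."""
    n_co2 = pressure_pa * (co2_volume_ml * 1e-6) / (R * temperature_k)  # mol
    return 100.0 * n_co2 * M_CACO3 / sample_mass_g

if __name__ == "__main__":
    # ~230 mL CO2 from 1 g corresponds to a nearly pure calcarenite (~94 %),
    # while ~130 mL falls in the hybrid sandstone range (~53 %).
    print(f"calcarenite-like: {caco3_weight_percent(230.0, 1.0):.1f} % CaCO3")
    print(f"sandstone-like:   {caco3_weight_percent(130.0, 1.0):.1f} % CaCO3")
```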
Moreover, whole-rock major-oxide and trace-element compositions highlight the geochemical difference between the two lithofacies (Table 1), mostly related to the higher proportion of terrigenous fraction in the hybrid sandstone. Helium pycnometry revealed a high open porosity in both lithofacies (about 43 % for the macco s.s. and about 42 % for the hybrid sandstone). The macco s.s. lithofacies shows a dominance of vuggy porosity and abundant intraparticle, interparticle, and mouldic porosity. The main porosity of the hybrid sandstone is represented by interparticle porosity and, rarely, by mouldic porosity. Thus, despite the differences in composition, the calcarenite and the hybrid sandstone share two essential characteristics required for moonmilk formation: a high calcium content, which activates the microbial metabolism leading to organomineralisation, and a high porosity, necessary for the exchange of fluids and nutrients in an oligotrophic environment and providing space for microbial colonisation. Notably, the characteristics of the bedrock porosity (vuggy and mouldic) indicate that dissolution processes prevail over those of inorganic carbonate precipitation; indeed, meteoric cements as well as speleothems in the largest cavities are absent. Moreover, the bedrock in which the tombs are carved is located in the shallow vadose zone (a few metres below the surface). High-magnesium calcite, the dominant mineralogy of the main skeletal components (the coralline algae), is a metastable phase of calcite. Upon exposure to meteoric water, this phase partially dissolves, increasing the availability of Ca2+ for microbial metabolism and for biogenic carbonate precipitation. It is notable that the area under investigation lies on a high, flat relief, where the infiltrated water mostly derives from rainfall, without any groundwater input. Consequently, the infiltrated water can reasonably be assumed to be undersaturated with respect to calcite.
First report of the inner location of moonmilk
Biogenic moonmilk deposits develop within both of the distinct lithofacies constituting the bedrock of the Etruscan necropolis of Monterozzi (Mura et al., 2021). Figure 1a shows an example of the hypogeal walls of tombs carved in calcarenite and hybrid sandstone, and the walls covered by moonmilk in the Tomba degli Scudi and Tomba Maggi 2 (Fig. 1b). The moonmilk layer originating from hybrid sandstone is thinner than the one observed on a calcarenite substrate, but scanning electron micrographs of the moonmilk sampled from the walls of the Tomba degli Scudi and Tomba Maggi 2 showed the same nanofibre structure (Fig. 1c). X-ray powder diffraction (XRD) analysis revealed that the moonmilk is composed of calcite (Mura et al., 2020). So far, moonmilk has only been considered a deposit covering rock surfaces (Borsato et al., 2000), but the analysis of thin sections with transmitted polarised light microscopy showed that moonmilk is present inside the calcarenite, in its vuggy and mouldic pores, in the Tomba Maggi 2 (Fig. 2a) and in the intergranular and mouldic pores of the hybrid sandstone bedrock collected inside the Tomba degli Scudi (Fig. 2b). The presence of moonmilk inside the rocks is a general phenomenon, because it is observed in all samples, irrespective of the type of rock, calcarenite or hybrid sandstone, and of the collection site (outdoor or indoor) (Figs. S2, S3, and S4). These results are also supported by the discovery of moonmilk deep inside a calcarenite rock sampled at the entrance of the Tomba dei Vasi Dipinti (Fig. S5). The analysis of the rock substrate sampled outside of the Tomba dei Vasi Dipinti also suggests that the moonmilk may be contributing to authigenic carbonate growth in the host rock, covering the inner walls of the voids (Fig. 3).
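The undersaturation inference above can be made concrete with the calcite saturation index, SI = log10(a(Ca2+) a(CO3 2-)/Ksp). In the sketch below, the Ksp value (a commonly used figure for calcite at 25 °C) and the example ion activities are illustrative assumptions, not data from the site; dilute, rain-derived infiltration water typically has very low carbonate ion activity and therefore SI < 0.

```python
# Minimal calcite saturation-index sketch (all inputs are assumptions).
import math

KSP_CALCITE_25C = 10.0 ** -8.48  # assumed Ksp for calcite at 25 degC

def saturation_index(a_ca: float, a_co3: float, ksp: float = KSP_CALCITE_25C) -> float:
    # SI < 0: undersaturated (dissolution favoured); SI > 0: supersaturated.
    return math.log10(a_ca * a_co3 / ksp)

if __name__ == "__main__":
    # Illustrative activities (mol/L) for dilute rainfall-fed vadose water:
    si = saturation_index(a_ca=5e-4, a_co3=1e-6)
    print(f"SI = {si:.2f} -> {'undersaturated' if si < 0 else 'supersaturated'}")
```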
Co-evolution of rocks and microorganisms
If the moonmilk observed inside the rocks (a location not reported before) is of biogenic origin, traces of organomineralisation would be expected. Indeed, SEM analysis of the thin sections of calcarenite sampled outside of the Tomba dei Vasi Dipinti revealed many structures (nanofibres that originate from bacteria encased in a calcite structure) corresponding to organomineralisation (Figs. 4 and S6). Bacterial organomineralisation was also detected in the sandstone sampled inside the Tomba degli Scudi (Fig. S7) and in calcarenite sampled outside of the Tomba delle Pantere and inside the Tomba Maggi 2 (Fig. S8), suggesting that this is a common phenomenon. Such bacterial organomineralisation is also known as "entombment" (Barton and Northup, 2007), and it is easily observed in laboratory settings when bacterial strains are subjected to environmental conditions favouring calcium carbonate precipitation (Fig. S9a, b, c). On plates, the precipitation of calcium carbonate occurs even at a considerable distance from the bacterial colony, possibly through the diffusion of extracellular enzymes known to be involved in calcium carbonate metabolism (Dhami et al., 2014; Rodriguez-Navarro et al., 2019) (Fig. S9d).
It remains unclear how the moonmilk nanofibres are produced under natural environmental conditions, because to date it has been impossible to reproduce their formation in the laboratory. In fact, bacterial strains cultured from rocks represent only a negligible fraction of the total microorganisms present in the rocks. Instead, the entire microbial community, with a metabolism that sustains growth in the rock environment, is needed to precipitate and/or dissolve calcium carbonate. Indeed, under laboratory conditions, we have shown that ground calcarenite, carrying its entire microbial community, produces calcite when incubated in a medium containing urea and CaCl2. Under the same conditions, calcium carbonate is not produced by sterile (autoclaved) ground calcarenite (Fig. S10) (Benedetti et al., 2023). These and previous results (Banerjee and Joshi, 2014) showed that inactivated (dead) cells are unable to precipitate CaCO3 in the laboratory, suggesting that cells need to be metabolically active for calcification and that cell structure alone is not sufficient to promote bioprecipitation. The most studied bacterial metabolism for CaCO3 precipitation is the ureolytic pathway. This process involves the hydrolysis of urea to ammonia (NH3) and carbamate (NH2COOH), which spontaneously hydrolyses to form a second molecule of ammonia and carbonic acid (H2CO3). These products react with water to form carbonate (CO3 2-), ammonium (NH4 +), and hydroxyl (OH-) ions, finally resulting in an increase of pH. In an alkaline environment, the presence of calcium ions and of the bacterial cells, which act as nucleation sites, allows the precipitation of calcium carbonate (Anbu et al., 2016; Hammes et al., 2002; Nigro et al., 2022). Thus, it is not the mere presence of the bacterium, as a structure, that drives precipitation; high pH, active metabolism, and a negative membrane charge are also required. Under laboratory conditions (Fig. S9), if urea and calcium are present in the medium, the mechanism is stimulated and accelerated.
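The ureolytic cascade just described can be summarised by the following reaction scheme, which restates the steps in the text (the explicit stoichiometry is our addition):

```latex
\begin{align*}
\mathrm{CO(NH_2)_2 + H_2O} &\xrightarrow{\text{urease}} \mathrm{NH_3 + NH_2COOH} \\
\mathrm{NH_2COOH + H_2O} &\longrightarrow \mathrm{NH_3 + H_2CO_3} \\
\mathrm{2\,NH_3 + 2\,H_2O} &\rightleftharpoons \mathrm{2\,NH_4^+ + 2\,OH^-} \quad (\text{pH rises}) \\
\mathrm{H_2CO_3 + 2\,OH^-} &\rightleftharpoons \mathrm{CO_3^{2-} + 2\,H_2O} \\
\mathrm{Ca^{2+} + CO_3^{2-}} &\xrightarrow{\text{cell as nucleation site}} \mathrm{CaCO_3\!\downarrow}
\end{align*}
```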
In natural environments, bacteria are centres of nucleation for nanofibre formation. They do not control the mineralisation process directly but induce the precipitation of calcium carbonate by changing the chemistry of the environment as a consequence of their metabolic activity, while also serving as nuclei for crystallisation. This mechanism results from the pH increase, a direct effect of the negatively charged bacterial surface, and the presence of a metabolic ureolytic process (Omoregie et al., 2021).
Overall, our results underscore the role of microorganisms in promoting moonmilk deposition, contributing to rock formation processes. Nevertheless, to propose moonmilk as part of a geological process, the microbial communities of the rocks and those contributing to moonmilk deposition should have a similar composition. Aiming to identify the rock microbial communities, samples from calcarenite and hybrid sandstone were analysed together with the corresponding moonmilk samples from the Tomba Maggi 2 and Tomba degli Scudi. The results of the 16S small subunit (SSU) rRNA amplicon sequencing showed a high abundance of Actinobacteria, Bacteroidetes, Cyanobacteria, Firmicutes, and Proteobacteria (Fig. 5 and additional data). Of note, the Firmicutes phylum is abundant, and several of its members, such as the Lysinibacillus genus, have extremely high urease activity and therefore greatly enhance carbonate precipitation (Banerjee and Joshi, 2014; Benedetti et al., 2023; Zhu and Dittrich, 2016; see also Fig. S9d).
Bacterial community diversity was measured using the inverse Simpson and Shannon indices for the moonmilk (Tomba degli Scudi and Tomba Maggi 2) and the rocks (calcarenite and hybrid sandstone). The indices do not show any significant differences between the samples (Mann-Whitney test, P > 0.05) (Fig. S11). These results show that the microbial composition of moonmilk and rocks is similar, irrespective of rock type (calcarenite or hybrid sandstone) or of the environment where the samples were collected (outdoor or indoor). It should be noted that 16S SSU rRNA analysis does not provide information about metabolic activity; thus, these data do not identify the microorganisms that are active in CaCO3 deposition, but the overall data demonstrate that the endolithic community of the rocks promotes moonmilk deposition. The results presented also revealed the presence of organomineralisation and of calcite nanofibres that originate from bacterial entombment, not only on the surface but also inside the rocks. The resident microbial community found deep within the rocks has possibly evolved with the rocks through geological time. Therefore, no habitat should be considered extreme for its resident microbial community, and rocks should not be considered a "refuge" for escaping extreme environmental conditions. Biological research should focus on microbial community evolution with respect to the geological substrate in which the community lives, considering the natural co-evolution of microbes and rocks, and should refrain from viewing microbial metabolism merely as an adaptation to adverse environmental conditions.
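For reference, the two diversity metrics used here are straightforward to compute from taxon counts: Shannon H' = -sum(p_i ln p_i) and inverse Simpson 1/D = 1/sum(p_i^2), where p_i are relative abundances. The sketch below shows both, with invented example counts rather than data from this study.

```python
# Shannon and inverse Simpson diversity indices from raw abundance counts.
import math

def shannon(counts):
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def inverse_simpson(counts):
    total = sum(counts)
    return 1.0 / sum((c / total) ** 2 for c in counts)

if __name__ == "__main__":
    moonmilk_like = [30, 25, 20, 15, 10]  # hypothetical phylum-level counts
    rock_like = [28, 27, 18, 16, 11]
    for name, c in (("moonmilk", moonmilk_like), ("rock", rock_like)):
        print(f"{name}: H' = {shannon(c):.3f}, 1/D = {inverse_simpson(c):.3f}")
```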
Moonmilk as a biosignature
The search for traces of extra-terrestrial life is a complex task often ending with inconclusive results. The co-evolution of minerals and microorganisms has implications for the quest for evidence of life on other planets. The discovery of minerals of undisputable biological origin, rather than organic remains, may provide the most robust signs of biological activity (Hazen et al., 2008). The co-evolution of life and minerals throughout Earth's history lays the foundation for an inclusive search for the presence of life, not only because rocks arose from life but also because life itself may have formed from rocks (Bizzarri et al., 2021; Marshall, 2020; Saladino et al., 2018). Thousands of Earth's minerals owe their existence to the development of life on the planet, and the calcium carbonate phases that are massively produced on Earth by microorganisms are the best example (Hazen et al., 2008).
In this work, we have focused on the calcium carbonate nanofibres (moonmilk). Given the tight association and co-evolution between rocks and microbial communities that results in the observed organomineralisation, calcite nanofibres are of interest in the field of astrobiology and are considered to be a potential sign of life. Moonmilk production contributed to rock formation by filling the pores and cracks in the rocks, while the rock composition and porosity shaped a microbial community that copes with the high calcium content by producing calcite nanofibres. Moonmilk is mainly found in karst caves, but there are also examples of moonmilk bioprecipitation in hypogeal environments carved in different geological substrates, such as granitoid rocks or sandstone (Miller et al., 2018; Saladino et al., 2018). Moonmilk has also been found in lava tubes, where the microbial communities are similar to those present in the moonmilk that originated from calcarenite (Gonzalez-Pimentel et al., 2021; Miller et al., 2020), raising the possibility of positing moonmilk as a biosignature beyond the Earth's calcium carbonate rocks.
Acknowledgements. We thank [...] for the careful and professional preparation of thin sections, and John Eduard Hallsworth for the discussion and help while preparing the paper. We deeply thank Pierre Zalloua for paper revision and scientific advice. Angela Cirigliano was awarded the grant Regione Lazio PR FSE 2021-2027. This work was supported by Ateneo Sapienza, 2021 (Sara Ronca).
Financial support. This research has been supported by Ateneo Sapienza 2021 and DTC Lazio PERGAMO Project 305-2020-35549.
Review statement. This paper was edited by Chiara Borrelli and reviewed by two anonymous referees.
Figure 1. In Tarquinia, Italy, during the Iron Age, the ancient Etruscans carved hypogeal tombs into calcarenite and hybrid sandstone bedrock, whose walls are covered with moonmilk, a secondary speleothem. (a) Examples of hypogeal walls of tombs carved in hybrid sandstone and calcarenite; the absence of the moonmilk is due to restoration interventions. (b) Through the centuries, the moonmilk speleothem precipitated as a white patina on the walls and ceilings of the tombs: as an example, the moonmilk-covered walls of the Tomba degli Scudi and the Tomba Maggi 2 are shown, carved in hybrid sandstone and calcarenite, respectively. (c) Scanning electron micrographs of the moonmilk sampled from the walls shown in panel (b) in the Tomba degli Scudi and the Tomba Maggi 2.
Regardless of the rock substrate in which the moonmilk is formed, the structure of the nanofibres is similar.
Figure 2. The moonmilk is present on the surface and inside the calcarenite and sandstone rocks. Optical microscope thin-section micrographs in cross-polarised transmitted light: (a) a calcarenite sample collected inside the Tomba Maggi 2 and (b) a hybrid sandstone sample collected inside the Tomba degli Scudi.
Figure 3. The moonmilk contributes to the lithogenic processes. Optical microscope thin-section micrographs (cross-polarised transmitted light) of the moonmilk speleothems grown into the calcarenite sampled outside of the Tomba dei Vasi Dipinti.
Figure 4. Bacterial organomineralisation in the calcarenite. Scanning electron micrograph of a thin section of the calcarenite sampled outside of the Tomba dei Vasi Dipinti.
Figure 5. Phyla present in the microbial communities from the moonmilk samples of the Tomba Maggi 2 and the Tomba degli Scudi and from their corresponding rocks (calcarenite and hybrid sandstone). The histogram shows the phylum-level relative abundance (%) for the analysed samples. Community structure was determined by targeted amplicon sequencing of bacterial 16S rRNA genes. All samples show a high abundance of Actinobacteria, Bacteroidetes, Cyanobacteria, Firmicutes, and Proteobacteria.
2023-07-21T01:07:24.939Z
2023-10-09T00:00:00.000
{ "year": 2023, "sha1": "b331c27ef66dabd1ef9862b8561da252e4340ee4", "oa_license": "CCBY", "oa_url": "https://bg.copernicus.org/articles/20/4135/2023/bg-20-4135-2023.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3b319fab5eb5a420c0669c6c9dc857dc7ff86eba", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
2596381
pes2o/s2orc
v3-fos-license
Intrathoracic Ectopic Liver in a Cow
ABSTRACT A solitary spherical mass was found in the caudal part of the cranial lobe of the left lung of a 28-month-old Japanese Black cow. The mass was circumscribed, embedded in the lung parenchyma and not connected to the liver or diaphragm. Histologically, the mass comprised hepatocytes, portal structures consisting of interlobular bile ducts, interlobular arteries and interlobular veins, and central veins. Based on the histological findings, a diagnosis of intrathoracic ectopic liver was made. Considering the absence of any previous history of traumatic diaphragmatic hernia or surgery, the mass might have resulted from a congenital abnormality. To our knowledge, this is the first report of intrathoracic ectopic liver in a cow that might have resulted from a congenital abnormality.
An ectopic liver is generally asymptomatic and an incidental finding discovered during surgery or necropsy [9]. In humans, an ectopic liver is often located adjacent to the pancreas [6], gallbladder [8], spleen [4] or native liver [1]. However, very rarely, it is found in the thoracic cavity [2]. In veterinary medicine, intrathoracic ectopic liver caused by traumatic injury has previously been reported in a cat [5], but no reports have described intrathoracic ectopic liver in cattle. This study describes an intrathoracic ectopic liver in a cow that might have resulted from a congenital abnormality.
A 28-month-old Japanese Black cow weighing 794 kg was brought to a meat inspection center in good condition. The cow had no previous history of traumatic diaphragmatic hernia or surgery and showed no clinical abnormalities before being submitted for meat inspection. On gross inspection, a solitary spherical mass (6 × 6 × 5 cm) was found in the caudal part of the cranial lobe of the left lung, enveloped by the serous membrane and embedded in the lung parenchyma. The gross cut surface of the formalin-fixed mass was elastic, firm, well-circumscribed and brown in color (Fig. 1). The mass was not connected to the liver or diaphragm. No gross abnormalities were evident in any other organ. Histologically, the mass was circumscribed, surrounded by fibrous connective tissue and separated from the lung tissue (Fig. 2). It comprised sheets of polygonal hepatocytes arranged uniformly and radially (Fig. 3). Portal areas consisting of interlobular bile ducts, interlobular arteries and interlobular veins were present. Central veins were also seen at the center of the hepatic lobules. The histological structures and streams of the interlobular arteries, interlobular veins, central veins and sublobular veins showed no significant differences in comparison to normal bovine liver tissues. No blood vessels could be seen connecting the mass with the lung tissue. Hall's method revealed no bile production. Watanabe's silver impregnation revealed clear hepatic cords and abundant reticular fibers lining the sinusoids. Immunohistochemically, hepatocytes were positive for anti-hepatocyte monoclonal antibody and negative for anti-cytokeratin monoclonal antibody (Fig. 4). Interlobular bile ducts were positive for anti-cytokeratin monoclonal antibody and negative for anti-hepatocyte monoclonal antibody. These results were the same as in the normal bovine liver tissues used as positive controls. Based on the histological findings, intrathoracic ectopic liver was diagnosed. Intrathoracic ectopic liver is a rare finding in humans and other animals [1,4,5].
The ectopic liver typically consists of histologically normal liver tissue in humans [2,6,8]. In this study, the mass likewise comprised histologically normal liver tissue. However, ectopic liver has a higher neoplastic potential than the native liver [1,3]. The anti-hepatocyte monoclonal antibody is a highly specific and sensitive marker for human and canine hepatocytes (including normal, hyperplastic and neoplastic hepatocytes); hepatocytes show granular and diffuse intracytoplasmic immunoreactivity, whereas the bile duct epithelia do not react [7]. Similarly, in this case and in the normal bovine liver tissues used as positive controls, hepatocytes were immunohistochemically positive for the anti-hepatocyte monoclonal antibody, and interlobular bile ducts were negative. According to one previous report on human intrathoracic ectopic liver [9], the intrathoracic ectopic liver had an autonomous vascular supply and a biliary system draining into other organs or no apparent drainage system. In another previous report on human intrathoracic ectopic liver [8], the vascular supply to the mass seemed to come from the aorta during surgery; in the present case, however, no vascular supply to the mass could be recognized during meat inspection, and the histological findings did not show any vascular supply from the lung tissue. Given the long-term survival of the mass, which comprised histologically normal liver tissue, the mass must have received a vascular supply from the surrounding tissues or blood vessels. However, the vascular supply to the mass in this case remains uncertain because clinical examinations, such as ultrasonographic techniques and angiography, were not performed. Additionally, considering the absence of histological findings suggesting bile production, no biliary drainage system seemed to be present in this case. Although the histological findings were similar to those of the native liver, the intrathoracic ectopic liver in this case might not have performed functions similar to those of the native liver. The possible mechanisms of the development of an intrathoracic ectopic liver in humans may be a congenital abnormality, as an abnormal development of both the liver and diaphragm during the embryonic period, acquisition secondary to traumatic diaphragmatic hernia, and acquisition secondary to hematogenous dissemination of liver tissue following a heart transplantation procedure [6,9]. One of the possible mechanisms reported previously for human intrathoracic ectopic liver resulting from a congenital abnormality is the development of another liver bud, independent of the main hepatic diverticulum, that remains sequestered in the thoracic cavity without connection to the native liver [4,9]. In the present case, considering the necropsy findings and previous history, the mass might have resulted from a congenital abnormality, and the above-mentioned mechanism might explain the pathogenesis of the present case. In conclusion, a solitary mass found in the lung of a cow was diagnosed as an intrathoracic ectopic liver.
To our knowledge, this is the first report of an intrathoracic ectopic liver in cattle that might have resulted from a congenital abnormality. In this case of intrathoracic ectopic liver, the anatomical site and appearance of the solitary mass embedded in the lung were highly characteristic. Because intrathoracic ectopic liver is also a rare finding in veterinary medicine, this case report may provide useful information for the correct diagnosis of intrathoracic ectopic liver in cattle, which can be difficult during necropsy or meat inspection.
Fig. 3. The mass comprises hepatocytes and portal structures consisting of interlobular bile ducts, interlobular arteries and interlobular veins. HE. Bar=50 µm.
Fig. 4. Immunohistochemistry of hepatocytes and interlobular bile ducts using anti-hepatocyte monoclonal antibody. Hepatocytes are positive, and interlobular bile ducts are negative. Mayer's hematoxylin counterstaining. Bar=50 µm.
2017-07-15T00:02:53.016Z
2014-01-10T00:00:00.000
{ "year": 2014, "sha1": "f7e45364c66a7f6ad5f4fbfbbca5b8b79bc58a5e", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jvms/76/5/76_13-0532/_pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "f7e45364c66a7f6ad5f4fbfbbca5b8b79bc58a5e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268179185
pes2o/s2orc
v3-fos-license
JAK/STAT3 signaling in cardiac fibrosis: a promising therapeutic target
Cardiac fibrosis is a serious health problem because it is a common pathological change in almost all forms of cardiovascular disease. Cardiac fibrosis is characterized by the transdifferentiation of cardiac fibroblasts (CFs) into cardiac myofibroblasts and the excessive deposition of extracellular matrix (ECM) components produced by activated myofibroblasts, which leads to fibrotic scar formation and subsequent cardiac dysfunction. However, there are currently few effective therapeutic strategies protecting against fibrogenesis, largely because the molecular mechanisms of cardiac fibrosis remain unclear despite extensive research. The Janus kinase/signal transducer and activator of transcription (JAK/STAT) signaling cascade is a ubiquitous intracellular signal transduction pathway that can regulate a wide range of biological processes, including cell proliferation, migration, differentiation, apoptosis, and immune responses. Various upstream mediators, such as cytokines, growth factors and hormones, can initiate signal transmission via this pathway and play corresponding regulatory roles. STAT3 is a crucial player in the JAK/STAT pathway, and its activation is related to inflammation, malignant tumors and autoimmune illnesses. Recently, JAK/STAT3 signaling has been in the spotlight for its role in the occurrence and development of cardiac fibrosis: its activation can promote the proliferation and activation of CFs and the production of ECM proteins, thus leading to cardiac fibrosis. In this manuscript, we discuss the structure, transactivation and regulation of the JAK/STAT3 signaling pathway and review recent progress on the role of this pathway in cardiac fibrosis. Moreover, we summarize the current challenges and opportunities of targeting JAK/STAT3 signaling for the treatment of fibrosis. In summary, the information presented in this article is critical for comprehending the role of the JAK/STAT3 pathway in cardiac fibrosis and will also contribute to future research aimed at the development of effective anti-fibrotic therapeutic strategies targeting JAK/STAT3 signaling.
Introduction
Cardiovascular disease is still the major cause of death globally despite great progress in treatment. Myocardial fibrosis is a common end-stage pathology of most cardiovascular diseases (Rockey et al., 2015). It can destroy the cardiac structure, impair cardiac excitation-contraction coupling, and impede both the contractile and relaxation functions of the heart, thereby promoting the development of cardiovascular disease into heart failure (Gyöngyösi et al., 2017; Nguyen et al., 2017). The severity of cardiac fibrosis correlates with higher long-term mortality in cardiovascular disease, particularly heart failure (Azevedo et al., 2010; Aoki et al., 2011). Because the mechanisms of fibrosis are complex and incompletely elucidated, there is currently no specific anti-fibrotic treatment available for cardiac fibrosis.
The Janus kinase/signal transducer and activator of transcription (JAK/STAT) signaling pathway, a central communication node within cells, plays an essential role in a variety of pathophysiological activities such as cell division, differentiation, immune regulation and tumorigenesis (Zhang J. Q.
et al., 2022). It has been reported that many upstream mediators, including growth factors, hormones, and cytokines, can activate this pathway to exert their biological functions (Darnell et al., 1994; Liu J. et al., 2023). The JAK/STAT pathway consists of three parts: ligand-receptor complexes, JAKs, and the transcription factors STATs. Among the STAT protein family, STAT3 is the most well-studied member, and its activation can play beneficial or detrimental roles in various diseases. On the one hand, STAT3 is highly activated in most cancers and cardiac injuries (Xian et al., 2021; Zhuang et al., 2022) and has been demonstrated to be a pathogenic regulator (Yu and Jove, 2004). On the other hand, STAT3 is also recognized as a protective molecule, and its activation may confer cardioprotection against several cardiovascular diseases, including ischemia and ischemia-reperfusion injury (Negoro et al., 2000; Fuglesteg et al., 2008; Harhous et al., 2019) and cardiac hypertrophy (Enomoto et al., 2015). Recently, accumulating evidence has confirmed a novel profibrotic role of JAK/STAT3 signaling activation in multiple tissues and organs, including the heart (Bao et al., 2020), liver (Ogata et al., 2006), kidney (Zheng et al., 2019), lung (Celada et al., 2018), and skin (Dees et al., 2020). In this regard, the JAK/STAT3 pathway may emerge as a potential therapeutic target for treating fibrotic diseases (Barry et al., 2007). However, a comprehensive summary of the role of JAK/STAT3 signaling in mediating cardiac fibrosis is lacking. In this review, we discuss the structure, transactivation and regulation of the JAK/STAT3 signaling pathway and review current progress on the role of this pathway in cardiac fibrosis, as well as the challenges and opportunities of targeting JAK/STAT3 signaling for the treatment of fibrosis.
The cellular and molecular mechanisms of cardiac fibrosis
Cardiac fibrosis usually occurs when myocardial tissue suffers a pathological stimulus such as ischemia, hypoxia, overload, inflammation or other pathogenic factors. It serves a dual role: it protects myocardial tissue integrity as a normal reparative response during injury, yet persistent and excessive scar formation greatly impairs the heart's systolic and diastolic functions (Leask, 2015). Cardiac fibrosis not only increases ventricular stiffness but also induces the secretion of growth factors and cytokines that promote cardiomyocyte hypertrophy, ultimately leading to a decline in myocardial compliance, heart failure, and even sudden death (Mohammed et al., 2015; Francis Stuart et al., 2016).
Cardiac fibrosis is a common pathological feature of multiple cardiovascular diseases, such as heart failure, hypertension, arrhythmia, cardiomyopathy, and myocardial infarction, and also plays a significant role in their onset and progression (Tao et al., 2014; Chen et al., 2015; Chung et al., 2021; Qi et al., 2022). Cardiac fibrosis manifests as the over-proliferation and differentiation of CFs and the massive accumulation of extracellular matrix (ECM) components in the myocardium, such as fibronectin, type I collagen, and type III collagen (Schafer et al., 2017). Myofibroblasts differentiated from CFs can synthesize contractile proteins such as α-smooth muscle actin (α-SMA), leading to the distortion of tissue and cell structure (Hinz, 2007; Hinz, 2010). On the other hand, myofibroblasts can express excessive amounts of ECM proteins, leading to the substitution of permanent fibrotic scars for normal tissue, increased cardiac stiffness, and varying degrees of cardiac diastolic and systolic dysfunction (Weber, 1989; Cleutjens et al., 1995; Dobaczewski et al., 2006; Liu et al., 2017; Wang et al., 2022b).
The source of myofibroblasts in fibrotic hearts remains a disputed matter. Although some studies indicate that a significant proportion of myofibroblasts may originate from endothelial cells, epithelial cells or hematopoietic fibroblast progenitors (Möllmann et al., 2006; Zeisberg et al., 2007; Aisagbonhi et al., 2011), prevailing evidence confirms that the primary source of myofibroblasts in fibrotic heart tissue is the activation of resident CFs (Ali et al., 2014; Moore-Morris et al., 2014; Kanisicak et al., 2016; Shinde and Frangogiannis, 2017; Moore-Morris et al., 2018). Furthermore, it has been suggested that pericytes could potentially serve as a reservoir of myofibroblasts, but the precise mechanism by which they operate remains uncertain, and there may be an overlap between pericytes and resident fibroblast subsets (Humphreys et al., 2010).
Although the molecular mechanisms involved in cardiac fibrosis are complex and variable, the transformation of CFs into myofibroblasts plays a central role in the process of cardiac fibrosis. Acute cardiac injury initiates a robust inflammatory response. This process involves the infiltration of immune cells into the cardiac tissue, which subsequently release inflammatory cytokines such as transforming growth factor (TGF)-β1, tumor necrosis factor-α (TNF-α) and interleukins (ILs) (Bujak and Frangogiannis, 2007; Christia et al., 2013). These cytokines activate CFs and instigate ECM remodeling through diverse signaling cascades. Concurrently, neurohormones within the renin-angiotensin-aldosterone system (RAAS) and the sympathetic nervous system, particularly angiotensin II (Ang II), aldosterone, and catecholamines, are upregulated (Zou et al., 2004; Ferreira et al., 2016; Azushima et al., 2020). Their activation compels myofibroblasts to ramp up collagen production, culminating in the deposition of fibrotic tissue in the heart, which is a hallmark of cardiac remodeling. Additionally, mechanical stress, often a consequence of increased cardiac afterload in conditions like hypertension or valvular disease, prompts cardiomyocytes and fibroblasts to adapt by modifying their ECM, which alters their size, shape, and function (Li et al., 2018). Moreover, oxidative stress in the cardiac environment, primarily characterized by the overproduction of reactive oxygen species (ROS), inflicts direct cellular damage and fosters inflammation and apoptosis. These effects collectively trigger signaling pathways that exacerbate myocardial fibrosis (Grosche et al., 2018). Lastly, metabolic imbalances, including the production of advanced glycation end-products (AGEs) and lipotoxicity in cardiomyocytes, along with vascular implications like endothelial dysfunction, significantly contribute to the progression of cardiac fibrosis (Huby et al., 2015; Chen et al., 2016; Marciniec et al., 2017).
3 Structure, function, transcriptional activity and regulation of the JAK/STAT3 signaling pathway
Molecular structure of STAT3
In mammals, the STAT family comprises seven cytoplasmic transcription factors: STAT1-STAT4, STAT5a, STAT5b, and STAT6 (Hu et al., 2020b). Among these, STAT3 is the most extensively studied and plays pivotal roles in controlling various cellular biological processes. STAT3 was originally discovered in 1994 through a series of studies on cytokine-induced acute-phase responses of target genes. Unlike deletion of the other family members, global deletion of STAT3 causes embryonic death. The STAT3 protein consists of 770 amino acid residues and, like the other members of the STAT family, can be divided into six distinct functional domains (Figure 1): an NH2-terminal domain (NTD), a coiled-coil domain (CCD), a DNA-binding domain (DBD), a linker domain (LD), an Src homology 2 (SH2) domain, and a COOH-terminal transactivation domain (TAD). Each domain has a specific function (Hu et al., 2021) (Table 1).
STAT3 is expressed widely in different cell types within the heart, such as cardiomyocytes, fibroblasts, immune cells, and endothelial cells. Two isoforms of the STAT3 protein, STAT3α (92 kDa) and STAT3β (83 kDa), are produced through alternative splicing of the same gene. STAT3β is missing the COOH-terminal 55 amino acids, which are replaced by seven distinct amino acid residues (Schaefer et al., 1995; Caldenhoven et al., 1996). Research has shown that while STAT3β is not vital for survival, mice deficient in STAT3α do not survive past birth (Maritano et al., 2004). STAT3α possesses two phosphorylation sites, namely Tyr705 and Ser727, whereas STAT3β possesses only one phosphorylation site, Tyr705. When either Tyr705 or Ser727 is phosphorylated, STAT3 is activated and exerts its function. STAT3 can be activated by more than 50 extracellular ligands, commonly cytokines, hormones, growth factors, and chemokines, such as ILs, interferons, colony-stimulating factors, epidermal growth factor (EGF), and platelet-derived growth factor (PDGF) (Darnell, 1997; Hu et al., 2021). STAT3's biological functions are complicated and diverse, and its main physiological roles under normal conditions are summarized below.
STAT3 is an important intracellular signaling molecule that has multiple functions under normal physiological conditions. These functions include: (1) Regulating the proliferation and differentiation of various cell types by binding to specific DNA sequences and affecting gene expression. For example, STAT3 promotes the proliferation of corneal limbal keratinocytes via a ΔNp63-dependent mechanism, and inhibiting this pathway can increase cell differentiation (Hsueh et al., 2011). STAT3 also mediates megakaryocyte differentiation induced by RAD001 (Su et al., 2013). (2) Regulating the activation, proliferation, and cytokine secretion of immune cells, thereby modulating immune responses and inflammation. For instance, STAT3 inhibition can induce apoptosis and/or activate effective immune responses in colon cancer cells, overcoming cancer-induced immune tolerance (Jahangiri et al., 2020). Likewise, systemic injection of penetrating c-Myc and gp130 peptides can inhibit pancreatic tumor growth and induce anti-tumor immunity (Aftabizadeh et al., 2021). (3) Mediating the expression of inflammation-related genes in response to various cytokines and growth factors. One of the most prominent examples is IL-6, which we will discuss in detail later. (4) Maintaining the self-renewal and differentiation of stem cells by regulating the transcription of target genes. Phosphorylated STAT3 is functionally associated with the expression of self-renewal genes in embryonic stem cells (Bourillot et al., 2009). Moreover, constitutively activated STAT3 can sustain the self-renewal process in the absence of leukemia inhibitory factor (LIF) (Matsuda et al., 1999). (5) Participating in tissue repair and regeneration by modulating cell survival and growth. For instance, transmembrane and ubiquitin-like domain containing 1 (Tmub1) inhibits the phosphorylation and activation of STAT3, impairing liver regeneration in mice after partial hepatectomy (Fu et al., 2019). Conversely, Krüppel-like factor 4 (KLF4) deletion in vivo induces axonal regeneration in adult retinal ganglion cells (RGCs) through the JAK/STAT3 signaling pathway; this regeneration can be further enhanced by removing the endogenous JAK/STAT3 pathway inhibitor SOCS3 (Qin et al., 2013). (6)
Regulating the energy metabolism of cells by influencing the expression of genes related to mitochondrial oxidative phosphorylation. For example, icaritin inhibits the survival and glycolysis of glioblastoma (GBM) cells through the IL-6/STAT3 pathway (Li et al., 2019a). Additionally, STAT3 promotes mitochondrial respiration and reduces the production of ROS in neural precursor cells (Su et al., 2020). (7) Playing an essential role in early embryonic development, as embryos with STAT3 gene defects die in the early stages of development. In humans, LIF and STAT3 are expressed in decidual tissue during early pregnancy. LIF can induce STAT3 phosphorylation in non-decidualized and decidualized human endometrial stromal cells in vitro, suggesting that LIF/STAT3 signaling is involved in human embryo implantation and decidualization (Shuya et al., 2011). Furthermore, conditional ablation of STAT3 in the uterus can result in embryo implantation failure (Lee et al., 2013).
Molecular structure of JAK
In mammals, the JAK family consists of four main members (JAK1-JAK3 and Tyk2), which are non-receptor tyrosine protein kinases (Schindler and Darnell, 1995). JAK1, JAK2, and Tyk2 are broadly expressed, whereas JAK3 is mainly present in cells of the hematopoietic lineage (Speirs et al., 2018). Upon interaction of cytokines or growth factors with their corresponding receptors, JAK tyrosine kinases are activated, thereby facilitating intracellular signal transduction.
The JAK protein is made up of seven homologous regions (JH1-JH7) and includes four functional domains: a tyrosine kinase domain, a pseudokinase domain, an SH2 domain, and an NH2-terminal FERM domain (Four-point-one protein, Ezrin, Radixin, Moesin) (Figure 2) (Banerjee et al., 2017). The carboxy-terminal portion of each JAK includes the catalytic kinase domain (JH1) and the pseudokinase domain (JH2). JH1, containing nearly 250 amino acid residues, is the active phosphotransferase domain needed for the phosphorylation of cytokine receptors and downstream STAT proteins. JH2 is similar to JH1 in structure, but it is generally considered to lack catalytic activity and instead regulates the kinase activity of JH1 (Zhao et al., 2018; Xin et al., 2020); it has been reported, however, that the JH2 of the JAK2 protein exhibits a minimal level of kinase activity (Ungureanu et al., 2011). The N-terminal region of each JAK contains the SH2 (JH3 plus half of JH4) and FERM (JH5-JH7 and one-half of JH4) domains, which collectively facilitate the interaction between JAK proteins and the box1/2 regions of cytokine receptors located near the cell membrane (Saharinen et al., 2000; Wallweber et al., 2014; Hubbard, 2017; Morris et al., 2018; Xin et al., 2020; Raivola et al., 2021).
Figure 1. The domain structure and phosphorylation sites of the STAT3 protein. STAT3 has two splicing isoforms, STAT3α and STAT3β, comprising 770 and 722 amino acids, respectively. STAT3 contains six different functional domains: the NH2-terminal domain, coiled-coil domain, DNA-binding domain, linker domain, SH2 domain, and COOH-terminal transactivation domain (TAD). "Y" denotes a tyrosine phosphorylation site, and "S" denotes a serine phosphorylation site [adapted from Hu et al. (2021)].
Table 1. The functional domains of the STAT3 protein and their functions (Kishore and Verma, 2012).
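The isoform facts summarised above (STAT3α, 770 aa, with Tyr705 and Ser727; STAT3β, 722 aa, with Tyr705 only) can be captured in a small data structure. The following Python sketch is purely illustrative and not from the reviewed literature; the type and function names are our own.

```python
# Encoding the STAT3 isoform data stated in the text above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stat3Isoform:
    name: str
    length_aa: int
    phosphosites: frozenset  # residues whose phosphorylation activates STAT3

STAT3_ALPHA = Stat3Isoform("STAT3alpha", 770, frozenset({"Tyr705", "Ser727"}))
STAT3_BETA = Stat3Isoform("STAT3beta", 722, frozenset({"Tyr705"}))

def can_activate(isoform: Stat3Isoform, site: str) -> bool:
    """True if phosphorylation of `site` is an activating event available
    to this isoform, per the description in the text."""
    return site in isoform.phosphosites

if __name__ == "__main__":
    for iso in (STAT3_ALPHA, STAT3_BETA):
        for site in ("Tyr705", "Ser727"):
            print(f"{iso.name} ({iso.length_aa} aa): {site} -> {can_activate(iso, site)}")
```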
Canonical JAK/STAT3 signaling pathway
The JAK/STAT signaling pathway is activated by more than 50 cytokines and growth factors, including hormones, interferons (IFNs), ILs, and colony-stimulating factors (Darnell, 1997). These molecules regulate various cellular events, such as hematopoiesis, immune adaptability, tissue repair, inflammation, cell apoptosis, and adipogenesis (Owen et al., 2019). The JAK/STAT3 pathway is activated when these extracellular ligands bind to their dedicated transmembrane receptors (Figure 3). The cytosolic domains of these receptors constitutively interact with receptor-associated JAK tyrosine kinases. These JAK kinases are inactive before ligand stimulation, while the coupling of the ligand with its receptor results in auto-phosphorylation of the JAK kinases (Feng et al., 1997).
Upon activation, the JAK molecules phosphorylate the cytoplasmic segment of the receptors at particular tyrosine residues, which subsequently serve as binding sites for the cytoplasmic STAT3 protein and recruit it to the receptor. After docking, STAT3 is phosphorylated by JAK kinase and, upon separation from the receptor, associates with itself or with other phosphorylated STAT monomers to create homodimers or heterodimers. Ultimately, these dynamic molecular pairs migrate from the cytoplasm to the nucleus, where they attach to target gene promoters and stimulate the expression of target genes (O'Shea et al., 2015; Durham et al., 2019), often driving proliferation, differentiation, and apoptosis.
Noncanonical JAK/STAT3 signaling pathway
The function of STAT3 is influenced by different post-translational modifications, including phosphorylation, methylation, acetylation, and ubiquitination, occurring at various amino acid sites. In addition to classical signal transduction, JAK/STAT3 may also play a role in nonclassical signal transduction. Research has indicated that STAT3 that is not phosphorylated on Tyr705 has the ability to move from the cytoplasm to the nucleus and can activate various STAT3 target genes in the absence of Ser727 phosphorylation (Bharadwaj et al., 2020). Additionally, this process can be facilitated by Lys685 acetylation and NF-κB signaling activation, as suggested by previous studies (Yang et al., 2007; Dasgupta et al., 2014). Besides being activated in the cytosol, all STAT proteins (excluding STAT4) have the ability to localize to the mitochondrion, leading to an enhancement in oxidative phosphorylation and membrane polarization. For example, STAT3 monomers phosphorylated on Ser727 can translocate into the mitochondrion without dimerization to increase membrane polarization and ATP synthesis and to inhibit ROS production and mitochondrial permeability transition pore (MPTP) opening, thus exerting a protective role (Boengler et al., 2010; Garama et al., 2016; Avalle and Poli, 2018). Besides, STAT3 has also been reported to translocate to the endoplasmic reticulum and contribute to reducing oxidative stress-induced apoptosis (Avalle et al., 2019). In the nucleus, certain STAT molecules that are not phosphorylated interact with heterochromatin protein 1 (HP1) located on heterochromatin. Phosphorylation of STAT by JAK or other kinases can cause the detachment of HP1 from heterochromatin, leading to its destabilization. Subsequently, phospho-STAT can interact with particular regions on autosomes and regulate the expression of target genes (Shi et al., 2006; Shi et al., 2008b; Li, 2008). This noncanonical JAK/STAT signaling is critical for sustaining heterochromatin stability. Moreover, increasing evidence has shown that activation of JAK/STAT signaling can cause chromatin remodeling in
mammals (Christova et al., 2007; Shi et al., 2008a). Besides being triggered by JAK, STAT3 can also be activated by alternative non-receptor tyrosine kinases or JAK-independent receptors. As an example, the c-Src kinase is capable of phosphorylating STAT3, which can then promote the expression of oncogenes (Yu et al., 1995). The EGF receptor and PDGF receptor can also directly activate STAT3 (Ruff-Jamison et al., 1994; Liu et al., 2023a).

JAK/STAT signal transduction involves a series of intracellular tyrosine phosphorylation events, so PTPs play a key role in regulating this pathway. PTPs can directly dephosphorylate and inactivate STAT dimers and block the JAK/STAT cascade. For instance, the receptor tyrosine phosphatase PTPRT can bind to STAT3 and dephosphorylate the tyrosine residue at site 705 (Zhang et al., 2007). SHP-2, a prominent member of the PTP family and itself a target gene of activated STAT3, can decrease the phosphorylation level of STAT3 (Schmitz et al., 2000). In addition, PTPs can dephosphorylate JAK and thereby prevent JAK/STAT signaling.

The PIAS family comprises four transcription regulatory factors, PIAS1-PIAS4. PIAS was originally identified as a suppressor of STAT, and PIAS3 can bind STAT3. PIAS binds only phosphorylated STAT dimers, not STAT monomers (Hu et al., 2021). PIAS suppresses the transcriptional activity of STAT mainly by means of three mechanisms.

SOCS family proteins are considered major mediators of JAK/STAT signaling attenuation, and there are eight members in this family: SOCS1-7 and cytokine-inducible SH2 protein (CIS) (Minamoto et al., 1997; Piessevaux et al., 2008; Kazi et al., 2014). Cytokine-stimulated JAK/STAT signaling activation induces the SOCS proteins, which act as negative feedback suppressors of this pathway (Naka et al., 1997; Kershaw et al., 2013b). For example, the SOCS3 gene is quickly induced by phosphorylated STAT3 dimers in the nucleus, and in turn the SOCS3 protein interacts with activated JAK and its receptor to suppress JAK activity, thus preventing further JAK/STAT3 signaling activation (Babon et al., 2012; Kershaw et al., 2013a). SOCS primarily inhibits the JAK/STAT cascade in the following ways. (1) It competes with STAT for binding to the phosphorylated receptor and prevents STAT recruitment. (2) It forms an E3 ubiquitin ligase complex via the COOH-terminal SOCS box and degrades the JAK or STAT bound to SOCS (Kamran et al., 2013). (3) The SOCS protein can directly and specifically interact with either JAK or its receptor to inhibit JAK kinase activity; an example is the distinct short motif known as the kinase inhibitory region (KIR) in SOCS1 and SOCS3. This motif
enables these two proteins to hinder the catalytic activity of JAK by binding directly to JAK or its receptor (Sasaki et al., 1999; Yasukawa et al., 1999; Alexander, 2002).

FIGURE 3. Signal transduction and negative regulation of the canonical JAK/STAT3 pathway. The JAK/STAT3 cascade is initiated by the interaction between a ligand and its corresponding receptor. This interaction leads to the auto-phosphorylation of the JAK kinase bound to the receptor. Once activated, JAK phosphorylates a tyrosine residue on the receptor, creating a docking site for cytoplasmic STAT3 and recruiting STAT3. At this docking site, JAK phosphorylates STAT3. The phosphorylated STAT3 then dissociates from the receptor and forms dimers. These STAT3 dimers move to the nucleus, where they bind to promoters and regulate transcription. The JAK/STAT3 cascade is controlled by three primary types of negative regulators: PTPs (protein tyrosine phosphatases), PIAS (protein inhibitor of activated STAT), and CIS/SOCS (suppressor of cytokine signaling). PTPs block JAK/STAT3 signaling mainly by interacting directly with the STAT3 dimers and JAK to dephosphorylate them. PIAS prevents JAK/STAT3 signaling principally by inhibiting the binding of STAT3 to DNA. As a common target gene induced by JAK/STAT3 activation, CIS/SOCS mainly hinders the JAK/STAT3 cascade through the following mechanisms: (1) obstructing the recruitment of STAT3 to the phosphorylated receptor; (2) directly interacting with JAK to suppress its kinase function; (3) promoting the formation of an E3 ubiquitin ligase complex that degrades the JAK or STAT3 bound to the SOCS protein [adapted from Gurzov et al., 2016; Hu et al., 2021].

The JAK/STAT3 pathway induces fibrosis

Studies have indicated that the JAK/STAT3 pathway plays a key role in the process of fibrosis. It can be activated by various profibrotic mediators, such as TGF-β1, PDGF, vascular endothelial growth factor (VEGF), IL-6, Ang II, serotonin (5-HT), and endothelin-1 (ET-1), and then lead to fibrogenesis (Rane and Reddy, 2000; Zhang et al., 2015; Roskoski, 2016) (Figure 4A). The JAK/STAT3 pathway has also been shown to be a central integrator of multiple pro-fibrotic pathways, and its activation can promote the activation of fibroblasts and the expression of fibrosis-related genes, such as α-SMA, collagens, and fibronectin (Zhang et al., 2015; Chakraborty et al., 2017; Dees et al., 2020). In addition, once activated, STAT3 can induce the expression of hypoxia-inducible factor-1α (HIF-1α), a transcription factor that responds to hypoxic conditions and stimulates the production of ECM (Yang et al., 2021) (Figure 4A). Activated STAT3 can also trigger epithelial to mesenchymal transition (EMT), a cellular process that allows epithelial cells to transform into mesenchymal cells with greater migratory and invasive capacity, and thereby facilitates the progression of fibrosis (Montero et al., 2021; Yang et al., 2021) (Figure 4B).
The effects of the JAK/STAT3 pathway on different types of cardiac injury

The JAK/STAT3 pathway plays a pivotal role in various aspects of cardiac physiology and pathology, exhibiting multifaceted roles in the heart (Figure 5). It mediates protective effects in different stages of ischemia, including ischemic pre-, post-, and remote conditioning (Hattori et al., 2001; You et al., 2011; Gao et al., 2017). Agents such as N-acetylcysteine (NAC), allopurinol (Wang et al., 2013), and insulin (Fuglesteg et al., 2008) are known to protect against myocardial ischemia-reperfusion injury through activation of the JAK/STAT3 pathway. Their protective mechanism likely involves reduced ROS production, decreased cardiomyocyte apoptosis, promotion of angiogenesis, and delayed MPTP opening. In the context of myocardial infarction, molecular factors such as miR-124, IL-10, and growth arrest and DNA damage-inducible α (GADD45A) exert beneficial effects through the STAT3 pathway. Specifically, miR-124 offers anti-apoptotic benefits, IL-10 provides anti-inflammatory effects, and GADD45A enhances VEGF-mediated angiogenesis, collectively improving prognosis (He et al., 2018; Wang et al., 2022a; Tesoro et al., 2022). Conversely, conditional deletion of STAT3 in cardiomyocytes exacerbates cardiac remodeling during the subacute phase of myocardial infarction or under chronic β-adrenergic stimulation (Enomoto et al., 2015; Zhang et al., 2016). Furthermore, cardiomyocyte-specific transgenic expression of SOCS1 inhibits JAK/STAT3 activation in enterovirus-induced myocarditis, but this is associated with increased mortality in mice, highlighting a complex interplay (Yasukawa et al., 2003).

4 Multiple mediators regulate cardiac fibrosis through the STAT3 signaling pathway

ILs

ILs are cytokines produced by various cells, mainly immune cells. Cytokines modulate cellular functions such as growth, maturation, movement, adhesion, activation, and differentiation (Zhang and An, 2007; Brocker et al., 2010). ILs are a large family of cytokines with more than 60 members, which can be grouped into four categories: IL-1 related, type 1 helical (IL-4 related, γ chain, and IL-6/IL-12 related), type 2 helical (IL-10 related and IL-28 related), and IL-17 related (Brocker et al., 2010). ILs regulate homeostasis by influencing the cardiovascular, neuroendocrine, and metabolic systems in the human body (Corwin, 2000).
Recent research has demonstrated that ILs contribute to myocardial fibrosis via the STAT3 pathway. Some ILs play proinflammatory and fibrotic roles, and IL-6 is the most representative (Figure 6). In the absence of NF-E2-related factor 2 (Nrf2), IL-6 levels further increase in response to Ang II, thereby activating the IL-6/STAT3 pathway, which causes cardiomegaly and inflammation (Chen et al., 2019a). In addition, Ang II can induce Toll-like receptor-mediated phosphorylation of STAT3, increase IL-6 production, and continuously activate the JAK/STAT pathway, thereby providing positive feedback and promoting myocardial hypertrophy, fibrosis, and ventricular remodeling (Chen et al., 2017a; Han et al., 2018; Zhang et al., 2019b). IL-6 enhances STAT3 phosphorylation in cultured CFs, whereas inhibiting STAT3 reduces IL-6-induced collagen synthesis and reverses pressure overload-induced cardiac hypertrophy (Mir et al., 2012). In a transverse aortic constriction (TAC)-induced mouse heart failure model, inhibiting IL-6/gp130/STAT3 with raloxifene alleviated TAC-induced myocardial inflammation, cardiac remodeling, and dysfunction (Huo et al., 2021). In mice with CVB3-induced dilated cardiomyopathy (DCM), IL-6 knockout reduced the phosphorylation level of STAT3 in myocardial tissue, thereby ameliorating DCM-induced myocardial remodeling (Li et al., 2019b).

FIGURE 4. (A) Different JAK/STAT3 activators that play important roles in the pathophysiology of myocardial fibrosis. (1) TGF-β interacts with its receptor (TGF-βR) on the cell surface, initiating receptor kinase activity. This activity leads to JAK phosphorylation and subsequent activation of STAT3; however, the precise mechanism underlying this process remains to be fully elucidated. (2) IL-6 binds to its specific receptor, IL-6R, forming a complex. This complex then associates with the membrane protein gp130. Activation of JAKs, which are associated with gp130, is critical for phosphorylating specific tyrosine residues on gp130; these residues act as anchoring points for STAT3. (3) Ang II and ET-1 engage the GPCR family, triggering tyrosine phosphorylation of JAK kinase and consequently activating STAT3. (4) PDGF and VEGF each bind to their respective tyrosine kinase receptors. This binding results in phosphorylation of tyrosine residues on the receptors, which can indirectly activate or transactivate JAK, leading to activation of the STAT3 pathway. Once phosphorylated, STAT3 dimerizes and moves into the nucleus, where the STAT3 dimers attach to specific DNA sequences, enhancing the transcription of genes that are pivotal in driving inflammation and fibrosis, including collagen, fibronectin, α-SMA, etc. In addition, activated STAT3 can stimulate the expression of HIF-1α and enhance the production of ECM in hypoxic environments. (B) Epithelial to mesenchymal transition (EMT). The activation of JAK/STAT3 signaling by pathological stimuli can induce a phenotypic transition of epithelial cells into mesenchymal cells, which exhibit enhanced migration and invasion capabilities. (By Figdraw).
TGF-β

The TGF-β and STAT3 signaling pathways form a feedback loop that regulates the acute and chronic stress response in the heart. STAT3 is an important downstream target of TGF-β signaling (Pedroza et al., 2018; Chen et al., 2019b; Sun et al., 2022). Several studies have demonstrated the interaction between TGF-β and STAT3 in cardiac fibrosis. For instance, it has been reported that TGF-β-induced CD44/STAT3 signaling plays a crucial part in atrial fibrosis and fibrillation formation. CD44 is a membrane receptor that modulates fibrosis, and blocking CD44 signaling can reduce TGF-β-induced STAT3 activation and collagen expression in atrial fibroblasts, suggesting a potential approach for treating atrial fibrosis and fibrillation (Chang et al., 2017). Moreover, EphrinB2-mediated myocardial fibrosis involves activation of the TGF-β/Smad3 and STAT3 pathways; further study revealed that EphrinB2 could enhance the interaction of TGF-β/Smad3 and STAT3 signaling to promote cardiac fibrosis (Su et al., 2017). Furthermore, mutation of the tyrosine at site 705 to glutamic acid constitutively activated STAT3, which further enhanced the interaction between Smad3 and STAT3 (Su et al., 2017). One previous study showed that a high-fat diet could activate the left ventricular renin-angiotensin system (RAS) and JAK1/2-STAT1/3 pathways in rats by increasing ROS and IL-6 production, ultimately causing cardiac fibrosis. This creates a positive feedback loop that activates the TGF-β1/Smad3 fibrotic pathway and enhances left ventricular collagen synthesis (Eid et al., 2019). In cultured CFs, TGF-β1 can activate STAT3 phosphorylation, increasing fibrosis-related protein expression, while relaxin can block STAT3 phosphorylation and reverse TGF-β1-induced fibrosis (Yuan et al., 2017). These results suggest that STAT3 either acts as a separate
signal molecule downstream of TGF-β or interacts with the TGF-β/Smad pathway to regulate cardiac fibrosis (Figure 7).

FIGURE 5. The role of JAK/STAT3 pathway activation in different types of cardiac damage. (1) In ischemia-reperfusion injury, agents such as NAC, allopurinol, and insulin may confer protective effects. They achieve this by reducing ROS production and cardiomyocyte apoptosis, promoting angiogenesis, and delaying the opening of the MPTP. (2) In myocardial infarction, molecular factors such as miR-124, IL-10, and GADD45A exert beneficial effects through the STAT3 pathway, including anti-apoptotic (miR-124), anti-inflammatory (IL-10), and VEGF-mediated angiogenic effects (GADD45A), collectively contributing to improved prognosis. (3) The situation in myocarditis is more complex: upregulation of SOCS1 can inhibit inflammation, whereas upregulation of complement C3 and Th17 cells, along with downregulation of piceatannol, may exacerbate inflammation. These findings highlight the multifaceted impact on the progression of myocarditis. (4) Cardiac hypertrophy is influenced by Ang II, HSF1, isoproterenol, and FNDC5, which collaboratively induce hypertrophy through increased oxidative stress and inflammation. (5) Arrhythmias are closely associated with JAK/STAT3 activity, which contributes to myocardial sarcoplasmic reticulum Ca2+ overload, increased cardiac sympathetic nerve activity, and ventricular remodeling. "↑" represents activation, upregulation, or exacerbation, and "↓" represents inhibition, downregulation, or relief. (By Figdraw).

FIGURE 6. IL-6 causes myocardial fibrosis through the JAK/STAT3 signaling pathway. IL-6 binds to its receptor, IL-6R, forming a complex that activates the gp130 receptor. This activation triggers the JAK family of tyrosine kinases. Once activated, these JAKs phosphorylate STAT3, a crucial step in the signaling pathway. Phosphorylated STAT3 dimerizes and translocates into the nucleus, where the STAT3 dimers bind to specific DNA sequences, promoting the transcription of genes that are pivotal in mediating inflammation and fibrosis. (By Figdraw).

MicroRNAs (miRs)

MiRs are a class of endogenous noncoding single-stranded RNAs about 19-25 nucleotides long. Within the nucleus, RNA polymerase II first transcribes the gene encoding the miR into a primary transcript (pri-miR). The precursor is then exported to the cytoplasm through the cooperative action of the Ran-GTPase and the transporter Exportin-5, where the double-stranded RNA-specific nuclease Dicer cleaves it to form a double-stranded miR of 21-25 nucleotides. A helicase unwinds the double-stranded miR, leading to degradation of one strand and formation of a mature miR with a hydroxyl group at the 3′-end and a phosphate group at the 5′-end. Finally, the RNA-induced silencing complex binds the mature miR, thereby silencing target genes post-transcriptionally (Lu and Rothenberg, 2018). In recent years, the relationship between miRs and pathological fibrosis has been examined, but the specific mechanisms by which miRs regulate fibrosis remain worth exploring. During the development of liver fibrosis induced by viral hepatitis, the levels of miR-16, miR-146a, miR-221, and miR-222 were markedly increased in the serum of patients with chronic hepatitis C (Abdel-Al et al., 2018). In the livers of mice treated with CCl4, miR-30c and miR-193 were specifically downregulated (Roy et al., 2015). Interestingly, other studies indicated that miR-29 could promote apoptosis in cardiomyocytes by downregulating antiapoptotic genes such as Bcl-2, CDC42, and Tcl-1, while miR-29 could prevent fibrosis by repressing collagen production in the ECM (Pekarsky et al., 2006; Mott et al., 2007; van Rooij et al., 2008). These results indicate that different miRs may have opposite effects on fibrosis regulation, and that the same miR may regulate fibrosis very differently in different contexts.
STAT3 and miRs engage in crosstalk that is crucial for maintaining cardiac function under normal and pathological conditions. This STAT3-miR crosstalk can mediate cardiac disease in several ways. First, STAT3 can bind miRs directly to form a feedback regulatory relationship, or form an indirect feedback relationship with miRs through a long noncoding RNA (lncRNA) or protein. As an example, in oxygen-glucose deprivation-induced cardiomyocyte injury, the lncRNA MIAT, which is associated with myocardial infarction, sequesters miR-181a-5p and boosts the expression of JAK2. This, in turn, amplifies myocardial inflammation and apoptosis through the JAK2/STAT3 signaling pathway (Tan et al., 2021). In addition, miR-21 activates STAT3 signaling by targeting the tumor suppressor cell adhesion molecule 1 (CADM1) and enhances cardiac fibrosis (Cao et al., 2017). Second, STAT3 can directly mediate the transcription of downstream miRs, and phosphorylated STAT3 can cooperate with other transcription factors to promote or inhibit miR transcription. In diabetic hearts exposed to ischemia/reperfusion, STAT3 can bind the miR-17-92 promoter and stimulate the targeted inhibition of pro-apoptotic prolyl hydroxylase 3 (PHD3) by miR-17/20a, resulting in decreased apoptosis (Samidurai et al., 2020). Moreover, phosphorylated STAT3 can interact with NF-κB and inhibit miR-188-3p expression (Kuo et al., 2017; Sp et al., 2018; Masoumi-Dehghi et al., 2020). Third, miRs specifically recognize the 3′UTR of STAT3 mRNA and form incomplete complementary pairs, inhibiting STAT3 mRNA translation and thereby blocking STAT3 expression. Following myocardial infarction, the expression of STAT3 mRNA is reduced by miR-17-5p and miR-124, which bind to the 3′UTR of STAT3 mRNA, leading to worsened autophagy, inflammation, myocardial remodeling, and apoptosis (He et al., 2018; Chen et al., 2022). In summary, multiple miRs can interact with STAT3 through different mechanisms to enhance or inhibit cardiac fibrosis (Figure 7).

Other mediators impact cardiac fibrosis through the STAT3 signaling pathway

In addition to the mediators above, other mediators can affect myocardial fibrosis caused by ischemia/reperfusion, atrial fibrillation, diabetic heart disease, DCM, and hypertensive heart damage through the STAT3 signaling pathway (Table 2).

5 The regulatory role of STAT3 and autophagy in cardiac fibrosis

Autophagy is widely present in eukaryotic organisms and is a process that degrades harmful substances in cells and promotes their recycling through the lysosomal pathway. In general, moderate autophagy can maintain the stability of the internal environment, while excessive autophagy can induce cell damage (Kuma et al., 2017). The process is mainly divided into four stages, namely induction, initiation, elongation, and maturation/degradation, which are regulated by complex molecular mechanisms (Estrada-Navarrete et al., 2016; Liu et al., 2016; Lin et al., 2019; Kaushal et al., 2020). Autophagy recovers and removes damaged proteins and organelles, playing an important role in maintaining the normal function of myocardial cells (Mialet-Perez and Vindis, 2017). Interestingly, the role of autophagy in fibrosis may vary with fibrosis progression. Zhang et al.
found that inhibiting autophagy could improve myocardial fibrosis in mice subjected to TAC surgery (Zhang et al., 2021). In contrast, at 20 weeks after TAC, myocardial fibrosis in mice with endothelial leptin receptor gene knockout was improved by autophagy activation (Gogiraju et al., 2019). These results demonstrate that either activation or inhibition of autophagy may occur during cardiac fibrosis and that autophagy plays a dual role in fibrosis.

Autophagy is potentially linked to numerous signaling pathways, one of which is the STAT3 signaling pathway that governs cell fate, determining whether cells survive or perish. Yuan et al.'s research indicates that relaxin attenuates TGF-β1-induced autophagy in primary CFs by suppressing the phosphorylation of STAT3, thereby reducing cardiac fibrosis (Yuan et al., 2017). In septic cardiomyopathy, reduced expression of miR-125b leads to excessive activation of STAT3/high mobility group box protein 1 (HMGB1), resulting in elevated ROS generation and impaired autophagic flux, ultimately leading to myocardial dysfunction (Yu et al., 2021). Additionally, overexpression of Src-associated in mitosis 68 (Sam68) promotes the osteogenic differentiation of human valvular interstitial cells (hVICs) through STAT3-mediated inhibition of autophagy, thus inducing aortic valve calcification, while knockdown of Sam68 reduces the TNF-α-activated phosphorylation of STAT3 and the expression of downstream genes, thereby affecting autophagic flux in hVICs (Liu et al., 2023b). The activation of STAT3 is also crucial for reducing cardiac autophagy and limiting cardiac ischemia/reperfusion injury, as demonstrated by the inhibitory effect of the soluble receptor for advanced glycation end-products on cardiac ischemia/reperfusion injury (Dang et al., 2019).
6 Challenges and opportunities for targeting the STAT3 signaling pathway for the treatment of fibrosis

Targeting STAT3 for heart disease treatment presents significant challenges. STAT3 is widely recognized for its role in promoting myocardial fibrosis; however, myocardial fibrosis may not always be detrimental in certain heart diseases. Fibrosis, for instance, can lead to adverse remodeling in myocardial infarction patients, potentially resulting in heart failure; yet, in the early stages of myocardial infarction, fibrosis is crucial for maintaining the structural integrity of the infarcted ventricle (Prabhu and Frangogiannis, 2016). Moreover, STAT3 actively participates in the activation and proliferation of CFs, fostering fibrotic remodeling. In cardiomyocytes, STAT3 exhibits a dual nature: it can offer protective or adverse effects, such as enhancing survival and mitigating oxidative stress, or mediating cardiac hypertrophy (Wang et al., 2021; Li et al., 2022). Although cardiomyocytes are not directly involved in ECM production, they can influence the fibrotic response through paracrine signals (Qu et al., 2017). Additionally, the STAT3 signaling pathway interacts with other pathways, playing varying roles. JAK1, for example, binds to TGF-βR1, while JAKs also associate with gp130 and are activated by TGF-β (Itoh et al., 2018). Previous studies have shown that STAT3 works in tandem with Smad3 to induce connective tissue growth factor, contributing to fibrosis (Liu et al., 2013; Tang et al., 2017). Conversely, overactivated STAT3 signaling in lung fibroblasts diminishes SMAD signaling by reducing Smad3 phosphorylation, potentially due to Smad7 induction, although this theory requires experimental validation (O'Donoghue et al., 2012). Thus, identifying the optimal timing for STAT3 inhibition is crucial for maximizing therapeutic benefits and minimizing side effects. Targeting STAT3 in CFs could effectively reduce fibrosis, but its protective potential in cardiomyocytes warrants consideration. Overall, STAT3's role in cardiac biology is multifaceted, and a thorough understanding of its function across various cell types and disease stages is essential for developing effective treatments. (Table 2 abbreviations: SHP-1, tyrosine phosphatase 1; ECM, extracellular matrix; ROS, reactive oxygen species; TGF-β1, transforming growth factor-β1; SMAD2, small mother against decapentaplegic 2; PTEN, phosphatase and tensin homologue deleted on chromosome 10; CF, cardiac fibroblasts; GPX4, glutathione peroxidase 4; PPAR, peroxisome proliferator-activated receptor.)

Despite the complexities in targeting STAT3 signaling for fibrosis treatment, recent advancements have yielded promising results (Table 3). Presently, methods to directly inhibit STAT3, aimed at targeting fibrosis, are categorized by target domain: the Src homology 2 (SH2), DNA-binding (DBD), N-terminal (NTD), and transactivation (TAD) domains. In this section, we highlight key STAT3 inhibitors that specifically target these domains of the STAT3 protein.

Inhibitors targeting the SH2 domain

STAT3 homodimerization is facilitated by protein-protein interactions between the SH2 domains of the individual monomers, particularly via phosphorylation at Tyr705. This pivotal molecular interaction has been harnessed to develop inhibitors targeting STAT3 directly (Furtek et al., 2016). Inhibiting the SH2 domain not only disrupts STAT3 activation and dimerization but also impedes its subsequent nuclear translocation and the expression of genes regulated by STAT3.
Several small molecule STAT3 inhibitors, notably Stattic, S3I-201, and S3I-201 analogs, play a significant role in mitigating myocardial fibrosis. These inhibitors function by binding to the SH2 domain of STAT3, thereby curtailing its activity. Elevated levels of fibroblast growth factor 23 (FGF23) are reported to induce atrial fibrosis in atrial fibrillation patients by enhancing ROS production and subsequent STAT3 and Smad3 phosphorylation, and Stattic has been shown to counteract these effects (Dong et al., 2019). Moreover, administering S3I-201 to mice with myocardial infarction reduced left atrial fibrosis in vivo (Chen et al., 2017b).

Another category of inhibitors targeting STAT3's SH2 domain comprises derivatives of natural compounds. Cryptotanshinone, a primary active component extracted from Salvia miltiorrhiza, suppresses the STAT3 pathway to reduce cardiac fibrosis and improve cardiac function in diabetic rats (Lo et al., 2017a). In vitro studies reveal that cryptotanshinone significantly curbs Ang II-induced cardiomyocyte hypertrophy and TGF-β-induced myofibroblast activation by impeding STAT3 phosphorylation and nuclear translocation (Li et al., 2023). Additionally, natural compounds such as curcumin and resveratrol have been identified to possess properties beneficial in combating atherosclerosis (Zordoky et al., 2015; Ganjali et al., 2017).

These inhibitors are valued for their anti-inflammatory and anti-atherosclerotic properties, suggesting their potential as therapeutic agents for ameliorating fibrosis. However, they are not without drawbacks. A primary issue is that most inhibitors targeting the SH2 domain lack specificity for STAT3, making it difficult to exclude the involvement of other STAT proteins in fibrosis (Szelag et al., 2016). Additionally, STAT3 monomers or unphosphorylated STAT3 proteins can interact with other proteins to transcribe downstream target genes, which limits the efficacy of targeting the SH2 domain. Further complicating matters, activating mutations in the SH2 domain have been identified in somatic cells. The impact of these somatic mutations on the binding efficiency of SH2 domain inhibitors to STAT3, and consequently on their effectiveness, remains to be fully understood (Qiu and Fan, 2016). Therefore, the precise targeting of STAT3's SH2 domain warrants further research focus.

Inhibitors targeting the DBD domain

The DBD of STAT3 specifically recognizes and binds distinct DNA elements in target genes. This selective interaction facilitates the precise induction of target gene expression, characterized by high specificity.
Research has uncovered that platinum compounds, including IS3-295, CPA-1, CPA-7, and platinum(IV) tetrachloride, effectively block the DNA-binding activity of STAT3. These compounds can inhibit cell growth and induce apoptosis without affecting normal cells or causing prolonged STAT3 activation (Beebe et al., 2018). Additionally, galiellalactone, a natural product, impedes STAT3's DNA-binding activity by interacting with its DBD domain. To enhance its oral bioavailability, an N-acetyl L-cysteine methyl ester has been added to the thiol group, resulting in the prodrug GPA512. However, GPA512's lack of specificity, as it also disrupts other signaling pathways such as NF-κB and TGF-β, could pose challenges for its future development (Don-Doncow et al., 2014; Escobar et al., 2016). InS3-54, discovered through an advanced computational screening method, selectively binds to STAT3's DBD domain in vitro, inhibiting its DNA-binding activity. Its analog, InS3-54A18, exhibits improved solubility, specificity, and pharmacological properties, while showing minimal side effects in animal models (Huang et al., 2016).

While virtual screening techniques, including molecular modeling, have demonstrated that certain inhibitors can directly bind the DBD domain of STAT3, the scarcity of adequate assay systems has limited the identification of small molecule inhibitors in this category. This constraint has significantly impeded the drug development process. Additionally, inhibitors targeting the STAT3 DBD face challenges similar to those of SH2 domain-targeting inhibitors in terms of therapeutic application.

Inhibitors targeting the NTD and TAD domains

Inhibitors targeting the NTD and TAD of STAT3 can modulate the binding of STAT3 dimers and regulate DNA transcription, potentially contributing to anti-fibrotic effects. In a study of the selective STAT3 NTD inhibitor ST3-H2A2, Timofeeva et al. observed that this compound robustly activated apoptosis genes, leading to the induction of apoptosis in cancer cells (Timofeeva et al., 2013). Moreover, researchers have identified the allosterically active small molecule K116, which binds to the TAD of STAT3 and effectively inhibits its activity (Huang et al., 2018).

In summary, while numerous STAT3 inhibitors have demonstrated anti-fibrotic properties, identifying inhibitors that are highly efficient, low in toxicity, and minimal in side effects remains a challenge. There is also a scarcity of extensive animal studies on the pharmacology and toxicology of these inhibitors, and only a limited number have progressed to clinical evaluation. However, the integration of STAT3 inhibitors with other targeted therapeutic agents, particularly immunotherapy agents, offers promising potential. It is hoped that future research will lead to significant advancements, enabling the broader clinical application of STAT3 inhibitors.
Conclusion

Cardiac fibrosis results from the excessive accumulation of ECM in the myocardium and is central to many cardiac pathologies. Since JAK/STAT3 activation can increase fibrotic effector cells and ECM deposition through various pathways, it may be a potential target of antifibrotic therapy. As discussed above, we have emphasized the promoting effects of various mediators on cardiac fibrosis through activation of the JAK/STAT3 signaling pathway. However, many other mediators may not yet have been identified, and modern proteomics and protein-identification technologies should accelerate their discovery. The antifibrotic effect of STAT3 inhibitors is receiving attention, but there has been little research on their ability to inhibit myocardial fibrosis specifically. While further research is required to elucidate its role in various types of myocardial fibrosis, JAK/STAT3 signaling holds promise as a therapeutic target for cardiac fibrosis because it links cardiac inflammation and fibrosis.

FIGURE 2. Structure of JAK. (A) Domains and conserved phosphorylation sites of the JAK protein. The JAK protein family contains four members, JAK1-3 and TYK2. Each is composed of seven homologous regions, labeled JH1-JH7. These regions make up four distinct functional domains: JH1 corresponds to the kinase domain; JH2 is the pseudokinase domain; JH3 and a portion of JH4 together form the SH2 domain; and the combination of JH5, JH6, JH7, and the rest of JH4 constitutes the FERM domain. "P" represents conserved tyrosine phosphorylation sites of the JAK protein. (B) Three-dimensional spatial structure of JAK in cells [adapted from Hu et al., 2021].

FIGURE 7. STAT3 influences cardiac fibrosis through multiple pathways. (1) The crosstalk between STAT3 and miRs manifests in several ways: STAT3 can form either a direct or an indirect feedback loop by binding with miRs; it can also mediate the transcription of downstream miRs; meanwhile, miRs can influence the translation of STAT3 mRNA. (2) Positioned downstream of the TGF-β/SMAD signaling cascade, STAT3 might collaboratively regulate myocardial fibrosis with TGF-β; their synergistic action could potentially be associated with the phosphorylation of STAT3. (By Figdraw).

TABLE 1. Function of STAT3 domains.

TABLE 2. Mediators that regulate fibrosis through the STAT3 signaling pathway.

TABLE 3. STAT3 inhibitors for treating organ fibrosis.
A Δ-learning strategy for interpretation of spectroscopic observables

Accurate computations of experimental observables are essential for interpreting the high information content held within x-ray spectra. However, for complicated systems this can be difficult, a challenge compounded when dynamics becomes important owing to the large number of calculations required to capture the time-evolving observable. While machine learning architectures have been shown to represent a promising approach for rapidly predicting spectral lineshapes, achieving simultaneously accurate and sufficiently comprehensive training data is challenging. Herein, we introduce Δ-learning for x-ray spectroscopy. Instead of directly learning the structure-spectrum relationship, the Δ-model learns the structure-dependent difference between a higher and a lower level of theory. Consequently, once developed, these models can be used to translate spectral shapes obtained from lower levels of theory to mimic those corresponding to higher levels of theory. Ultimately, this achieves accurate simulations with a much reduced computational burden, as only the lower level of theory is computed, while the model can instantaneously transform this into a spectrum equivalent to a higher level of theory. Our present model, demonstrated herein, learns the difference between TDDFT(BLYP) and TDDFT(B3LYP) spectra. Its effectiveness is illustrated using simulations of Rh L3-edge spectra tracking the C-H activation of octane by a cyclopentadienyl rhodium carbonyl complex.

I. INTRODUCTION

Supervised machine-learning/deep-learning algorithms, 12 i.e., multilayer models aimed at extracting and learning patterns represented in data, have emerged as a potential approach for overcoming this challenge. Recently, deep neural networks (DNN) capable of predicting the lineshape of x-ray absorption (XAS) [13][14][15][16][17][18][19][20] and emission (XES) 21,22 spectra have been developed. The key to any machine learning model is the quality of the data with which it is trained. To achieve accurate DNNs capable of converting input structures into spectral lineshapes, in a manner akin to quantum chemistry calculations, two distinct approaches for generating training data have been explored. The first approach, referred to as "Type I", focuses on achieving generality in the sense that it is able to simulate an XAS/XES spectrum for an arbitrary absorbing atom in any coordination environment for a given absorption edge. [25][26] A general Type I model is preferable, as it avoids the time-consuming requirement to develop a new model for each specific problem. However, the main challenge associated with developing accurate and generalizable training sets for prediction of x-ray absorption near-edge structure (XANES) spectra is scale. Indeed, recent DNN models for predicting XAS spectral lineshapes of transition metal K-edges 16 have been trained using molecules from the tmQM training set, 27 containing a single geometry of the mono-metallic complexes harvested from the Cambridge Structural Database (CSD). 28 While this has been shown to be accurate when used to predict spectral shapes of compounds in a similar chemical space, large uncertainties arise when considering complexes with multiple absorbing atoms or structures strongly distorted from their equilibrium geometry. 15,29
Ultimately, achieving comprehensive coverage of chemical space is a significant challenge, especially when seeking to develop a training set using a high-level theory with a large computational burden for each sample.

One approach to overcoming this is to use a composite strategy, Δ-learning, as introduced by Ramakrishnan et al. 30 The concept is to use machine-learning models to correct the properties obtained from a computationally inexpensive, approximate quantum calculation to those corresponding to a higher-level, but ultimately more computationally expensive, approach. [32][33] In the present work, shown schematically in Fig. 1, we implement and deploy a Δ-learning strategy for simulating x-ray spectra, i.e.,

$$\mu_H(E) = \mu_L(E) + \Delta_{\mathrm{ML}}(E),$$

where μH(E) is the spectrum calculated at a high level of theory, μL(E) is the spectrum computed at the lower level of theory, and ΔML(E) is the correction learnt by our DNN. It is noted that this approach bears some resemblance to the spectral warping approach of Prentice and Mostofi, 34 who applied a series of linear transformations to the semilocal TDDFT spectrum in order to obtain a good estimate of the hybrid TDDFT spectrum. Our results, which are inherently non-linear due to the use of the DNN, applied to the Rh L3-edge, demonstrate that the Δ-learning strategy can quickly learn the difference between TDDFT(BLYP)- and TDDFT(B3LYP)-computed spectra, providing a composite method for obtaining accurate core-hole spectra at reduced computational cost, as μH(E) can be obtained from μL(E) and the ΔML(E) predicted by the developed model. The accuracy of this approach is further exemplified by simulating Rh L3-edge spectra tracking the C-H activation of octane by a cyclopentadienyl rhodium carbonyl complex. 35 This system has received significant interest as a model complex for the transformation of saturated hydrocarbons through C-H bond activation. 36,37 Recently, Jay et al. 35 reported time-resolved Rh L3-edge spectra tracking this reaction, which we use to benchmark our approach.

II. METHODS AND COMPUTATIONAL DETAILS

A. Training data and quantum chemistry simulations

Our reference datasets comprise 1124 x-ray absorption site geometries of rhodium complexes harvested from the transition metal Quantum Machine (tmQM) dataset. 27,28 This dataset was extracted from the 2020 release of the Cambridge Structural Database (CSD) and subsequently optimized at the GFN2-xTB level of theory. Full details of the construction and composition of the tmQM dataset can be found in Ref. 27.

FIG. 1. Schematic of the Δ-learning approach adopted in this work. The featurized local geometries around the Rh complexes used in the training set (I) are inputs, while the difference between their TDDFT(BLYP)- and TDDFT(B3LYP)-calculated Rh L3-edge XANES spectra are the outputs (II). Once optimized, the predicted difference is added to the TDDFT(BLYP) spectrum to recreate a spectrum equivalent to TDDFT(B3LYP).

The Rh L3-edge spectra for all of the structures in our reference datasets were calculated using a Restricted Excitation Window Time-Dependent Density Functional Theory (REW-TDDFT) 38 approach, as implemented in the ORCA quantum chemistry package.
39 All spectra were computed twice, using the BLYP and B3LYP [40][41][42][43] exchange-correlation density functionals, with the difference between the two simulations used for training. It is noted that the choice of functional systematically influences the calculated absolute transition energies, 44 and therefore, before taking the difference, all spectra calculated using BLYP and B3LYP were shifted by +19.5 and −5.5 eV, respectively, to match the absolute energy of the experimental white line. While a constant spectral shift applied to the whole training set could be a limitation for other types of spectroscopy, in the present case of x-ray spectroscopy the transitions derive from core orbitals, which are not involved in bonding and remain largely unchanged for different molecules, so this approach ensures consistency for each sample. Scalar relativistic effects were described using a Douglas-Kroll-Hess (DKH) Hamiltonian of 2nd order. 45 In all calculations, an aug-cc-pVTZ-DK basis set was used for Rh, and all other elements used a DKH-def2-SVP basis set. 46,47 The light-matter interaction was described using the electric dipole, magnetic dipole, and electric quadrupole transition moments. 44 After calculation, each spectrum was broadened using a Gaussian function with a fixed width of 1.5 eV in all cases.

Figure 2 shows the mean and standard deviation of the spectra within the training set calculated using TDDFT(BLYP) (a) and TDDFT(B3LYP) (b), while Fig. 2(c) shows the mean and standard deviation of the Δ, i.e., μB3LYP(E) − μBLYP(E). The mean difference shows a distinct derivative profile, indicating that the TDDFT(B3LYP) spectrum is generally shifted toward slightly lower energy. The positive feature at ~3009 eV is associated with the more pronounced features seen above the white line, as observed in Fig. 2(b).
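The construction of the training data described above can be made concrete with a short sketch. This is a minimal illustration, not the authors' implementation: the energy window and function names are our own assumptions, and we further assume the quoted 1.5 eV Gaussian width is a standard deviation rather than a FWHM; only the shift values, the width, and the 250-point discretization are taken from the text.

```python
import numpy as np

# Common energy grid; 250 points to match the 250-neuron output layer.
# The window itself is an illustrative assumption.
ENERGY_GRID = np.linspace(2995.0, 3035.0, 250)  # eV

def broaden(energies, intensities, grid=ENERGY_GRID, sigma=1.5, shift=0.0):
    """Convolve a TDDFT stick spectrum with a fixed-width Gaussian.

    `energies`/`intensities` are transition energies (eV) and strengths;
    `shift` is the constant energy correction (+19.5 eV for BLYP,
    -5.5 eV for B3LYP in this work).
    """
    e = np.asarray(energies, dtype=float) + shift
    i = np.asarray(intensities, dtype=float)
    # One Gaussian per transition, summed on the common grid.
    return np.sum(
        i[:, None] * np.exp(-0.5 * ((grid[None, :] - e[:, None]) / sigma) ** 2),
        axis=0,
    )

def delta_target(blyp_sticks, b3lyp_sticks):
    """Training target: Delta(E) = mu_B3LYP(E) - mu_BLYP(E)."""
    mu_low = broaden(*blyp_sticks, shift=+19.5)
    mu_high = broaden(*b3lyp_sticks, shift=-5.5)
    return mu_high - mu_low
```

At prediction time the same grid is reused: the DNN output is simply added to the broadened, shifted BLYP spectrum to emulate the B3LYP result.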
B. Network details and training

Our deep neural network (DNN) is based upon the multi-layer perceptron (MLP) model and closely follows that presented in Ref. 16. Briefly, the model comprises an input layer, two hidden layers, and an output layer. All layers are dense, i.e., fully connected, and each hidden layer performs a nonlinear transformation using the hyperbolic tangent (tanh) activation function. The input layer contains the feature vector encoding the local environment around the absorbing atom, obtained via dimensionality reduction using the wACSF descriptor of Gastegger et al. 48 Throughout this article, the input layer contains 49 neurons, comprising one global (G1) function, 16 radial (G2) functions, and 32 angular (G4) functions. Both hidden layers contain 256 neurons, and the output layer comprises 250 neurons, from which either the discretized Rh L3 spectrum or the discretized Δ, i.e., μB3LYP(E) − μBLYP(E), is retrieved after regression. The internal weights, W, are optimized via iterative feedforward and backpropagation cycles to minimize the empirical loss, J(W), defined here as the mean-squared error (MSE). Gradients of the empirical loss with respect to the internal weights, dJ(W)/dW, were estimated over minibatches of 32 samples and updated iteratively according to the Adaptive Moment Estimation (ADAM) 49 algorithm. The learning rate for the ADAM algorithm was set to 1 × 10⁻⁴. The internal weights were initially set according to the He et al. 50 uniform distribution. Unless explicitly stated in this article, optimization was carried out over 240 iterative cycles through the network, commonly termed epochs. Regularization was implemented to minimize the propensity for overfitting; batch standardization and dropout were applied at each hidden layer. The probability, p, of dropout was set to 0.15, unless otherwise stated.

The XANESNET DNN is programmed in Python 3 with the TensorFlow 51 /Keras 52 API and integrated into a Scikit-Learn 53 (sklearn) data pre- and post-processing pipeline via the KerasRegressor wrapper for Scikit-Learn. The Atomic Simulation Environment 54 (ase) API is used to handle and manipulate molecular structures. The code is publicly available under the GNU Public License (GPLv3) on GitLab. 55

Training of the neural network, shown schematically in Fig. 3, follows an approach inspired by curriculum learning (CL). 56 CL is a strategy that aims to train a machine learning model from easier data to more complex data, imitating the meaningful learning order in human curricula. In the present work, the complexity arises from the diversity in the training set. Consequently, we initially select 100 spectrum-structure pairs at random and train the DNN described above. Once completed, another 100 spectrum-structure pairs are added at random, and the previous model is used as the guess for the subsequent training cycle. This cycle is repeated until all the training data are included within the model. In addition to random sampling, we assessed three further sampling methods: furthest-point sampling, 57 closest-point sampling, and uncertainty-based sampling. Both the furthest- and closest-point sampling calculate the Euclidean distance between the structural descriptors in the training set and add the next 100 samples that are either furthest from or closest to the existing samples. The uncertainty-based sampling estimates the uncertainty of samples not yet in the training set using the bootstrapping approach 29 and then adds the spectra exhibiting either the largest or smallest uncertainty. During testing, we found that while the methods may differ slightly for small training sets (<500 samples), they all converge to the same performance once all training samples are included; the outcome can, however, be sensitive to the initial 100 spectra chosen. A minimal code sketch of this architecture and curriculum-style training loop is given below.
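The hedged sketch below builds the described MLP in Keras and wraps it in the curriculum-style loop. The layer sizes, activation, regularization, loss, optimizer, learning rate, initialization, and 100-sample increments follow the text; the data-handling names are illustrative assumptions, and the public XANESNET code should be consulted for the authors' actual implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_mlp(n_features=49, n_output=250, n_hidden=256, dropout=0.15):
    """49-256-256-250 MLP: tanh hidden layers with batch normalization and
    dropout (p = 0.15), MSE loss, ADAM (lr = 1e-4), He-uniform init."""
    inputs = tf.keras.Input(shape=(n_features,))
    x = inputs
    for _ in range(2):  # two hidden layers
        x = layers.Dense(n_hidden, activation="tanh",
                         kernel_initializer="he_uniform")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(dropout)(x)
    outputs = layers.Dense(n_output, activation="linear")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4), loss="mse")
    return model

def curriculum_train(X, y, chunk=100, epochs=240, batch_size=32, seed=0):
    """Curriculum-style loop: grow the training set `chunk` samples at a
    time (random-sampling variant), reusing the previous weights as the
    starting guess for each cycle."""
    order = np.random.default_rng(seed).permutation(len(X))
    model = build_mlp()
    for n in range(chunk, len(X) + chunk, chunk):
        subset = order[:min(n, len(X))]
        model.fit(X[subset], y[subset], epochs=epochs,
                  batch_size=batch_size, verbose=0)
    return model

# Composite prediction for a new featurized structure x (shape (1, 49)):
# mu_b3lyp_estimate = mu_blyp + model.predict(x)[0]
```

The final commented line is the Δ-learning step itself: at prediction time the network never sees a B3LYP calculation, only the cheap BLYP spectrum plus the learned correction.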
III. RESULTS

In the following, we demonstrate the proposed Δ-learning model at the Rh L3-edge. Initially, we train the model and demonstrate its performance on a general dataset, before applying it to time-resolved Rh L3-edge spectra tracking the C-H activation of octane by a cyclopentadienyl rhodium carbonyl complex. 35

A. Performance of the Δ-learning model

Figure 4 shows the relative performance of our DNN (i.e., the percentage difference between the calculated and predicted spectra relative to the best-performing model for that figure panel) as a function of the number of training samples for the model that directly learns the whole spectrum (a) and the Δ-learning model (b). Both improve rapidly up to ~400 samples, after which the error continues to decline slowly. This remaining slow decline indicates that convergence is not entirely achieved and suggests that there is still scope to improve on the results communicated here by growing/optimizing the dataset. However, the changes are small, as chemical space (i.e., the diversity of structures included in the training set compared to the testing set) is well represented, and therefore more targeted strategies are required to identify areas for improvement. The gray dashed line in both panels indicates the performance of the model if CL is not used, and it is clear that the CL approach gives rise to a substantial improvement in performance for both models.

To assess the performance of the Δ-learning model, we calculate the percentage difference between the spectrum calculated using TDDFT(B3LYP) and the spectrum predicted using the Δ-learning model for 124 held-out examples. The median percentage difference is 5.1%, with the lower and upper quartiles situated at 4.7% and 9.8%, respectively. The tight interquartile range of 5.1% testifies to the balanced performance of the Δ-learning model. To provide context for these percentage differences, Fig. 5 shows six example Rh L3-edge XANES spectra. The upper three panels show spectra from the 0th-10th percentile, i.e., the best performers when the held-out set is ranked by MSE; the lower three panels show spectra from the 90th-100th percentile, i.e., the worst performers. The percentage differences for the upper panels are all <3.2%, comparatively close to the median performance, while the worst performers all exhibit percentage differences >23%, with the main source of error being the intensity of the white-line transition. The poor predictions for the worst performers can be rationalized by the small number of phosphorus-, fluorine-, and arsenic-containing molecules in the training set, and can therefore likely be improved by increasing their representation in future datasets.

Overall, these results demonstrate the ability of the MLP to operate within a Δ-learning strategy and facilitate accurate predictions of Rh L3-edge spectra at the TDDFT(B3LYP) level for the computational expense of a TDDFT(BLYP) simulation. The median percentage error for the Δ-learning model is lower than that of the direct model trained on TDDFT(B3LYP) spectra, which is 6.5%, and so in Sec. III B we exemplify the performance of the model using simulations of the Rh L3-edge spectra tracking the C-H activation of octane by a cyclopentadienyl rhodium carbonyl complex.
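The held-out evaluation described above reduces to a simple computation. The sketch below shows one plausible definition of the percentage difference and of the quoted summary statistics; the article does not state its exact normalization, so the area-normalized form here is an assumption.

```python
import numpy as np

def percent_difference(mu_calc, mu_pred):
    """Percentage difference between calculated and predicted spectra,
    here the integrated absolute deviation relative to the integrated
    calculated intensity (an assumed normalization)."""
    c = np.asarray(mu_calc, dtype=float)
    p = np.asarray(mu_pred, dtype=float)
    return 100.0 * np.abs(p - c).sum() / np.abs(c).sum()

def summarize(held_out_pairs):
    """Median and quartiles over held-out (calculated, predicted) pairs."""
    errs = np.array([percent_difference(c, p) for c, p in held_out_pairs])
    q1, med, q3 = np.percentile(errs, [25, 50, 75])
    return {"lower_quartile": q1, "median": med, "upper_quartile": q3}
```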
B. Tracking the ligand exchange dynamics of C-H activation

Having developed and assessed the performance of the network in the previous section, we now apply our Δ-learning model to a recent time-resolved x-ray spectroscopic study to track the ligand exchange dynamics of C-H activation. 35 In this work, the authors demonstrated that changes in oxidation state as well as valence-orbital energies and character, identified using changes in the Rh L3-edge spectra, could be used to follow metal-alkane complex stability and how metal-to-alkane back-donation facilitates C-H bond cleavage by oxidative addition.

The experimental ground state Rh L3-edge absorption spectrum of CpRh(CO)2 [Fig. 6(a)] shows a main peak at ~3007.5 eV, with a shoulder at slightly lower energy, ~3006 eV. This can be interpreted using the TDDFT(B3LYP) calculation, shown in Fig. 6(c) and Ref. 35, which provides good agreement between experiment and theory. The low-energy shoulder, as assigned in Ref. 35, arises from excitation of Rh 2p core electrons into the lowest unoccupied molecular orbital (LUMO), which exhibits Rh 4d character mixed with the C=O ligands. The main band derives from transitions into the LUMO+1 and LUMO+2. These exhibit similar Rh 4d character mixed with the C=O ligands, but the latter has substantial Rh 4d and 5s character, which at the L3-edge is dipole allowed, giving rise to the larger intensity.

In contrast to TDDFT(B3LYP), the TDDFT(BLYP) calculation of the ground state spectrum, shown in Fig. 6(b), does not reproduce the two peaks observed in the experiment. While the transitions described above remain present, they occur at the same energy and are therefore indistinguishable. Figure 6(d) shows the spectrum predicted using the Δ-learning model; in agreement with the experiment, this recovers the double-peaked structure, demonstrating that the Δ-learning model is able to overcome the deficiencies of the BLYP-calculated spectrum and predict a spectrum close to that calculated by TDDFT(B3LYP).

The transient Rh L3 spectra at 250 fs (orange) and 10 ps (blue) both exhibit a new transition below the absorption edge. This arises from transitions into the LUMO, whose energy is significantly reduced upon loss of the strong-field C=O. In the present work, seeking to demonstrate the performance of the Δ-learning approach, we have modeled these intermediates in their electronic ground state. However, note that in Ref. 35 the authors were not able to unambiguously assign the spectrum to ground state CpRhCO, and the experimental transient at 250 fs may also contain components associated with the excited states of CpRh(CO)2 and CpRhCO. Therefore, despite the close agreement between experiment and theory in this case, it remains unclear whether association of octane occurs in the ground or electronically excited state of CpRhCO.

Upon association of octane (10 ps transient, blue) to form the CpRh(CO)-octane σ-complex, the spectrum shifts to slightly higher energy but remains lower than CpRh(CO)2. As shown in Fig.
6(d), the Δ-learning model clearly corrects deficiencies in the TDDFT(BLYP) calculations to provide very good agreement between the experiment, TDDFT(B3LYP), and the Δ-learning model. The two exceptions to this are the double-peaked structure in the pre-edge feature of the 250 fs transient (orange) and the >190 ns transient spectrum (green trace). The former is likely associated with the low coordination environment of the Rh complex, which is rare within the present training set, and the latter is, as shown in the calculated spectra [Figs. 6(b) and 6(c)], a weak signal that therefore challenges the sensitivity of the model, i.e., if the changes are small, small errors will have a much greater impact than for larger spectral differences. We would expect both to improve upon expansion of the training data.

For comparison, Fig. 7 shows the Rh L3-edge XANES spectra predicted by the models trained directly to translate structures into spectral lineshapes using the BLYP and B3LYP training spectra, i.e., without Δ-ML, as performed in Ref. 16. Both models provide very similar predictions and fail to capture the spectral shape in either the ground state or the transient spectra. Indeed, the similarity between all of the transient spectra suggests that the direct model could not have distinguished between any of the structures during the analysis of the experimental data in Ref. 35, which is likely due to the limited sensitivity of the model arising from the smaller training dataset.

To illustrate the sensitivity of the Δ-learning model to small structural changes, in contrast to the direct model, Fig. 8 shows the spectral changes (represented as a difference with respect to the starting structure of the reaction coordinate) along two potential reaction coordinates, namely, the dissociation of CO from CpRh(CO)2 and the transformation of CpRh(CO)-octane to CpRh(CO)-H-R. Figures 8(a) and 8(b) show the dissociation of CO from CpRh(CO)2, with Fig. 8(a) showing the spectra calculated using TDDFT(B3LYP) and Fig. 8(b) those predicted using our Δ-learning model. Overall, there is good agreement between the two, with the derivative profile consistent with the generation of a pre-edge peak that shifts to lower energies as dissociation proceeds. The Δ-learning model exhibits a double peak in the pre-edge but, consistent with TDDFT(B3LYP), the main band loses intensity and shifts to lower energy. Above 3006 eV, in the region of the white line, the Δ-learning model reproduces the general double-peaked shape observed in the spectra calculated using TDDFT(B3LYP), although the peaks are slightly too close together. In comparison to the changes observed below 3006 eV, this region of the spectrum exhibits much smaller changes, which are consistently reproduced by both models.

Figures 8(c) and 8(d) show the spectral changes associated with the transformation of CpRh(CO)-octane to CpRh(CO)-H-R, with Fig. 8(c) showing the spectra calculated using TDDFT(B3LYP) and Fig.
8(d) those predicted using our Δ-learning model. The first difference (darkest blue line) shows excellent agreement between the TDDFT(B3LYP)-calculated and Δ-learning-predicted spectra. For spectral changes close to the CpRh(CO)-H-R structure (lighter blue lines), clear deviations begin to emerge. The TDDFT(B3LYP)-calculated difference shows two principal positive features at 3007 and 3009 eV, which both increase in intensity and shift to higher energies closer to the CpRh(CO)-H-R structure. The Δ-learning-predicted spectra also show two main features, which both shift to higher energies; however, their intensities are reversed, which is expected, as the difference spectrum associated with the CpRh(CO)-H-R structure shows the poorest agreement with experiment in Fig. 6.

IV. DISCUSSION AND CONCLUSION

In this article, we have introduced a Δ-learning strategy aimed at transforming spectral lineshapes from a low level of theory to a higher level of theory. This composite approach has the benefit of combining fast calculations with a simple correction scheme based upon our machine learning model, which can achieve predictions comparable to higher levels of theory without the additional computational expense. We have applied the developed models to time-resolved Rh L3-edge spectra tracking the C-H activation of octane by a cyclopentadienyl rhodium carbonyl complex 35 and demonstrated the effectiveness of the Δ-learning approach for translating TDDFT(BLYP) spectroscopic observables to those of the TDDFT(B3LYP) level.

This proof-of-concept Δ-learning work has demonstrated that one can reach the accuracy of a higher-level quantum chemistry core-hole spectrum at lower computational burden. Future work should focus on extending this, especially in terms of the size of the training set and the Δ, i.e., the difference in quality between the low- and high-level quantum chemistry methods used. For the latter, a more significant computational advantage could be obtained using the difference between a quasi-one-electron approach based upon Kohn-Sham orbitals 58 and the restricted open-shell configuration interaction (ROCIS) method, 59 the latter of which has been shown to be highly effective for simulating the L3-edge 60 without the requirement for the highly bespoke, system-specific inputs associated with restricted active space methods. 11 The larger expected size of the Δ in this case is likely to require a larger and more diverse training set, which will be the focus of future work.

ACKNOWLEDGMENTS

This research made use of the Rocket High Performance Computing service at Newcastle University and computational resources from the ARCHER2 UK National Computing Service, which were granted via HPC-CONEXS, the UK High-End Computing Consortium (EPSRC Grant No. EP/X035514/1). T.J.P. would like to thank the EPSRC for an Open Fellowship (No. EP/W008009/1) and the Leverhulme Trust (Project No. RPG-2020-268).

FIG. 2. Mean (solid black line) and standard deviation (±σ; gray shaded region) of the 1124 Rh L3 x-ray absorption spectra used in the training set calculated using TDDFT(BLYP) (a) and TDDFT(B3LYP) (b). (c) Mean (solid black line) and standard deviation (±σ; gray shaded region) of the Δ between the TDDFT(BLYP) and TDDFT(B3LYP) spectra. The dashed line represents zero intensity.

FIG. 3.
Schematic of the curriculum-learning-based training adopted in this work. For the latter, 100 spectrum-structure pairs are selected at random and used to train a DNN. Once completed, another 100 spectrum-structure pairs are added at random, with the previous model used as a guess for the subsequent training cycle. This is repeated until all the training data are included within the model.

FIG. 4. Relative performance of the DNN at the Rh L3-edge as a function of the number of training samples. (a) The model trained on the TDDFT(B3LYP) spectra and (b) the model trained on the Δ, i.e., μ(E)_H − μ(E)_L. Data points are averaged over 50 K-fold cross-validated evaluations; error bars indicate one standard deviation.
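To make the Δ-learning workflow concrete, the sketch below shows the core idea under stated assumptions: spectra are discretized on a common energy grid, a network is trained on the difference μ(E)_H − μ(E)_L, and the training set is grown in batches of 100 in the curriculum style described above. The descriptor, network size, and the synthetic arrays are illustrative placeholders, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative placeholders: N structures, each encoded as a feature
# vector, with low-level (BLYP) and high-level (B3LYP) spectra
# discretized on a common energy grid.
n_samples, n_features, n_grid = 1124, 60, 200
X = rng.normal(size=(n_samples, n_features))                    # structural descriptors
mu_low = rng.normal(size=(n_samples, n_grid))                   # TDDFT(BLYP) spectra
mu_high = mu_low + 0.1 * rng.normal(size=(n_samples, n_grid))   # TDDFT(B3LYP) spectra

# Delta-learning target: the difference between high- and low-level spectra.
delta = mu_high - mu_low

# Curriculum-style training: grow the training set in batches of 100,
# warm-starting each cycle from the previous model's weights.
model = MLPRegressor(hidden_layer_sizes=(256, 256), warm_start=True,
                     max_iter=500, random_state=0)
for n_train in range(100, n_samples + 1, 100):
    model.fit(X[:n_train], delta[:n_train])

# Prediction: correct a cheap low-level spectrum with the learned delta.
mu_predicted = mu_low + model.predict(X)
```

The point of the construction is only that the learned quantity is the Δ, so systematic low-level errors are absorbed by the correction while the cheap calculation supplies the bulk of the lineshape.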
2023-11-09T05:07:23.096Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "e9202eda5aaabab7f739cd112a2c2611188bdfb5", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e9202eda5aaabab7f739cd112a2c2611188bdfb5", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
15969370
pes2o/s2orc
v3-fos-license
Biomed. Papers 146(2), 73–76 (2002) The influence of apolipoprotein E isoforms on fasting and postprandial lipid levels in Olomouc hemodialysis patients with uremic dyslipidemia is presented in this article.

INTRODUCTION

Uremic dyslipidemia is an important risk factor for accelerated atherosclerosis, contributing to the many times higher cardiovascular mortality of hemodialysis (HD) patients (pts) in comparison to the nonuremic population 1. The influence of genetic determinants in this disorder has been widely studied, including apo E genetic variation and its relationship to cardiac and cerebrovascular disease. Owing to the different affinities of apo E isoforms for the apo B/E and apo E receptors, apo E polymorphism has an impact on lipid metabolism and lipid plasma levels 2,3.

MATERIALS AND METHODS

We performed apo E genotyping using PCR amplification and restriction-enzyme digestion (RFLP) in 87 uremic pts on chronic hemodialysis, 53 males and 34 females, of mean age 57.50 ± 14.10 years 4. We followed up the relative frequency of common alleles and apo E genotypes in HD pts, in a control group of healthy people, and in a random sample of the Czech population (MONIKA).

Then we investigated postprandial lipid metabolism in 23 dialyzed and 11 healthy men after a standard fat load per m2 of body surface. The fatty meal contained 51.2 g of total fat, 195 mg of cholesterol, 11.2 g of carbohydrates and 5.2 g of protein, corresponding to 531 kcal of energy. Over 10 hours, at the 4th, 5th, 6th, 8th and 10th hour after the fatty load, TG, TC, HDL-C and apo A1 were evaluated. Areas under the time-dependent concentration curves (AUC) were calculated for each parameter. The chi-square test and analysis of variance (ANOVA) were used for statistical analysis.

RESULTS

- The distribution of apo E genotypes and apo E allelic frequency among HD pts corresponded to the common Czech population (Table 1, Table 2).
- The fasting lipid profile of HD pts was characterized by numerous abnormalities in comparison with healthy controls (Table 3).
- Apo E 2/3 uremic subjects (n = 9) had a significantly higher fasting TG level compared with E 3/3 (n = 57) and E 3/4 (n = 11) patients (Table 4).
- The AUC-TG of E 2/3 uremic pts was significantly higher than that of E 3/4 pts (Fig. 4).

DISCUSSION

Apo E affects the levels of all lipoproteins, either directly or indirectly, by modulating their receptor-mediated clearance or lipolytic processing and the production of very low density hepatic lipoproteins 5. Apo E2 isoforms are considered to decrease remnant clearance because of their reduced affinity for the receptors, which results in the accumulation of remnant particles in plasma.

Clinical data show that not only homozygous apo E2/2 with type III HLP contributes to the development of early atherosclerosis, but apo E2/3 is a positive atherogenic risk factor too. It increases TG-rich lipoproteins and remnants and enhances macrophage cholesteryl ester synthesis when this is associated with hypertriglyceridemia 6. On the other hand, the postprandial response is very heterogeneous, influenced by multiple factors such as age, exercise, body weight, diet, fasting lipid levels and genetics. Studies comparing postprandial triglyceride responses across different apo E genotypes have been done, with conflicting results.
In our group of dialysis patients, ten-hour postprandial TG and HDL-C levels were highly pathological in all apo E3/3, E2/3 and E3/4 pts compared with healthy men. These changes in lipid parameters indicated a delayed degradation of the administered fatty load 8. The most significant changes were evident in the subgroup of apo E2/3 uremic pts, reflecting their higher fasting TG concentration.

This abnormal postprandial lipid metabolism is similar in character to that observed in pts with confirmed ischemic heart disease or carotid atherosclerosis 9,10 and is another manifestation of the increased atherogenic risk of HD pts.

CONCLUSION

The apo E 2/3 genotype predisposes to hypertriglyceridemia in hemodialysis patients, which is followed by impaired postprandial triglyceride tolerance. This could lead to an increased atherogenic potential of uremic dyslipidemia.

Fig. 4. Area under the time-dependent concentration curve for triglycerides in HD men, in relation to apo E genotypes.

Table 1. Relative frequency of common alleles of the apo E gene in dialysis patients, in the control group of healthy people and in the random sample of the Czech population (MONIKA).

Table 2. Prevalence of apo E genotypes in dialysis patients, in the control group of healthy people and in the random sample of the Czech population (MONIKA).

Table 3. Lipid parameters (mean ± SD) in dialysis patients in comparison with healthy controls.

Table 4. Plasma lipid values (mean ± SD, median) versus apo E genotype in dialysis patients.
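The AUC values reported above can be reproduced with simple trapezoidal integration over the sampling times. A minimal sketch follows; the triglyceride concentrations are invented placeholders, and the inclusion of the fasting baseline at 0 h is an assumption based on the protocol described above.

```python
import numpy as np

# Sampling times (hours): fasting baseline plus the 4th, 5th, 6th,
# 8th and 10th hour after the fat load, as in the protocol above.
t = np.array([0.0, 4.0, 5.0, 6.0, 8.0, 10.0])

# Hypothetical postprandial triglyceride concentrations (mmol/L)
# for one subject -- placeholders, not study data.
tg = np.array([2.1, 3.4, 3.8, 3.6, 3.0, 2.6])

# Area under the time-dependent concentration curve (mmol*h/L),
# computed with the trapezoidal rule.
auc_tg = np.trapz(tg, t)
print(f"AUC-TG = {auc_tg:.2f} mmol*h/L")
```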
2014-10-01T00:00:00.000Z
2002-12-01T00:00:00.000
{ "year": 2002, "sha1": "74be7a3432b5611d3fdd16eb31b4bbe8ff85a90d", "oa_license": "CCBY", "oa_url": "http://biomed.papers.upol.cz/doi/10.5507/bp.2002.015.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "74be7a3432b5611d3fdd16eb31b4bbe8ff85a90d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259361200
pes2o/s2orc
v3-fos-license
Hemodialysis as an Effective Treatment for Combined Amlodipine and Metformin Overdose: A Case Report and Literature Review

The combined toxicity of amlodipine and metformin is a rarely reported phenomenon in the literature. The management varies depending on the clinical status of the patient. We present a case that was managed successfully with the early initiation of hemodialysis.

Introduction

We present a 50-year-old patient with combined toxicity of amlodipine and metformin. We believe it is vital to report this case, as there is limited literature on the management of the combined toxicity of prescribed medications. The early recovery of these patients depends on the modalities used to manage them. We report the utilization of emergency hemodialysis and its impact on the outcome of a patient with combined toxicity of amlodipine and metformin.

Case Presentation

A 50-year-old African American male with a past medical history of alcohol use disorder, diabetes, hypertension, hyperlipidemia, obesity, and obstructive sleep apnea was transferred from an outside facility to our ICU for metformin and amlodipine overdose requiring medical management. He reported life stressors including family, his living situation, and a recent altercation with his long-term partner, after which he attempted suicide by ingesting 20 tablets of metformin 500 mg and 20 tablets of amlodipine 10 mg. He smokes cigarettes daily with a 34-pack-year history and endorses binge drinking of more than 14 drinks of gin per week. His last drink was one week before presentation to the hospital. The patient's daily home medications include amlodipine 10 mg daily, ergocalciferol 1250 mcg weekly, latanoprost 0.005% drops, metformin 1000 mg twice a day, rosuvastatin 40 mg daily, and telmisartan-hydrochlorothiazide 80 mg/25 mg daily. Prior to his transfer from the local medical center, he was given charcoal lavage, calcium gluconate, 5 L of normal saline, 10 units of insulin with dextrose 5%, and ondansetron.

The patient was alert and oriented on arrival at our ICU. He was tachycardic with a heart rate of 99 to 107 beats/minute, tachypneic at 25 breaths/minute, with a blood pressure of 112/54 mmHg, a mean arterial pressure of 69 mmHg, and an oxygen saturation of 96% on 3 L of oxygen via nasal cannula. Arterial blood gas showed a pH of 7.538, a partial pressure of carbon dioxide (PCO2) of 37.6, and a partial pressure of oxygen (PO2) of 62.8. The electrocardiogram (EKG) showed normal sinus rhythm with an anterolateral infarct of undetermined age (Figure 1).

FIGURE 1: Electrocardiogram on presentation. Normal sinus rhythm is observed, with an anterolateral infarct of undetermined age.

The chest X-ray showed pulmonary vascular congestion, bilateral pleural effusions, and an enlarged cardiac silhouette (Figure 2). Labs showed a serum bicarbonate of 12 mEq/L, an anion gap of 24, a lactic acid of 12.4 mmol/L, a troponin of 0.052 ng/mL, and a creatinine of 2.61 mg/dL (Table 1). Subsequently, the patient became hypotensive and vasopressors were initiated. Norepinephrine was titrated up to 35 mcg/kg/min, vasopressin to 0.3 units/min, and a sodium bicarbonate drip was initiated at 125 mL/hr. Twenty-four hours post-presentation, the patient developed chest pain and shortness of breath, and his oxygen saturation dropped to 80%, requiring noninvasive bilevel positive airway pressure ventilation (BiPAP) at 12/5 cmH2O with 50% inspired oxygen. A Quinton catheter was placed for urgent hemodialysis in the setting of worsening anion gap metabolic acidosis and pulmonary edema.
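As a quick consistency check on the reported ingestion and labs, the sketch below works through the total ingested doses and the standard anion gap formula. The sodium and chloride values are hypothetical placeholders (they are not reported in the case); only the bicarbonate, tablet counts, and tablet strengths come from the text above.

```python
# Ingested doses, from the history above: 20 tablets of each drug.
amlodipine_mg = 20 * 10      # 200 mg total
metformin_mg = 20 * 500      # 10,000 mg = 10 g total
print(f"Amlodipine: {amlodipine_mg} mg, metformin: {metformin_mg / 1000:.0f} g")

# Standard anion gap: AG = Na - (Cl + HCO3), all in mEq/L.
# Na and Cl below are hypothetical; HCO3 = 12 is from the reported labs.
na, cl, hco3 = 140, 104, 12
anion_gap = na - (cl + hco3)
print(f"Anion gap = {anion_gap} mEq/L")  # 24 with these placeholder values
```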
Cardiology was consulted in the context of chest pain and elevated troponin, and the patient was placed on the acute coronary syndrome protocol. The cardiothoracic surgery team was also alerted in case the patient deteriorated and required extracorporeal membrane oxygenation (ECMO).

After hemodialysis, follow-up labs showed significant improvement. Serum bicarbonate improved from 12 to 30 mEq/L, the anion gap was corrected from 28 to 16 mEq/L, and lactic acid fell from 12.4 to 2.7 mmol/L. However, the troponin level initially rose from 0.052 to 0.178 ng/mL before trending down to 0.135 ng/mL. Similarly, serum creatinine initially trended up from 2.61 to 3.24 mg/dL and then dropped to 2.32 mg/dL. Fingerstick glucose was in the range of 133-287, which was managed to a goal of 180 with a low-dose insulin sliding scale.

On the third day of the ICU stay, the patient's clinical status improved. His BiPAP and oxygen requirements decreased, and he was transitioned to a nasal cannula. Vasopressors were weaned off. A psychiatry evaluation was obtained, and its recommendations, which included peer recovery and education on substance use, were followed. On the fourth day of ICU care, the patient had normal hemodynamics. The metabolic profile significantly improved, and the chest X-ray showed improvement in the pulmonary edema (Figure 3).

FIGURE 3: Chest X-ray on the fourth day of ICU stay showing significant improvement

The IV maintenance fluid was held, and diuresis with intravenous furosemide was started. A repeat EKG (Figure 4) showed no changes from the previous EKG. Echocardiography reported a preserved ejection fraction (EF) of 55% to 60%, without any wall motion abnormalities.

FIGURE 4: Electrocardiogram on day four of hospitalization

On the fifth day of ICU care, the patient continued doing well and denied any shortness of breath, chest pain, abdominal pain, nausea, or vomiting. The nuclear stress test showed no evidence of ischemia. Creatinine continued to downtrend to 1.33 mg/dL. On the sixth day of hospitalization, his clinical status had significantly improved, with normalization of kidney function and resolution of the metabolic acidosis and cardiogenic shock. The psychiatry team determined the patient was safe for discharge with close follow-up with his psychiatrist, therapist, and primary care provider.

Discussion

Amlodipine is a dihydropyridine calcium channel blocker with the longest half-life of its class, at 30 to 50 hours. The FDA-approved indications include hypertension, chronic stable angina, vasospastic angina, and coronary artery disease (CAD) [1]. Its mechanism of action involves inhibition of voltage-gated L-type calcium channels, thereby lowering intracellular calcium, decreasing smooth muscle contraction, and increasing smooth muscle relaxation and vasodilation [2]. Side effects include refractory hypotension secondary to vasodilation, decreased chronotropy, tissue ischemia with subsequent lactic acidosis, and impaired pancreatic-islet insulin secretion [3].

Metformin is a biguanide medication used to treat type 2 diabetes mellitus and to manage prediabetes [4]. It reduces blood glucose levels by decreasing gluconeogenesis, decreasing intestinal absorption, and increasing insulin sensitivity. Although generally safe and well-tolerated, its side effects include diarrhea, nausea, vomiting, chest discomfort, headache, diaphoresis, weakness, rhinitis, hypoglycemia, and a boxed-warning risk of lactic acidosis leading to metabolic acidosis [5].
A literature search on PubMed and Google Scholar identified three reported cases of combined toxicity of calcium channel blockers with metformin. Besides conservative management, the essential interventions used in those cases were extracorporeal membrane oxygenation (ECMO) and continuous renal replacement therapy (CRRT) [6]. Another reported case used L-carnitine as the principal mode of management [7]. Of the three reported cases, one patient succumbed [8]. In our case, early management with emergency dialysis addressed the metabolic acidosis sooner, which decreased the vasopressor requirement and achieved early hemodynamic stability. The discussion will be limited to the reported cases in which the patients survived. In the previously reported cases, the ingested doses of amlodipine were 300 mg and 400 mg (Table 2). In our case, it was 200 mg. The ingested doses of metformin were 6.5 g and 20 g, while in our case it was 10 g (20 tablets of 500 mg). Besides vasopressors, the primary interventions used in case 1 were ECMO and CRRT; however, case 2 used L-carnitine as the modality to manage the patient.

TABLE 2: Details of the patients and studies: toxicity, labs, pressors and interventions, and status (including Jeong et al. [6], a 40-year-old male).

Patients with combined toxicity of amlodipine and metformin can be managed with urgent dialysis. Metformin leads to the production of lactic acid, which causes metabolic acidosis. Urgent dialysis helps remove lactate and correct the metabolic acidosis [9]. Metabolic acidosis impacts a patient's hemodynamics by lowering cardiac output and arterial blood pressure. Additionally, low pH also modulates vascular tone [10,11]. The relaxation of vascular tone by metabolic acidosis is mediated by nitric oxide (NO)/cyclic guanosine monophosphate (cGMP)-dependent and prostacyclin (prostaglandin I2 (PGI2))/cyclic adenosine monophosphate (cAMP)-dependent pathways, which also lead to hyperpolarization of the cell membrane [12,13]. Therefore, early correction of the metabolic acidosis decreased the vasopressor requirement in our patient. The metabolites of amlodipine remain in the body for a longer duration, which requires conservative management.

The significant secondary finding in our case was elevated troponin. The previously reported cases had normal troponin levels, or these were not reported. Amlodipine-mediated vasodilation explains the demand ischemia in our patient. Our patient received the acute coronary syndrome protocol; however, the echocardiogram and nuclear stress test did not show wall motion abnormalities or ischemic changes. Therefore, the elevated troponin was likely due to demand ischemia. Furthermore, we did not utilize high-dose insulin or lipid emulsion therapy. Emergency hemodialysis and conventional management improved the hemodynamics in our patient. The need for vasopressors was also reduced by the third day of admission. The patient needed diuresis to manage the pulmonary edema. He improved clinically and was ultimately discharged on the seventh day of his admission.

Conclusions

Introducing emergency dialysis can lead to significant improvements in mortality and morbidity in patients with combined toxicity of amlodipine and metformin. Moreover, non-ST-elevation myocardial infarction can also be seen in such patients, occurring as an outcome of amlodipine-mediated vasodilation. Therefore, there is a dire need for monitoring and managing cardiac complications as well.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-07-08T05:16:22.903Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "ed50b9cc21a51ca28ed7472400c16574dc0274f6", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/159690/20230606-9995-128jdlg.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed50b9cc21a51ca28ed7472400c16574dc0274f6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252026785
pes2o/s2orc
v3-fos-license
Improved Properties of Composite Edible Films Based on Chitosan by Using Cellulose Nanocrystals and Beta-Cyclodextrin

The aim of this study was to produce innovative edible films and coatings with various combinations of materials, in order to achieve the best possible resulting properties. More specifically, the effect of cellulose nanocrystals (CNC) or beta-cyclodextrin (CD) addition to chitosan (CH) films and the development of composite CH–CNC–CD films were investigated. According to the results, most properties of both CH–CNC and CH–CD edible films were improved. The viscosity of the solutions was decreased by up to 50%, while the surface tension was minimally changed even at high levels of CNC or CD addition. Furthermore, the oxygen and water vapor permeability of the CH–CNC and CH–CD edible films was decreased, whereas transparency and heterogeneity were increased. On the other hand, the study of the composite CH–CNC–CD films showed that CNC improved viscosity, thus supporting the coating procedure. Moreover, CNC led to more stable structures with enhanced mechanical properties. Finally, CD mostly contributed to the improvement of the optical properties (lighter color and increased transparency).

Introduction

Edible films and coatings are thin layers of edible materials that are formed directly on the surface of the food to enhance preservation, and they are consumed as part of the whole product. This is why their application is considered a physical and environmentally friendly method of food preservation [1-3]. Various materials are used for the preparation of edible films or coatings, which are classified into three basic categories: polysaccharides, proteins and lipids [4,5].

These materials are desired to possess specific properties, which can be classified into the following categories: barrier, mechanical, microstructural, optical, physicochemical and thermal properties. Of these, the barrier, mechanical and microstructural properties are considered more important and should therefore be examined before the use/application of each material.

Edible films and coatings can also act as carriers for additives (antimicrobial, antioxidant and antifungal), which may further extend the shelf life of the coated products by reducing or inhibiting microbial growth or oxidation on the food surfaces [6,7]. In particular, chitosan has attracted attention as a potential food preservative due to its antimicrobial activity against a wide range of fungi, yeasts and bacteria. For example, films containing chitosan can exhibit antimicrobial activity against Staphylococcus aureus [6,8].

Each category of edible materials can impart specific properties to the final formed film or coating. Thus, their choice should be made based on the desired characteristics for a certain food application. Polysaccharides, such as cellulose, starch, pectin and chitosan, are widely available materials which possess good mechanical properties but poor water barrier properties [9]. Protein materials, such as gelatin, casein and whey protein, provide effective barriers against O2 and CO2, but not against water. Proteins have satisfactory mechanical properties as well. Lipids, such as paraffin wax and beeswax, are excellent barriers against moisture migration, but they present some disadvantages such as fragility, lack of homogeneity and the presence of holes and cracks on the surface of the coating [4,10].
It is clear from the above that edible films or coatings consisting of only one component may present deficiencies in some of their properties. Therefore, the development of composite films is proposed, as the combination of two or more components, preferably from different categories, can significantly improve their individual properties [4,5,10-12]. For example, the low mechanical strength of lipids can be enhanced by the addition of water-soluble proteins or polysaccharides (hydrocolloids) [9]. More specifically, mixing gelatin with chitosan results in improved barrier properties [13]. The addition of a second component may also be selected to provide antimicrobial and/or antioxidant activity [14].

Based on the above, the current study investigates the use of chitosan (CH), beta-cyclodextrin (CD) and cellulose nanocrystals (CNC) for the development of efficient composite edible films and coatings. Chitosan has the ability to form edible films and coatings with advanced mechanical properties. Beta-cyclodextrin, on the other hand, can provide homogeneity, transparency and higher mechanical properties [6,7,15-17].

In recent years, the use of nanomaterials has also been investigated, in order to provide enhanced or even new properties to edible films and coatings [18-21]. Nanomaterials have specific physical, optical and chemical properties. In addition, they can be synthesized to improve the performance of conventional materials, due to their particle size and their large surface area with respect to their shape. More specifically, the addition of nanomaterials, such as nano-starch or cellulose nanocrystals, may modify several properties, such as flexibility, durability, thermal stability, barrier properties and mechanical properties [3,22-24]. Nanomaterials, due to their size, offer a new way of modifying gas transport in natural products, while increasing mechanical resistance, transparency, functionality, and antioxidant and antimicrobial activity. Nano-systems are more stable and biologically active, allowing the incorporation of hydrophobic and/or active substances without particularly affecting the final appearance or transparency [24-26]. In the food industry, nanomaterials can altogether provide: (i) sensory enhancement (taste/color improvement or texture modification), (ii) increased absorption and targeted supply of nutrients and bioactive compounds, (iii) stabilization of food, (iv) packaging and product innovation to increase shelf life, (v) food safety, and (vi) antimicrobial protection against foodborne pathogenic bacteria [27-30].

The current study first examines the use of cellulose nanocrystals or beta-cyclodextrin in composite edible films and coatings containing chitosan as their basic material. More specifically, the effect on the properties of the resulting CH/CNC and CH/CD films is examined. In addition, the basic characteristics of the films formed by the combination of all three materials (CH-CNC-CD) are examined. These three components have been investigated in the literature alone or along with various other materials [31-35]. For example, Xu et al. [34] and Ye et al. [35] used gelatin as their main film-forming material and incorporated CD or CNC as an additive, respectively.
The innovation of the current study lies in the investigation of the CH/CNC, CH/CD and CH-CNC-CD combinations. Chitosan was chosen as the basic material due to its ease of use, its wide availability, its film-forming ability, and its high biocompatibility and biodegradability. Moreover, according to the literature review, the triple combination of CH-CNC-CD materials has not yet been studied.

Additionally, most of the studies available so far focus on the effect of edible films and coatings on the properties of food products [36]. For example, Fang et al. [31] and Reyna et al. [37] investigated the effect of chitosan edible films on papaya and broccoli, respectively. However, the current study focuses on the properties of the films and coatings themselves. Before being applied to a food product, the optimal proportions of the components of edible films and coatings, as well as their resulting properties, should be examined, in order to enhance their functionality and characteristics according to the intended food application. For that reason, this study aims to fully investigate the properties of the final formed films and of the solutions from which the films are formed.

Materials

For the preparation of the edible films, the following materials were used: high molecular weight CH (deacetylated chitin, ≥75% deacetylated, molecular weight 310,000-375,000 Da, poly[D-glucosamine]) from Sigma-Aldrich (St. Louis, MO, USA), acetic acid (glacial 99-100% a.r.) from Chem-Lab NV (Zedelgem, Belgium), CNC from CelluForce (Montreal, QC, Canada) and CD from Acros Organics (Geel, Belgium). The specifications of the CNC and CD used in the current study are presented in the Supporting Material.

Preparation of Edible Films

CH/CNC and CH/CD Edible Films

Initially, the concentration of the primary film solutions was selected based on preliminary experiments. The films were prepared as follows: solutions of CH 1% w/v were produced by dispersing chitosan and acetic acid in distilled water (1 g chitosan/100 mL water and 1 mL acetic acid/100 mL water) and stirring with a magnetic stirrer for 2 h at 80 °C and subsequently overnight at room temperature. The CNC and CD solutions were prepared by dispersing cellulose nanocrystals and beta-cyclodextrin, respectively, in distilled water (both 1 g/100 mL water). Then, part of the CH 1% w/v solution was mixed with the CNC 1% w/v or CD 1% w/v solutions in proportions of 75/25, 50/50 and 25/75. The CH/CNC and CH/CD solutions were degassed using an Elmasonic S30H ultrasonic device (Elma Schmidbauer GmbH, Singen, Germany) (280 W/60 Hz) for 15 min at 30 °C. After that, 20 mL of the final solutions were poured into Petri dishes (9 cm in diameter) and allowed to dry in a vacuum oven at 50 °C for 24 h. Finally, the dried films were kept at 20% relative humidity and room temperature.

CH-CNC-CD Edible Films

Solutions of CH/CNC and CH/CD (both in a proportion of 50/50) were used. These were prepared as described in the previous paragraph. The CH/CNC and CH/CD solutions were mixed together using a magnetic stirrer, in proportions of 75/25, 50/50 and 25/75. Thus, three new solutions were produced, having the following proportions of CH-CNC-CD, respectively: 50-37.5-12.5, 50-25-25 and 50-12.5-37.5. Finally, they were degassed, poured into Petri dishes, allowed to dry and stored as described above.

Measurements
Density and pH

In order to determine the density (g/mL) of the solutions, the weight of 20 mL samples was measured (seven repetitions for each solution at 25 °C). The samples were taken using a laboratory pipette. Each weight was divided by 20 mL and the average density values were calculated.

The pH of the solutions was measured using a digital pH-meter 3310 (Willis Towers Watson, London, UK) at 25 °C. This measurement was repeated three times.

Surface Tension

The surface tension of each solution was measured with the Wilhelmy plate method, using a Sigma 700 digital force tensiometer (Attension, Biolin Scientific AB, Västra Frölunda, Sweden). Ten measurements were performed for each sample solution at 25 °C and the average values were calculated.

Viscosity and Rheological Parameters

The rheological parameters, the consistency index (k) and the flow behavior index (n), were analyzed by means of an RC1 rotational rheometer (Rheotec GmbH, Radeburg, Germany). The rheological curves (shear stress τ versus shear rate γ) were obtained after a stabilization time of 5 min at 25 °C. Shear stress was determined as a function of shear rate between 0 and 300 s−1, with the following procedure: 3 min to attain the maximum shear rate and 2 min at the maximum shear rate.

Finally, k and n were obtained by applying the following power-law equation to the rheological curves obtained through the rheometer's software:

τ = k·γⁿ

where τ is the shear stress, γ is the shear rate, k is the consistency index and n is the flow behavior index. The procedure was repeated three times for each case at 25 °C and the final values of k and n were calculated as averages of the individual values. Apparent viscosity was calculated at 256 s−1.

Thickness

The thickness of the films (mm) was determined using a stack of 7 films per sample. Measurements were made with a hand-held micrometer at five different points for each film.

Color

The color of the films was determined with a CR-200 colorimeter (Konica Minolta, UK). The L (luminosity), a (red-green) and b (yellow-blue) color parameters of the CIELAB scale were used for the calculations. A white plate was also used as a standard (L0, a0 and b0).

The following calculations were made [38]:

Chrome: C* = (a² + b²)^1/2

Whiteness index: WI = 100 − [(100 − L)² + a² + b²]^1/2

where L, a and b are the measured color parameters of the film.

Transparency

The transparency of the films was determined by measuring their absorbance with a U-2900 UV-Vis spectrophotometer (Hitachi, Tokyo, Japan) at 550 nm, through the following formula:

Transparency = Abs550/l

where Abs550 is the absorbance value at 550 nm and l is the film's thickness (mm).

Moisture

The moisture of the films was determined by measuring the weight loss of film pieces (2 cm × 2 cm) upon drying in an oven at 110 °C for 24 h (dry sample weight). Moisture (g H2O/100 g film) was calculated as follows [39]:

Moisture = [(initial weight − dry weight)/initial weight] × 100

Mechanical Properties

For the study of the mechanical properties, a TA-XT2i Texture Analyzer (Stable Micro Systems, Godalming, UK) was used, with a cylindrical probe having a diameter of 5 mm. The initial grip separation and the crosshead speed were set at 20 mm and 10 mm/s, respectively. Measurements were made on circular samples with a diameter of 5 cm each. The Texture Analysis software provides the curves of force (N) versus deformation (mm).
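As an illustration of how k and n can be extracted from the flow curves described in the rheology subsection above, the sketch below fits the power-law model τ = k·γⁿ by linear regression in log-log space. The shear-rate grid and stress values are invented placeholders standing in for rheometer output.

```python
import numpy as np

# Hypothetical rheometer output: shear rate (1/s) and shear stress (Pa).
shear_rate = np.linspace(10.0, 300.0, 30)
shear_stress = 0.9 * shear_rate**0.65          # synthetic power-law data

# tau = k * gamma**n  =>  log(tau) = log(k) + n * log(gamma),
# so a straight-line fit in log-log space yields n (slope) and k (intercept).
n, log_k = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
k = np.exp(log_k)

# Apparent viscosity at 256 1/s, as reported in the study: eta = tau / gamma.
eta_apparent = k * 256.0**(n - 1.0)
print(f"k = {k:.3f} Pa*s^n, n = {n:.3f}, eta(256 1/s) = {eta_apparent * 1000:.1f} mPa*s")
```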
Based on the ASTM D882-10 [40] standard, the following parameters can be obtained from the force versus deformation curves:

- maximum breaking force (N);
- breaking factor (maximum breaking force divided by film thickness, N·mm−1);
- deformation at break (extension at the moment of rupture, mm);
- percent elongation at break (deformation divided by the initial probe length and multiplied by 100%);
- elastic modulus (slope of the force-deformation curve, N·mm−1).

Breaking stress (MPa) was calculated by dividing the maximum force by the film cross-section (thickness × width).

Oxygen Permeability

The oxygen permeability was measured as follows: an edible film is sealed between two specially designed metallic cups, each of which has a diameter of 6 cm and a depth of 3 cm. Both cups have two channels. Oxygen enters the lower cup from the down-left entrance and exits from the down-right channel of the same cup. A stream of nitrogen enters the upper cup from the up-right entrance and comes out from the up-left exit of the same cup. The nitrogen acts as a carrier that transfers the oxygen permeating through the film (from one side to the other) into a wet analysis system. The design of this system is mainly based on iodimetry, according to the ASTM D3985-05 [41] standard. It consists of a conical flask containing an aqueous manganese(II) sulphate and alkaline iodide solution [42]. The gas mixture (N2 and permeated O2) passes through the wet system for a specific time period and, in the presence of oxygen, manganese(II) hydroxide is rapidly and quantitatively converted to manganese(III) hydroxide by the following reaction:

4 Mn(OH)2 + O2 + 2 H2O → 4 Mn(OH)3

The brown precipitate formed is rapidly dissolved, causing the oxidation of the iodide ions present to iodine:

2 Mn(OH)3 + 2 I− + 6 H+ → 2 Mn2+ + I2 + 6 H2O

The liberated iodine is then titrated with a standard thiosulphate solution:

2 S2O3 2− + I2 → S4O6 2− + 2 I−

Finally, the oxygen permeability (OP) of the film is calculated by the equation:

OP = (m·d)/(A·t·ΔP)

where m is the mass of O2 permeated through the film of thickness d and area A over the measured time interval t, and ΔP is the difference in the O2 pressure between the two sides of the film.

Water Vapor Permeability

The method of Bertuzzi et al. [43] was used to determine the water vapor permeability. The films were sealed onto glass cups containing 10 mL of distilled water. Each glass cup was placed in a desiccator and kept at 40 °C and 75% relative humidity, using a saturated sodium chloride solution. The transfer of water vapor through the films was determined by measuring the cups' weight periodically for 24 h. The weight changes of the glass cups versus time were calculated and plotted. Linear regression was used to calculate the slope of the fitted straight line, representing the Δm/Δt ratio.

The water vapor permeability (WVP) was calculated by the following equation:

WVP = (Δm·l)/(A·Δt·ΔP)

where Δm is the amount of water vapor transferred through a film of area A and thickness l during a finite time Δt, and ΔP is the vapor pressure difference across the film.

Fourier-Transform Infrared Spectroscopy (FTIR) Analysis

FTIR measurements were performed to evaluate the structural interactions within the films. The spectrum of each film was recorded using an ATR-4200 FTIR instrument (Jasco, Easton, MD, USA) over a wavenumber range of 700-4000 cm−1 with a resolution of 1 cm−1.
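The sketch below works through the iodometric arithmetic implied by the titration scheme described above: each mole of O2 ultimately consumes four moles of thiosulfate (O2 → 4 Mn(OH)3 → 2 I2 → 4 S2O3²⁻), so the permeated oxygen mass follows from the titrant volume. All numerical inputs (titrant volume and concentration, film dimensions, pressure difference) are hypothetical placeholders.

```python
import math

# Hypothetical titration inputs.
v_thio = 1.8e-3           # L of thiosulfate consumed
c_thio = 0.01             # mol/L thiosulfate
t = 24 * 3600.0           # s, collection time
d = 0.04e-3               # m, film thickness (a typical value from the study)
area = math.pi * 0.03**2  # m^2, exposed film area in a 6 cm diameter cup
dP = 21000.0              # Pa, assumed O2 partial-pressure difference

# Stoichiometry: 1 mol O2 -> 4 mol Mn(OH)3 -> 2 mol I2 -> 4 mol S2O3^2-.
n_o2 = v_thio * c_thio / 4.0      # mol O2 permeated
m_o2 = n_o2 * 32.0e-3             # kg O2 (32 g/mol)

op = m_o2 * d / (area * t * dP)   # kg*m / (m^2 * s * Pa)
print(f"OP = {op:.3e} kg*m/(m^2*s*Pa)")
```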
Scanning Electron Microscopy (SEM) Analysis

The morphology of the surface and the cross-section of the films was examined using scanning electron microscopy (SEM) (Edwards Sputter Coater, Crawley, UK) at an accelerating electron energy of 25 kV. Prior to scanning, the films were coated with a thin layer of gold.

Statistical Analysis

The experiments for the preparation of the edible films were carried out twice, as was the measurement of all their properties (unless otherwise indicated), and the results are expressed as mean ± standard deviation (SD).

The statistical processing of the results was carried out with the Statistica 13.0 software (StatSoft, Inc., Tulsa, OK, USA). The significance of each experimental factor (the addition of CNC or CD to CH, the proportion of CH/CNC or CH/CD, and the proportion of CH-CNC-CD) was assessed through ANOVA variance analysis. Differences were considered significant at the p < 0.05 level. In such cases, Duncan's test was further applied.

Results and Discussion

In the current study, the effect of CNC or CD addition to CH edible films and coatings, in proportions of 75/25, 50/50 and 25/75 (CH/CNC or CH/CD), was examined. Their combined effect was subsequently examined in various CH-CNC-CD mixtures, in order to achieve the best possible results for specific properties of the composite films. In each case, the properties of the primary solutions (from which the edible films were formed), as well as those of the final films, were studied. The parameters examined are analyzed extensively in the following paragraphs.

CH/CNC and CH/CD Edible Films

The properties of the primary solutions and of the final formed edible films for CH and for the CH/CNC and CH/CD mixtures are presented and evaluated in the following paragraphs.

Thickness and Physicochemical Properties

Both the formation of edible films and coatings and their application to foods are affected by physicochemical properties [44].

The effect of CNC or CD incorporation on the thickness and physicochemical properties of the CH samples is reported in Tables 1 and 2. The physicochemical properties measured were density (ρ), viscosity (η), consistency index (k), flow behavior index (n), pH, surface tension and moisture. Thickness and moisture were studied in the final formed edible films, while the remaining properties were studied in the primary solutions. The results show that the standard CH solution exhibited a high surface tension (52.017 mN/m, Table 1), which is desirable for a coating, but, on the other hand, a significantly high viscosity (167.34 mPa·s, Table 1), which may hinder its handling. In general, the combination of high surface tension and low viscosity is considered ideal for a coating [45]. The high viscosity of the CH sample explains the high values of the other two rheological parameters (k and n). It is therefore necessary to reduce the viscosity of the solutions in order to improve the coating process.

The addition of CNC or CD to the CH solutions changed their physicochemical properties and, especially, their rheological parameters (Table 1). In particular, both reduced the consistency of the CH solutions and led to reduced viscosity values (from 167.34 mPa·s to 90.67-54.83 mPa·s), which facilitates handling and processing. This can be easily explained, as CNC and CD form solutions with lower viscosity than CH. Furthermore, the surface tension presented only small changes and remained at high values, which is beneficial for the coating procedure [45].
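A minimal sketch of the statistical workflow described above follows, using one-way ANOVA at p < 0.05 followed by a post hoc comparison. The replicate measurements are invented placeholders, and since Duncan's test has no implementation in SciPy, Tukey's HSD is used here as a stand-in for the pairwise step.

```python
import numpy as np
from scipy import stats

# Hypothetical viscosity replicates (mPa*s) for three formulations.
groups = {
    "CH":           np.array([167.3, 165.9, 168.1]),
    "CH/CNC 50/50": np.array([90.7, 92.1, 89.5]),
    "CH/CD 50/50":  np.array([54.8, 56.0, 53.9]),
}

# One-way ANOVA on the experimental factor (formulation).
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")

# If significant at p < 0.05, run a post hoc test. Duncan's test is not
# available in SciPy, so Tukey's HSD stands in for the pairwise comparisons.
if p_value < 0.05:
    print(stats.tukey_hsd(*groups.values()))
```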
The final formed edible films with only CH had a relatively high moisture content (11.0875 g H2O/100 g film, Table 2), probably because chitosan is a hydrophilic substance and holds a considerable amount of water at its three predominant absorption sites: the hydroxyl group, the amino group and the polymer chain end [14]. High moisture can favor the growth of microorganisms, acting negatively on the antimicrobial activity of the edible films and coatings and, consequently, on the shelf life of the food product to which they are applied [46]. On the other hand, the thickness of the CH films was relatively low (0.04 mm, Table 2), which is a positive attribute. It should be noted that edible films with a thickness of less than 0.25 mm are considered thin [47].

In contrast to the addition of CNC to CH, which maintained the moisture content of the final formed edible films at the same levels, the addition of CD to CH reduced the films' moisture. This can be explained by the final structure of the CH/CD films. More specifically, they consist of hollow truncated-cone structures with an external hydrophilic character and an internal hydrophobic character, which means that the interactions between CD and CH include hydrophobic forces [48]. When compared to the studies of Xu et al. [34] and Ye et al. [35], where the basic film-forming material was gelatin (another widely used basic material), lower moisture values were observed in the present study, which is obviously considered beneficial. Finally, the thickness of the final formed edible films was not significantly changed by the addition of either CNC or CD to the CH, regardless of the level of addition, with all films presenting a thickness between 0.03 mm and 0.04 mm.

The above results show that both CNC and CD provide composite films with improved physicochemical properties. With the sole exception of moisture, CD proved slightly more advantageous than CNC.

Barrier Properties

Barrier properties are extensively studied in edible films, as they are involved in reactions that may result in food deterioration. OP and WVP were measured for this purpose. Both are affected by the materials used, the preparation method, the type, level and dispersion quality of additives, and the voids, cracks and chain order of the polymers [44,49].

Oxygen and water vapor can be transferred from the indoor/outdoor environment through the polymer, resulting in appreciable changes in the product's quality and shelf life. The target is to achieve low values of both OP and WVP, because this leads to lower mass transfer between the food product and the environment, resulting in lower weight loss, oxidation and microbial contamination.

Water vapor barrier properties can be quantified by the WVP, which indicates the amount of permeating water per unit area and time (kg/m·s·Pa) [8,44]. Similarly, the oxygen barrier properties can be quantified by the OP, which indicates the amount of permeating oxygen per unit area and time (kg/m·s·Pa) [50,51].

The effect of CNC or CD on the barrier properties of the final formed edible films is shown in Table 2.
The final formed edible films with only CH presented low OP values and, despite their hydrophilic nature, exhibited low WVP values as well (Table 2). Nevertheless, the addition of CNC or CD led to a further significant reduction (p < 0.05) in these values, which in some cases exceeded 50%. These results are in agreement with related studies claiming that the addition of nanomaterials to edible films improves their barrier properties [24,52]. This is attributed to the high crystallinity of CNC [53]. Oxygen and water vapor diffuse more easily through the amorphous areas of the polymer matrix, so the increase of the crystalline region, formed by a network of hydrogen bonds, results in a stable polymer, which improves the barrier behavior of the nanocomposite edible films [54]. In the case of the CH/CD films, the effect on the barrier properties is due to the hydrophobic nature of these materials, which provides a higher moisture barrier and water resistance, and to the forces between CD and CH, which enhance the stability of the composite films [6].

Finally, when comparing the results of CNC and CD addition, CD led to superior barrier properties, which makes it the better choice between the two materials in question.

Optical Properties

Optical properties are crucial features that affect the suitability, appearance and marketability of edible films and coatings for various applications. They include color and transparency and can be easily detected by human vision. They characterize the surface and affect certain aspects of food quality [43,55]. Therefore, it is desirable to produce films with a lighter color and increased transparency. For this reason, the difference of color from a white plate (ΔE), the whiteness index (WI), the chrome (C*), the yellowness index (YI) and the transparency were measured in all film samples.

The effect of CNC or CD on the optical properties of the final formed edible films is shown in Table 3. The ΔE values of the films indicate differences that can be perceived by the naked eye, as ΔE > 1 [56]. In particular, the final formed edible films with only CH presented the highest ΔE value, which was expected, as these films had a light-yellow color. For the same reason, these films presented the lowest WI value. The incorporation of CNC and CD caused significant changes (p < 0.05) in most of the optical properties, leading to the improvement of the final formed films (Table 3). Both CNC and CD reduced the color difference between the final formed edible film and the white plate (ΔE). This is indicated by the increased WI and decreased YI values, as well as by the increased transparency, which is desired for edible films and coatings. In general, the transparency of the films increased with the addition of both CNC and CD. The C* value, which is indicative of the color intensity, also remained low in all cases [38].

The above experiments show that both materials (CNC and CD) contributed to the improvement of the optical properties, with comparable results. Nevertheless, the CH/CD samples showed slightly lower C* values, giving a small advantage to choosing CD over CNC.
Mechanical Properties

Mechanical properties are very important, as they provide a direct indication of the strength and cohesion of the films or coatings. They are related to the structural coherence and the mechanical resistance of the food product against destruction during its transport, storage or preservation. Commonly, the mechanical resistance of films is studied in terms of (i) the breaking stress (σ), which gives the maximum traction force per film cross-section required to break the film, and (ii) the elongation at break (ε), which gives the degree to which a film can be stretched before it breaks [55,57,58].

The mechanical properties of the films measured in this study were the maximum breaking force (F), the breaking factor (puncture strength), the deformation at break (D), the percent elongation at break (ε), the elastic modulus and the breaking stress (σ). Mechanical properties concern not only the hardness of the films but also their stability and homogeneity. The target was to produce edible films with the ideal combination of these characteristics.

Table 4 shows the effect of CNC and CD addition on the mechanical properties of the formed edible films. The results show that the films with only CH were flexible and presented high mechanical strength. In fact, they presented high values for all the properties examined (Table 4). This is in agreement with similar studies, according to which chitosan is a material that creates edible films and coatings with good mechanical properties [6,59]. This is also confirmed by comparison with other popular film-forming materials, such as gelatin, which exhibit significantly lower values for their mechanical properties [34,35]. The results show that the addition of CNC or CD to the CH samples slightly degraded the mechanical properties of the composite films. This degradation, however, was not statistically significant. An exception was the sample with CH/CD (50/50), which presented similar or higher values for its mechanical properties compared to the control sample. This is attributed to the equal ratio of the two components (CH and CD), which allows them to be more evenly distributed on the surface of the film and results in a structure that facilitates the uniform distribution of the applied stress across the film and reduces stress-concentration areas [60].

Additionally, the results indicated that the addition of CD had a significant influence (p < 0.05) on the elastic modulus (Table 4). This is associated with the interactions between CH and CD and the final structure of the formed edible films. The comparison of the two materials in question shows that in some cases the CNC-containing films presented higher mechanical strength than the CD-containing films. This is probably due to the ability of the nanocrystals to facilitate load transfer and stress distribution, thus increasing the stability of the edible films [61].

Microstructure Properties

Understanding the microstructure of a film or coating is very useful, as it determines its mechanical, physicochemical and barrier properties and, furthermore, specifies its application. The structure of edible films can be studied by: (i) scanning electron microscopy (SEM), to investigate the structural changes of the films and obtain their surface and cross-sectional topography, and (ii) Fourier-transform infrared spectroscopy (FTIR) analysis, to examine the interactions between the components of the films [45,55].
FTIR Analysis

The types of bonds between (i) CH and CNC and (ii) CH and CD were studied with FTIR measurements on the final formed edible films. Figure 1 depicts the FTIR spectra of the different final formed edible films.

The FTIR spectrum of the pure CH film shows a broad absorption peak, centered at about 3615 cm−1, which is related to the stretching vibration of O−H groups and to intermolecular and intramolecular hydrogen bonds. Two absorption peaks appeared at 2945 and 2860 cm−1, which are attributed to the symmetric and asymmetric stretching vibrations of the −CH2 group, respectively. The absorption peaks at 1462, 1547 and 1653 cm−1 are attributed to amide III (HN-CO), amide II (NH) and amide I (−C=O), respectively. The absorption peak at 1375 cm−1 is due to the bending vibrations of O−H. Finally, the absorption peak at 1032 cm−1 is attributed to the C-O-C stretching vibration [61].

The results in Figure 1 indicate that no chemical reactions occurred between CH and CNC or between CH and CD during the production of the composite films. There were only peaks around the absorption peak of O−H (centered at 3615 cm−1), indicating the formation of intermolecular hydrogen bonds between the components. Moreover, a significant increase in the intensity of the absorption peak at 1032 cm−1 occurred upon the addition of both CNC and CD to the CH samples.

SEM Analysis

The SEM images of the CH, CH/CNC and CH/CD final formed edible films depict the distribution of the composing materials on the surface of the films (Figure 2).
The surface of the CH/CNC films was rougher compared to those with only CH, probably due to the formation of a polyelectrolyte-macroion complex (PMC) between CNC and CH [61,62]. The 75/25 and 50/50 proportions of the CH/CNC films showed lower heterogeneity and a denser structure compared to the 25/75 proportion, with fewer CNC agglomerates and/or PMC crystals on the film surface, indicating a better dispersion of the CNC within the CH matrix at the lower CNC concentrations [61,62]. Additionally, the surface of the 25/75 CH/CNC films contained more crystals, apparently due to the increased CNC and/or PMC concentration. On the other hand, the addition of CD to the CH samples created a heterogeneous structure in which crystals of CD were entrapped in the continuous polymer network. It is evident that the CD crystals became more pronounced as the amount of CD increased.

Summary of Results for CH/CNC and CH/CD Edible Films

The overall evaluation of the above results shows that the addition of CNC, as well as of CD, had an enhancing effect on the CH films: both decreased the viscosity, leading to better coating applicability. Moreover, both improved the barrier properties of the final formed edible films. Furthermore, the addition of CNC led to films with a more stable structure and higher mechanical strength, while CD improved the barrier and optical properties by increasing the transparency and reducing the color index. Finally, the 50/50 proportions of both the CH/CNC and CH/CD films provided the best overall results. All of the above findings supported the idea of combining the two materials (CNC and CD) in a ternary mixture with CH, in order to examine whether each material retains its beneficial properties in the final mixture. The results of this investigation are analyzed below.

CH-CNC-CD Mixture Characteristics

In order to study the combined effect of CNC and CD, samples of CH/CNC and CH/CD, both in a proportion of 50/50, were mixed at three levels (75/25, 50/50 and 25/75). Therefore, the final proportions of the CH-CNC-CD samples were 50-37.5-12.5, 50-25-25 and 50-12.5-37.5, respectively. The following properties were studied: viscosity, surface tension, thickness, oxygen permeability, water vapor permeability, color difference, elastic modulus and breaking stress (σ). Viscosity and surface tension were evaluated in the primary solutions and the rest in the final formed edible films.
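The ternary proportions quoted above follow directly from mixing the two 50/50 binary solutions; a quick sketch of the arithmetic, with no assumptions beyond the mixing ratios stated in the text:

```python
# Each binary solution is 50% CH plus 50% of its second component.
# Mixing CH/CNC with CH/CD at ratio r keeps CH at 50% and splits the
# remainder between CNC and CD in proportion to r.
for r_cnc_side in (0.75, 0.50, 0.25):
    ch = 0.5 * r_cnc_side + 0.5 * (1 - r_cnc_side)   # always 0.5
    cnc = 0.5 * r_cnc_side
    cd = 0.5 * (1 - r_cnc_side)
    print(f"CH-CNC-CD = {ch*100:.0f}-{cnc*100:.1f}-{cd*100:.1f}")
# -> 50-37.5-12.5, 50-25.0-25.0, 50-12.5-37.5
```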
It was observed that most properties (viscosity, oxygen permeability, water vapor permeability, color difference, elastic modulus and breaking stress) underwent statistically significant changes (p < 0.05) in the composite films. Samples with CH-CNC-CD in the proportion of 50-37.5-12.5 displayed the best combination of viscosity and surface tension for coating (low viscosity and high surface tension, Table 5) and better mechanical properties. As far as the barrier properties (Table 5) are concerned, samples with CH-CNC-CD in the proportions of 50-25-25 and 50-12.5-37.5 showed better results compared to samples in the proportion of 50-37.5-12.5. As expected, the color difference (Table 6) decreased as the CD proportion increased, due to the higher transparency of the CH/CD solution. Therefore, samples with CH-CNC-CD in the proportion of 50-12.5-37.5 excelled in their optical properties. Correspondingly, films with increased CNC proportions presented higher mechanical properties (Table 6), as nanocrystal compounds create more stable structures. Apparently, there is no single ideal ratio of CH-CNC-CD that achieves the best results for all properties. Thus, the choice of the most suitable combination depends on the priorities set in each case. More specifically, the 50-25-25 mixture is considered optimal when it comes to barrier properties. The 50-37.5-12.5 mixture provided enhanced coating characteristics and high mechanical properties. Finally, regarding optical properties, the 50-12.5-37.5 mixture offers the most appropriate solution.

Figure 2. SEM images of the surface of the different CH, CH/CNC and CH/CD edible films.

Table 1. Physicochemical properties of CH, CH/CNC and CH/CD primary solutions.

Table 2. Thickness, moisture and barrier properties of CH, CH/CNC and CH/CD final formed edible films.

Table 3. Optical properties of CH, CH/CNC and CH/CD final formed edible films.

Table 4. Mechanical properties of CH, CH/CNC and CH/CD final formed edible films.

Note for Tables 1-4: Values are given as mean ± standard deviation. Different letters in the same column indicate significant differences (p < 0.05) when analyzed by Duncan's test. Small letters indicate differences based on the films' materials, while capital letters show differences based on their proportions.
Note for Table 5: Values are given as mean ± standard deviation. Different letters in the same column indicate significant differences (p < 0.05) when analyzed by Duncan's test. Different superscripts indicate differences based on the proportions of the materials.

Table 6. Properties of CH-CNC-CD final formed edible films. Values are given as mean ± standard deviation. Different letters in the same column indicate significant differences (p < 0.05) when analyzed by Duncan's test. Different superscripts indicate differences based on the proportions of the materials.
2022-09-03T15:10:31.996Z
2022-08-31T00:00:00.000
{ "year": 2022, "sha1": "2ecadec4eb8d4df5bf4270f720f064aec9fb5aed", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/17/8729/pdf?version=1662001576", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7d723ef6a73fd5103892968424f9aa8312c679ba", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
155280501
pes2o/s2orc
v3-fos-license
An Outranking Multicriteria Method for Nominal Classification Problems with Minimum Performance Profiles In recent years, nominal classification problems have gained importance, especially in the context of strategic management of organizations. In this sense, this paper presents a novel multicriteria nominal classification method, derived from the concepts of PROMETHEE and designed for problems characterized by minimum performance profiles (MPPs) for the classes. The main advantages of this proposal are criterion and alternative flexibility for classes; robustness, because it uses the concepts of a well-known method (PROMETHEE); and usefulness, because many real situations are characterized by MPPs for the classes. Moreover, a real-world example is presented: a retailer's assignment in a bank, showing the applicability of the method. The proposal of a new multicriteria nominal classification method emerges from a need to devise a more flexible and realistic procedure for characterizing classes, because the feature of criterion and alternative flexibility for classes has not been addressed in any extant multicriteria nominal classification procedure. The present proposal thereby endeavors to address this deficit in the multicriteria field. Introduction Multicriteria decision problems represent situations in which the decision maker (DM) confronts at least two alternatives, and the decision aims to achieve multiple objectives that are, most of the time, conflicting [1]. In these problems, the DM may pose the problem by choosing, ranking, classifying, or describing the alternatives. These modes of framing are referred to as problematics [2]. Classification allows a DM to assign alternatives to predefined classes, a process known as supervised assignment, or to nonpredefined classes, which is known as unsupervised classification and typically referred to as clustering. In both cases, according to [3], this problematic has compelling implications in numerous areas related to practical or scientific issues, such as the following fields: inventory classification [4][5][6]; supplier classification [7]; risk analysis in pipelines [8]; and cooperation classification [9]. For more detailed information on clustering approaches that permit allocation of alternatives into nonpredefined classes, see [10][11][12][13][14][15]. In the context of a supervised assignment, the predefined classes can either be ordered or not ordered. Sorting applies to cases involving ordered classes, and classification applies to problems involving nonordered classes, also known as nominal classification problems [3]. According to [17], in sorting problems, classes are either represented by the lower and upper bounds of a limiting profile (as in the case of ELECTRE TRI) or by a central profile, as in [18]. The method proposed herein enables the allocation of alternatives into predefined classes. The proposal of a new multicriteria nominal classification method based on MPPs (minimum performance profiles) emerges from a need to devise a more flexible and realistic procedure for characterizing classes, using concepts already associated with multicriteria methods. It is, however, worth noting that the method is easily adapted to other types of problems, with classes characterized by maximum performance profiles, by central profiles, or by alternatives representing the typical element of a class together with a proximity index.
As such, this proposal's main advantages are as follows: criterion and alternative flexibility for classes; robustness, conceptualized in terms of a well-known method (i.e., PROMETHEE); and usefulness, as many real situations are characterized by MPPs for the classes. The paper is structured as follows. The next section (Theoretical Contributions) highlights the importance of methods devoted to nominal classification problems by outlining several potential applications and presents the gap in the literature which motivated the development of the method proposed herein. The section Materials and Methods comprises two subsections: the first subsection is devoted to describing nominal classification problems and the aim of the proposed method; the second subsection (Proposed Method: Features and Definitions) first presents the proposed method in detail, along with a summary of PROMETHEE concepts, assumptions, and notations, and goes on to describe the conditions and features used in the proposed nominal classification method. The section Application presents an illustrative example and a comparison among several nominal classification methods, followed by a robustness analysis of the proposed method. The following section provides a discussion of the results. Finally, the section Conclusions presents some conclusions and final remarks. Theoretical Contributions Although [19] assert that, in recent years, nominal classification problems have grown more important, mainly in the context of the management of organizations and institutions, the same authors also acknowledge that this has not yielded a correspondingly vast literature on multiple criteria nominal classification. Indeed, due to its competitiveness, modern society is on a constant quest for patterns or homogeneity aimed at more effective implementation of its policies and strategies. Five potential applications of multicriteria nominal classification problems are described in [19]. One such application is the problem of identifying or determining the most accurate disease class(es) for a given patient, based on his/her symptoms. Thus, patients assigned to the same class(es) of disease may be subject to identical medical procedure(s). Alternatively, the process of recruiting soldiers could also be handled as a nominal classification problem, as each candidate is assessed according to multiple individual features (i.e., physical fitness, intelligence, motivation, teamwork skills, and mental faculties) and subsequently assigned to one of several special core skill task units, where they will undertake special training courses. Another potential application relies on the fact that alerting people to information about public health events and risks, via social media, should be pursued differently, according to the specific type of user targeted. Whenever possible, users are characterized in terms of various features, such as age, health condition, frequency of travel, and degree of dependence on social networks. Users can then be assigned to one of several social groups, like "younger," "middle-aged," or "elderly." The fourth potential application concerns the problem of assigning responsibilities to risk owners (i.e., a person or entity responsible for managing an assigned risk). This is normally performed in risk management.
Finally, the fifth potential application involves the task of determining the type of instrument(s) for issuing environmental policy best suited to manage each environmental issue in a way that achieves desired outcomes strategically, effectively, and efficiently. This is especially important because policies play a key role in addressing complex environmental and health problems and, consequently, in improving the state of the environment. In the multicriteria field, many approaches have been proposed to address the sorting problematic. The ELECTRE TRI method [20,21], based on limiting or boundary profiles, is "the most popular" according to [16] and "the most used" according to [3] among ordinal classification methods. Adaptations of this method are exemplified by many works, including ELECTRE TRI-C, based on a characteristic or central reference profile [22]; ELECTRE TRI-NC, where each class is defined by several central reference actions [23]; and ELECTRE-SORT [24], where classes are defined by central limiting profiles that can also be incomparable. It is possible to cite additional methods, along with the ELECTRE TRI, that deal with ordinal classification: PROMSORT [25]; AHP-Sort [26]; THESEUS [27]; TRICHOM [28]; N-TOMIC [29]; FlowSort [17]; a pairwise comparison-based method [18]; ORCLASS [30]; and a hybrid method based on the AHP method, a veto system, and the K-means algorithm [31]. However, there are substantially fewer methods developed to address nominal classification problems than methods, proposed in the last few decades, intended to aid DMs in choosing, ranking, and even sorting problems. Most current methods designed to handle nominal classification problems are procedures based on reference actions, also called central profiles. Indeed, [32] argued that such problems usually require determination of whether an alternative a is close or similar to alternative b, or to an alternative representing a typical element of a class (also known as a prototype), and [33] expressed preferences for criteria in terms of weights reflecting the importance of the criteria relative to all classes. As such, the latter mode does not rely on a reference profile, as the weights define the classes. Moreover, [34] treated a problem defining nonordered classes by the least typical representative of each, referred to as the entrance threshold, and [35] defined each class by a given number of features, conditions, or constraints. Problems characterized by MPPs have drawn the attention of multiple researchers. For instance, [36] employed a nominal classification method aimed at enabling a construction company to select managers for different roles (i.e., the classes), according to different competencies and MPPs for classes; [37] applied the NeXClass nominal classification method to the project of assigning military students to one of multiple classes, characterized by MPPs consistent with predefined criteria; and [34,38,39] presented a real-world application of a classification method, using MPPs, to a problem in a banking environment. Indeed, according to a literature review on classification methods, the feature of criterion and alternative flexibility for classes has not been addressed in any extant multicriteria nominal classification procedure, except in the method proposed by [33].
Although the proposal of [33] evaluates the alternatives according to some criteria, the classification method relies on a binary linear programming approach, akin to a portfolio problem maximizing a valued objective function. The present proposal thereby endeavors to address this deficit in the multicriteria field. The feature of criterion and alternative flexibility for classes will be fully discussed and described in the second subsection of the next section. According to Figure 1 (profiles in an ordinal classification problem [16]), an alternative must meet the minimum performance profile for each criterion $g_j$, defined for a specific class $C_k$, to be able to belong to this class. The problem presented in Figure 1 is a sorting problem, as, when comparing two classes $C_k$ and $C_{k-1}$ over all criteria, the MPP required by $C_k$ is always greater than that required by $C_{k-1}$. Nominal Classification The main feature distinguishing nominal classification from sorting problems is that, in the former, classes are nonordered with regard to the criteria. Figure 2 illustrates this idea for a nominal classification problem characterized by MPPs. As is observable in Figure 2, the MPPs required by some criteria in some classes do not follow an order. To wit, the classes are nonordered. For example, the MPP required for one alternative to be assigned to class $C_k$ is greater than the MPP required for the same alternative to be assigned to class $C_{k-1}$ for criterion $g_1$. However, the MPP requirement in the case of criterion $g_2$ is greater for the class $C_{k-1}$ than it is for the class $C_k$. Regarding the methods applicable to multicriteria nominal classification problems, researchers have proposed some modes of assigning alternatives to classes, including the following: [41] proposed the fuzzy nominal classification method PROAFTN; [33] presented a multicriteria decision method with an additive linear function, based on SMART and with linear constraints; [32] developed a method based on the concepts of concordance and discordance; and [19] proposed a nominal classification method based on the concepts of similarity and dissimilarity. There are certainly more nominal classification proposals, such as those from the following researchers: [42], with TRINONFC; [43], with CLOSORT; [34], with NeXClass; [35], with a method based on selectability/rejectability measures; and [44]. As can be seen, there are numerous potential applications of multicriteria nominal classification problems. This is a clear motivation driving the development of the proposal presented in this paper. The problem stated here consists of assigning an alternative to a specific class, considering a set of alternatives $A$, a set of predefined nonordered classes, and a set of evaluation criteria. Also, for each predefined class, the DM defines an MPP for each evaluation criterion, which represents the minimum requirements for the inclusion of an alternative in this class. In that way, the method proposed here differs from the methodological contributions described previously. Our proposal aims to assign each alternative to the most suitable class, or rather, to the class whose reference profile the alternative outranks with the greatest magnitude, thereby ensuring a coherent classification. The next subsection details the aspects of the proposed method, after presenting the general features of the outranking multicriteria approach, more specifically the PROMETHEE, on which the proposal is based. Proposed Method: Features and Definitions.
Following an outranking multicriteria approach, where two alternatives $a_1$ and $a_2$ are compared, the result must be expressed as a preference. Therefore, a preference function $P: A \times A \to [0, 1]$, representing the intensity of preference of alternative $a_1$ with regard to alternative $a_2$, must be defined, such that $P(a_1, a_2) \sim 0$ denotes indifference or weak preference and $P(a_1, a_2) \sim 1$ denotes strong preference [2,45-48]. It is worth stating that the symbol $\sim$ stands for "close to" in the multicriteria literature [45-48]. Among outranking multicriteria methods, the PROMETHEE, proposed by [48], is a particularly simple and suitable method for achieving accuracy where multiple evaluation criteria are involved [49]. The PROMETHEE methods use six types of preference functions associated with each criterion, as detailed by [48]. These were based on previous methods, such as ELECTRE III (linear criterion), or on preference modeling structures (usual, U-shape, and level criterion). In most practical applications, the six preference types provide the DM with a sufficient level of flexibility [40]. The six types of criteria and their respective descriptions are provided in Table 1. As presented in Table 1, most types of preference functions used in PROMETHEE have a double threshold: the indifference threshold $q$ and the preference threshold $p$. Reference [50] has noted the importance of defining the structure of criteria in classification methods by a double threshold (i.e., preference and indifference thresholds). According to this author, a double-threshold structure prevents improper classification. To wit, the absence of preference and indifference thresholds can lead to improper judgements between strict preference and indifference among alternatives and profiles of classes. In fact, several multicriteria classification methods, such as ELECTRE TRI or NeXClass, rely on the double-threshold structure. Further, another justification for the double-threshold structure is that it facilitates avoidance of weak outranking relations between alternatives and profiles of classes that produce improper assignments to classes. Moreover, given that there may be imprecise and uncertain information about the MPPs, setting indifference and preference thresholds is recommended. Finally, in a case where the DM is absolutely sure about the values for the MPPs, the preference and indifference thresholds can equal zero. Therefore, our approach is flexible in the sense that it can use or not use a double-threshold structure. For each pair of alternatives $a_1$ and $a_2$, one first defines a preference index $\hat{\pi}$ for $a_1$ with regard to $a_2$ over all the criteria. Suppose every criterion $g_j$ ($j = 1, 2, \ldots, n$) has been identified as one of the six types considered (Table 1), so the preference functions $P_j(a_1, a_2)$ have been defined for each $g_j$. The multicriteria preference index $\hat{\pi}$ for $a_1$ with regard to $a_2$ over all the criteria in the PROMETHEE method is therefore defined as the weighted average of the preference functions $P_j$: $\hat{\pi}(a_1, a_2) = \sum_{j=1}^{n} w_j P_j(a_1, a_2) / \sum_{j=1}^{n} w_j$ (1), where $P_j(a_1, a_2)$ represents the preference function of alternative $a_1$ with regard to $a_2$ over the criterion $g_j$, and $w_j$ represents the weight of criterion $g_j$. $\hat{\pi}(a_1, a_2)$ represents the intensity of preference of the DM for alternative $a_1$ over alternative $a_2$, given all the criteria simultaneously. It is a value between 0 and 1: (i) $\hat{\pi}(a_1, a_2) \sim 0$ denotes a weak preference of $a_1$ over $a_2$ for all the criteria; (ii) $\hat{\pi}(a_1, a_2) \sim 1$ denotes a strong preference of $a_1$ over $a_2$ for all the criteria.
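As an illustration of Eq. (1) and the double-threshold structure, the following Python sketch implements the type-V (linear) preference function with indifference threshold q and preference threshold p, together with the weighted preference index; all numeric values are assumptions chosen only for the example.

```python
# A sketch of Eq. (1) with the type-V (linear, double-threshold) preference
# function; weights, thresholds, and performance values are illustrative.

def linear_preference(d: float, q: float, p: float) -> float:
    """Type-V preference: 0 up to the indifference threshold q, a linear
    ramp between q and p, and 1 beyond the preference threshold p."""
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def preference_index(a1, a2, weights, q, p):
    """Multicriteria preference index pi(a1, a2): the weighted average of the
    per-criterion preference functions, as in Eq. (1)."""
    num = sum(w * linear_preference(x1 - x2, qj, pj)
              for x1, x2, w, qj, pj in zip(a1, a2, weights, q, p))
    return num / sum(weights)

# A value close to 1 indicates a strong preference of a1 over a2
print(preference_index([80, 50], [55, 20], [0.6, 0.4], [5, 5], [20, 20]))  # 1.0
```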
In classification problems, nominal or ordinal, the outranking relationships are then generated by comparing alternatives to profiles. This comparison, in the approach proposed in this paper, is made through two indices that validate the assignment of an alternative $a$ to a class $C_k$. These indices are defined in the following set of terms. Definition 1 (intensity of membership). For any alternative $a$ from $A$ and any MPP $r_k$ representing class $C_k$, $\hat{\pi}(a, r_k)$ represents the intensity of the membership of $a$ in $C_k$; to wit, the extent to which the evaluation criteria support this membership: $\hat{\pi}(a, r_k) = \sum_{j=1}^{n} w_j^k P_j^k(a, r_k) / \sum_{j=1}^{n} w_j^k$ (2), where $P_j^k(a, r_k)$ represents the preference function of alternative $a$ with regard to the profile $r_k$ over the criterion $g_j$ for the class $C_k$. Given that not all criteria are necessarily considered across all classes, and even when they are, they may vary in their preference functions, weights, or thresholds, depending on their relevance to and influence on each class, the sets $P^k$, $w^k$, $q^k$, and $p^k$ may differ for each class. Definition 2 (intensity of nonmembership). For any alternative $a$ from $A$ and any MPP $r_k$ representing class $C_k$, $\hat{\pi}(r_k, a)$ represents the intensity of the nonmembership of $a$ in $C_k$, computed analogously as $\hat{\pi}(r_k, a) = \sum_{j=1}^{n} w_j^k P_j^k(r_k, a) / \sum_{j=1}^{n} w_j^k$ (3). Based on these two indices (intensity of membership and intensity of nonmembership), the assignment of an alternative to a class is determined by the intensity of the assignment $\hat{\delta}(a, r_k)$ described in the following. Definition 3 (intensity of the assignment). For any alternative $a$ from $A$ and any MPP $r_k$ representing class $C_k$, $\hat{\delta}(a, r_k) = \hat{\pi}(a, r_k) - \hat{\pi}(r_k, a)$ represents the intensity of the assignment of $a$ to $C_k$. Thus, in the proposed method, the objective is to maximize $\hat{\delta}(a, r_k)$. In the Evaluation phase, each single alternative $a$ is compared with each MPP $r_k$, and $\hat{\pi}(a, r_k)$ is calculated using (2). Further, each MPP $r_k$ is compared with each single alternative $a$, and $\hat{\pi}(r_k, a)$ is calculated using (3). Then, $\hat{\delta}(a, r_k)$ is defined for all alternatives as it bears on all classes. The Assignment phase is performed via the allocation of each alternative to a specific class so as to maximize $\hat{\delta}(a, r_k)$. In the proposed method, it is possible to apply different criteria subsets to different classes, given the possibility that some criteria may be applicable to characterizing some classes but unnecessary for other classes. It is worth stating that this is a specific characteristic of nominal classification problems and thus does not apply to sorting problems, in which classes are ordered and are characterized by the same criteria. Therefore, a unique set of criteria, including all criteria considered for at least one class, is generated. Thus, the set of criteria weights of a class $C_k$, for example, is represented by the set $w^k = (w_1^k, w_2^k, \ldots, w_n^k)$. The value of a given criterion weight represents its relevance to each class. So, when a criterion $g_2$, for example, is neither relevant to nor even considered by a specific class $C_k$, $w_2^k$ assumes a null value. Indeed, other researchers have pointed out the property of criteria flexibility for classes. For instance, [41] claims that criteria weights should be defined in terms of the following two conditions: criterion $g_j$ is not pertinent to the assignment of alternative $a$ to class $C_k$, and criterion $g_j$ is the only criterion pertinent to the assignment of alternative $a$ to class $C_k$. In fact, there are several classification problems where some criteria characterize more than one class and some criteria are specific to one class. In medical diagnosis, for example, patients are assessed on the basis of different symptoms (e.g., fever, pain, headache, and cough) characterizing a very heterogeneous group of diseases (classes). According to the medical evaluation of the patient (alternative), given these various symptoms (criteria), the appropriate treatment is prescribed, to maximize the chances of success [19,41].
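The Evaluation and Assignment phases can be sketched in a few lines of Python; the helper names (pref, pi, assign), the profiles, the class-specific weights (including a zero weight illustrating criteria flexibility), and the thresholds below are all hypothetical.

```python
# A self-contained sketch of Eqs. (2)-(3) and Definition 3: compute the
# membership and nonmembership indices against each class MPP and assign the
# alternative to the class maximizing delta(a, r_k). Data are hypothetical.

def pref(d, q, p):
    return 0.0 if d <= q else 1.0 if d >= p else (d - q) / (p - q)

def pi(x, y, w, q, p):
    """Preference index of performance vector x over vector y."""
    return sum(wj * pref(xj - yj, qj, pj)
               for xj, yj, wj, qj, pj in zip(x, y, w, q, p)) / sum(w)

def assign(a, profiles, weights, q, p):
    """Return (best class, all deltas) with delta = pi(a, r_k) - pi(r_k, a)."""
    deltas = {k: pi(a, r, weights[k], q[k], p[k]) - pi(r, a, weights[k], q[k], p[k])
              for k, r in profiles.items()}
    return max(deltas, key=deltas.get), deltas

profiles = {"C1": [70.0, 30.0], "C2": [40.0, 60.0]}
weights = {"C1": [0.7, 0.3], "C2": [0.0, 1.0]}  # zero weight: g1 not pertinent to C2
q = {"C1": [5.0, 5.0], "C2": [5.0, 5.0]}
p = {"C1": [20.0, 20.0], "C2": [20.0, 20.0]}
print(assign([75.0, 65.0], profiles, weights, q, p))  # ('C1', {...})
```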
In addition to the criteria flexibility for classes, given that it may not be appropriate to apply the same criteria set to different classes, another important feature in nominal classification problems, exemplified by the proposed method, is the alternative flexibility for classes, which means that some alternatives may be assigned to more than one class, and others may not be assigned to any class. As in the case of medical diagnoses, the patient could have symptoms that characterize different diseases and require different treatments. However, for the disease that represents the worst condition afflicting the patient, the corresponding treatment takes priority. In this way, the minimum profile approach is used to identify the class $C_k$ to which an alternative gives the maximum contribution (the worst condition in the medical example), using the expression $\max \hat{\delta}(a, r_k)$, by means of assessing the alternative on the criteria that characterize the class $C_k$. Therefore, it is important to present formal and explicit definitions of criteria and alternative flexibility for classes. Definition 4 (criteria flexibility for classes). For each class $C_k$, the criterion weight $w_j^k$ can assume the following values: (a) $w_j^k = 0$, when the criterion $g_j$ is not pertinent for the assignment of an alternative to class $C_k$; (b) $0 < w_j^k < 1$, when the criterion $g_j$ is pertinent, but not the only pertinent criterion, for the assignment of an alternative to class $C_k$; (c) $w_j^k = 1$, when the criterion $g_j$ is the only pertinent criterion for the assignment of an alternative to class $C_k$. Definition 5 (alternative flexibility for classes). An alternative $a \in A$ will be (a) assigned to a single class $C_k$, if $\hat{\delta}(a, r_k)$ is the unique maximum over the classes and reaches the minimum required by the DM; (b) assigned to more than one class, if two or more classes share the maximum $\hat{\delta}(a, r_k)$; or (c) not assigned to any class $C_k$, if $\hat{\delta}(a, r_k)$ is below the minimum required by the DM for every class. Despite the importance of flexibility in relation to both criteria and alternatives, there are only a few works described in the literature, such as [33], which approach this flexibility in the context of proposing models for nominal classification. This flexibility is a particularly strong characteristic of the method proposed in the present work. Other important properties regarding nominal classification methods are proposed by Costa et al. (2018) and concern the operations of merging, splitting, adding, and removing. Definition 7 (splitting operation). If one class $C_k$, characterized by the MPP $r_k$, is separated into two different classes, $C_{k_1}$ and $C_{k_2}$, characterized by two new MPPs $r_{k_1}$ and $r_{k_2}$, respectively, then one of the new classes is characterized by the MPP $r_k$; that is, $r_{k_1} = r_k$ or $r_{k_2} = r_k$, for all criteria $g_j$. Consequently, all the alternatives previously assigned to class $C_k$ will be assigned to the new classes $C_{k_1}$ and $C_{k_2}$. Definition 8 (adding classes operation). If one class is included in the problem, this operation leads to building a new MPP as well as the set of criteria weights for this class. Such a new class may receive alternatives previously assigned to other classes and alternatives which were previously not assigned to any class. Definition 9 (removing classes operation). If one class is removed from the problem, alternatives previously assigned to this class may be assigned to one, more than one, or none of the remaining classes. The next section furnishes a better understanding of the proposed method through the application of the method to a real-world problem. Application To illustrate the proposal, this paper presents a real-world application that uses real data presented by [34], concerning the problem of assigning retailers to use bank services.
The real-world problem involved a Greek bank aiming to reorganize its electronic payment network of retailers equipped with terminals for online payments. To improve service efficiency, the bank wants to assign retailers to four predefined nonordered classes that represent the potential and profitability characteristics, according to specific criteria. The bank uses a two-dimensional evaluation framework, which comprises the retailer's site potential and profitability dimensions, to classify retailers. The four classification classes (Table 2), defined on the basis of this segmentation, depict the importance of the retailer to the bank. Further, the classes are also linked to a marketing strategy that the bank will follow as a result of the classification. Data regarding the evaluations of alternatives, criterion weights, and the MPPs of the classes are shown in Table 4. Figure 3 illustrates the idea of MPPs, considering the minimum performance profiles ($r_1$ and $r_2$) required for two classes ($C_1$ and $C_2$) for 5 of the 13 criteria. As can be seen in Figure 3, the MPPs for this application do not set boundaries between classes, as expected in nominal classification methods. It is worth noting, further, that the profile of class $C_2$ (blue line) is below the profile of class $C_1$ (red line) for criteria $g_1$, $g_2$, and $g_3$, but the profile of class $C_2$ is above the profile of class $C_1$ for criteria $g_4$ and $g_5$. In addition to the MPPs, the method requires (i) 4 sets of preference thresholds, $p^k = (p_1, p_2, \ldots, p_{13})$, and (ii) 4 sets of indifference thresholds, $q^k = (q_1, q_2, \ldots, q_{13})$. The data regarding preference and indifference thresholds for each class according to each criterion are shown in Table 5 and in Table 6. The values determined for this problem are exactly the same for all criteria and classes. The results of the Evaluation phase, where $\hat{\pi}(a, r_k)$, $\hat{\pi}(r_k, a)$, and $\hat{\delta}(a, r_k)$ are calculated, can be seen in the Appendix. Finally, the Assignment phase is performed through the allocation of each alternative to a specific class so as to maximize $\hat{\delta}(a, r_k)$. Table 7 summarizes the results of the comparison of this nominal classification proposal with three methods: the NeXClass by [37], the method presented by [35], and the one proposed by [33], adapted for this example. The methods used in these three papers, like the one proposed in this paper, aim to help the DM address a nonordinal classification problem. Details about them were presented in the initial sections.
Table 2. The four classes and the corresponding marketing strategies.
Class 1. Definition: retailers with relatively low potential and medium to high profitability. Strategy: the bank will allocate substantial resources to strengthen the retailer's potential.
Class 2. Definition: retailers with relatively high potential and medium to high profitability. Strategy: the bank will allocate maximum resources to provide high added-value innovative services.
Class 3. Definition: retailers with minimum to high potential and medium to low profitability. Strategy: the bank will minimize resource allocation and focus on the top retailers of the class.
Class 4. Definition: retailers with medium to low potential and low profitability. Strategy: the bank will screen retailers for potential development, allocating a minimum level of resources.
The evaluation criteria include (tail of the criteria table, each criterion scored as an index on a 1-100 scale): Exclusivity (index based on the retailer's exclusive collaboration; normally a retailer has EFT/PoS terminals from several competing banks installed at the same place), scale 1-100; 10. Location (index based on the retailer's distance from areas with high traffic), scale 1-100; 11. Opening hours (index based on the retailer's opening hours), scale 1-100; 12. Training of employees (index expressing employees' expertise on EFT/PoS), scale 1-100; 13. Alternative channels (index expressing the degree of usage of the bank's alternative payment channels by the retailer), scale 1-100. EFT/PoS: Electronic Funds Transfer at Point of Sale. Source: [37]. As can be seen, NeXClass [37] differs in three classifications, [35] in one classification, [33] in one classification, and this proposal in one classification, relative to the current procedure. It is important to note that [33,35] did not apply thresholds to the problem; however, the structure with a double threshold (preference and indifference thresholds) used in this paper prevents improper classification, as stated before. Although our results are the same as those in [35] and differ only in one classification from the results of [33], it is extremely important to analyze the results with different data. A Scenario Analysis Therefore, this paper addresses the robustness of the results obtained by the nominal classification method proposed herein, using this first illustrative example. According to [51], robustness is a key issue in the field of decision-aiding, as well as in operations research. As a result, numerous researchers have recently addressed this issue [51][52][53][54][55][56][57][58][59][60][61][62] and have proposed the use of performance measures for classification and clustering methods [63,64]. The term robustness refers to a capacity for withstanding "vague approximations" and/or "zones of ignorance" to maintain certain properties [51]. In general, the values assigned to the parameters in multicriteria methods are not perfectly defined. Indeed, according to [57], a critical challenge faced by analysts utilizing a multicriteria decision aid (MCDA) framework is the elicitation of the criteria weights. In the proposed method, the aim is to provide recommendations concerning the classification of retailers that remain acceptable for a wide range of values of the parameters. Thus, robustness with respect to different scenarios was assessed by changing some preference parameters, such as criteria weights, profiles of classes, and preference and indifference thresholds. As a result, a total of 138 scenarios were tested: the combination of changing the values of the 13 criteria weights, the four profiles of classes according to each criterion, and the preference and indifference thresholds by ±10%, following procedures similar to those presented in [19,56]. The results of the analysis of 134 of these scenarios, shown in Table 8, preserve the overall allocation, i.e., they assign the alternative to the class that leads to $\max \hat{\delta}(a, r_k)$.
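A minimal sketch of such a scenario analysis, reusing the assign() helper from the previous sketch, perturbs one parameter group at a time by ±10% and counts how many allocations change; it only illustrates the procedure and enumerates far fewer combinations than the 138 scenarios reported here.

```python
# Illustrative +/-10% scenario analysis: perturb criteria weights, class
# profiles, and thresholds, then re-run the assignment and count changes.
# Reuses assign() from the previous sketch; the data layout is hypothetical.

import copy

def scenarios(base, factor=0.10):
    """Yield (label, parameters) pairs with one parameter group scaled by +/-10%."""
    for group in ("weights", "profiles", "q", "p"):
        for sign in (1, -1):
            s = copy.deepcopy(base)
            for k in s[group]:
                s[group][k] = [v * (1 + sign * factor) for v in s[group][k]]
            yield f"{group} {'+' if sign > 0 else '-'}10%", s

def robustness(alternatives, base):
    """Number of alternatives whose assigned class changes in each scenario."""
    ref = [assign(a, base["profiles"], base["weights"], base["q"], base["p"])[0]
           for a in alternatives]
    return {label: sum(assign(a, s["profiles"], s["weights"], s["q"], s["p"])[0] != r
                       for a, r in zip(alternatives, ref))
            for label, s in scenarios(base)}

# Using profiles/weights/q/p from the previous sketch:
base = {"profiles": profiles, "weights": weights, "q": q, "p": p}
print(robustness([[75.0, 65.0], [45.0, 70.0]], base))
```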
Another important characteristic of the present proposal concerns alternative flexibility for classes, which refers to the fact that an alternative, according to the intensity of the assignment parameter $\hat{\delta}(a, r_k)$, can be in zero, one, or more classes. The first possibility could occur when the $\hat{\delta}(a, r_k)$ for each class is below a DM's given minimum, such that the alternative does not belong to any class of the problem. A real-life example might involve some candidates, under consideration for employment by a company, one or more of whom cannot be assigned to any job vacancy, given the lack of required skills. The second possibility is the most common: an alternative is assignable to one, and only one, class. The last possibility refers to a situation where an alternative could be assigned to more than one class, due to a difference between two or more of the largest $\hat{\delta}(a, r_k)$ values that is too small or possibly even zero. This was the case for some scenarios considered in the robustness analysis, and it would be the case, in the context of the aforementioned real-life example, where one or some of the candidates have the skills required for more than one job. To deal with the alternative flexibility, this work proposes assignment thresholds to be discussed and determined by the DM. These thresholds would be in accordance with a minimum intensity of assignment and with indifference between more than one of the largest intensities of assignment. One can observe that the proposed method requires the definition of several parameters (criteria weights, preference and indifference thresholds, and MPPs), which is a common requirement of most multicriteria methods. For this reason, in the last decades, there has been an increase in research dedicated to the elicitation of parameters, because the elicitation process is one of the most complex and critical tasks facing research and applications within the field of decision analysis [59]. Indeed, this is especially critical because such parameters can change the position of any alternative in a class [9]. Reference [16], for example, proposed a methodology for the ELECTRE TRI that encompasses this problem by substituting direct elicitation of the parameters of the model with assignment examples. The values of the parameters are inferred via a certain form of regression on assignment examples, which can be extended to apply to our method. Another important point is that more than one DM may participate in the nominal classification process, and consequently a potential conflict can emerge regarding the numerical values of parameters. An interesting discussion regarding the group decision process is provided by [60][61][62]. Finally, it is worth noting that it is possible to incorporate those methodologies related to the elicitation of parameters for group decision into our method. Conclusions As can be seen, the type of classification problem which aims to assign alternatives to different classes according to particular characteristics is getting much attention from researchers and practitioners. The method proposed herein has three main features, namely: criterion and alternative flexibility for classes; robustness, because it uses concepts of a well-known method (PROMETHEE); and usefulness, because many real problems are characterized by MPPs for the classes; thus, this novel approach demonstrably addresses this type of problem. Moreover, because our method deals with nominal classification problems using the concept of MPPs, the alternatives are assigned to classes according to the concept of maximizing the overall performance of the assignment, taking into account particular characteristics (criteria) of the classes. For instance, suppose that one is analyzing the health condition (class) of a patient (alternative) according to several symptoms (criteria). Using the proposed method, the patient would be assigned to a class in which the treatment would be efficient for all possible diseases.
For future work, given the relative ease of the proposed method and its practical utility, this research may be extended by using interval operations to deal with imprecise data. Further investigations may address the study of the proposed assignment thresholds. Also, some problems may require classifying alternatives by similarity, to allow for comparisons to the profiles of the classes via a proximity index. It is worth remembering that this proposal is easily modified to address problems with maximum performance profiles. Finally, another subject for future research is the development of a decision support system (DSS) with the proposed multicriteria method, to make it available in a convenient way.
2019-05-17T14:13:36.704Z
2019-05-05T00:00:00.000
{ "year": 2019, "sha1": "fbc4411a5aef62489b012a000c6f3ec9a96579d2", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/mpe/2019/4078909.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fbc4411a5aef62489b012a000c6f3ec9a96579d2", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
246863986
pes2o/s2orc
v3-fos-license
Assessing the Influence of Input Magnetic Maps on Global Modeling of the Solar Wind and CME-driven Shock in the 2013 April 11 Event In the past decade, significant efforts have been made in developing physics-based solar wind and coronal mass ejection (CME) models, which have been or are being transferred to national centers (e.g., SWPC, CCMC) to enable space weather predictive capability. However, the input data coverage for space weather forecasting is extremely limited. One major limitation is the solar magnetic field measurements, which are used to specify the inner boundary conditions of the global magnetohydrodynamic (MHD) models. In this study, using the Alfvén wave solar model (AWSoM), we quantitatively assess the influence of the magnetic field map input (synoptic/diachronic vs. synchronic magnetic maps) on the global modeling of the solar wind and the CME-driven shock in the 2013 April 11 solar energetic particle (SEP) event. Our study shows that due to the inhomogeneous background solar wind and the dynamical evolution of the CME, the CME-driven shock parameters change significantly both spatially and temporally as the CME propagates through the heliosphere. The input magnetic map has a great impact on the shock connectivity and shock properties in the global MHD simulation. Therefore, this study illustrates the importance of taking into account the model uncertainty due to the imperfect magnetic field measurements when using the model to provide space weather predictions. Introduction The broad topic of space weather represents the constantly changing physical conditions in the near-Earth environment, which are significantly influenced by the solar wind and coronal mass ejections (CMEs). Fast CMEs drive shocks in the corona and heliosphere (e.g., Sime & Hundhausen, 1987; Vourlidas et al., 2003) that are believed to be responsible for gradual solar energetic particle (SEP) events (see reviews by Reames, 1999; Desai & Giacalone, 2016), primarily through the diffusive shock acceleration (DSA) mechanism (e.g., Lee, 1983; Zank et al., 2000). Large SEP events can pose major hazards to technology and life in space. If the interplanetary CME is directed at Earth, its embedded magnetic field, especially in the presence of a strong southward-directed component B_z, can interact with Earth's magnetosphere and trigger non-recurrent geomagnetic storms (Gosling, 1993). Due to their critical importance to space weather prediction, significant efforts have been made in developing physics-based solar wind and CME models (see reviews by, e.g., Cranmer et al., 2017; MacNeice et al., 2018; Gombosi et al., 2018). By specifying the radial component of the magnetic field at the inner (photospheric) boundary from observational magnetograms, these models can achieve a steady-state solar wind and reproduce with some success the large-scale wind streams at 1 AU. Several simplified analytical flux rope models have also been developed to address the erupting magnetic structure of CMEs (e.g., Titov & Démoulin, 1999; Gibson & Low, 1998; Titov et al., 2014). For example, by initiating a Titov-Démoulin (TD) flux rope, Manchester et al. (2008) simulated the 2003 Halloween CME event and made a quantitative comparison between the synthetic coronagraph images and LASCO observations, in which the strong CME-driven shock was reproduced.
Jin et al. (2017) simulated the 2011 March 7 CME event using the Gibson-Low (GL) flux rope with the Alfvén wave solar model (AWSoM; van der Holst et al., 2014) from the Sun to 1 AU and performed detailed comparisons with remote-sensing and in-situ observations. The results show that the simulation can reproduce many of the observed features near the Sun and in the heliosphere. A recent study by Török et al. (2018) simulated the famous 2000 July 14 "Bastille Day" eruption using a modified TD (TDm) flux rope (Titov et al., 2014) with the Magnetohydrodynamic Algorithm outside a Sphere model (MAS; Lionello et al., 2013). Starting from a stable magnetic flux rope and initiating the eruption by boundary flows, the simulation is able to reproduce the morphologies of the observed flare arcade, halo CME, and associated EUV wave and coronal dimmings. Although several differences are found between the simulated and observed flux rope at 1 AU, their model successfully captured the core structure of the flux rope and the negative B_z component, which led to a very strong geomagnetic storm. Note that there are other approaches for modeling the CME flux rope, including Non-linear Force-Free Field extrapolation (e.g., Wiegelmann, 2004; Wheatland, 2006; Malanushenko et al., 2014) and magnetofrictional models (e.g., van Ballegooijen et al., 2000; Yeates et al., 2008; Cheung & DeRosa, 2012; Jiang et al., 2016). With the increasing sophistication of data-driven models, we are taking steps toward achieving physics-based space weather forecast capability. However, the available input observations for solar wind and CME modeling are severely limited, thereby requiring (often significant) assumptions both for the "missing data" and the physical conditions of the corona. The current data-driven magnetohydrodynamic (MHD) solar wind models use the photospheric magnetic field observations as inputs. However, all the magnetic field observations (except those started only recently by Solar Orbiter) are from the Sun-Earth line. From this perspective, we can adequately observe only less than one half of the solar surface, and need to make assumptions for the remaining areas, including the far-side and polar regions. There are two major types of magnetic maps that are widely used to drive the solar wind models. One is the synoptic or diachronic magnetic map, which assembles 27 days of magnetic field observations into a single map. One issue with this approach is that the magnetic fields on the same diachronic magnetic map are observed at different times, and thus the map contains data which are up to 27 days old. The other type is the synchronic magnetic map, which is based on surface flux transport models that simulate the evolution of the surface magnetic elements while assimilating new observations (e.g., Schrijver & DeRosa, 2003). Additionally, a synchronic map may incorporate the newly emerged flux on the far side of the Sun as inferred from helioseismology (Lindsey & Braun, 1997; Braun & Lindsey, 2001; González Hernández et al., 2007). A preliminary study shows that by including the far-side flux in the Air Force Data Assimilative Photospheric Flux Transport (ADAPT)-Wang-Sheeley-Arge (WSA) model, the observed solar wind conditions could be better reproduced (Arge et al., 2013). However, doing so is not straightforward given the number of assumptions needed for the total fluxes and tilt angles of such emerging flux elements.
Since most of the current solar wind models rely on either of these two types of magnetic maps, it is important to quantitatively evaluate the differences in ambient solar wind solutions obtained using diachronic or synchronic magnetic map inputs. We note that some studies have examined the uncertainties of the solar wind solutions due to magnetic data from different observatories (Gressl et al., 2014; Riley et al., 2014; Jian et al., 2015; Hayashi et al., 2016). There are also recent studies that compare the solar wind solutions from the diachronic and synchronic magnetic inputs based on either the PFSS model (e.g., Wallace et al., 2019; Caplan et al., 2021) or MHD models (e.g., Linker et al., 2017; H. Li et al., 2021). However, most of the previous studies have focused on comparing the ambient solar corona and wind solutions driven by the different magnetic inputs, and not transient structures. It is widely known that the ambient solar wind can have a significant influence on the evolution of the CME and the properties of the CME-driven shock wave, as has been studied both in simulations (e.g., Riley, 1999; Riley et al., 2003; Odstrcil et al., 2004; Jacobs et al., 2005; Hosteaux et al., 2019) and in observations (e.g., Gopalswamy et al., 2000; Temmer et al., 2011). Here, we propose to investigate the influence of different background solar wind solutions, due to the different magnetic inputs (i.e., diachronic vs. synchronic), on the CME properties in 3D. In particular, the parameters of the CME-driven shock (e.g., compression ratio, Mach number, shock speed, and shock angle θ_Bn) are critical for understanding the particle acceleration in gradual SEP events. However, these shock parameters are difficult to determine directly from remote sensing observations (e.g., Rouillard et al., 2016; Lario et al., 2017; Kwon & Vourlidas, 2018). Furthermore, the field line connectivity plays an important role in understanding the in-situ SEP observations (e.g., Lario et al., 2017), although it is also difficult to determine from current measurements. Therefore, to infer the link between the observer and the shock, most of the current SEP modeling efforts use Parker-spiral magnetic connectivity outside the source surface, which is often set at the heliocentric distance of 2.5 R⊙. Due to the simplicity of this technique, such models do not always explain the observed characteristics of some SEP events (e.g., Cairns et al., 2020). With the dynamic magnetic connectivity and shock parameters available from an MHD model, additional information relevant to the shock acceleration can be obtained through a physics-based approach. In this study, we quantitatively assess the influence of the magnetic input by modeling the CME on 2013 April 11 and its shock, which was associated with an SEP event (e.g., Cohen et al., 2014; Lario et al., 2014). This article is organized as follows. In Section 2, we describe the models and methods used in this study, followed by results in Section 3 and discussion and conclusions in Section 4. Data Description The CME was clearly associated with an M6.5 class flare starting at 06:55 UT in AR 11719 (N07E13, Carrington longitude ∼73°). At the time, the Carrington longitudes for Earth, STEREO A (STA), and STEREO B (STB) were ∼86°, ∼219°, and ∼304°, respectively. We use both diachronic and synchronic magnetic maps based on SDO/HMI observations to specify the model's inner boundary condition of the magnetic field.
The synchronic magnetic map in use is maintained at the Lockheed Martin Solar and Astrophysics Laboratory and based on a flux transport model (Schrijver & DeRosa, 2003), which assimilates new observations within 60° of disk center. These magnetic maps are updated every six hours and can be downloaded directly from the PFSS package in SSWIDL. The latest documentation of the LM flux transport model can be found online (https://www.lmsal.com/forecast/surfflux-model-v2/). The diachronic magnetic map is obtained from the Stanford HMI Carrington Rotation Synoptic Charts. At each Carrington longitude in the magnetic map, the data are averaged from 20 contributing magnetograms made within 2 hours of central meridian passage (i.e., ±1.2° of the central meridian), with outliers (values which depart from the median by 3σ) removed. See Liu et al. (2017) for more details. We choose the HMI diachronic magnetic map with polar field correction (Sun, 2018), accessible at JSOC (http://jsoc.stanford.edu/) under the data series hmi.synoptic_mr_polfil_720s (3600×1440 resolution).
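The 3σ outlier rejection described above for the diachronic map can be sketched as follows; this is an illustration of the stated averaging procedure, with assumed array shapes, and not the actual HMI pipeline implementation.

```python
# Sketch of sigma-clipped averaging: per-pixel median/std over the ~20
# contributing magnetograms of one longitude bin, drop 3-sigma outliers,
# then average the survivors. Shapes and test data are illustrative.

import numpy as np

def sigma_clipped_mean(stack: np.ndarray, nsigma: float = 3.0) -> np.ndarray:
    """stack: (n_contrib, n_pix) field values for one longitude bin."""
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    clipped = np.where(np.abs(stack - med) <= nsigma * sig, stack, np.nan)
    return np.nanmean(clipped, axis=0)

# Example: 20 contributing magnetograms for one 1440-pixel latitude column
stack = np.random.normal(0.0, 5.0, size=(20, 1440))
column = sigma_clipped_mean(stack)
```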
Solar Wind and CME Models The MHD solar wind model used in this study is the Alfvén Wave Solar Model (AWSoM; van der Holst et al., 2014), which is a data-driven model with a domain starting from the upper chromosphere and extending to the corona and heliosphere. The AWSoM model has been implemented at NASA's Community Coordinated Modeling Center (CCMC). The inner boundary condition of the magnetic field can be specified by different magnetic maps as mentioned in §1. The inner boundary conditions for the electron and proton temperatures T_e and T_i and the number density n are set to be T_e = T_i = 50,000 K and n = 2×10^17 m^-3. The fixed density and temperature at the inner boundary do not otherwise have an evident influence on the global solar corona and wind solution (Lionello et al., 2009). Detailed model validation of the coronal density has been conducted by comparing numerical results with spectral line (e.g., EIS and SUMER) observations, EUV line intensities, and, more recently, the density derived from Differential Emission Measure Tomography (Sachdeva et al., 2019). The Parker solution (Parker, 1958) is used to specify the initial conditions for the solar wind plasma, while the initial magnetic field is based on the Potential Field Source Surface (PFSS) model with the Finite Difference Iterative Potential Solver (FDIPS; Tóth et al., 2011). In this study, the source surface is set at 2.5 R⊙. The global solar wind solution is obtained by coupling the solar corona (SC; from 1 to 24 R⊙) and inner heliosphere (IH; from 18 to 250 R⊙) components within the Space Weather Modeling Framework (SWMF; Tóth et al., 2012). Alfvén waves are prescribed as an outgoing Alfvén wave energy density that scales with the surface magnetic field. The solar wind is heated by a phenomenological description of Alfvén wave dissipation and accelerated by thermal and Alfvén wave pressure. Electron heat conduction (both collisional and collisionless) and radiative cooling are also included in the model, which are important for creating the solar transition region self-consistently. In addition, the model electron and proton temperatures are treated separately for producing physically correct solar wind and CME structures (e.g., CME-driven shocks), in which the electrons and protons are assumed to have the same bulk velocity but heat conduction is applied only to electrons due to their much higher thermal velocity (Jin et al., 2013). By introducing the phenomenological description of Alfvén wave dissipation as well as wave reflection and heat partitioning between the electrons and protons based on the results of linear wave theory and stochastic heating (Chandran et al., 2011), the AWSoM model has demonstrated the capability to reproduce the solar corona environment with three free parameters that determine the Poynting flux (S_A/B), the wave dissipation length (L_⊥√B), and the stochastic heating parameter (h_S) (van der Holst et al., 2014). To initiate a CME eruption, we use the Gibson-Low (GL) flux rope model (Gibson & Low, 1998), which has been successfully used in numerous modeling studies of CMEs (e.g., Manchester, Gombosi, Roussev, Ridley, et al., 2004; Lugaz et al., 2005a, 2005b; Schmidt & Ofman, 2010; Manchester et al., 2014). Analytical profiles of the GL flux rope are obtained by finding a solution to the magnetohydrostatic equation (∇ × B) × B − ∇p − ρg = 0 with the solenoidal condition ∇ · B = 0. To get the solution, a mathematical stretching transformation r → r − a is applied to an axisymmetric, spherical ball of twisted flux of radius r_0 centered at r = r_1 relative to the heliospheric coordinate system. The field of the ball can be expressed by a scalar function A and a free parameter a_1 that determines the magnetic field strength (Lites et al., 1995). The flux rope acquires a tear-drop shape of twisted magnetic flux after the transformation. Also, Lorentz forces are introduced that lead to a density-depleted cavity in the upper portion and a dense core at the lower portion of the flux rope. This flux rope structure mimics the three-part density structure of the CME seen in observations (Illing & Hundhausen, 1985). The GL flux rope profiles are then superposed onto the steady-state solar wind solution: i.e., ρ = ρ_0 + ρ_GL, p = p_0 + p_GL, B = B_0 + B_GL. The combined background-flux rope system is in a state of force imbalance, and thus erupts immediately when the simulation is advanced forward in time. The GL flux rope is mainly controlled by five parameters: the stretching parameter, a, determines the flux rope shape; the distance of the torus center from the center of the Sun, r_1, determines the initial position of the axisymmetric flux rope before it is stretched; the radius of the flux rope torus, r_0, determines the flux rope size; the flux rope field strength parameter, a_1, determines the magnetic field strength of the flux rope; and a helicity parameter determines the positive (dextral)/negative (sinistral) helicity of the flux rope (Borovikov et al., 2017). Jin et al. (2017) developed a new method, the Eruptive Event Generator Gibson-Low (EEGGL), to calculate the GL flux rope parameters from a handful of observational quantities (i.e., the magnetic field of the CME source region and observed CME speeds from white-light coronagraphs) so that the modeled CMEs propagate with the desired CME speeds near the Sun.
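As a schematic of this initiation step, the sketch below groups the five GL parameters named above in a small container and performs the superposition onto the background state; the parameter values and array layout are placeholders, not EEGGL output or the GL analytic solution itself.

```python
# Illustrative sketch: the five GL flux rope parameters, and the superposition
# rho = rho0 + rho_GL, p = p0 + p_GL, B = B0 + B_GL onto the steady state.
# The perturbation arrays would come from the GL analytic profiles; here they
# are placeholders so the superposition step is runnable.

from dataclasses import dataclass
import numpy as np

@dataclass
class GLFluxRope:
    a: float        # stretching parameter (controls the flux rope shape)
    r1: float       # distance of the torus center from Sun center [Rsun]
    r0: float       # radius of the flux rope torus [Rsun]
    a1: float       # field strength parameter
    helicity: int   # +1 dextral / -1 sinistral

def superpose(background: dict, gl_perturbation: dict) -> dict:
    """Add the GL profiles onto the steady-state variables; the resulting
    force imbalance drives the eruption once the run restarts."""
    return {key: background[key] + gl_perturbation[key]
            for key in ("rho", "p", "Bx", "By", "Bz")}

rope = GLFluxRope(a=0.6, r1=1.8, r0=0.8, a1=2.25, helicity=1)  # illustrative values
bg = {k: np.zeros((4, 4)) for k in ("rho", "p", "Bx", "By", "Bz")}
gl = {k: np.ones((4, 4)) for k in ("rho", "p", "Bx", "By", "Bz")}
state = superpose(bg, gl)
```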
Summary of Approach To quantitatively assess the influence of the magnetic input on the global modeling, we choose two different magnetic maps: the Lockheed Martin (LM) synchronic magnetic map for 2013 April 11 06:04:00 UT (referred to as the input to Case I) and the diachronic Carrington magnetic map of CR2135 (referred to as the input to Case II). Both types of magnetic maps have been widely used in the solar and heliospheric physics community for space weather nowcast/forecast purposes. The original magnetic maps are first resized to a 360×180 resolution that matches the simulation grid, while preserving the flux. For the LM synchronic map, this is done directly through the PFSS package in SSWIDL. For the HMI diachronic map, we resize the data from the original map at 3600×1440 resolution. In addition, both magnetic maps have a flux imbalance (i.e., zero-point error), which is calculated to be −6.9×10^21 Mx (∼−2% of the total unsigned flux) and 5.4×10^21 Mx (∼1.3% of the total unsigned flux) for the LM synchronic map and the HMI diachronic map, respectively. This zero-point error is corrected by removing the average field (i.e., the monopole component) from the original magnetic maps (Tóth et al., 2011). Other than this correction, we do not apply any scaling factor to the input magnetic maps in this study. Figure 1 shows the two magnetic maps used in this study. The LM synchronic magnetic map contains magnetic field observations only within ±60° of the disk center on 2013 April 11 06:04:00 UT (marked with the white dotted box in Figure 1), while the rest of the magnetic map is based on the flux transport model. Due to the different methods used to produce these magnetic maps mentioned in §2.1, one can see evident differences between the two magnetic maps for areas with longitude >200°. However, the magnetic fields around the source region (∼73°) are similar between the two maps. To quantitatively compare the two magnetic maps, we further calculate the unsigned flux for both the whole map and the assimilation window, and the mean polar field strength. The results are summarized in Table 1. First, we need to note that for the newly assimilated observations (marked with the white dashed window in Figure 1) in the LM synchronic map, the HMI flux is multiplied by a factor of 1.4 in order to match the previous MDI observations. Even with that enhanced magnetic flux, the total unsigned flux in the synchronic map (3.5×10^23 Mx) is still ∼17% less than that in the diachronic map (4.1×10^23 Mx), which is mainly due to more and stronger magnetic structures involved in the diachronic map outside the assimilation window of the synchronic map, where the flux diffuses in the flux transport model. However, we find that the total unsigned flux within the assimilation window is similar between the two maps (1.3×10^23 Mx for the synchronic map vs. 1.2×10^23 Mx for the diachronic map). Considering the factor of 1.4 applied to the field in the assimilation window of the synchronic map, this also means there must be considerable field evolution around 2013 April 11 (e.g., newly emerged flux after 2013 April 11). Note that based on the start/end times of Carrington Rotation 2135, the assimilation window area in the synchronic map corresponds to ∼9 days of observation (2013 April 6 to 2013 April 15) in the diachronic map. On the other hand, the similar unsigned flux also means that about the same amount of Poynting flux is injected in the AWSoM model for heating the corona.
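The zero-point correction described above amounts to subtracting the area-weighted mean radial field from the map; a minimal numpy sketch on an assumed uniform longitude-latitude grid follows (variable names and test data are illustrative).

```python
# Sketch of the monopole (zero-point error) removal: subtract the
# area-weighted mean of Br so the net signed flux of the map vanishes.

import numpy as np

def remove_monopole(br: np.ndarray, lat_deg: np.ndarray) -> np.ndarray:
    """br: (n_lat, n_lon) radial field; lat_deg: (n_lat,) pixel latitudes."""
    w = np.cos(np.deg2rad(lat_deg))[:, None]        # pixel area weight ~ cos(lat)
    mean_field = np.sum(br * w) / (np.sum(w) * br.shape[1])
    return br - mean_field

lat = np.linspace(-89.5, 89.5, 180)
br = np.random.normal(0.05, 3.0, size=(180, 360))   # slightly imbalanced test map
balanced = remove_monopole(br, lat)
print(np.sum(balanced * np.cos(np.deg2rad(lat))[:, None]))  # ~0: net flux removed
```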
Another noticeable difference between the two maps is the polar field (Carrington latitude > 60°). For the LM synchronic map, the polar field is simulated by the flux transport model over many solar rotations. For the HMI diachronic map used here, the polar field is extrapolated from previous observations when part of the polar region could be seen due to the Sun's tilt angle. Therefore, we can see many small-scale structures in the polar regions of the synchronic map, while the polar field in the diachronic map is smoothed due to the extrapolation. Nevertheless, we found that the mean magnetic fields in the south polar region are very similar (2.7 G for the synchronic map vs. 2.6 G for the diachronic map). For the north polar region, the synchronic map has a lower mean field of −1.5 G compared with the −2.2 G field in the diachronic map. We run two steady-state simulations with the inner boundary condition of the magnetic field specified by the two different magnetic maps. After reaching the steady state, we initiate a CME eruption by inserting a Gibson-Low flux rope into both steady-state solutions and integrate the model equations forward in time. The GL flux rope parameters are identical in the two simulation cases. We run the two cases for a duration of one hour in total simulated time, trace the shock location in 3D, and calculate the shock parameters in order to compare the two simulations. The 3D shock front is determined by computing the derivative of entropy along radial rays originating from the center of the Sun. The entropy is evaluated as s = ln(T_p/ρ^(γ−1)), where T_p is the proton temperature, ρ is the plasma density, and γ is the polytropic index (γ = 5/3). Once the shock front is determined, the shock normal n is calculated by using the magnetic coplanarity condition (B_2 − B_1) · n = 0 (Lepping & Argentiero, 1971; Abraham-Shrauner, 1972), i.e., n = ±[(B_1 × B_2) × (B_1 − B_2)] / |(B_1 × B_2) × (B_1 − B_2)|, where subscripts 1 and 2 represent the shock downstream and upstream conditions, respectively. The shock speed is determined by the conservation of mass across the shock: v_s = (ρ_2 u_2n − ρ_1 u_1n)/(ρ_2 − ρ_1). The shock Alfvén Mach number is defined as the upstream normal flow speed in the shock frame normalized by the Alfvén speed, M_A = |u_2n − v_s|/v_A, where v_A is the local Alfvén speed. The shock angle θ_Bn is obtained by measuring the angle between the upstream magnetic field and the shock normal. To get the shock connectivity to different spacecraft locations, we extract field lines from the outer boundary of SC at 24 R⊙ to the surface of the Sun. Since in this study we did not extend the domain to include the IH component for the CME simulation, the connectivity from the spacecraft locations to the outer boundary of SC is determined by the steady-state solution that did include the heliospheric domain. Considering the relatively short simulation time, this simplification has minimal influence on our results. To get the shock evolution profiles, for each time step (at 1 minute temporal resolution), we extract the shock parameters on the shock surface closest to the spacecraft-connecting field lines.
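The shock diagnostics above can be summarized in a short numpy sketch; the subscript convention follows the text (1 for downstream, 2 for upstream), and the routines operate on illustrative arrays rather than AWSoM data structures.

```python
# Sketch of the shock diagnostics for a single radial ray and one pair of
# states: entropy-jump shock location, coplanarity normal, shock speed from
# mass conservation, Alfven Mach number, and theta_Bn. SI units assumed.

import numpy as np

GAMMA = 5.0 / 3.0

def shock_index(tp: np.ndarray, rho: np.ndarray) -> int:
    """Cell of the strongest entropy jump along a ray, s = ln(Tp / rho^(gamma-1))."""
    s = np.log(tp / rho ** (GAMMA - 1.0))
    return int(np.argmax(np.abs(np.gradient(s))))

def coplanarity_normal(b1: np.ndarray, b2: np.ndarray) -> np.ndarray:
    """Normal n parallel to (B1 x B2) x (B1 - B2), satisfying (B2 - B1) . n = 0."""
    n = np.cross(np.cross(b1, b2), b1 - b2)
    return n / np.linalg.norm(n)

def shock_speed(rho1, u1, rho2, u2, n):
    """v_s from mass conservation: (rho2*u2n - rho1*u1n) / (rho2 - rho1)."""
    return (rho2 * np.dot(u2, n) - rho1 * np.dot(u1, n)) / (rho2 - rho1)

def alfven_mach(b2, rho2, u2, n, v_s, mu0=4e-7 * np.pi):
    """M_A: upstream normal flow speed in the shock frame over the Alfven speed."""
    v_a = np.linalg.norm(b2) / np.sqrt(mu0 * rho2)
    return abs(np.dot(u2, n) - v_s) / v_a

def theta_bn(b2, n):
    """Shock angle [deg] between the upstream field and the shock normal."""
    c = abs(np.dot(b2, n)) / (np.linalg.norm(b2) * np.linalg.norm(n))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))
```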
Comparison of Steady-state Solutions

In order to compare the two steady-state solar wind solutions constructed using the two magnetic maps, we calculate the locations of the positive (marked in green) and negative (marked in purple) open flux from both the MHD and PFSS solutions, as shown in Figure 2. The open flux is identified by tracing the field lines from the outer boundary at 24 R⊙ in the simulations, on a uniform latitude/longitude grid with one-degree resolution, back to the surface of the Sun. To quantitatively compare the results, we also calculate the open field area for both the positive and negative polarities; the results are summarized in Table 1. For both of the magnetic maps, the MHD and PFSS solutions are quite similar, with the total open area slightly higher in the PFSS solution. The similarity between the MHD and PFSS solutions also suggests that the inherent properties of the different magnetic maps, rather than other model-related parameters, are the major factor in generating the different topological features. However, we need to note that the statistics in this particular case may not generalize to all MHD/PFSS models. Other adjustable parameters of both the MHD and PFSS models could influence the open field area. For example, a lower source surface radius in the PFSS model could lead to a larger open field area, while the coronal heating (i.e., Alfvén wave heating) related parameters in the MHD model could also influence the field opening (Linker et al., 2017).

Comparing the synchronic and diachronic maps, both the MHD and PFSS solutions show similar total open field areas. However, there are also noticeable differences: for example, the positive flux area is larger in Case II than in Case I, while the negative flux area is larger in Case I. We extract the field lines connecting to Earth, STEREO A (STA), and STEREO B (STB) and mark the footpoints on the magnetic maps (indicated with colored circles). It is also apparent in Figure 2 that the connectivity to the three 1 AU locations is quite different between the two solutions. For Earth, although both calculated footpoints are around the same longitude, they differ by ∼20° in latitude. For STA, the footpoints are at approximately the same latitude in the northern hemisphere, but they differ by ∼50° in longitude. The largest difference is found in the STB footpoints, which differ by ∼70° in longitude and ∼40° in latitude. Moreover, the polarities are different for the two STB footpoints. In Figure 3, the plasma-β at 2.5 R⊙ from the MHD solutions is shown for the two cases; the location of the heliospheric current sheet (HCS) is indicated by the high plasma-β values. Although there are a number of differences between the two cases, of note is that STB's footpoint shifts from one side of the HCS to the other (this is not the case for STA or Earth). This connectivity difference significantly influences the shock profile evolution after the CME eruption observed at each location, as is discussed in the next section.
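Both the open-flux maps and the spacecraft connectivity above rest on field-line tracing through the simulated field. A simplified fixed-step sketch is given below; production tracers typically use adaptive, higher-order integration on the model's native grid, and the bfield callable here stands in for an interpolator on the simulation output:

import numpy as np

def trace_field_line(x0, bfield, ds=0.01, r_min=1.0, r_max=24.0,
                     max_steps=100000):
    """Trace a field line from x0 (coordinates in solar radii) sunward.
    bfield(x) returns the magnetic field vector at position x."""
    x = np.asarray(x0, float)
    pts = [x.copy()]
    # Pick the direction along +/-B that initially moves toward the Sun.
    b = bfield(x)
    sign = -1.0 if np.dot(b, x) > 0 else 1.0
    for _ in range(max_steps):
        b = bfield(x)
        nb = np.linalg.norm(b)
        if nb == 0:
            break                          # null point: stop tracing
        x = x + sign * ds * b / nb         # unit-speed Euler step
        pts.append(x.copy())
        r = np.linalg.norm(x)
        if r <= r_min or r >= r_max:
            break                          # surface or outer boundary
    return np.array(pts)                   # footpoint is pts[-1] if r <= r_min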
Comparison of CME-driven Shocks

With the same CME flux rope running through the two different ambient coronal and solar wind solutions, we obtain two CME simulation cases. The Cartesian coordinate system used in the simulation is heliographic rotating coordinates (i.e., Carrington coordinates), with the X-axis pointing to Carrington longitude 0° and the Y-axis pointing to Carrington longitude 90°. In Figure 4, we show the radial velocity field, total magnetic field strength, and plasma density on the X = 0 and Z = 0 planes at t = 30 minutes for the two CME simulation cases, from which we can also see the background solar wind conditions based on the two magnetic maps. In general, similar patterns can be identified between the two cases. However, there are also evident differences in the two solar wind conditions that lead to different shock parameters, as discussed below.

In Figure 5, we show the 3D shock parameters (compression ratio, shock Alfvén Mach number, shock speed, and shock angle θ_Bn) extracted from the two cases at t = 30 minutes. The yellow field lines represent the connectivity to Earth, STA, and STB. Due to the inhomogeneous background solar wind, the CME-driven shocks in both cases are highly structured in shape, and all the shock parameters vary significantly across the shock surface. By comparing the two cases with different magnetic field inputs, we can see several major differences: 1) the morphology of the shock surface differs, in that the latitudinal expansion is larger in Case I than in Case II; 2) the spatial distribution of the Mach number along the shock surface is evidently different between the two cases. We find that the high Mach numbers in both cases are due to the smaller local Alfvén speeds in the shock upstream, as shown in Figures 6b and d (area indicated by white arrows). These smaller Alfvén speeds are related to both low magnetic field strength (Figures 4b and e) and enhanced plasma density (Figures 4c and f); 3) the spatial distribution of the shock speed differs. The larger shock speed found at the leading front of the shock surface in Case II is related to the faster background solar wind around that region (marked by white arrows in Figures 4a and d); see also the higher solar wind speed in the upstream of the shock in Case II (Figures 6a and c). Finally, the footpoint differences seen in Figure 2 result in Earth, STA, and STB being connected to different parts of the shock in the two cases.

To quantitatively evaluate the differences of the CME-driven shock parameters in the two cases, we derive the evolution of the shock parameters connected to the Earth and STB locations and show the results in Figures 7 and 8. Note that no shock connection develops for STA in either case; therefore, the STA profiles are not shown. The temporal resolution of the shock profiles is 1 minute. We found that the shock compression ratio calculated for the STB location in Case I is slightly larger than 4 (the strong shock limit), which is due to the nonideal processes (e.g., heat conduction) involved in the MHD model. Also, we need to note that due to the unstable flux rope insertion used in this study, the shock parameters derived at the beginning could be unrealistically strong, especially right in front of the flux rope driver. However, as the flux rope starts interacting with the global corona, the resulting CME tends to acquire a speed that agrees with the observations reasonably well, as found in our previous studies (Jin et al., 2018).
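For reference, the strong-shock limit quoted above follows from the Rankine-Hugoniot density jump for an ideal polytropic gas (a standard result, stated here for completeness):

\[
r = \frac{\rho_{\mathrm{down}}}{\rho_{\mathrm{up}}} = \frac{(\gamma+1)M^2}{(\gamma-1)M^2 + 2} \;\longrightarrow\; \frac{\gamma+1}{\gamma-1} = 4 \quad (M \to \infty,\ \gamma = 5/3),
\]

so compression ratios above 4 flag nonideal physics, such as the heat conduction noted above.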
As we mentioned before, the footpoints connected to Earth in the two cases are around the same heliographic longitude; this results in similar trends in the evolution of the shock parameters for the two cases. Also, the shock has a perpendicular nature in both cases throughout the first hour of the simulation. However, the absolute values of the shock parameters are quite different between the two cases. In addition, the shock connection time differs: in Case I, the connection develops ∼10 minutes later than in Case II, with the connection to Earth established only ∼20 minutes after the eruption onset. In contrast, due to the large difference in the locations of the STB footpoints, the CME-driven shock properties connecting to STB in the two cases are significantly different, as shown in Figure 8. The shock connecting to STB in Case I is a much stronger shock than that in Case II, with a much higher compression ratio, shock Alfvén Mach number, and shock speed. Also, the shock in Case I has a parallel nature (i.e., shock angle <20°), while the shock in Case II is more oblique and has a perpendicular nature in the initial ∼10 minutes after first contact.

Furthermore, the connection in Case I is established ∼20 minutes earlier than in Case II, which might affect the properties of the resulting SEP event. For example, based on DSA theory (Drury, 1983), the instantaneous particle spectral index depends only on the shock compression ratio, so the higher shock compression ratio in Case I will lead to a harder SEP spectrum. The higher shock compression ratio and quasi-parallel nature found in Case I also suggest that the shock is a more efficient accelerator than in Case II (Ding et al., 2020). However, due to the complex physical processes of particle acceleration and transport involved, quantitatively linking the shock properties near the Sun to the SEP spectra at 1 AU requires advanced coupling between the MHD model and a particle acceleration/transport model (Young et al., 2021; G. Li et al., 2021), which is beyond the scope of this work and will be examined in a future study.
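For completeness, the steady-state DSA scaling invoked above (Drury, 1983) can be stated explicitly: the accelerated-particle momentum distribution is a power law, \(f(p) \propto p^{-q}\), with

\[
q = \frac{3r}{r-1},
\]

so \(r = 4\) gives \(q = 4\), and the higher compression ratio in Case I implies a harder spectrum.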
As shown in Figure 5, one major reason for the completely different shock profiles connecting to STB is the field line connectivity in the two cases, which results in STB being connected to different parts of the shock. In Case I, STB connects more closely to the front of the shock, while in Case II the connection is closer to the shock flank. To make a more comprehensive comparison, we also extract the field line with a footpoint rooted in the same region as the field line connecting to STB in Case I (shown as a white field line in Figures 5e-h). The shock evolution profile from this extra field line is overlaid in Figure 8 (marked in black). We can see that, when comparing shock profiles around a similar longitude/latitude location in the two cases, the shock parameters and their evolution are more similar, mainly due to the same flux rope driver initiated in the two simulations. However, we want to emphasize that this similarity does not resolve the issue we raised in this study, as the different spacecraft connectivity in the two cases is largely attributable to the different input magnetic maps. Furthermore, even at a similar location on the shock surface, certain shock parameters (e.g., the shock Alfvén Mach number) still show clear differences between the two cases, which is caused by the different background solar wind solutions.

Discussion & Conclusions

In the previous section, we have shown that the CME-driven shock parameters and connectivity can be significantly different when different types of magnetic maps are used to drive global MHD models. Here, we briefly discuss how these differences could influence our interpretation of observations in the 2013 April 11 SEP event. As shown in Figure 2, one major difference found in the field line connectivity between the two cases is the footpoint connecting to STB. In Case I, the STB-connecting field line can be traced back to the flare site, while in Case II the footpoint is far from the flare site. The SEP observations of this event show that the Fe/O ratio is higher at STB than at Earth (Cohen et al., 2014). Based on Case I, one possible explanation for the high Fe/O ratio could be a direct contribution from flare-accelerated, Fe-rich material (Cane et al., 2003, 2006). However, Case II does not support this interpretation, as the STB and Earth footpoints have about the same longitudinal separation from the source region and would therefore presumably measure similar Fe/O ratios.

Another feature that can be compared with observations is the shock-connection time. We found that in Case I, STB develops a connection to the shock ∼30 minutes earlier than Earth, while in Case II, STB and Earth develop connections to the shock around the same time (∼20 minutes after the eruption). The shock-connection time can be related to the particle release time calculated from the SEP in-situ observations. For this event in particular, Lario et al. (2014) found that the estimated proton release time from velocity dispersion analysis is 07:10 UT ± 4 minutes at STB and 07:58 UT ± 9 minutes at Earth, which suggests that STB developed a connection to the shock earlier than the Earth did by 48 ± 13 minutes, consistent with the ∼30 minutes found in Case I.
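For context, such release times come from fitting particle onset times against inverse particle speed, t_onset(E) = t_release + L/v(E). A minimal sketch of that fit follows; the energies and onset times below are illustrative numbers, not the measured values for this event:

import numpy as np

# Hypothetical proton channels: kinetic energy [MeV] and onset time [min]
E_MeV = np.array([10.0, 20.0, 40.0, 60.0])
t_onset = np.array([93.0, 74.0, 62.0, 57.0])

E0 = 938.272                         # proton rest energy [MeV]
gamma = 1.0 + E_MeV / E0
beta = np.sqrt(1.0 - 1.0 / gamma**2) # relativistic v/c
c_AU_min = 0.1202                    # speed of light [AU/min]

# Linear fit of onset time vs. 1/beta: slope = L/c, intercept = t_release
slope, t_release = np.polyfit(1.0 / beta, t_onset, 1)
L_AU = slope * c_AU_min
# For these numbers: path length ~1.1 AU, release ~30 min after the epoch
print(f"path length ~ {L_AU:.2f} AU, release ~ {t_release:.1f} min")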
However, there are also features on which the two cases agree. For example, in both cases, the shock connecting to STB is stronger than the shock connecting to Earth. This feature is consistent with the SEP observations that the energy spectra of He, O, and Fe are harder at STB than at Earth (Cohen et al., 2014). Also, the shock geometry connecting to the Earth location has a quasi-perpendicular nature (i.e., shock angle >45°) in both cases, although the shock is more oblique in Case I. For the shock connecting to STB, although the shock starts with a quasi-perpendicular nature in Case II, the shock angle quickly decreases with time. After ∼40 minutes, the shock in both cases has a quasi-parallel nature, with Case II more oblique.

We need to note that in this study we are mainly focused on the influence of the input magnetic maps on the MHD model output. But there are other parameters that could influence the modeling result. For example, the coronal heating parameters used in the MHD model (e.g., the input wave energy density and the length scale of energy dissipation) can greatly influence the final field topology (Linker et al., 2017). Enhanced coronal heating or a shorter dissipation length could lead to more field opening in the MHD simulation. For the PFSS model, a lower source surface location likewise leads to more open field in the solution. Moreover, the solar corona is a continuously changing environment, which may not be correctly represented by either the PFSS model or a relaxed steady-state MHD solution. In contrast, the global magnetofriction model could provide an approach to account for such time-dependent factors (Yeates et al., 2010; Cheung & DeRosa, 2012; Fisher et al., 2015). Last but not least, the CME flux rope models used for initiating the eruptions could also play an important role in the different CME-driven shock properties. Therefore, more work is needed for a comprehensive assessment of these factors and their relative importance for the model output.

In this study, using the CME on 2013 April 11 associated with an SEP event as an example, we demonstrated how the choice of magnetic inputs can result in a significantly different simulated background solar wind and, therefore, different CME-driven shock parameters and spacecraft connectivity in a global MHD model. There are multiple differences in the details of the synchronic and diachronic maps that may contribute to the differences in the results of the two cases: the characteristics of the active regions (especially on the far side), the strength of the polar field, and the flux imbalance between the polar regions. For the case in this study, we found that the flux imbalance between the two polar regions is much larger for the LM synchronic map (ratio of 1.8) than for the diachronic map (ratio of 1.2). This flux imbalance could influence the global field topology, the position of the HCS, and the magnetic field connectivity. The different polar fields also affect the resulting solar wind speed: as shown in Figures 4 and 6, the region of fast solar wind in the south is much wider and stronger in Case I than in Case II. All of these differences in the evolution of the shock parameters could lead to significantly different inferred particle acceleration processes and result in different expected SEP spectra, complicating the interpretation of the SEP observations.

Given the many differences between the two cases resulting from the two magnetic map inputs, one could potentially use multi-wavelength remote-sensing observations and in-situ measurements to help select the more appropriate input magnetic map, for instance by comparing the on-disk structures shown in the EUV observations (e.g., coronal holes) or the large-scale solar wind structures (e.g., helmet streamers) seen in the white-light observations. In addition, in-situ measurements of plasma parameters could be used to validate the solar wind solutions. However, we want to emphasize that the purpose of this study is not to distinguish which type of magnetic map is better, but rather to illustrate the model uncertainty due to imperfect magnetic field observations, which has to be taken into account whether the model is utilized for space weather prediction or for scientific research. In the meantime, we suggest that the magnetic input source should be explicitly mentioned in research papers that use global MHD models and that the associated model uncertainties be discussed wherever possible. This study also emphasizes the need for better observational coverage of the solar magnetic field to improve space weather forecasting, which should be a substantial consideration in the development of future missions.

Acknowledgments

We are very grateful to the referees for invaluable comments that helped improve the paper. We thank Marc DeRosa at LMSAL for the helpful discussion on the LM synchronic magnetic maps. MJ, NVN, and CMSC are supported by NASA HSR grant 80NSSC18K1126. The simulation results were obtained using the Space Weather Modeling Framework (SWMF), developed at the Center for Space Environment Modeling (CSEM), University of Michigan (https://github.com/MSTEM-QUDA/SWMF). We are thankful for the use of the NASA Supercomputer Pleiades at Ames and to its supporting staff for making it possible to perform the simulations presented in this paper. SDO is the first mission of NASA's Living With a Star Program.
2022-02-16T06:47:57.623Z
2022-02-15T00:00:00.000
{ "year": 2022, "sha1": "0cac7449b927baefb091619972718995d5991afd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0cac7449b927baefb091619972718995d5991afd", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
254020007
pes2o/s2orc
v3-fos-license
Regulation of salt tolerance in the roots of Zea mays by L-histidine through transcriptome analysis

Soil salinization is an important worldwide environmental problem and a main cause of reduced agricultural productivity. Recent findings suggested that histidine is a crucial residue that influences ROS reduction and improves plant tolerance to salt stress. Herein, we conducted experiments to understand the underlying regulatory effects of histidine on the maize root system under salt stress (100 mM NaCl solution system). Several antioxidant enzymes were determined. Differentially expressed genes (DEGs) and their associated pathways were identified by transcriptome sequencing. The results of the present study confirmed that histidine can ameliorate the adverse effects of salt stress on maize root growth. When the maize roots exposed to 100 mM NaCl were treated with histidine, the accumulation of superoxide anion radicals, hydrogen peroxide, and malondialdehyde, and the contents of nitrate nitrogen and ammonium nitrogen were significantly reduced, while the activities of superoxide dismutase, peroxidase, catalase, nitrate reductase, glutamine synthetase, and glutamate synthase were significantly increased. Transcriptome analysis revealed that a total of 454 (65 up-regulated and 389 down-regulated) and 348 (293 up-regulated and 55 down-regulated) DEGs were observed when the roots under salt stress were treated with histidine for 12 h and 24 h, respectively. The pathway analysis of those DEGs showed that a small number of down-regulated genes were enriched in phytohormone signaling and phenylpropanoid biosynthesis at 12 h after histidine treatment, and that the DEGs involved in phytohormone signaling, glycolysis, and nitrogen metabolism were significantly enriched at 24 h after treatment. These results of gene expression and enzyme activities suggested that histidine can improve the salt tolerance of maize roots through DEGs involved in the plant hormone signal transduction, glycolysis, and nitrogen metabolism pathways.

Introduction

Soil salinization is one of the most serious threats to the development of global agriculture. The Food and Agriculture Organization of the United Nations estimates that salt has affected more than 6% of the land area (Ilangumaran and Smith, 2017), and about 20% of the world's irrigated land is impacted by salt, with a direct economic loss of $12 billion per year (Tang et al., 2014). The monitoring of soil salinity and land cover revealed that Asia is one of the continents most affected by salt, especially northwest China (Hassani et al., 2020). The total area of saline soil in China is about 3.6×10^7 ha, accounting for 4.88% of the country's total available land base. The excessive accumulation of water-soluble salts in soil, such as K⁺, Mg²⁺, Ca²⁺, Na⁺, Cl⁻, SO₄²⁻, CO₃²⁻, and HCO₃⁻, can damage the soil structure, produce toxic effects on plants, hinder the growth of crops, cause the decline of soil fertility, and seriously reduce the productivity of the land (Parida and Das, 2005). Salinization of soil results from a combination of evaporation, salt precipitation and dissolution, salt transport, and ion exchange. Excessive soluble salt in the soil subjects plants to salt stress, one of the most detrimental environmental stresses, comprising osmotic stress, ionic toxicity, and oxidative stress (Bojórquez-Quintal et al., 2014; Tanveer and Shabala, 2018).
High salt stress can increase the levels of reactive oxygen species (ROS) and result in oxidative stress, which in turn affects plants at both the cellular and metabolic levels (Duan et al., 2015). ROS play important roles in maintaining normal plant growth and improving tolerance to environmental stresses. They have been implicated as second messengers in plant hormone responses and function as important signaling molecules that regulate normal plant growth and hormone responses to stresses such as salt stress (Yu et al., 2020). At high levels, however, reactive oxygen species can impair physiological function through cellular damage and the oxidation of the DNA, proteins, and lipid membranes of plant cells. For example, the accumulation of Ca²⁺ can activate ROS signals and change phospholipid components, induce plant hormone signal transmission, and regulate cytoskeleton dynamics and cell wall structure, which slows root growth and increases metabolites (Bybordi, 2010; Zhao et al., 2021).

Nitrogen is an essential element in plant growth. Plants absorb nitrate from soil and transform it into amino acids through a series of assimilation processes in nitrogen metabolism. These processes are affected by abiotic stress, especially salt stress. The change of enzyme activity in nitrogen metabolism depends on the species of plant and its sensitivity to salt stress (Gouia et al., 1994; Li et al., 2019; Yin et al., 2020). In addition, salt stress also affects sugar metabolism, changes the levels of sugars such as sucrose and fructose, and causes changes in enzyme activity in glycolysis (Shumilina et al., 2009). Plant hormones play an important role in mediating salt stress signals and controlling the balance between growth and the salt stress response. Plants regulate growth and development through hormone signal transduction and thereby improve their adaptability to salt stress (Yu et al., 2020).

The root is the first organ to undergo salinity stress and plays an important role in salt sensing. The root system can dynamically adjust its development and performance in response to biotic and abiotic stresses, including modulation of root growth, branching, forking, and redirection. These responses can be manifested differentially at the cellular, tissue, or organ level. To fully capture the responses of the root system to salt stress, it is critical to explore the key genes and associated functions underlying the development of salt tolerance in crops.

L-histidine is one of the standard amino acids in proteins and is critical for plant growth and development. Histidine kinases (HKs) play important roles in the regulation of plant development in response to hormones as well as environmental stimuli (Nongpiur et al., 2012). The study of the interaction between histidine and membranes and macromolecules confirmed that histidine plays a unique role in enabling protein/peptide-membrane interactions that occur in marine or other high-salt environments (Xian et al., 2022). Previous studies also showed that histidine plays an important role in regulating the biosynthesis of other amino acids, the chelation and transport of metal ions, and the development and growth of plant embryos in different plants, including Arabidopsis and maize (Zea mays) (Radwanski and Last, 1995; Kramer et al., 1996; Noutoshi et al., 2005; Stepansky and Leustek, 2006; Irtelli et al., 2009).
However, the genetic effects of salinity on maize root development at the gene level remain unclear, and the genetic machinery underlying maize root responses to salt stress remains uncharacterized. Maize is one of the most widely planted grain crops in the world and ranks second among the important crops of China, accounting for 29.5% of total grain production (Medeiros et al., 2021). Soil salinization is one of the major abiotic stresses negatively impacting the growth, development, yield, and seed quality of maize. The critical challenge for enhancing the quality and productivity of maize is how to improve its tolerance or to incorporate resistance to different stresses, including salt stress. In the present study, maize seeds were germinated in the laboratory. After germination, we treated the seedling roots with salt stress and histidine for different time periods to 1) compare the natural traits and morphological variations of seedling roots among the control and treatment samples, 2) evaluate the antioxidant roles and activities of histidine in antioxidant enzymes and nitrogen metabolism in the seedling roots under salt stress, 3) identify differentially expressed genes under control, salt stress, and histidine treatment conditions by RNA-seq analysis, and 4) investigate the effects of histidine on the potential pathways responsible for differences in root salt stress responses by gene co-expression network and pathway analyses. Our study provides a quantitative analysis of an effective treatment for soil salinization and scientific guidance for the future formulation of salinization control in crops.

Plant materials and treatments

All plant materials used in the experiments were from the maize variety "ningdan 33". All experiments were conducted at the Ningxia Key Laboratory for the Development and Application of Microbial Resources in Extreme Environments, North Minzu University, China. The seeds were disinfected with 0.1% HgCl₂ and inoculated on 0.8% agar medium (containing Hoagland nutrient solution) supplemented with sterile water and 0.1 mM histidine for 12 h at 4°C. The samples were kept in a box at 75% relative humidity and 28°C. When the seed roots grew to 3 cm, all samples were randomly divided into 4 groups: (i) the non-stress control (CK0) treated with Hoagland nutrient solution, (ii) the non-stress treatment group (T0) treated with 0.1 mM histidine + Hoagland nutrient solution, (iii) the salt stress control (CK1) treated with 100 mM NaCl + Hoagland nutrient solution, and (iv) the salt stress treatment group (T1) treated with 0.1 mM histidine + 100 mM NaCl + Hoagland nutrient solution. Each group comprised 15 hydroponic plastic bottles, each with 4 seeds and 450 g of sterilized small stones. The incubator was set to 75% relative humidity with a cycle of 13,000 lx at 28°C for 4 h, 11,000 lx at 25°C for 4 h, 9,000 lx at 20°C for 3 h, 6,000 lx at 20°C for 2 h, 9,000 lx at 25°C for 3 h, and dark at 18°C for 8 h. Our previous experiments confirmed the alleviating effect of histidine at 100 mM, 10 mM, 1 mM, 0.1 mM, 0.01 mM, and 0.001 mM on the NaCl-induced damage to the root, with the 0.1 mM histidine treatment resulting in the most substantial resistance in the maize seedling roots (Figure S1). After treatment for 12 h and 24 h, 12 young roots were randomly selected and divided into three sub-groups at each time point. All samples were snap-frozen with liquid nitrogen and stored at -80°C.
RNA was extracted from each sample for transcriptomic analysis. After 7 days of treatment, the root tips were stained with DAB and NBT. After 14 days of treatment, the total root length, root projected area, root volume, root surface area, and numbers of root tips and forks from each treatment were measured with a root analyzer (GXY-A, Sichuan, China), and the activities of root-related enzymes were measured at the same time.

Nitroblue tetrazolium and diaminobenzidine staining of root tips

The DAB reaction was conducted in the dark at 28°C for 6 h following the method of Li et al. (2013). The root tips of each treatment group were collected and placed into a culture plate with 1 mg ml⁻¹ DAB reaction solution (pH 5.5, 50 mM Tris-HCl). The samples were then transferred to 90% (v/v) ethanol for decolorization in a 70°C water bath and stored in 50% glycerol. Highly localized accumulation of H₂O₂ was observed as dark brown at ×100 magnification. The NBT staining was based on the method of Khokon et al. (2011). The root tips of each treatment were placed into a culture plate with 0.5 mg ml⁻¹ NBT reaction solution (pH 7.8, 50 mM PBS). The culture plates were kept in the dark at 28°C for 4 h, then transferred to a 70°C water bath containing 90% (v/v) ethanol for decolorization and stored in 50% glycerol. The localization of O₂·⁻ was observed as blue at ×100 magnification.

Measurement of O₂·⁻ and H₂O₂ content

The contents of O₂·⁻ and H₂O₂ of the samples were determined following the method of Jambunathan (2010). One gram of the frozen root tissue of each sample was mixed with 5 mL of 50 mM PBS (pH 7.8), ground and homogenized, and the homogenate was centrifuged. One mL of the extraction solution of each sample was mixed with 0.5 mL PBS and 10 mM hydroxylamine hydrochloride and reacted at 25°C for 30 min. After the addition of 1 mL of 17 mM p-aminobenzenesulfonic acid and α-naphthylamine, the mixture was incubated at 25°C for 15 min. The absorbance was measured at 530 nm, and the content of O₂·⁻ was calculated from the standard curve. The extraction solution for determining the content of H₂O₂ was prepared by grinding in acetone, homogenization, and centrifugation. One mL of the extraction solution was mixed with 0.1 mL of 5% titanium sulfate (v/v) and 0.2 mL of strong aqua ammonia to form a precipitate. After the supernatant was discarded, 5 mL of 2 M sulphuric acid was added; the absorbance was read at 415 nm and the H₂O₂ content was determined from the standard curve.
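Each of the colorimetric assays above converts an absorbance reading into a content via a linear standard curve. A minimal sketch of that step follows; the standard concentrations and readings are illustrative, not the values used in this study:

import numpy as np

# Standards: known concentrations and their absorbances (illustrative)
conc_std = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # e.g., ug/mL
abs_std = np.array([0.00, 0.11, 0.22, 0.45, 0.88])

slope, intercept = np.polyfit(conc_std, abs_std, 1)  # A = slope*C + intercept

def content_from_absorbance(a_sample, dilution=1.0):
    """Invert the standard curve and apply the dilution factor."""
    return (a_sample - intercept) / slope * dilution

print(content_from_absorbance(0.33))  # ~15 ug/mL for these numbers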
Measurement of the contents of ammonium nitrogen and nitrate nitrogen

The method of Cataldo et al. (1975) was used to determine the content of nitrate nitrogen (NO₃⁻) in plants. One gram of the frozen root tissue of each sample was extracted in a boiling distilled-water bath for 30 min. After cooling, the extraction solution was diluted with distilled water to 25 mL. Then 0.1 mL of the diluted extraction solution was mixed with 0.4 mL of salicylic acid-H₂SO₄ [1:5 (v/v)] for 20 min. After the addition of 9.5 mL of 2 M NaOH, the absorbance of the solution was read at 410 nm and the NO₃⁻ content was calculated from the standard curve. The content of ammonium nitrogen (NH₄⁺) was determined by ninhydrin colorimetry. Half a gram of the frozen root tissue of each sample was mixed with 1.7 M acetic acid and diluted with distilled water to 100 mL. Two mL of the diluted extraction solution was mixed with 3 mL of 54 mM acidic ninhydrin acetic acid buffer (pH 5.4) and 0.1 mL of 60 mM ascorbic acid, and incubated in a boiling water bath for 15 min. After cooling, absolute ethanol was added to 10 mL. The absorbance was read at 580 nm and the NH₄⁺ content was estimated from the standard curve.

Assay of the enzyme activities

The activities of glutamate synthase (GOGAT), glutamine synthetase (GS), glutamate dehydrogenase (GDH), and nitrate reductase (NR) were evaluated based on the methods of Lin and Kao (1996) and Gangwar et al. (2011). The root tissue was homogenized with extraction solution at a ratio of 1:5 in an ice bath. After centrifugation at 4°C and 15,000×g for 20 min, the supernatant of the homogenized solution was collected as the crude solution for measuring the enzyme activities. The GS crude enzyme solution was extracted with 50 mM Tris-HCl buffer (pH 8.0, 2 mM MgCl₂, 2 mM DTT, 0.4 M sucrose). The mixture of 0.7 mL of the crude enzyme solution with 1.6 mL of 0.1 M Tris-HCl buffer (pH 7.5, 2 mM MgCl₂, 20 mM sodium glutamate, 20 mM cysteine, 2 mM EGTA, 80 mM hydroxylamine hydrochloride) and 0.7 mL of 40 mM ATP was incubated at 37°C for 30 min. After the addition of 1 mL of color developing agent (0.2 M TCA, 0.35 M FeCl₃, 0.6 M HCl), the absorbance was measured at 540 nm. The crude enzyme solution of GOGAT was extracted with 10 mM Tris-HCl buffer (pH 7.5, containing 1 mM MgCl₂, 1 mM EDTA, 1 mM DTT). A total of 0.5 mL of crude enzyme solution was mixed with 0.05 mL of 0.1 M α-ketoglutarate, 0.1 mL of 10 mM KCl, 0.2 mL of 3 mM NADH, and 1.75 mL of buffer. After the addition of 0.4 mL of 20 mM glutamine, the absorbance changes over 3 min were immediately recorded at 340 nm. The crude enzyme solution of GDH was extracted with 0.2 M Tris-HCl buffer (pH 8.0). One mL of the crude enzyme solution was thoroughly mixed with 0.3 mL of 0.1 M α-ketoglutarate, 0.3 mL of 1 M NH₄Cl, 0.2 mL of 3 mM NADH, and 1.2 mL of buffer. The decrease of absorbance was recorded for 3 min at 340 nm. The crude enzyme solution for NR was extracted with 25 mM phosphate buffer (pH 8.7, 10 mM cysteine, 1.3 mM EDTA). 1.5 mL of the crude enzyme solution was mixed with 1.2 mL of 0.1 M KNO₃ phosphate buffer (pH 7.5) and 0.5 mL of 3 mM NADH and incubated in a water bath at 25°C for 1 h in the dark. After the addition of 1 mL of 1% sulphonamide solution (v/v) and 1 mL of 0.02% naphthalene ethylenediamine hydrochloride solution (v/v), the reaction mixture was kept in the dark for 15 min. The absorbance was read at 540 nm and the NO₂⁻ was calculated according to the standard curve.

The activities of superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT) were estimated following the methods of Decleire et al. (2012) and Maehly and Chance (1954). The root tissues of each sample were mixed with 50 mM PBS (pH 7.8) at a ratio of 1:10 in an ice bath and centrifuged at 15,000×g for 20 min at 4°C. After mixing 0.1 mL of SOD crude enzyme solution with 0.3 mL of 0.13 M methionine, 0.75 mM nitroblue tetrazolium, 0.1 mM EDTA-Na₂, and 20 mM riboflavin, and 1.7 mL PBS, the mixture was incubated at 4,000 lx for 20 min. The absorbance was measured at 560 nm. The POD activity was evaluated by mixing 0.1 mL of crude enzyme solution with 2.9 mL of 0.5% guaiacol (containing a small amount of H₂O₂). The increase of absorbance was recorded for 3 min at 470 nm. The mixture of 0.1 mL of the crude enzyme solution, 0.4 mL of 0.1 M H₂O₂, and 2.5 mL of PBS was used to measure the absorbance of CAT at 240 nm, and the change of absorbance was recorded over 3 min.

Measurement of the malondialdehyde content

The content of malondialdehyde (MDA) was determined by the thiobarbituric acid (TBA) method (Li et al., 2013). A mixture of 1 g of the frozen root tissue and 10 mL of TCA was ground and homogenized, then centrifuged. Then 2 mL of the extraction solution was mixed with 0.6% TBA (v/v) and incubated in a boiling water bath for 15 min. The absorbance values were read at 532, 600, and 450 nm.
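The three wavelengths above enter the formula commonly used with this TBA assay (stated here as the standard form for completeness, not as the authors' exact equation):

\[
C_{\mathrm{MDA}}\ (\mu\mathrm{mol\,L^{-1}}) = 6.45\,(A_{532} - A_{600}) - 0.56\,A_{450},
\]

where A600 corrects for nonspecific turbidity and A450 for interfering soluble sugars; the tissue content then follows from the extract volume and the fresh weight.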
RNA extraction, cDNA library preparation, and sequencing

Total RNA was extracted from the root tissues using the Plant RNA Purification Reagent (Invitrogen, USA) following the manufacturer's instructions. Potential contaminating genomic DNA was removed from the RNA preparation with the DNase I kit (TaKaRa Bio, Japan). RNA integrity was checked by 1% agarose gel electrophoresis. Total RNA concentration was determined using a 2100 Bioanalyser (Agilent Technologies, USA), and RNA quality was assessed using the ND-2000 NanoDrop (NanoDrop Technologies, USA). RNA-seq libraries were constructed using the TruSeq™ RNA sample preparation kit from Illumina (San Diego, USA). 1,000 ng of total RNA was used to isolate mRNA by polyA selection with oligo(dT) beads. The extracted mRNA was fragmented in the first-strand synthesis buffer by heating at 94°C, followed by first-strand cDNA synthesis using reverse transcriptase and random primers. Synthesis of double-stranded cDNA was performed using the second-strand master mix provided with a SuperScript double-stranded cDNA synthesis kit (Invitrogen, USA). The resulting double-stranded cDNA was end-repaired, dA-tailed, and ligated with NEBNext adaptors. Libraries were size-selected at 300 bp on 2% Low Range Ultra Agarose. Finally, the libraries were enriched by 15 cycles of amplification. After quantification by TBS380, the paired-end RNA-seq libraries were sequenced on the Illumina HiSeq X Ten/NovaSeq 6000 sequencer (2 × 150 bp read length). The data presented in the study are deposited in the NCBI repository, accession number PRJNA874354.

Bioinformatics and RNA-seq analysis

The transcriptomic analysis was based on the genome of Zea mays (GCF_902167145.1, https://www.ncbi.nlm.nih.gov/genome/?term=Zea_mays) from the NCBI database. The acquired paired-end reads were trimmed and quality controlled by SeqPrep (https://github.com/jstjohn/SeqPrep) and Sickle with default parameters. The clean reads in each sample were independently mapped to the reference genome with the orientation mode of the HISAT2 package (Kim et al., 2015). The mapped reads of each sample were further assembled using the reference-based approach of the StringTie package (Pertea et al., 2015). The differential expression of genes between the control and treatment groups was calculated based on the transcripts per million reads (TPM) using DESeq2 (Love et al., 2014). Genes with |log2 fold change| ≥ 1 and p-adjust < 0.05 were considered significantly different. For functional enrichment analysis, significantly up- and down-regulated genes (Benjamini-Hochberg with an FDR ≤ 0.05) were selected for GO and KEGG metabolic pathway analysis using Goatools (https://github.com/tanghaibao/Goatools) and KOBAS (http://kobas.cbi.pku.edu.cn/home.do) (Xie et al., 2011).
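In practice, the thresholds above reduce to a simple filter on the DESeq2 output table. A minimal sketch follows (the column names match the standard DESeq2 results format; the file name is illustrative):

import pandas as pd

res = pd.read_csv("deseq2_results_CK1_12_vs_T1_12.csv")  # illustrative name

# |log2 fold change| >= 1 and adjusted p-value < 0.05
sig = res[(res["log2FoldChange"].abs() >= 1) & (res["padj"] < 0.05)]
up = sig[sig["log2FoldChange"] > 0]
down = sig[sig["log2FoldChange"] < 0]
print(f"{len(sig)} DEGs: {len(up)} up-regulated, {len(down)} down-regulated")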
The figures were generated with Adobe Illustrator and GraphPad Prism (v8.0), and significant differences were evaluated with the IBM SPSS Statistics (v25) package.

qRT-PCR assays

cDNAs were synthesized from 100 ng of purified total RNA using a PrimeScript™ RT Reagent Kit with gDNA Eraser (TaKaRa, China). The quantitative real-time polymerase chain reaction (qRT-PCR) analysis was conducted with a TB Green Premix Ex Taq™ II Kit (Takara, China). The reaction mixture contained 12.5 µL TB Green Premix Ex Taq, 1 µL each of the forward and reverse primers, 1 µL cDNA, and 9.5 µL ddH₂O. The qPCR thermal cycling was as follows: 95°C for 30 s, followed by 40 cycles of 95°C for 5 s, 55°C for 30 s, and 72°C for 30 s. Each reaction was repeated three times. In every qPCR run, ZmGAPDH was used as an internal control to minimize systematic variations in the amount of cDNA template. The primers used for qRT-PCR are listed in Table S1.
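Relative expression from such qRT-PCR runs is conventionally computed with the 2^(−ΔΔCt) method against the ZmGAPDH internal control; a minimal sketch (the Ct values are illustrative):

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ddCt): target gene vs. reference gene (e.g., ZmGAPDH),
    treatment sample vs. control sample. Inputs are mean Ct values."""
    d_ct_treat = ct_target - ct_ref              # normalize treatment
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # normalize control
    return 2.0 ** (-(d_ct_treat - d_ct_ctrl))

# Illustrative Ct means: target 24.1/26.3, ZmGAPDH 18.0/18.1
print(rel_expression(24.1, 18.0, 26.3, 18.1))    # ~4.3-fold up-regulation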
Effect of histidine on root morphology under salt stress

Our results confirmed that when the root was under salt stress, the total length of the root (all fibrous roots) was significantly increased by adding histidine: the average length increased from 1690 cm to 2621 cm, and the root projected area and surface area were enlarged by 45.8% and 45.31%, respectively. In addition, the numbers of root forks and root tips were significantly increased, by 63.2% and 30.12%, respectively (Figure 1C).

FIGURE 1 Root morphology (ruler represents 20 cm) (A); root tips with DAB and NBT staining (ruler represents 200 µm) (B); total root length, root projected area, root surface area, root volume, number of root tips, and number of root forks (C) in the four different treatments. Data are presented as the mean ± SEM for six biological replicate samples. Bars labeled with different letters indicate a significant difference between treatments.

Effect of histidine on ROS accumulation under salt stress

The level of ROS is related to the adaptation of plants to salt stress. After the salt stress treatment, the O₂·⁻ level of the root was significantly increased. When the roots under salt stress were further treated with histidine, the accumulated levels of O₂·⁻ and H₂O₂ were significantly reduced (Figure 2A). The DAB and NBT staining revealed that the root tips showed dark brown and deep blue patches with the salt stress treatment, but the patch colors became lighter after the histidine treatment (Figure 1B). Antioxidant enzymes are very important in the salt tolerance of plants and are often used as indicators to evaluate plant salt tolerance. The analysis of enzyme activities showed that, after the histidine treatment, the activities of SOD, POD, and CAT in the root tissues were significantly increased, by 21.92%, 12.84%, and 59%, respectively (Figure 2A), and were significantly higher than those with the salt stress treatment alone. In addition, the content of MDA was significantly reduced after the histidine treatment.

Effects of histidine on nitrogen use efficiency in maize under salt stress

When the roots were treated with salt stress, nitrate and ammonium nitrogen accumulated significantly, while the activities of NR, GS, and GOGAT significantly decreased (Figure 2B). However, after the histidine treatment, compared with the control group (CK1), the contents of nitrate nitrogen and ammonium nitrogen in the roots were significantly decreased, by 69.54% and 34.48%, respectively, while the activities of NR, GS, and GOGAT were significantly increased, by 53.2%, 9.26%, and 176.52%, respectively.

Expression profiles of transcripts

A total of 1,253,271,938 raw paired-end reads were obtained from the Illumina sequencing platform. After quality control, 1,239,263,410 clean paired-end reads with 53.86-54.74% GC content were used in the transcriptomic analysis. All samples were independently aligned with the reference genome using HISAT2. About 87.98%-89.75% of the reads were mapped to the maize genome, and 85.29%-86.96% of the reads were uniquely mapped to the reference sequences (Table S2). Based on the reference genome, the mapped reads in each sample were processed using the StringTie package, and the assembled contigs were compared with the original genomic annotation information for novel transcripts and genes. A total of 52,229 transcripts with >1,800 bp and 4,432 transcripts with ≤200 bp were derived. The search against the known transcripts of the genome identified 28,976 potential novel transcripts, 5,977 of which may represent new genes (Figure S2). The principal component analysis (PCA) revealed that the CK0 and T0 samples at 12 h and 24 h showed no difference along PCI (39.80% and 24.36%) and PCII (8.82% and 15.94%). The CK1 and T1 treatment samples were not significantly different along PCI, but they could be separated into two different groups along PCII at 12 h and 24 h (Figure 3A). The unique and shared genes across all samples were identified by Venn diagram analysis (Figures 3B, C). The details of the DEGs are presented in Table S3.

Functional annotation of genes

All genes obtained from the transcriptomic assembly were searched against six major databases for gene functional annotation. Of a total of 47,042 (94.47%) annotated genes, 36,571, 18,364, 42,430, 46,966, 32,196, and 27,808 genes were annotated by the GO, KEGG, COG, NR, Swiss-Prot, and Pfam databases, respectively, accounting for 77.74%, 39.04%, 90.20%, 99.84%, 68.44%, and 59.11% of the total annotated genes (Table S4). The gene function predictions based on the COG database assigned a total of 42,430 genes to 23 categories, including the largest group (n = 26,306, 62%) of unknown function, followed by the posttranslational modification, protein turnover, chaperones (n = 3,145, 7.41%) and transcription (n = 2,749, 6.48%) categories. Only 7 and 5 genes were assigned to the nuclear structure and cell motility categories, accounting for 0.016% and 0.012%, respectively (Figure 4A). The GO functional annotations of DEGs assigned to the biological process, cellular component, and molecular function categories are presented in Table S5 and Figure 4B. In all comparisons, most DEGs were assigned to the cellular process category of the biological process, the cell part category of the cellular component, and the binding and/or catalytic activity categories of the molecular function.

Functional pathway enrichment analysis

The GO enrichment analysis found that the DEGs in the comparisons of CK0_12 vs T0_12 (569 genes), CK1_12 vs T1_12 (385 genes), CK0_24 vs T0_24 (59 genes), and CK1_24 vs T1_24 (278 genes) were enriched in 252, 146, 133, and 280 pathways, respectively (Table S6). With p-adjust < 0.05 as the significance threshold, about 22, 16, 12, and 32 GO pathways were significantly enriched in those analyses. Most of the DEGs in the 12 h and 24 h groups with histidine treatment alone were related to the plasma membrane (GO:0005886) and response to hydrogen peroxide (GO:0042542). Under salt stress, most of the DEGs in the 12 h and 24 h histidine treatment groups were associated with the reactive oxygen species metabolic process (GO:0072593) and response to wounding (GO:0009611) (Figure 5).
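Under the hood, this kind of GO/KEGG term enrichment is a hypergeometric (one-sided Fisher) test per term, followed by multiple-testing correction. A minimal sketch for one term, with illustrative counts:

from scipy.stats import hypergeom

N = 47042   # annotated background genes
K = 320     # background genes carrying the term (illustrative)
n = 385     # DEGs tested in this comparison
k = 14      # DEGs carrying the term (illustrative)

# P(X >= k) for X ~ Hypergeom(N, K, n)
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p:.3g}")  # then BH-adjust across all terms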
The KEGG enrichment analysis of the DEGs showed that 266, 178, 20, and 120 DEGs in the comparisons of CK0_12 vs T0_12, CK1_12 vs T1_12, CK0_24 vs T0_24, and CK1_24 vs T1_24 were enriched in 76, 61, 17, and 44 pathways, respectively (Table S7 and Figure 6). With p-adjust < 0.05 as the cutoff, the 4 enriched KEGG pathways in the comparison of CK0_12 vs T0_12 were related to environmental information processing and metabolism: starch and sucrose metabolism, plant hormone signal transduction, linoleic acid metabolism, and phenylpropanoid biosynthesis. The significantly enriched KEGG pathways in the comparison of CK0_24 vs T0_24 were related to genetic information processing and metabolism, including cutin, suberine and wax biosynthesis, protein processing in the endoplasmic reticulum, and flavonoid biosynthesis. Without salt stress, the DEGs in the 12 h treatment group were mainly associated with carbohydrate metabolism, signal transduction, lipid metabolism, and biosynthesis of other secondary metabolites, while in the 24 h group the DEGs were involved in lipid metabolism, biosynthesis of other secondary metabolites, and folding, sorting, and degradation. For the samples with the salt stress treatment, the KEGG pathways enriched by DEGs were all associated with environmental information processing and metabolism. In the comparison of CK1_12 vs T1_12, only the DEGs in phenylpropanoid biosynthesis and plant hormone signal transduction were significantly enriched. In the comparison of CK1_24 vs T1_24, 6 pathways related to environmental information processing and metabolism were significantly enriched, including plant hormone signal transduction, glycolysis/gluconeogenesis, nitrogen metabolism, the pentose phosphate pathway, fructose and mannose metabolism, and linoleic acid metabolism (Figure 6). The RT-qPCR analysis of 10 DEGs from the enriched KEGG pathways confirmed the consistency of the results between the RNA-seq and RT-qPCR analyses (Figure 7).

After 12 h of histidine treatment, all DEGs in T1 were down-regulated, including auxin early response genes (AUX/IAA), auxin response factor (ARF), small auxin up-regulated RNA (SAUR), gretchen hagen 3 (GH3), abscisic acid receptor (PYL), and jasmonic acid inhibitor (JAZ). However, after 24 h of histidine treatment, some DEGs were up-regulated and some down-regulated. Thirteen up-regulated genes involved GH3, cytokinin receptor (AHK2/3/4), DELLA, phytochrome interacting factor 4 (PIF4), ABA response element (ABRE), ABRE binding protein (AREB/ABF), and JAZ. Five down-regulated genes involved GH3, SAUR, and TCH4. A total of 39 DEGs associated with phenylpropanoid biosynthesis, glycolysis, and nitrogen metabolism were identified in the CK1 and T1 groups. In the 24 h treatment group, 11 and 5 up-regulated genes were significantly enriched in glycolysis and nitrogen metabolism, respectively. In glycolysis, these up-regulated genes were related to 6-phosphofructokinase (pfkA), pyrophosphate-fructose-6-phosphate 1-phosphotransferase (PFP), triose phosphate isomerase (TPI), glyceraldehyde-3-phosphate dehydrogenase (gapN), pyruvate kinase (PK), pyruvate decarboxylase (PDC), alcohol dehydrogenase (ADH1), and acetyl-coenzyme A synthetase (ACSS1/2).
In nitrogen metabolism, these up-regulated genes were related to nitrate reductase [NADH] (NR), glutamine synthetase (glnA), and glutamate synthase [NADH] (GLT1). In the 12 h treatment group, 1 and 3 down-regulated genes were enriched in glycolysis and nitrogen metabolism, respectively, corresponding to pfkA in glycolysis and carbonic anhydrase (cynT) in nitrogen metabolism. In the 12 h treatment group, 15 down-regulated genes were significantly enriched in phenylpropanoid biosynthesis, including phenylalanine ammonia lyase (PAL), cinnamoyl-CoA reductase (CCR), cinnamyl alcohol dehydrogenase (CAD), and peroxidase (E1.11.1.7) (Figure 9).

FIGURE 5 GO enrichment analysis of DEGs. p-adjust < 0.05 was considered to be significantly different.

FIGURE 6 KEGG enrichment analysis of DEGs (A); up- and down-regulated DEGs in KEGG pathways (B). p-adjust < 0.05 was considered to be significantly different.

FIGURE 7 Quantitative real-time PCR (RT-qPCR) validation of the selected DEGs from the RNA-seq analysis. The relative expression obtained by RT-qPCR is shown by broken lines. RT-qPCR data show the mean values from three replicates, and the error bars represent the SEM of the means; the corresponding expression data for RNA-seq are presented in the grey histogram.

Discussion

The interaction between plant and environment is a constant and essential activity throughout the plant life cycle, which influences plant growth, development, and survival. In arid and semi-arid regions, soil salinization is an important worldwide environmental problem. As one of the most important crops and the main material for bioethanol production in China, maize is extremely sensitive to drought and salt stresses, which significantly affect plant growth and development. The results of the present study showed that histidine can protect the maize root system from salt stress and enhance the abiotic stress tolerance of maize. When applied exogenously to maize exposed to salt stress, histidine not only increased the growth and other physiological characteristics of the roots, but also counteracted the environmental salt stress by activating antioxidant enzyme activities and regulating the nitrogen metabolism and plant hormone signal transduction pathways.

Roots are highly sensitive to changes in their surrounding environment, and the responses to stresses such as salinity and drought are very dynamic and complex in nature (Duan et al., 2015). Studies have shown that the addition of amino acids from hydrolyzed meat meal induces structural changes in maize roots (Ertani et al., 2010; Colla et al., 2015). Our results confirmed that salt stress caused damage to maize roots and reduced the root area, number of lateral roots, and root length. However, after histidine was applied, the root length, root projected area, root surface area, root volume, and numbers of root tips and forks were significantly increased (Figure 1).

The plant antioxidant system plays an important role in resisting environmental stresses. Salinity exposure causes enhanced energy consumption and often enhanced respiration, which are directly linked to the enhanced production of ROS (Tiwari et al., 2002). ROS, including hydroxyl radicals (OH·), hydrogen peroxide (H₂O₂), and superoxide (O₂·⁻), act as signaling molecules (Jiang et al., 2012). Previous studies suggested that plant roots can absorb amino acids, which in turn, as bio-stimulants, can improve the ability of plants to resist salt stress (Watson and Fowden, 1975; Colla et al., 2014).
For example, proline can remove free radicals in plants, significantly increase the activity of antioxidant enzymes, and reduce the content of MDA (Satoh et al., 2002; Ashraf and Foolad, 2007; Rady et al., 2019). Exogenous ornithine, glutamate, and γ-aminobutyric acid can increase the activities of CAT, SOD, and POD and reduce the Na⁺/K⁺ ratio and the contents of H₂O₂ and MDA under salt stress (Chang et al., 2010; Da Rocha et al., 2012; Ahmadi et al., 2021; Kaspal et al., 2021). Our experiments revealed that the histidine treatment significantly increased the activities of the resistance enzymes SOD, CAT, and POD in maize roots and decreased the accumulation of O₂·⁻, H₂O₂, and MDA (Figures 1A and 2A). In the antioxidant enzymatic system of plants, SOD forms the first line of defense against oxidative stress, POD oxidizes phenolic compounds (Hasanuzzaman et al., 2020), and CAT rapidly decomposes H₂O₂ to produce H₂O and O₂ (Hao et al., 2021). The increased CAT activity and the reduced H₂O₂ content in the maize roots demonstrated that histidine can mitigate oxidative damage and improve the tolerance of maize to salt stress.

Plants can adapt to salinity stress through flexible regulation of hormone levels and/or signaling. Accumulating evidence indicates that plant hormones, besides controlling plant growth and development under normal conditions, also mediate responses to various environmental stresses (e.g., salt stress) and thus regulate plant growth adaptation. It has been shown that amino acids are closely associated with plant hormones. For example, auxins are a group of plant hormones that affect plant growth and development. Exogenous L-glutamate can affect the auxin level of root tips in Arabidopsis thaliana, and L-tryptophan, as the precursor of auxin synthesis, can improve growth and photosynthetic capacity (Müller et al., 1998; Walch-Liu, 2006). Our results revealed that, after 24 h of the histidine treatment, the DEGs involved in 5 plant hormone signal transduction pathways were significantly up-regulated, including the growth-promoting hormones auxin (IAA), cytokinin (CTK), and gibberellin (GA) and the stress-response hormones abscisic acid (ABA) and jasmonic acid (JA) (Figure 8). The results of this study also confirmed that histidine can induce the expression of GH3 (LOC100280445; LOC100383126), which is involved in IAA metabolism and regulates the content of free IAA in maize roots (Feng et al., 2015), thereby improving the tolerance of maize to environmental stress.

CTK is another major phytohormone that not only regulates plant growth and development but also plays an important role during stress and in the nutrient metabolic pathways of crop plants. Research has shown that a decrease in CTK levels improves the survival of plants under salt stress (Nishiyama et al., 2011; Pizarro et al., 2021). After 24 h of the histidine treatment, genes associated with the CTK signaling pathway were up-regulated, including AHK2/3/4 (LOC732762; LOC732835). AHKs act as receptors during cytokinin signaling and play a significant role in providing cytokinin function during plant development (Cucinotta et al., 2020). AHKs can also regulate histidine kinase activity, sense the CTK signal, and mediate the function of CTK. Previous research reported that drought, cold, and salt stress can induce the expression of AHK3, and that the survival rate of ahk mutants increased under salt stress and drought (Tran et al., 2007; Kumar and Verslues, 2015).
The up-regulated DEGs related to CTK signal transduction may be closely linked to the histidine treatment: histidine may participate in signal transduction through AHKs and act as a signal molecule that interacts with specific receptors on the cell membrane, causing changes in plant morphology, physiology, and biochemistry, as reported for other amino acids by Ryan and Pearce (2001).

GA is a plant hormone that controls major aspects of plant growth such as germination, elongation growth, flower development, and flowering time. DELLA proteins are negative regulators of GA signaling that act immediately downstream of the GA receptor. When the maize roots were treated with histidine, the genes related to DELLA (LOC103650192) and PIF4 (LOC103625828) in the GA signal transduction pathway were up-regulated. In the absence of GA, DELLA blocks the binding of PIF3 and PIF4 to DNA, thus inhibiting hypocotyl growth. In the presence of GA, a GA-GID1-DELLA complex forms, leading to the ubiquitination and degradation of DELLA, enabling PIF activity and promoting hypocotyl elongation (Li et al., 2016). This suggested that DELLA proteins promote the expression of downstream negative components of GA signaling and provide a direct feedback mechanism for regulating GA homeostasis.

ABA is an important phytohormone regulating plant growth, development, and responses to abiotic stress, as well as controlling seed dormancy and germination. Plants produce ABA under high salt stress, and ABA improves the stress resistance of plants (Nakashima and Yamaguchi-Shinozaki, 2013). In the present study, ABRE (LOC100285149) and AREB/ABF (LOC100502540) were up-regulated under the histidine treatment. ABA activates the expression of many genes through the ABRE in their promoter regions, and AREB/ABF regulates the transcription of downstream target genes mediated by the ABRE (Zhu, 2002). The overexpression of AREB/ABF in maize roots suggested that the addition of histidine can improve stress resistance by regulating the expression of genes related to the stress-response hormones. Several other studies also reported that AREB/ABF-overexpressing plants showed ABA hypersensitivity and enhanced tolerance to abiotic stresses, such as freezing, drought, and salt stress in Arabidopsis (Fujita et al., 2005; Fujita et al., 2013).

JA is a stress-related hormone and plays a crucial role in a variety of plant developmental and defense mechanisms. The JA signaling pathway is involved in the response and adaptation of plants to abiotic stresses, including cold, drought, and salinity. JAZ proteins play pervasive roles in the response to biotic stress and in plant development. Jasmonate promotes the binding of JAZ proteins to the SCF^COI1 ubiquitin ligase, causing JAZ degradation, releasing MYC2 transcription factors, and triggering jasmonic acid-dependent gene expression. The up-regulation of JAZ (LOC100284433; LOC100282471; LOC100276585; gpm925) at 24 h after histidine treatment in the T1 group indicated that the expression of JAZ genes plays an important role in plant hormone signal transduction and the defense response against salt stress. In fact, JAZ protein is a key hub of crosstalk between JA and other hormone signaling pathways (such as auxin, gibberellin, and salicylic acid) (Kazan and Manners, 2012). JAZ can promote the transcription of GA response genes and bind to DELLA to interfere with the interaction between DELLA and PIF transcription factors, offsetting the growth inhibition produced by DELLA (Staswick, 2008).
Studies have shown that amino acids can trigger hormone-related processes in plants and induce the biosynthesis genes of jasmonic acid, abscisic acid, and salicylic acid (Colla et al., 2014). Based on our results, we speculate that histidine may act as a signal molecule to regulate genes involved in the plant hormone signal transduction pathways by mediating plant hormone levels. The nitrogen sources of plants include organic nitrogen (e.g., amino acids) and inorganic nitrogen (e.g., nitrate nitrogen and ammonium nitrogen). Amino acids can be directly used for the synthesis of proteins and other nitrogen-containing compounds (Rentsch et al., 2007) and transported through the vascular system for plant metabolism and development. They also act as signals in the process of nitrogen acquisition by roots, stimulating nitrogen metabolism and assimilation (Tegeder, 2012; Calvo et al., 2014). Assimilation of nitrate and ammonium is a vital process for plant development and growth. The main enzymes involved in assimilation include NR, NIR, GS, GOGAT, and GDH. Our study demonstrated that the histidine treatment can enhance the activities of NR, GS, and GOGAT (Figure 2B). The mechanism by which histidine participates in nitrogen metabolism in maize roots under salt stress may be similar to that of the amino acids described above (Glass et al., 2002). As an inducible enzyme, nitrate reductase (NR), together with nitrite reductase (NIR), can reduce NO3− to NH4+ (Privalle et al., 1989; Chen et al., 2017). Although excess NH4+ can inhibit root and shoot growth, reducing biomass and causing oxidative stress with overproduction of reactive oxygen species (ROS), plants can eliminate or reduce the toxicity of NH4+ either by reacting it directly with 2-oxoglutarate to produce glutamate or by combining it with glutamate to generate glutamine under the catalysis of GS, with the glutamine then reacting with 2-oxoglutarate under the action of GOGAT to regenerate glutamate (Liu and von Wirén, 2017). Our study showed that the activities of NR, GOGAT, and GS in roots were still at high levels after 14 days of histidine treatment, implying that histidine can regulate the activities of NR, GS, and GOGAT and improve the utilization and transformation efficiency of nitrogen and glutamate anabolism in the maize root system under salt stress. Amino acids can promote nitrogen reduction and assimilation and increase the activities of NAD-dependent NR, GS, GOGAT, and GDH in maize roots, as well as the expression of genes encoding nitrogen metabolism enzymes (Schiavon et al., 2008; Ertani et al., 2010; Colla et al., 2015). Phenylpropanoids are also key mediators of plant resistance towards a number of biotic and abiotic stresses. Phenylpropanoid metabolism in plants mainly includes phenylalanine metabolism and the synthesis of secondary metabolites such as lignin and flavonoids. After 24 h of the histidine treatment, the genes associated with NR (LOC100383210; LOC542278), glnA (LOC542215), and GLT1 (LOC103636185; LOC103652755) were significantly upregulated (Figure 9), regulating the activities of NADH-NR and NADH-GOGAT. Also, the histidine treatment induced the enrichment of CCR (LOC103634692; LOC103634692) and E1.11.1.7 (LOC103638313) in the phenylpropanoid biosynthesis pathway (Figure 9), which enhances the defense response of plants (Schiavon et al., 2010); E1.11.1.7 (LOC103638313) in particular plays a role in the antioxidant system.
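Statements like "significantly enriched" for the nitrogen-metabolism and phenylpropanoid gene sets above typically come from a pathway over-representation test. The sketch below shows the standard one-sided hypergeometric version in Python; the gene counts are invented for illustration, and a real analysis would correct the resulting p-values across all tested KEGG pathways.

from scipy.stats import hypergeom

def pathway_enrichment_p(n_genome: int, n_pathway: int,
                         n_deg: int, n_overlap: int) -> float:
    """P(X >= n_overlap) when drawing n_deg genes from a genome of
    n_genome genes, n_pathway of which belong to the pathway."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_pathway, n_deg)

# e.g., 8 of 500 DEGs fall in a 60-gene glycolysis pathway,
# out of ~30,000 annotated maize genes (all numbers hypothetical):
p = pathway_enrichment_p(30000, 60, 500, 8)
print(f"enrichment p-value: {p:.3g}")  # would still need FDR correction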
These nitrogen metabolism results confirmed that histidine can play a regulatory role in the stress resistance, growth, and development of maize by modulating genes related to nitrogen metabolism. Although gene expression in the phenylpropanoid biosynthesis pathway was down-regulated at 12 h after the histidine treatment, the genes associated with the above pathways were upregulated at 24 h, which might imply that lignin synthesis in maize roots under salt stress was not dominant in the early phase of the histidine treatment. In plants, glycolysis is a common process that provides energy for aerobic and anaerobic respiration by decomposing sugars. When plants were exposed to high salinity, glycolysis and amino acid synthesis in the leaves of wheat were enhanced and the levels of some amino acids and sugars increased, including proline, lysine, and sucrose (Guo et al., 2015). Other studies suggested that the intermediates of glycolysis and free amino acids can affect plant metabolism by stimulating the glycolytic process and the activities of glucose phosphate isomerase and pyruvate kinase (Fernie et al., 2004; Sukumar et al., 2017; Palumbo et al., 2018). The present study revealed that when the roots under salt stress were treated with histidine, some genes associated with glycolysis were significantly enriched, with high expression levels of pfkA (LOC100192478; LOC100281688), PFP (LOC103652493; LOC101027252; LOC103632535), TPI (LOC100282142), gapN (LOC100282142), PK (LOC100282142), PDC (LOC541919), ADH1 (LOC542364), and ACSS1/2 (LOC103646525) (Figure 9). Protein hydrolysates containing amino acids can change gene expression levels in the glycolysis, tricarboxylic acid cycle, and pentose phosphate pathways of maize roots (Ebinezer et al., 2020). Our findings indicated that the histidine treatment dramatically enhanced glycolysis in maize roots and improved their salt tolerance mainly through increased glycolysis and energy turnover. This suggests that the modulation of energy metabolism is essential in the response to salinity, balancing the production of ROS with the requirements for defense.
Conclusion
In summary, histidine has been implicated in the mechanisms regulating salt tolerance in plants. The results of the present study confirmed that histidine can ameliorate the adverse effects of salt stress on maize root growth. When the roots were treated with histidine for 12 h or 24 h, the activities of SOD, POD, and CAT were significantly increased, alleviating the accumulation of ROS and improving the salt tolerance of maize roots. The histidine treatment also enhanced the activities of NR, GS, and GOGAT and promoted nitrogen utilization, glutamate anabolism, and the anabolism of other amino acids in maize through the expression of genes related to nitrogen metabolism. Interestingly, transcriptomic analysis revealed that the number of upregulated DEGs and enriched pathways in the roots under salt stress increased after 24 h of histidine treatment. KEGG enrichment analysis found that DEGs involved in glycolysis and plant hormone signal transduction pathways were significantly up-regulated, including the growth-promoting hormone (IAA, CTK, and GA) and stress response hormone (ABA and JA) signal transduction pathways.
Based on the above results, we speculate that histidine may act as a signal molecule to regulate genes involved in plant hormone synthesis, signal transduction, stress perception, and metabolite production for the alleviation of salt stress in the maize root system.
Data availability statement
The data presented in the study are deposited in the NCBI repository, accession number PRJNA874354.
Author contributions
GY and XZ designed the experiments and edited the manuscript. HJ conducted the experiments. QZ, YQ, KW and TS analyzed the data. HJ wrote the paper and prepared the manuscript. All authors contributed to the paper and approved the submitted version.
Impact of laptop usage on symptoms leading to musculoskeletal disorders
Due to the inherent portability of laptops, users frequently assume inconvenient postures while using them that may lead to discomfort or injury. The study was conducted to evaluate the postures and identify the prevalence of musculoskeletal symptoms in girls using laptops, for which 100 college-going female students in the 18-25 years age group were selected through a random sampling technique. A self-structured questionnaire was used to assess laptop usage among the students, and Rapid Upper Limb Assessment (RULA) was used to assess their posture while working with a laptop. The Standardized Nordic Musculoskeletal Questionnaire (SMSQ) was used to assess the nature and severity of self-rated musculoskeletal symptoms. Results revealed that the posture of most respondents (74%) came under Action level 3 and that of 26% under Action level 2, which indicated that the posture needed "further investigation and may need change" or "changes needed soon". There was a positive correlation between the posture adopted by the respondents and the incidence of pain in the last 12 months in normal (0.50), mild (0.31), moderate (0.56), and severe users (0.60), and in the last 7 days in normal (0.76), mild (0.52), moderate (0.56), and severe users (0.65), respectively. Musculoskeletal symptoms were prominent in various anatomic regions, namely the neck, shoulders, upper back, and lower back. These symptoms, if not addressed at an early stage, might lead to musculoskeletal disorders.
INTRODUCTION
Computers, and especially laptops, have become standard equipment in higher education as the number of universities instituting laptop initiatives continues to grow (Weaver and Nilsson, 2005). A laptop is a portable personal computer commonly used in a variety of settings, including work, education, and personal multimedia. It can be expected that laptop computers have become a working norm for students in the new trend of education. The size and portability of laptops make them powerful yet practical devices that are easy to handle. Most adolescents now commonly use laptops on a regular basis, so they may be considered heavy users with increased usage. Because of this inherent portability, users frequently assume inconvenient, awkward, or unhealthy postures that may lead to discomfort or injury (Rafael et al., 2007). If such discomfort and pain persist, they can become chronic, leading to musculoskeletal disorders. There is a need to increase awareness of ergonomics to improve the current practice of laptop usage and to minimize health problems among students. It is very important to identify the laptop usage and postures of students so that musculoskeletal disorders and their consequences can be controlled at an early stage. Hence, this study was an attempt to find out the postures adopted and the prevalence of symptoms leading to musculoskeletal disorders in college students.
MATERIALS AND METHODS
An exploratory research design was adopted for the present study, and the survey method was used for collecting the data. 100 college-going girls between 18 and 25 years of age were randomly chosen for this study from various colleges of Allahabad city. A self-structured questionnaire comprising questions on the frequency of laptop usage was used to assess laptop usage among the students. Rapid Upper Limb Assessment (RULA), developed by McAtamney and Corlett (1993), was used to assess the posture of students working on laptops. The Standardized Nordic Musculoskeletal Questionnaire, developed by Kuorinka et al. (1987), was administered to the respondents scoring higher on RULA to assess the incidence of pain, mainly in the lower back, neck, shoulders, elbows, and wrists/hands.
RESULTS AND DISCUSSION
The distribution of respondents according to their laptop usage was determined by the self-structured questionnaire, and it was found that, of the total sample, 6 per cent of students were using a laptop for less than 1 hour, 57 per cent for 1-3 hours, 25 per cent for 3-5 hours, and 12 per cent for more than 5 hours (Table 1). Postural analysis of college-going girls: Table 2 shows the RULA scores of laptop users on the basis of their usage category. It reveals that 50 percent of respondents in the normal user category, 22.8 percent in the mild, 32 percent in the moderate, and 16.6 percent in the severe user category had a RULA score between 3 and 4, Action level 2, which indicates that further investigation was needed and their posture may need changes. In contrast, 50 percent of normal, 77.1 percent of mild, 68 percent of moderate, and 83.3 percent of severe users were found to be at Action level 3, which indicates that further investigation and changes are needed soon. None were found to have an acceptable posture, that is, Action level 1 (Table 2). Similar results were found by Oates et al. (1998), where none of the 95 school children in their study were deemed to have acceptable posture (Action level 1). Research using RULA to assess the posture of adults during computer use similarly found no subject with acceptable posture (Shuval and Donchin, 2005). Prevalence of musculoskeletal symptoms of college-going girls: Table 3 indicates the distribution of respondents on the basis of the Nordic pain questionnaire depending on their laptop usage in the last 12 months. From the data, it can be seen that a maximum score of 34 was observed in mild users, followed by moderate, severe, and normal users, in the neck, trunk, and lower limb category, signifying a higher incidence of pain among mild users compared with the other user categories. In the arm, shoulder, and wrist category, a maximum score of 11 was seen in mild users, followed by 9 in moderate users and 6 in severe users, all more than the normal user category. Thus, the results revealed that pain in different parts of the body is not governed by the duration of usage but by the posture adopted while working on the laptop (Table 3).
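The action levels referenced in the postural analysis above follow the standard RULA grand-score bands (McAtamney and Corlett, 1993). A small Python sketch of that mapping, for reference; the descriptions paraphrase the conventional wording:

def rula_action_level(grand_score: int):
    """Map a RULA grand score (1-7) to its action level and advice."""
    if grand_score <= 2:
        return 1, "posture acceptable if not maintained for long periods"
    if grand_score <= 4:
        return 2, "further investigation needed; changes may be required"
    if grand_score <= 6:
        return 3, "further investigation; changes needed soon"
    return 4, "investigation and changes required immediately"

for score in (2, 4, 6, 7):
    print(score, rula_action_level(score))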
Table 3 also indicates the scores of respondents on the basis of the Nordic pain questionnaire depending on their laptop usage in the last 7 days. The maximum score was observed in mild users (113), followed by moderate (74), severe (27), and normal users (9) in the neck, trunk, and lower limb category, signifying a higher incidence of pain among mild users compared with the other user categories. In the arm, shoulder, and wrist category, the maximum score was seen in mild users (44), followed by 29 in moderate users, 13 in severe users, and the least, 10, in normal users. Thus, the results revealed that pain in different parts of the body is not governed by the duration of usage but by how the work is carried out on the laptop. It was found that during the last 7 days, 40% of moderate users experienced mild pain in the neck, 8.7% of mild users experienced severe pain in the shoulders, 16.6% of severe users experienced mild and moderate pain in the wrists, and 50% reported moderate pain in the upper back (Table 3). This study found a higher incidence of pain among mild users compared with the other user categories, and musculoskeletal symptoms were prominent in various anatomic regions, namely the neck, shoulders, upper back, and lower back. This is in accordance with the study conducted by Ismail et al. (2009), who reported that MSD pain over one week was higher at the neck area (22.7%) for the 5th grade compared with the 2nd grade (8.2%) during computer use in a forward neck posture. Yanto et al. (2008) reported that many 2nd grade school children had thigh pain (>30%); for the 2nd grade students, the highest reported musculoskeletal pain was in the shoulder area (16.4%), followed by the neck (14.5%) and leg (12.7%). In this study, as the RULA score increased, the incidence of pain (Nordic scores) also increased, which signifies that the pain reported by the respondents was due to the defective postures adopted while using the laptop. This is in accordance with Karen et al. (2009), who found that frequent assumption of awkward postures was associated with frequent discomfort. The results also revealed that the incidence of pain was associated with the awkward postures adopted by users rather than the time spent on the laptop. This is in accordance with the study conducted by Karen et al. (2009) on college students, who reported experiencing frequent musculoskeletal discomfort specifically associated with computer use. Palm et al. (2007) also found that 10-43% of the students who had health complaints believed that their symptoms were related to computer use. Table 5 shows a positive correlation between RULA and Nordic scores in all laptop user categories for both the last 12 months and the last 7 days. Statistically, there was a significant correlation at the 5% level of significance between RULA posture and Nordic pain scores in normal, mild, moderate, and severe users, with correlation values of 0.50, 0.31, 0.56, and 0.60, respectively, during the last 12 months. For the last 7 days, RULA posture and Nordic pain scores in normal, mild, moderate, and severe users were significantly correlated at the 5% level of significance, with correlation values of 0.76, 0.52, 0.56, and 0.65, respectively.
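A minimal sketch of the correlation test behind the values above: Pearson's r between paired RULA and Nordic scores, with a two-sided p-value judged at the 5% level. The paired scores below are invented for illustration; the study's raw data are not reproduced here.

from scipy.stats import pearsonr

rula   = [3, 4, 5, 5, 6, 6, 7, 4, 5, 6]   # hypothetical RULA grand scores
nordic = [2, 3, 4, 5, 6, 5, 8, 3, 5, 7]   # hypothetical Nordic pain scores

r, p = pearsonr(rula, nordic)
print(f"r = {r:.2f}, p = {p:.3f}, significant at 5%: {p < 0.05}")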
This indicates that as the RULA score increases, the incidence of pain (Nordic score) also increases, signifying that the pain reported by the respondents was due to the prolonged incorrect postures adopted while using laptops. This is in accordance with the study conducted by Sui et al. (2009), who reported that female students had higher rates of musculoskeletal discomfort at each of the specified anatomic sites than male students, and that students who reported musculoskeletal discomfort had a high prevalence (68.3%) of MSD related to computer use in the last 12 months and spent a longer time on computer-related activities (Table 5). Previous studies have indicated that females are more prone to musculoskeletal discomfort from laptop and computer usage, so this study focused on females so that recommendations for correct postures could be given.
Conclusion
The current practice of laptop usage was ergonomically improper. Prolonged usage in faulty postures has created various musculoskeletal problems among college students. It was concluded from the present study that there was a positive correlation between the posture adopted by the respondents and the incidence of pain. Musculoskeletal symptoms were prominent in various anatomic regions, namely the neck, shoulders, upper back, and lower back, which clearly correlate with the local physical demands. These symptoms, if not addressed at an early stage, might lead to musculoskeletal disorders. There is a need to develop an effective ergonomics strategy to improve the knowledge and practice of ergonomics among students.
Table 1. Distribution of respondents according to their laptop usage.
Table 2. Distribution of respondents on the basis of RULA scores.
Table 3. Scores of respondents regarding pain in the last 12 months and last 7 days based on the Nordic questionnaire.
The histone methyltransferase SUV420H2 regulates brown and beige adipocyte thermogenesis
Activation of brown adipose tissue (BAT) thermogenesis increases energy expenditure and alleviates obesity. Here we discover that expression of the histone methyltransferase suppressor of variegation 4-20 homolog 2 (Suv420h2) parallels that of Ucp1 in brown and beige adipocytes and that Suv420h2 knockdown significantly reduces, whereas Suv420h2 overexpression significantly increases, Ucp1 levels in brown adipocytes. Suv420h2 knockout (H2KO) mice exhibit impaired cold-induced thermogenesis and are prone to diet-induced obesity. In contrast, mice with specific overexpression of Suv420h2 in adipocytes display enhanced cold-induced thermogenesis and are resistant to diet-induced obesity. Further study shows that Suv420h2 catalyzes H4K20 trimethylation at the eukaryotic translation initiation factor 4E-binding protein 1 (4e-bp1) promoter, leading to downregulated expression of 4e-bp1, a negative regulator of the translation initiation complex. This in turn upregulates PGC1α protein levels, and this upregulation is associated with increased expression of the thermogenic program. We conclude that Suv420h2 is a key regulator of brown/beige adipocyte development and thermogenesis.
Introduction
Obesity is a risk factor for a panel of metabolic disorders, including insulin resistance/type 2 diabetes, hypertension, fatty liver diseases, dyslipidemia, cardiovascular diseases, and certain types of cancer. Persistent energy imbalance, with energy intake exceeding energy expenditure, results in obesity. Total energy expenditure can be divided into basal metabolic rate, physical activity, and adaptive thermogenesis (1). Brown fat is a major player in adaptive thermogenesis (2, 3) due to the unique presence of UCP1 in the mitochondrial inner membrane, which uncouples oxidative phosphorylation from ATP synthesis, thereby dissipating energy as heat (2, 3). Recent studies also point to several UCP1-independent mechanisms in thermogenesis (4, 5). Rodents have 2 types of brown adipocytes: classic brown adipose tissue (BAT), mainly confined to the interscapular area, and the newly discovered beige adipocytes (BeAT), or beige fat, sporadically dispersed in white adipose tissue (WAT) and inducible by β-adrenergic activation (6-8). Activation of brown/beige adipocyte thermogenesis increases energy expenditure and ameliorates obesity (9, 10). Given the recent discovery of thermogenic brown fat in humans (11-13), it is conceivable that brown/beige adipocyte thermogenesis is a promising target for therapeutic treatment of obesity. Epigenetic mechanisms, including histone modifications, have emerged as key links between environmental factors (e.g., diets) and complex diseases (e.g., obesity). However, how epigenetic mechanisms regulate brown/beige adipocyte function has been less explored. To identify functional epigenetic markers that regulate brown/beige adipocyte development, we surveyed the expression of most epigenetic enzymes, including histone methyltransferases, demethylases, and histone deacetylases, that catalyze histone methylation and acetylation during the early postnatal development of mouse beige adipocytes, and we found that the expression pattern of suppressor of variegation 4-20 homolog 2 (Suv420h2) (Drosophila) mirrors that of Ucp1. Using genetic mouse models with loss or gain of function of Suv420h2, we determined the role of Suv420h2 in cold-induced thermogenesis, energy metabolism, and diet-induced obesity.
Results
Suv420h2 is important in regulating Ucp1 expression. Xue et al. previously reported that beige adipocytes in WAT can be transiently induced in mice during early postnatal development, peaking at 20 days of age and gradually disappearing thereafter (14). Although the mechanism underlying the transient induction of these developmental beige adipocytes remains unclear, the expression pattern of Ucp1 in WAT during this period offers a unique framework for identifying factors that regulate brown/beige cell development. Thus, we surveyed the expression patterns of most epigenetic enzymes, including histone methyltransferases, demethylases, and deacetylases, in mouse inguinal WAT (iWAT) during postnatal development from P5 to P120, and compared them with those of Ucp1. For the preliminary screening, we pooled 4 RNA samples from each time point (14). We found that Ucp1 expression in iWAT during postnatal development followed patterns similar to those observed in retroperitoneal WAT (rWAT) (14), peaking at P20 and gradually disappearing afterward (Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.164771DS1). Among the 4 genes involved in histone H4 lysine 20 (H4K20) methylation, namely the methyltransferases Suv420h1, Suv420h2, and SET domain containing protein 8 (Setd8) and the demethylase PHD finger protein 8 (Phf8), we discovered a unique expression pattern of Suv420h2 (Supplemental Figure 1, B-E), which mimicked that of Ucp1. We then further confirmed our results on Ucp1 and Suv420h2 expression using 4 individual RNA samples (Figure 1, A and B). In adult rodents, Suv420h2 expression was much higher in interscapular BAT (iBAT) than in other fat depots, including iWAT, epididymal WAT (eWAT), and rWAT (Figure 1C). We also found that Suv420h2 expression was much higher than that of Suv420h1 in adipose tissues (Supplemental Figure 1F). As expected, a 7-day cold exposure at 5°C in 2- to 3-month-old male mice stimulated Ucp1 expression in iWAT (Figure 1D). Interestingly, Suv420h2 expression paralleled that of Ucp1 in iWAT during cold exposure (Figure 1E).
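The relative expression levels compared throughout this screen are qPCR measurements. A minimal sketch of the standard 2^-ddCt quantification follows; the gene, reference, and Ct values are hypothetical and only illustrate the arithmetic, not the authors' actual readings.

def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene (e.g., Suv420h2) vs a reference gene,
    sample vs control, via the 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)

# e.g., a target gene in cold-exposed vs room-temperature iWAT:
print(relative_expression(24.1, 18.0, 26.8, 18.1))  # ~6-fold induction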
Since knocking down Suv420h2 resulted in an approximately 50% reduction of H4K20me3, and since both SUV420H1 and SUV420H2 catalyze H4K20 trimethylation, we also explored a possible physiological function of Suv420h1 in regulating brown adipocyte thermogenesis. Interestingly, overexpressing Suv420h1 (Supplemental Figure 3) resulted in significantly decreased NE-stimulated Ucp1 expression (Figure 1L), indicating that Suv420h1 and Suv420h2 may have opposite effects on brown adipocyte thermogenic function. To further explore this possibility, we knocked down Suv420h2 in BAT1 brown adipocytes and further treated the cells with the SUV420H1/H2 inhibitor A196, which has been shown to achieve an 80% reduction of H4K20me3 levels in treated cells (19). Suv420h2 knockdown significantly reduced Suv420h2 expression without changing Suv420h1 levels in BAT1 cells, whereas combined Suv420h2 knockdown and A196 treatment did not change Suv420h1 expression, nor did it further change Suv420h2 expression (Supplemental Figure 4, A and B). As expected, knocking down Suv420h2 significantly suppressed NE-stimulated expression of genes important for brown adipocyte thermogenesis, including Ucp1 (Figure 1M), type 2 deiodinase (Dio2) (Figure 1N), and acyl-CoA thioesterase 2 (Acot2) (Figure 1O), a gene shown to facilitate mitochondrial fatty acid oxidation (20). Interestingly, combined treatment of BAT1 cells with Suv420h2 knockdown and A196 reversed the inhibitory effects of Suv420h2 knockdown on the expression of these genes and restored it to that of the control group (Figure 1, M-O). These data collectively demonstrate that Suv420h1 and Suv420h2 regulate brown adipocyte thermogenesis, with Suv420h2 serving as a potential positive regulator, whereas Suv420h1 may negatively regulate brown adipocyte thermogenesis.
Suv420h2 regulates the development of brown and beige fat. Recent data suggest that mice lacking both Suv420h1 and Suv420h2 exhibited increased mitochondrial respiration in brown adipocytes, improved glucose tolerance, and resistance to diet-induced obesity (21). However, since our in vitro data suggest that Suv420h1 and Suv420h2 may exert opposite effects on brown adipocyte function, it is important to delineate the functions of Suv420h1 and Suv420h2 separately in mouse models. Our gene expression data suggest that Suv420h2 mirrors Ucp1 expression during the postnatal development of beige adipocytes; we thus interrogated the role of Suv420h2 in the development of brown and beige adipocytes in vivo. We first examined brown and beige adipose tissue development in mice with whole-body Suv420h2 knockout (H2KO) (22) at P20, when the developmental beige adipocytes peak and brown fat development approaches maturity (14). As expected, Suv420h2 mRNA was not detectable in fat depots of H2KO mice, including iBAT, iWAT, eWAT, and rWAT; in addition, there was no difference in adipose tissue Suv420h1 expression between WT and H2KO mice (Supplemental Figure 5, A and B). Interestingly, iBAT from H2KO mice had significantly decreased UCP1 protein expression and less UCP1 staining compared with that of WT controls (Figure 2, A and B, and Supplemental Figure 6A). This was associated with enlarged adipocyte size (Figure 2C), as shown by a shift toward significantly fewer smaller adipocytes and reciprocally more larger adipocytes in iBAT of 20-day-old H2KO mice compared with WT mice (Figure 2D). Likewise, iWAT from 20-day-old H2KO mice also had significantly lower UCP1 protein expression (Figure 2E) and fewer multilocular beige adipocytes with UCP1 staining (Figure 2F and Supplemental Figure 6, B and C), suggesting a reduced appearance of the developmental beige adipocytes in iWAT of H2KO mice. iWAT from 20-day-old H2KO mice had enlarged adipocytes (Figure 2G), with significantly fewer smaller adipocytes and a tendency toward reciprocally more larger adipocytes (Figure 2H). We also generated transgenic mice (AH2Tg mice) overexpressing Suv420h2 specifically in adipocytes under the control of the adiponectin promoter (Supplemental Figure 7A). AH2Tg mice exhibited a significant increase of Suv420h2 mRNA in all fat depots, including iBAT, iWAT, eWAT, and rWAT, without affecting Suv420h1 levels (Supplemental Figure 7, B and C). iBAT from 20-day-old AH2Tg mice exhibited enhanced UCP1 protein levels and more UCP1 staining (Figure 2, I and J, and Supplemental Figure 8A). In addition, overexpression of Suv420h2 in adipocytes resulted in reduced adipocyte size in iBAT during postnatal development at P20 (Figure 2K), as shown by significantly more smaller adipocytes and reciprocally fewer larger adipocytes (Figure 2L). AH2Tg mice exhibited higher UCP1 protein levels and more UCP1+ multilocular beige adipocytes in iWAT (Figure 2, M and N, and Supplemental Figure 8, B and C). iWAT from AH2Tg mice also exhibited reduced adipocyte size (Figure 2O), with significantly more smaller adipocytes and reciprocally fewer larger adipocytes (Figure 2P). These data suggest that Suv420h2 promotes brown and beige adipocyte formation during postnatal development.
Suv420h2 regulates cold-induced thermogenesis. In adult mice, beige adipocytes can be induced by chronic cold exposure. To determine the role of Suv420h2 in cold-induced brown and beige adipocyte thermogenesis, we subjected 3-month-old male H2KO, AH2Tg, and their respective WT littermates to a chronic 7-day cold challenge. During the cold exposure, H2KO mice displayed significantly lower body temperature compared with their littermate controls (Figure 3A), suggesting that Suv420h2 deficiency causes cold intolerance. Moreover, H2KO mice had higher fat mass in iWAT, eWAT, and rWAT after the cold challenge (Figure 3B), suggesting reduced efficiency in utilizing energy stored in fat depots. This was consistent with the larger adipocytes observed in both iBAT and iWAT of H2KO mice (Figure 3C), with fewer smaller adipocytes and reciprocally more larger adipocytes in both iBAT and iWAT of cold-challenged H2KO mice, although the increase in larger adipocyte numbers in iWAT did not reach statistical significance (Figure 3D). In addition, cold-challenged H2KO mice exhibited decreased expression of Ucp1 in both iBAT and iWAT (Figure 3, E and F), along with reduced expression of other cold-induced thermogenic genes, including peroxisome proliferator activated receptor γ (Pparγ), cell death-inducing DNA fragmentation factor, α subunit-like effector A (Cidea), muscle type carnitine palmitoyltransferase 1b (Cpt1b), epithelial V-like antigen 1 (Eva1), palmitoyl acyl-Coenzyme A oxidase 1 (Acox1), and cytochrome c oxidase subunit I (Cox1) in iBAT, and Pparα, PR domain containing 16 (Prdm16), Cidea, and Cpt1b in iWAT (Figure 3, E and F). As expected, Suv420h2 deficiency resulted in decreased H4K20me3 levels in both iBAT and iWAT, along with decreased UCP1 protein levels (Figure 3, G and H). Consistent with these findings, IHC analysis revealed reduced UCP1 staining in iBAT and iWAT of cold-challenged H2KO mice (Supplemental Figure 9, A-C). Seahorse analysis of primary isolated brown adipocytes revealed reduced basal and maximal oxygen consumption rates (OCR) in H2KO mice relative to WT controls (Figure 3J), suggesting that Suv420h2 deletion compromised mitochondrial function in a cell-autonomous manner.
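The basal and maximal OCR readouts above come from an extracellular flux (mito stress test) experiment. The sketch below shows how such summary numbers are conventionally derived from the standard injection sequence (oligomycin, FCCP, rotenone/antimycin A); the trace values are invented, and the authors' exact analysis settings are not specified here.

def mito_stress_summary(baseline, post_oligomycin, post_fccp, post_rot_aa):
    """Summarize a Seahorse-style OCR trace (values in pmol O2/min)."""
    non_mito = min(post_rot_aa)            # non-mitochondrial respiration
    basal = baseline[-1] - non_mito        # last pre-oligomycin measurement
    maximal = max(post_fccp) - non_mito    # FCCP-uncoupled maximum
    return {"basal": basal, "maximal": maximal,
            "spare_capacity": maximal - basal}

print(mito_stress_summary(
    baseline=[95, 98, 100], post_oligomycin=[40, 38, 37],
    post_fccp=[150, 160, 155], post_rot_aa=[20, 18, 19]))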
In contrast, AH2Tg mice with adipocyte Suv420h2 overexpression exhibited the opposite phenotype. Specifically, AH2Tg mice displayed higher body temperature compared with their littermate controls during the cold challenge (Figure 4A), suggesting increased cold tolerance. Cold-challenged AH2Tg mice also had decreased fat mass in iWAT, eWAT, and rWAT (Figure 4B). iBAT and iWAT from cold-challenged AH2Tg mice had smaller adipocytes (Figure 4C), as shown by significantly more smaller adipocytes and a tendency toward reciprocally fewer larger adipocytes (Figure 4D). In addition, iBAT and iWAT from cold-challenged AH2Tg mice exhibited enhanced expression of Ucp1 and other thermogenic genes, such as Pparα, Pparγ, Cox1, Otopetrin 1 (Otop1), Eva1, and elongation of very long-chain fatty acids (FEN1/Elo2, SUR4/Elo3, yeast)-like 3 (Elovl3) in iBAT, and Pparα, Cidea, Cpt1b, Otop1, and Elovl3 in iWAT (Figure 4, E and F). Moreover, Suv420h2 overexpression in adipocytes led to a significant increase in H4K20me3 as well as UCP1 levels in both iBAT and iWAT (Figure 4, G and H). IHC analysis revealed stronger UCP1 staining in iBAT and higher UCP1+ beige adipocyte induction in iWAT (Figure 4I and Supplemental Figure 10, A-C). In addition, Seahorse analysis revealed enhanced maximal OCR in primary adipocytes isolated from AH2Tg mice (Figure 4J), suggesting that Suv420h2 overexpression increases mitochondrial function in a cell-autonomous manner. Suv420h2 regulates the mitochondrial bioenergetic program. To gain further insight into how Suv420h2 regulates brown/beige fat thermogenesis, we performed RNA-Seq analysis in iWAT of 7-day-cold-challenged H2KO and AH2Tg mice. Analysis of differentially expressed genes with online software (https://github.com/PerocchiLab/ProFAT; commit ID: 84d79da) (23) predicted an overall reduced browning probability in Suv420h2-deficient iWAT, with a reciprocal increase in a gene expression profile resembling that of WAT (Figure 5A). This was consistent with a downregulation of BAT-specific gene expression and an upregulation of WAT-specific gene expression in Suv420h2-deficient iWAT (Figure 5A). In contrast, analysis of differentially expressed genes in iWAT between WT and AH2Tg mice revealed an overall enhanced browning probability, evidenced by enhanced BAT-specific and reduced WAT-specific gene expression (Figure 5B). Interestingly, we found that groups of BAT-specific genes were reciprocally regulated in iWAT between H2KO and AH2Tg mice, including Ucp1, Ucp3, Cpt1b, Otop1, Kcnk3, and S100b (Figure 5, A-C), highlighting the importance of Suv420h2 in beige fat thermogenesis. More strikingly, genes involved in mitochondrial bioenergetic pathways, including the electron transport chain, fatty acid β-oxidation, and the TCA cycle, stood out as convergent pathways that were down- or upregulated in H2KO and AH2Tg mice, respectively (Figure 5D and Supplemental Figure 11, A and B).
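ProFAT itself is a trained classifier, so the sketch below is only a conceptual stand-in for the browning-probability idea used above: score each sample by the difference between z-scored BAT-selective and WAT-selective marker expression. The BAT markers follow genes named in the text; the WAT markers (Lep, Retn) and the expression matrix are assumptions for illustration only.

import numpy as np
import pandas as pd

bat_markers = ["Ucp1", "Cpt1b", "Cidea"]
wat_markers = ["Lep", "Retn"]   # assumed WAT-selective markers

def browning_score(expr: pd.DataFrame) -> pd.Series:
    z = (expr - expr.mean()) / expr.std(ddof=0)   # z-score each gene across samples
    return z[bat_markers].mean(axis=1) - z[wat_markers].mean(axis=1)

expr = pd.DataFrame(
    np.array([[9.0, 7.5, 8.0, 2.0, 2.5],    # hypothetical AH2Tg iWAT
              [5.0, 5.5, 5.0, 4.0, 4.5],    # hypothetical WT iWAT
              [2.0, 3.0, 2.5, 6.0, 6.5]]),  # hypothetical H2KO iWAT
    index=["AH2Tg", "WT", "H2KO"],
    columns=bat_markers + wat_markers)
print(browning_score(expr))  # higher score = more brown-like profile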
To further investigate how SUV420H2 regulates pathways in mitochondrial function and thermogenesis, we performed assay for transposase-accessible chromatin sequencing (ATAC-Seq) analysis in iBAT of 7-day-cold-challenged WT and AH2Tg mice. We compared genome-wide alterations in the chromatin accessibility landscape assessed by ATAC-Seq with gene expression patterns assessed by RNA-Seq and discovered a strong correlation between chromatin accessibility status and gene expression changes. As illustrated in Figure 5E, the decreases in read densities of genes in 2 selected clusters (Clusters 1 and 2; Figure 5E), indicating less chromatin accessibility in AH2Tg iBAT, were highly associated with downregulation of the corresponding gene expression; this includes several genes known to negatively regulate brown/beige adipocyte thermogenesis and energy metabolism, such as nicotinamide N-methyltransferase (Nnmt) (24), natriuretic peptide receptor 3 (Npr3) (25), twist basic helix-loop-helix transcription factor 1 (Twist1) (26), and zinc finger protein 423 (Zfp423) (27). In addition, we identified 2 clusters of genes that showed more chromatin accessibility and were associated with increased gene expression (Clusters 3 and 4; Figure 5E), including several genes encoding mitochondrial electron transport chain components. Our data suggest that Suv420h2 regulates pathways involved in mitochondrial bioenergetics. Indeed, immunoblotting analysis of mitochondrial respiratory chain proteins revealed downregulation of complex I NADH dehydrogenase 1β subcomplex 8 (CI-NDUFB8), complex II succinate dehydrogenase complex subunit B (CII-SDHB), and complex III cytochrome b-c1 complex subunit 2 (CIII-UQCRC2) in both iBAT and iWAT of H2KO mice (Figure 6, A and B), while revealing upregulation of CI-NDUFB8, CII-SDHB, and complex IV mitochondrially encoded cytochrome c oxidase I (CIV-MTCO1) in iBAT and iWAT of AH2Tg mice during cold exposure (Figure 6, C and D).
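The accessibility-expression comparison described above can be summarized as a per-gene correlation of log2 fold changes from the two assays. A minimal sketch follows; the gene names are taken from the text, but all fold-change values are invented for illustration.

from scipy.stats import pearsonr

genes       = ["Nnmt", "Npr3", "Twist1", "Zfp423", "Ucp1", "Cpt1b"]
atac_log2fc = [-1.8, -1.2, -0.9, -1.1, 1.5, 1.2]  # AH2Tg vs WT promoter accessibility
rna_log2fc  = [-2.1, -1.0, -1.3, -0.8, 2.0, 1.4]  # AH2Tg vs WT expression

r, p = pearsonr(atac_log2fc, rna_log2fc)
print(f"accessibility/expression correlation: r = {r:.2f} (p = {p:.3g})")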
H4K20me3 is elevated at the promoter of 4e-bp1. Since genes responsible for mitochondrial function appear to be the most significant feature of Suv420h2-regulated pathways, we next explored whether Pgc1α, the master regulator of mitochondrial biogenesis (28), is involved in this process. We first studied whether Pgc1α mRNA and protein levels were regulated during postnatal development and cold exposure. While both Pgc1α mRNA and protein levels were significantly higher in iWAT of 20-day-old mice compared with those of 3-month-old mice (Figure 7, A and B), the H4K20me3 level at the Pgc1α promoter was not significantly different in iWAT across the developmental course (Supplemental Figure 12). Furthermore, while Pgc1α mRNA expression was only transiently upregulated in iWAT 1 day after cold exposure, a cold-induced increase in PGC1α protein levels was observed 7 days after cold exposure (Figure 7, C and D). These data suggest that Pgc1α expression may not depend on promoter H4K20 trimethylation and that PGC1α protein levels may be regulated independently of mRNA expression, at least during chronic cold exposure. Similarly, although our ATAC-Seq and RNA-Seq data suggest that overexpressing Suv420h2 in adipocytes resulted in a more open chromatin structure at the Pgc1α locus, along with increased Pgc1α expression peaks (Supplemental Figure 13A), quantitative PCR (qPCR) analysis showed that Pgc1α expression was not significantly changed in iBAT and iWAT of H2KO (Figure 3, E and F) or AH2Tg mice (Figure 4, E and F) after cold exposure. We also did not observe any changes in Pgc1α expression in BAT1 brown adipocytes with Suv420h2 knockdown or with combined Suv420h2 knockdown and A196 treatment (Supplemental Figure 13B). Interestingly, Suv420h2 deletion in H2KO mice decreased, while Suv420h2 overexpression in AH2Tg mice increased, PGC1α protein content in both iBAT and iWAT (Figure 7, E-H). Thus, our data suggest that PGC1α protein levels may be regulated independently of its mRNA expression and that Suv420h2 may be involved in the regulation of PGC1α protein levels. PGC1α is a short-lived protein whose abundance is tightly regulated by degradation (29, 30) and synthesis (31). The E3 ligases F-box and WD-40 domain protein 7 (FBXW7) and ring finger protein 34 (RNF34) have been shown to promote PGC1α protein ubiquitination and degradation (29, 30), whereas PGC1α protein translation can be regulated by the eukaryotic translation initiation factor 4F (eIF4F) complex, since the negative regulator of the eIF4F complex, eukaryotic translation initiation factor 4E-binding protein 1 (4E-BP1), has been shown to suppress PGC1α protein synthesis (31). There was no change in the expression of Fbxw7 and Rnf34 between WT and H2KO mice or between WT and AH2Tg mice (Supplemental Figure 14) (31). Interestingly, our ATAC-Seq and RNA-Seq data suggest that overexpressing Suv420h2 in adipocytes resulted in a more closed chromatin structure at the 4e-bp1 locus, which was associated with reduced 4e-bp1 expression (Figure 8A). Indeed, 4e-bp1 expression was significantly upregulated in iBAT and iWAT of cold-challenged H2KO mice, but tended to decrease in iBAT and was significantly decreased in iWAT and eWAT of cold-challenged AH2Tg mice (Figure 8, B and C). 4E-BP1 protein levels in iBAT and iWAT were increased in cold-challenged H2KO mice but decreased in cold-challenged AH2Tg mice (Figure 8, D-G).
We also measured 4E-BP1 protein levels in iWAT of C57BL/6J mice during postnatal development and cold challenge. Interestingly, 4E-BP1 protein levels were significantly increased in iWAT of 3-month-old mice as compared with 20-day-old mice (Figure 8H). Since 4E-BP1 negatively regulates PGC1α protein levels (31), this may explain the decreased PGC1α protein levels in iWAT of 3-month-old mice (Figure 7B). On the other hand, cold exposure significantly reduced 4E-BP1 protein levels (Figure 8H), which may contribute to the increased PGC1α protein levels in iWAT of cold-challenged mice (Figure 7D). Mechanistically, ChIP assays revealed that H4K20me3 levels at the 4e-bp1 promoter (Supplemental Figure 15) (32-34) were significantly decreased in both iBAT and iWAT of H2KO mice (Figure 9, A and B). Thus, Suv420h2 deletion may decrease the repressive histone mark H4K20me3 at the 4e-bp1 locus, resulting in increased 4e-bp1 expression, which could lead to the decreased PGC1α protein levels seen in H2KO mice. In contrast, Suv420h2 overexpression in AH2Tg mice increased 4e-bp1 promoter H4K20me3 levels in iBAT and iWAT (Figure 9, C and D), and this increase may lead to decreased expression of 4e-bp1, potentially contributing to the increased PGC1α protein levels observed in AH2Tg mice. To further confirm that SUV420H2 regulates PGC1α protein levels via regulation of 4e-bp1 expression, we knocked down both Suv420h2 and 4e-bp1 in BAT1 brown adipocytes. As shown in Figure 9E, knocking down Suv420h2 significantly increased 4E-BP1 levels in BAT1 brown adipocytes, similar to the increases observed in H2KO mice (Figure 8, D and E), whereas combined knockdown of both Suv420h2 and 4e-bp1 significantly reduced 4E-BP1 levels (Figure 9E). Interestingly, knockdown of Suv420h2 tended to reduce basal PGC1α protein levels and significantly reduced NE-stimulated PGC1α protein levels in BAT1 brown adipocytes. Further knockdown of 4e-bp1 blocked this effect and restored PGC1α protein levels to those of the control group (Figure 9F). These data suggest that 4E-BP1 mediates SUV420H2's effect on regulating PGC1α protein levels.
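The promoter H4K20me3 comparisons above rest on ChIP followed by qPCR; a common way to express such signals is percent of input, sketched below. The Ct values and the 1% input fraction are assumptions for illustration, not the authors' reported parameters.

import math

def percent_input(ct_ip: float, ct_input: float,
                  input_fraction: float = 0.01) -> float:
    """ChIP-qPCR signal as % of input chromatin (e.g., 1% input saved)."""
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)  # dilution correction
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# e.g., an H4K20me3 signal at a promoter, control vs Suv420h2-overexpressing tissue:
print(percent_input(ct_ip=28.0, ct_input=30.0))  # ~4% of input
print(percent_input(ct_ip=26.5, ct_input=30.0))  # higher signal, ~11% of input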
We also explored other possible SUV420H2 downstream targets that could mediate SUV420H2's function in regulating brown/beige adipocyte function. Pedrotti et al. (21) reported that deletion of both Suv420h1 and Suv420h2 resulted in enhanced mitochondrial respiration in brown adipocytes, possibly via upregulation of the expression of Pparγ, a master regulator of brown and white adipocyte lipid and glucose metabolism and of thermogenic function (35, 36). However, we observed no difference in chromatin accessibility and RNA expression peaks at the Pparγ locus in our ATAC-Seq and RNA-Seq data from cold-challenged WT and AH2Tg mice (Supplemental Figure 16A). In addition, there were no consistent changes in cold-induced Pparγ mRNA (Figure 3, E and F, and Figure 4, E and F) or protein (Supplemental Figure 16, B and C) levels in iBAT and iWAT between WT and H2KO mice or between WT and AH2Tg mice. Prdm16 has emerged as an important regulator of brown adipocyte development (16, 37). However, we did not observe any differences in chromatin accessibility and RNA expression at the Prdm16 locus in our ATAC-Seq and RNA-Seq data (Supplemental Figure 17A). In addition, there were no consistent changes in cold-induced Prdm16 mRNA (Figure 3, E and F, and Figure 4, E and F) or protein (Supplemental Figure 17, B and C) levels in iBAT and iWAT between WT and H2KO mice or between WT and AH2Tg mice. Twist1 and Zfp423 negatively regulate brown/beige adipocyte thermogenesis and energy homeostasis (26, 27). Twist1 interacts with PGC1α on PGC1α-target genes to suppress mitochondrial metabolism and uncoupling (26), whereas Zfp423 suppresses adipocyte thermogenic capacity by interfering with several factors important for brown adipocyte function, such as early B cell factor 2 (Ebf2) and Prdm16 (27, 38). Our ATAC-Seq and RNA-Seq data indicate that chromatin accessibility and RNA expression peaks at the Twist1 and Zfp423 loci (Supplemental Figure 18A and Supplemental Figure 19A) were decreased in WT and AH2Tg mice after cold exposure. In addition, the expression of Twist1 (Supplemental Figure 18, B and C) and Zfp423 (Supplemental Figure 19, B and C) was increased in iWAT of H2KO mice but reciprocally decreased in iWAT of AH2Tg mice after cold exposure. However, ChIP assays demonstrated that H4K20me3 levels at the Twist1 (Supplemental Figure 18, D and E) and Zfp423 (Supplemental Figure 19, D and E) promoters were not different in iBAT and iWAT between cold-challenged WT and H2KO mice or between cold-challenged WT and AH2Tg mice. Thus, while changes in Twist1 and Zfp423 expression might contribute to the altered brown/beige adipocyte function observed in our H2KO and AH2Tg mice, they are not likely mediated via Suv420h2-regulated H4K20 methylation.
Estrogen-related receptor γ (Esrrg) has emerged as a positive regulator of mitochondrial oxidative metabolism and thermogenesis via both Pgc1α-dependent and -independent mechanisms (39, 40). Our ATAC-Seq and RNA-Seq data indicate that chromatin accessibility and RNA expression peaks at the Esrrg locus were increased in cold-challenged WT and AH2Tg mice (Supplemental Figure 20A). In addition, Esrrg mRNA and protein levels were decreased in iWAT of H2KO mice (Supplemental Figure 20, B and C) but reciprocally increased in iWAT of AH2Tg mice (Supplemental Figure 20, D and E) after cold exposure. However, the H4K20me3 level at the Esrrg promoter was not different in iBAT and iWAT between H2KO and WT mice (Supplemental Figure 20, F and G) or between AH2Tg and WT mice (Supplemental Figure 20, H and I), suggesting that the altered Esrrg expression in H2KO and AH2Tg mice was not directly dependent on Suv420h2. We further investigated whether ESRRG protein levels could be regulated by 4E-BP1. As shown in Supplemental Figure 20J, Suv420h2 knockdown significantly reduced NE-stimulated ESRRG protein levels in BAT1 cells; however, further knockdown of 4e-bp1 blocked this effect and restored ESRRG protein levels to those of the control group. Thus, Esrrg may be another potential target besides Pgc1α mediating Suv420h2's effect on brown/beige adipocyte thermogenesis. However, similarly to Pgc1α, Esrrg may not be a direct target of Suv420h2, since Esrrg promoter H4K20me3 levels in iBAT and iWAT were not different in cold-challenged H2KO and AH2Tg mice compared with their respective WT controls. Instead, ESRRG protein levels may be regulated by 4E-BP1-mediated translational regulation, similarly to PGC1α. We further explored whether other brown/beige adipocyte-related genes could be direct targets of Suv420h2 by comparing H4K20me3 levels at the promoters of several genes in iWAT during postnatal development. However, we did not observe differences in H4K20me3 levels at the promoters of Ucp1 (Supplemental Figure 21A); RB transcriptional corepressor 1 (Rb1) (Supplemental Figure 21B), a negative regulator of brown adipocyte thermogenesis (41); or Kruppel-like transcription factor 2 (Klf2) (Supplemental Figure 21C), a negative regulator of adipogenesis (42), in iWAT along the postnatal developmental course. Suv420h2 is important in the regulation of diet-induced obesity. To determine the role of Suv420h2 in diet-induced obesity, we challenged H2KO, AH2Tg, and their respective WT littermates with a high-fat diet (HFD). When housed at ambient room temperature (20°C-22°C), H2KO mice had increased fat mass in iWAT and eWAT despite no change in body weight (Supplemental Figure 22, A and B). This was associated with decreased energy expenditure in H2KO mice, evidenced by reduced oxygen consumption and heat production (Supplemental Figure 22, C and D) without changes in locomotor activity (Supplemental Figure 22E) or food intake (Supplemental Figure 22F).
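Heat production values like those cited above are usually not measured directly but computed from gas exchange during indirect calorimetry. Below is a sketch of the abbreviated Weir equation commonly used for this conversion; the VO2/VCO2 numbers are invented, and whether this exact formula was applied here is an assumption.

def energy_expenditure_kcal_per_h(vo2_l_per_h: float, vco2_l_per_h: float) -> float:
    """Abbreviated Weir equation: EE (kcal/h) = 3.941*VO2 + 1.106*VCO2 (L/h)."""
    return 3.941 * vo2_l_per_h + 1.106 * vco2_l_per_h

print(energy_expenditure_kcal_per_h(0.09, 0.08))  # ~0.44 kcal/h, mouse-scale example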
Similarly, while there was no change in body weight (Supplemental Figure 23A), HFD-challenged AH2Tg mice housed at ambient room temperature had decreased fat mass in eWAT without changes in other fat pads (Supplemental Figure 23B). AH2Tg mice also exhibited increased energy expenditure, as shown by increased oxygen consumption and heat production (Supplemental Figure 23, C and D), without changes in locomotor activity (Supplemental Figure 23E) or food intake (Supplemental Figure 23F). We previously reported that mild cold stress at ambient room temperature (20°C-22°C) may trigger nonshivering thermogenesis (43). Thus, we also conducted HFD feeding experiments under thermoneutrality (30°C). When housed under thermoneutrality, H2KO mice gained more weight starting after 4 weeks of HFD feeding (Figure 10A), with increased fat mass in the iBAT, iWAT, and rWAT depots (Figure 10B), and they exhibited glucose intolerance and insulin resistance as assessed by GTT and ITT, respectively (Figure 10, C and D). In contrast, HFD-challenged AH2Tg mice gained less weight under thermoneutrality, with lower fat mass in iBAT, iWAT, and rWAT (Figure 10, E and F), and they exhibited improved glucose tolerance and insulin sensitivity as shown by GTT and ITT (Figure 10, G and H). Thus, our data indicate that Suv420h2 is important in regulating diet-induced obesity.
Discussion
Xue et al. previously discovered developmentally induced beige adipocytes (14). To identify functional epigenetic marks that regulate brown/beige adipocyte development, we surveyed the expression of epigenetic enzymes responsible for histone modifications during the postnatal development of beige adipocytes and discovered a unique expression pattern of the histone methyltransferase Suv420h2, which mirrors that of Ucp1. Using genetic models with gain or loss of function of Suv420h2, we demonstrate that Suv420h2 promotes the development of brown and beige adipocytes postnatally, enhances cold-induced thermogenesis, and prevents diet-induced obesity. Methylation of H4K20 was one of the first histone modifications to be discovered and is evolutionarily conserved from yeast to humans (17, 18). H4K20 can be mono-, di-, and trimethylated (17, 18). SET8/PR-SET7 is the only known monomethyltransferase, whereas SUV420H1 and SUV420H2 are responsible for the di- and trimethylation of H4K20 (17, 18). The methylation states of H4K20 exert different biological functions. Whereas H4K20me1 and H4K20me2 are involved in DNA replication and DNA damage repair, respectively, H4K20me3 is a hallmark of silenced heterochromatic regions and is also enriched in chromatin regions that contain silenced genes (17, 18, 44). H4K20me3 plays an important role in dynamic biological functions, including development, cellular differentiation, aging, and cancer development (45-49). Here we demonstrate that H4K20me3, catalyzed by SUV420H2, may also be involved in the regulation of brown/beige fat thermogenesis and energy metabolism through the 4E-BP1/PGC1α axis.
The enrichment of genes involved in mitochondrial functions revealed by our RNA-Seq analysis drew our attention to Pgc1α, a master regulator of mitochondrial biogenesis and thermogenesis (28). It has been demonstrated that PGC1α protein translation can be regulated by the eukaryotic translation initiation complex (31). The eIF4F complex is composed of eIF4E (the mRNA m7GTP 5′ cap-binding protein), eIF4G (a scaffolding protein), and eIF4A (an ATP-dependent RNA helicase) (50). Recognition of the mRNA 5′ cap structure by eIF4E is a rate-limiting step in translational initiation and is, hence, tightly regulated (51). The activity of eIF4E is regulated through interaction with the 3 inhibitory 4E-BPs: 4E-BP1, -2, and -3. The 4E-BPs compete with eIF4G for a shared binding site on eIF4E (52), thereby negatively regulating eIF4F complex formation and translation initiation. Cold exposure downregulates 4E-BP1 expression in BAT, which is mediated through β3-adrenergic agonist-stimulated signaling pathways (53). Importantly, deletion of 4E-BP1 in mice results in greater reduction of adiposity, increased energy expenditure, upregulated Ucp1 expression, and beige adipocyte induction in WAT; this is primarily due to increased eIF4F complex formation, leading to increased PGC1α protein translation (31). Indeed, we discovered that the 4e-bp1 promoter H4K20me3 level is increased in Suv420h2-overexpressing adipocytes, leading to downregulated 4e-bp1 expression and correspondingly upregulated PGC1α protein levels. The enhanced PGC1α protein levels may drive mitochondrial biogenesis in Suv420h2-overexpressing adipocytes, resulting in increased brown fat thermogenesis.

SUV420H2 catalyzes the deposition of trimethylation on histone H4K20, which in turn represses gene transcription (17, 18). In the current study, we observed that overexpressing Suv420h2 increased, whereas H2KO decreased, thermogenic gene expression in brown adipocytes. Thus, we could reasonably predict that SUV420H2 may repress a putative negative regulator of thermogenesis, which in turn promotes thermogenesis. Indeed, we measured H4K20me3 levels at the promoters of several positive regulators of thermogenesis, including Pgc1α, Pparγ, Prdm16, and Esrrg; none of them showed any difference in promoter H4K20me3 level between H2KO and AH2Tg mice, suggesting that they are not direct targets of Suv420h2. We also measured H4K20me3 levels at the promoters of several negative regulators of thermogenesis in adipose tissues, including 4e-bp1, Twist1, Zfp423, and Rb1. Only 4e-bp1 fit our criteria, with a decreased promoter H4K20me3 mark in H2KO mice and reciprocally increased promoter H4K20me3 levels in AH2Tg mice. Future studies with ChIP-Seq using Suv420h2 or H4K20me3 antibodies are warranted to identify Suv420h2- or H4K20me3-target genes.
In the current study, we also identified PGC1α as one of the targets whose protein synthesis could be regulated by 4E-BP1-dependent regulation of eukaryotic translation initiation eIF4F complex activity. In addition, whereas Esrrg mRNA transcription may not be directly regulated by SUV420H2, our data suggest that ESRRG protein levels may be regulated by 4E-BP1-mediated regulation of eIF4F complex activity, similar to PGC1α. Although 4E-BP1 may regulate the whole translational machinery, the specificity may be conferred in part by specific transcriptional factor complexes on each target gene. Thus, future experiments with ribosome profiling or Ribo-Seq technologies (54, 55) could be performed to identify potential protein candidates that are dependent on SUV420H2/H4K20me3/4E-BP1-regulated cap-dependent protein translation.

Along the course of our study, 2 papers were published studying the roles of SUV420H1/H2 proteins in brown/beige adipocyte thermogenesis. Pedrotti et al. reported that deletion of both Suv420h1 and Suv420h2 in brown adipocytes increased brown fat thermogenesis and ameliorated obesity via activating Pparγ-regulated gene networks (21). These results were opposite to what we observed in our genetic models. The exact reason for this discrepancy is not clear. However, different genetic models were used in these 2 studies. For our purpose of distinguishing the functions of Suv420h2 from those of Suv420h1, we used animal models with Suv420h2 deletion without affecting the expression of Suv420h1, whereas Pedrotti et al. (21) used animal models with Suv420h1/Suv420h2 double deletion. Interestingly, we observed that either Suv420h2 deletion or Suv420h1 overexpression suppressed brown adipocyte thermogenic gene expression, suggesting that, whereas Suv420h2 may positively regulate brown adipocyte thermogenesis, Suv420h1 may serve as a negative regulator. Thus, a possible reason accounting for the differences between our mouse models and those published by Pedrotti et al. (21) is that deletion of Suv420h1, a potential negative regulator of thermogenesis, in their double-knockout model may have contributed to the opposing phenotype. In addition, another study (56) reported that mice with adipocyte-specific Suv420h2/lysine methyltransferase 5C (Kmt5c) deletion exerted decreased thermogenic gene expression in WAT and BAT and were prone to diet-induced obesity and associated metabolic disorders. These phenotypes were similar to those observed in our H2KO models. Mechanistically, the authors showed that enhanced expression of a negative regulator of brown fat thermogenesis, transformation related protein 53 (Trp53), in Suv420h2/Kmt5c-KO mice, due to decreased H4K20me3 on its proximal promoter, was responsible for the metabolic phenotypes (56). In our current study, we have identified a mechanism in which Suv420h2 suppresses the expression of a negative regulator of PGC1α protein translation, 4e-bp1, by increasing the repressive mark H4K20me3 on its promoter, thus promoting brown/beige adipocyte mitochondrial oxidative metabolism and thermogenesis. These complementary studies could significantly enhance our understanding of how Suv420h1/h2 regulates brown/beige adipocyte thermogenesis and whole-body metabolic homeostasis.
In our current study, we observed significant differences in the metabolic phenotypes of our animal models during a cold challenge, whereas the differences diminished in animals challenged with an obesogenic HFD at ambient temperature. It is possible that diet-induced thermogenesis and cold-induced thermogenesis are triggered by different stimuli. In the context of increased energy needs (a cold environment), the purpose of BAT activation is to increase heat production and maintain temperature stability. This is in contrast to the positive energy balance in diet-induced obesity, in which increased heat is not necessary but energy expenditure increases owing to diet-induced thermogenesis, a phenomenon in which excess caloric consumption increases metabolic rate and stimulates BAT thermogenesis (2). Thermogenesis might be stimulated via different mechanisms, depending on whether it is triggered by cold or other factors (57). Additionally, cold and diet can lead to differential gene expression patterns in BAT and WAT (58). Our previous data also show that BAT responded differently to a HFD or a cold challenge (59). Thus, it is possible that there are differences in the metabolic phenotypes of our animal models during a cold challenge versus a HFD challenge.

We also observed that metabolic differences during a HFD challenge were more evident in animals housed under thermoneutrality compared with ambient temperature. Mice housed at ambient room temperature have a metabolic rate and food intake around 1.5 times higher than mice housed at thermoneutrality (3). While diet-induced thermogenesis might be primarily dependent on UCP1-dependent brown fat thermogenesis, the metabolic rate in response to a cold environment could be influenced by factors other than brown fat adaptive thermogenesis, for example, shivering, skin/fur insulation, and, most importantly, the adipose tissue response to sympathetic activation (3, 60). These factors could mask the true intrinsic energetic demands in response to a HFD if mice are housed at an ambient temperature that presents a mild cold stress condition. These different metabolic adaptations may be partly responsible for the differences in metabolic phenotypes observed in our animal models housed at different environmental temperatures. Thus, the thermogenic adaptation to diet-induced obesity in an animal model may be partially dependent on the environmental temperature.

In summary, we discovered a unique expression pattern of the histone methyltransferase Suv420h2, which mirrors the appearance of developmental beige adipocytes. Using genetic models with loss or gain of function of Suv420h2, we demonstrate that Suv420h2 promotes the development of brown and beige adipocytes postnatally, enhances cold-induced thermogenesis, and prevents diet-induced obesity, possibly through the 4E-BP1/PGC1α axis. We conclude that Suv420h2 is a key regulator of brown/beige fat thermogenesis, energy metabolism, and diet-induced obesity.

Methods

Sex as a biological variable. Our study examined both male and female mice. However, we found that there were sex-dimorphic effects and the phenotypes were more profound in males. Thus, results from male mice are reported.
Mice. Mice with whole-body H2KO were provided by Gunnar Schotta (Ludwig Maximilian University, Munich, Germany) (22). To generate transgenic mice with adipocyte-specific Suv420h2 overexpression (AH2Tg), a bacterial artificial chromosome (BAC) containing the mouse adiponectin gene was used, and the full-length coding sequence of the mouse Suv420h2 gene was PCR amplified and inserted into the ATG position at exon 2 of the adiponectin gene in the BAC using homologous recombination. The adiponectin BAC carrying Suv420h2 was linearized and microinjected into pronuclei of fertilized embryos of C57BL/6J mice at the Georgia State University transgenic core facility.

Metabolic analysis. Mice were housed in a temperature- and humidity-controlled environment with a 12/12-hour light-dark cycle and had ad libitum access to water and food. H2KO, AH2Tg mice, and their respective littermate controls were fed a regular chow diet (LabDiet, 5001, 13.5% calories from fat) or a HFD (Research Diets, D12492, 60% calories from fat) for up to 24 weeks. Various metabolic measurements were characterized. Body weight was monitored weekly. Body composition, including fat and lean mass, was analyzed using a Minispec NMR body composition analyzer (Bruker BioSpin Corporation). Food intake was measured in single-housed animals over 7 consecutive days. Energy expenditure and locomotor activity were assessed using PhenoMaster metabolic cage systems (TSE Systems). Glucose tolerance and insulin sensitivity were assessed by GTT and ITT, respectively (61, 62). Blood glucose was measured by a OneTouch Ultra Glucose meter (LifeScan). At the end of the experiments, tissues including BAT and WAT were dissected, weighed, and frozen in liquid nitrogen for further analysis.

Cold exposure. H2KO, AH2Tg mice, and their respective littermate controls were subjected to a cold challenge (5°C-6°C) for 7 days. To measure body temperature, some animals were surgically implanted with a temperature transponder (BioMedic Data Systems) in the peritoneal cavity (61, 62). At the end of the experiment, WAT and BAT were dissected, weighed, and frozen for further analysis.

qPCR analysis of gene expression. Total RNA was extracted from fat tissues using the Tri Reagent kit (Molecular Research Center) (61, 62). The expression of target genes was measured by qPCR analysis with a

Figure 2. Suv420h2 regulates brown and beige fat development. (A-D) UCP1 protein levels (A), UCP1 immunostaining (B), H&E staining (C), and adipocyte size (D) in iBAT of 20-day-old H2KO and WT mice. In A, n = 4-5/group; in D, n = 3/group. In B, images are representatives from 3 replicate animals/group. Images from additional animals are located in Supplemental Figure 6A. Scale bar: 70 μm in B and C. (E-H) UCP1 protein levels (E), UCP1 immunostaining (F), H&E staining (G), and adipocyte size (H) in iWAT of 20-day-old H2KO and WT mice. In E, n = 3/group; in H, n = 4/group. In F, images are representatives from 3 replicate animals/group; images from additional animals are located in Supplemental Figure 6, B and C. Scale bar: 140 μm in N and O. All data are expressed as mean ± SEM. UCP1+ multilocular brown/beige adipocytes are shown in dark purplish red color and are indicated with black arrows; UCP1− unilocular white adipocytes are shown in light color and are indicated with red arrows. *P < 0.05 by unpaired 2-tailed Student's t test in A, E, I, and M; *P < 0.05 as analyzed by 2-way ANOVA followed by Tukey's multiple-comparison test in D, H, L, and P.
Figure 3. H2KO mice have impaired cold-induced thermogenesis. (A and B) Core body temperature (A) and fat pad weight (B) in 3-month-old male WT and H2KO mice during a 7-day cold challenge at 5°C. In A, n = 5-7/group; in B, n = 5/group. (C and D) H&E staining (C) and quantification of adipocyte size (D) in iBAT and iWAT of WT and H2KO mice after the 7-day cold challenge. In C, scale bar: 70 μm for iBAT and 140 μm for iWAT; in D, n = 3/group. (E and F) Gene expression analysis in iBAT (E) and iWAT (F) of WT and H2KO mice after the 7-day cold challenge; n = 6/group in E and n = 5/group in F. (G and H) UCP1 protein and H4K20me3 levels in iBAT (G) and iWAT (H) of WT and H2KO mice after the 7-day cold challenge; n = 5 (WT) and n = 3 (H2KO) in G and n = 4/group in H. (I) UCP1 immunostaining in iBAT and iWAT of WT and H2KO mice after the 7-day cold challenge (representative images from 3 replicate animals/group).

Figure 4. AH2Tg mice have enhanced cold-induced thermogenesis. (A and B) Core body temperature (A) and fat pad weight (B) in 3-month-old male WT and AH2Tg mice during a 7-day cold challenge at 5°C. In A, n = 6-7/group. In B, n = 7/group. (C and D) H&E staining (C) and adipocyte size (D) in iBAT and iWAT of WT and AH2Tg mice after the 7-day cold challenge. In C, scale bar: 70 μm for iBAT and 140 μm for iWAT. In D, n = 3/group in iBAT and n = 4/group in iWAT. (E and F) Gene expression analysis in iBAT (E) and iWAT (F) of WT and AH2Tg mice after the 7-day cold challenge; n = 7/group in E and n = 8/group in F. (G and H) UCP1 protein and H4K20me3 levels in iBAT (G) and iWAT (H) of WT and AH2Tg mice after the 7-day cold challenge; n = 3/group. (I)

Figure 5. SUV420H2 regulates the mitochondrial bioenergetic program. (A and B) RNA-Seq analysis of BAT-specific gene expression in iWAT of male H2KO mice (A) and male AH2Tg mice (B) after the 7-day cold exposure using an online software package (https://github.com/PerocchiLab/ProFAT). The WAT reference aggregate and BAT reference aggregate were derived from the online software. (C) Heatmaps of genes that are reciprocally regulated in iWAT of H2KO and AH2Tg mice after cold exposure. (D) Analysis of pathways that are reciprocally regulated in iWAT of H2KO and AH2Tg mice after cold exposure. (E) Comparison of genome-wide alterations in the chromatin accessibility landscape assessed by ATAC-Seq with the corresponding gene expression assessed by RNA-Seq of AH2Tg and WT mice after the 7-day cold exposure.

Figure 6. SUV420H2 regulates mitochondrial respiratory chain complex protein levels. (A and B) Immunoblotting of mitochondrial respiratory chain complex proteins in iBAT (A) and iWAT (B) of H2KO and WT mice after the 7-day cold exposure; n = 5-7/group in A and n = 3/group in B. *P < 0.05 by unpaired 2-tailed Student's t test. (C and D) Immunoblotting of mitochondrial respiratory chain complex proteins in iBAT (C) and iWAT (D) of AH2Tg and WT mice after the 7-day cold exposure; n = 3/group. *P < 0.05 by unpaired 2-tailed Student's t test. All data are expressed as mean ± SEM.

Figure 7. SUV420H2 regulates brown/beige adipocyte thermogenesis through posttranscriptional regulation of PGC1α protein levels. (A and B) PGC1α mRNA (A) and protein (B) levels in iWAT of C57BL/6J mice during postnatal development; n = 5/group in A and n = 3/group in B.
*P < 0.05 by unpaired 2-tailed Student's t test. (C and D) PGC1α mRNA (C) and protein (D) levels in iWAT of C57BL/6J mice during cold exposure; n = 4/group. In C, bars with a different letter indicate statistical significance at P < 0.05 as analyzed by 1-way ANOVA followed by Tukey's multiple-comparison test; in D, *P < 0.05 by unpaired 2-tailed Student's t test. (E and F) PGC1α protein levels in iBAT (E) and iWAT (F) of H2KO and WT mice after the 7-day cold exposure; n = 6/group. *P < 0.05 by unpaired 2-tailed Student's t test. (G and H) PGC1α protein levels in iBAT (G) and iWAT (H) of AH2Tg and WT mice after the 7-day cold exposure; n = 3/group. *P < 0.05 by unpaired 2-tailed Student's t test. All data are expressed as mean ± SEM.

Figure 8. 4E-BP1 mRNA and protein levels are reciprocally regulated in H2KO and AH2Tg animals after cold exposure. (A) ATAC-Seq analysis of chromatin accessibility and RNA-Seq peak data at the 4e-bp1 gene locus in AH2Tg and WT mice after a 7-day cold exposure. (B and C) Expression of 4e-bp1 in various adipose tissues of H2KO (B) and AH2Tg (C) mice after cold exposure; n = 5/group in B, and n = 7 (WT) and 6 (AH2Tg) in C. *P < 0.05 by unpaired 2-tailed Student's t test. (D and E) 4E-BP1 protein levels in iBAT (D) and iWAT (E) of H2KO and WT mice after the 7-day cold exposure; n = 7 (WT) and 5 (H2KO). *P < 0.05 by unpaired 2-tailed Student's t test. Blots were run in parallel at the same time. (F and G) 4E-BP1 protein levels in iBAT (F) and iWAT (G) of AH2Tg and WT mice after the 7-day cold exposure; n = 8 (WT) and 7 (AH2Tg) in F, and n = 6 (WT) and 9 (AH2Tg) in G. *P < 0.05 by unpaired 2-tailed Student's t test. (H) 4E-BP1 protein levels in iWAT of C57BL/6J mice during postnatal development and after a cold challenge; n = 3/group. Bars with a different letter indicate statistical significance at P < 0.05 as analyzed by 1-way ANOVA followed by Tukey's multiple-comparison test. Blots were run in parallel at the same time. All data are expressed as mean ± SEM.

Figure 9. SUV420H2 regulates PGC1α protein levels through increasing H4K20me3 at the 4e-bp1 promoter. (A and B) H4K20me3 levels at the promoter regions of 4e-bp1 as assessed by ChIP assay in iBAT (A) and iWAT (B) of H2KO and WT mice after a 7-day cold exposure; n = 3/group. *P < 0.05 by 2-way ANOVA followed by Tukey's multiple-comparison test. (C and D) H4K20me3 levels at the promoter regions of 4e-bp1 as assessed by ChIP assay in iBAT (C) and iWAT (D) of AH2Tg and WT mice after a 7-day cold exposure; n = 3/group. *P < 0.05 by 2-way ANOVA followed by Tukey's multiple-comparison test. (E and F) Basal and NE-induced 4E-BP1 (E) and PGC1α (F) protein levels in BAT1 brown adipocytes treated with either Suv420h2 knockdown or combined Suv420h2/4e-bp1 knockdown; n = 4-6/group. In E, bars with a different letter indicate statistical significance at P < 0.05 as analyzed by 2-way ANOVA followed by Tukey's multiple-comparison test. In F, *P < 0.05 by 2-way ANOVA followed by Tukey's multiple-comparison test. All data are expressed as mean ± SEM.

Figure 10. SUV420H2 regulates diet-induced obesity. (A-D) Body weight (A), fat pad mass (B), glucose tolerance test (GTT) (C), and insulin tolerance test (ITT) (D) in H2KO and WT mice fed a HFD when housed at thermoneutrality. In A and B, n = 7/group. *P < 0.05 by 2-way ANOVA with repeated measures followed by Tukey's multiple-comparison test in A and unpaired 2-tailed Student's t test in B.
In C and D, n = 6-7/group. *P < 0.05 by 2-way ANOVA with repeated measures followed by Tukey's multiple-comparison test. (E-H) Body weight (E), fat pad mass (F), glucose tolerance test (GTT) (G), and insulin tolerance test (ITT) (H) in AH2Tg and WT mice fed a HFD when housed at thermoneutrality; n = 6-7/group. *P < 0.05 by 2-way ANOVA with repeated measures followed by Tukey's multiple-comparison test in E, G, and H, and unpaired 2-tailed Student's t test in F. In G and H, n = 6-7/group. *P < 0.05 by 2-way ANOVA with repeated measures followed by Tukey's multiple-comparison test. All data are expressed as mean ± SEM.
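For orientation, the two statistical procedures named repeatedly in these legends (the unpaired 2-tailed Student's t test and ANOVA followed by Tukey's multiple-comparison test) can be reproduced with standard scientific Python tooling. The sketch below is illustrative only; all group values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the comparisons named in the figure legends:
# an unpaired 2-tailed Student's t test for two groups and Tukey's
# multiple-comparison test for several groups. Placeholder data only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wt = np.array([1.0, 1.2, 0.9, 1.1, 1.0])    # e.g., WT protein levels
ko = np.array([0.5, 0.6, 0.4, 0.55, 0.5])   # e.g., H2KO protein levels

t_stat, p_val = stats.ttest_ind(wt, ko)     # unpaired 2-tailed t test
print(f"t = {t_stat:.2f}, P = {p_val:.4g}")

values = np.concatenate([wt, ko, wt * 1.4]) # third group, e.g., AH2Tg
groups = ["WT"] * 5 + ["H2KO"] * 5 + ["AH2Tg"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```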
Post-exposure administration of chimeric antibody protects mice against European, Siberian, and Far-Eastern subtypes of tick-borne encephalitis virus

Tick-borne encephalitis virus (TBEV) is the most important tick-transmitted pathogen. It belongs to the Flaviviridae family and causes severe human neuroinfections. In this study, the protective efficacy of the chimeric antibody chFVN145 was examined in mice infected with strains belonging to the Far-Eastern, European, and Siberian subtypes of TBEV, and the antibody showed clear therapeutic efficacy when administered once one, two, or three days after infection. The efficacy was independent of the TBEV strain used to infect the mice; however, the survival rate of the mice depended on the dose of TBEV and of the antibody. No enhancement of TBEV infection was observed when the mice were treated with non-protective doses of chFVN145. Using a panel of recombinant fragments of the TBEV glycoprotein E, the neutralizing epitope for chFVN145 was localized in domain III of the TBEV glycoprotein E, in a region between amino acid residues 301 and 359. In addition, three potential sites responsible for binding with chFVN145 were determined using peptide phage display libraries, and 3D modeling demonstrated that the sites do not contact the fusion loop and, hence, their binding with chFVN145 does not result in increased attachment of TBEV to target cells.

Introduction

Tick-borne encephalitis virus (TBEV), a positive-sense single-stranded RNA virus from the Flaviviridae family, is a causative agent of one of the most severe human neuroinfections [1-3]. TBEV is mostly transmitted via bites by Ixodes ticks that inhabit the forested areas of Eurasia from Western Europe to the Far East and from the Scandinavian peninsula to the Mediterranean [4]. Three main subtypes of TBEV are currently recognized: the Far-Eastern (TBEV-FE), European (TBEV-Eu), and Siberian (TBEV-Sib) subtypes [5-7]. TBEV-FE is considered to cause the most severe disease, while TBEV-Eu causes mostly mild infections [6, 8-10]. Tick-borne encephalitis (TBE) is documented in many European countries, Russia, China, Mongolia, and Kazakhstan, and, in the past several years, the highest incidence of the disease has been recorded in Russia, Slovenia, and the Baltic states [11]. Although most human cases of TBE are asymptomatic, TBEV can cause severe TBE, which is often lethal [3, 6, 12]. Several vaccines for the prevention of TBE in adults and children are currently available in Europe, Russia, and China. These vaccines are safe, highly immunogenic, and efficient; however, vaccination coverage is low in many regions, which leads to a substantially higher frequency of TBE cases in these regions. Moreover, TBEV vaccine breakthrough cases have been recorded in several countries [13-15]. In most endemic countries, there are no specific preparations for the treatment of TBE. Previously, the specific anti-TBE serum immune globulin Encegam (FSME-Bulin) was used in European countries; however, its use was suspended due to concerns regarding possible enhancement of TBE after its administration. In Russia, where the most severe and lethal cases of TBE have been recorded, specific anti-TBE serum immunoglobulin (anti-TBE-Ig) is applied for post-exposure prophylaxis and treatment of TBE [3, 16].
Anti-TBE-Ig can decrease the severity of the disease [17]; however, like other preparations derived from donor blood, it has disadvantages, including the limited availability of plasma from donors vaccinated against TBE; insufficient standardization of the original plasma and, as a consequence, insufficient standardization of the preparation; and an increased risk of contamination with nondetectable pathogens. In this regard, the development of alternative anti-TBEV immunological preparations is required. Previously, several anti-TBEV chimeric antibodies were constructed [18, 19] based on variable domains of mouse monoclonal antibodies against glycoprotein E of TBEV-FE. Two chimeric antibodies were able to neutralize TBEV infectivity in vitro, and one of them protected mice that were infected with TBEV-Eu [18]. In this study, the therapeutic potency of the chimeric antibody chFVN145 was examined in mice infected with strains belonging to the Far-Eastern, Siberian, and European subtypes of TBEV and treated once with chFVN145 one, two, or three days after infection. In addition, the neutralizing epitope of chFVN145 was determined.

Viruses

TBEV strains Sofjin and Vasilchenko, prototype Russian strains of TBEV-FE and TBEV-Sib, respectively, were obtained from the repository at the Federal State Public Scientific Institution «Scientific Centre for Family Health and Human Reproduction Problems» (Collection # 478258, http://www.ckp-rf.ru, Irkutsk, Russia); the TBEV strain Absettarov (TBEV-Eu) was obtained from the repository at the Tomsk branch of the Federal State Unitary Company "Microgen" (Tomsk, Russia). All experiments with live TBEV were conducted under BSL-3 conditions.

Development of chFVN145

To develop chFVN145, DNA fragments encoding the variable domains of the heavy and light chains (VH and VL) of a mouse monoclonal antibody against glycoprotein E of TBEV were amplified by RT-PCR using previously designed primers [18] and total RNA from the hybridoma cell line 14D5 [20]. A stable clone producing chFVN145 was obtained as described previously [21]. Briefly, the DNA fragment encoding VH was digested with NotI and PmeI endonucleases and inserted into the NotI/PmeI-digested plasmid pBiFRT designed previously [21] to generate the pBiFRT-VH145 plasmid. Then, the VL-encoding DNA fragment cleaved by EcoRV and HindIII was cloned into the EcoRV/HindIII-digested plasmid pBiFRT-VH145. The resulting plasmid, pBiFRT-145, and the plasmid pOG44 (Life Technologies, Carlsbad, CA) were cotransfected into suspension CHO-S/FRT cells [22] using PEIPro (Polyplus-transfection SA, Illkirch, France) for site-directed genomic integration. Two days after transfection, the culture medium was replaced with the selective medium CD OptiCHO (Life Technologies) containing 50 μg/mL hygromycin B, and the stable clone CHO-S/FRT/FVN145 was selected according to the manufacturer's instructions. CHO-S/FRT/FVN145 cells were cultivated in CD FortiCHO medium (Life Technologies) with the addition of 4 mM glutamine (Biolot, St. Petersburg, Russia) and 3 mM glucose (Biosyntez, Penza, Russia) every three days. To purify chFVN145, cells and debris were removed by centrifugation; the supernatant was filtered through a 0.22-μm PES capsule filter (Millipore, Burlington, MA) and loaded onto a 4-mL Protein A-sepharose column. ChFVN145 was eluted in 0.1 M citric buffer (pH 3.0).
The antibody was concentrated and buffer-exchanged against phosphate-buffered saline (PBS, pH 7.4) using an Amicon Ultra-4 50 kDa filter (Millipore, USA), sterilized by filtration through a 0.22-μm filter (Millipore, USA), and stored at a concentration of 2 mg/mL at 4°C.

Affinity constant measurement

The kinetics of chFVN145 binding to TBEV glycoprotein E was determined by the surface plasmon resonance (SPR) method using a ProteOn XPR36 System (Bio-Rad, USA). The recombinant protein rE [23] was immobilized onto vertical channel L1 of a GLC sensor chip at a level of 70 response units (RU). Serial dilutions of chFVN145 were analyzed starting from the lowest concentration (1 nM, 3 nM, 9 nM, 27 nM, and 81 nM) at a flow rate of 25 μL/min. Vertical channel L2 was used as a reference channel. Binding experiments were performed in triplicate; the chip surface was regenerated with 100 mM citric acid. Global analysis of the experimental data based on a single-site or a heterogeneous analyte model was performed using the ProteOn Manager v. 3.1.0 software. The affinity constant was calculated as K_D = k_d/k_a.

Animal studies

BALB/c mice were purchased from the animal care facility of the Federal State Research Center of Virology and Biotechnology "Vector" (Koltsovo, Russia). Animals were housed under a normal light-dark cycle; water and food were provided ad libitum. Before the experiments, the LD50 was determined for each viral stock. Groups of three-week-old mice (n = 10 in each group) were infected intraperitoneally with serial ten-fold dilutions of a viral stock in 0.9% NaCl, and the LD50 for each stock was calculated by the Reed-Muench method [24]. Then, viral stocks were aliquoted and stored at −80°C. To examine the protective efficacy of antibody preparations, three-week-old BALB/c mice (9-10 g) were infected intraperitoneally with TBEV at the appropriate dose in 0.2 mL. One, two, or three days after infection, the mice were treated once intramuscularly with chFVN145, a commercial preparation of human anti-TBE immunoglobulin (anti-TBE-Ig) produced from donor blood (lot #608, 10%, hemagglutination titer 1:160, Virion, Russia), or 0.9% NaCl in a volume of 100 μL. Preparations of chFVN145 and anti-TBE-Ig were diluted in 0.9% NaCl. Each experimental group of mice included 8-10 animals. Mice were monitored at least twice a day for clinical signs of the disease (paresis and/or paralysis), which resulted in euthanasia using an overdose of isoflurane. The mice were observed for 21 days after infection, and the survival rate was recorded. In addition, the mean survival time (MST) was determined as the period between infection and animal death; the MST of mice that survived until the end of the experiment was recorded as 21 days. To account for the effect of storage and freezing-thawing on the viral stock, the actual infectious dose (in LD50) was additionally determined in each experiment based on the method described previously [24] using four groups of mice, six animals in each group. All animal procedures were carried out in accordance with the recommendations for the protection of animals used for scientific purposes (EU Directive 2010/63/EU). All experiments with animals were approved by the local bioethics committee of the Federal State Public Scientific Institution «Scientific Centre for Family Health and Human Reproduction Problems».
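For reference, the Reed-Muench endpoint calculation used above for the LD50 titers can be reproduced in a few lines. The sketch below is a minimal implementation; the dose-response numbers in the example are hypothetical placeholders, not data from this study, and the code assumes the 50% endpoint is bracketed by two of the tested doses.

```python
# Minimal sketch of the Reed-Muench LD50 calculation (placeholder data).
import numpy as np

def reed_muench_ld50(log10_doses, n_dead, n_total):
    """LD50 from per-dose mortality; doses sorted in increasing order."""
    log10_doses = np.asarray(log10_doses, dtype=float)
    n_dead = np.asarray(n_dead, dtype=float)
    n_alive = np.asarray(n_total, dtype=float) - n_dead
    # Reed-Muench assumption: an animal dying at a low dose would also die
    # at any higher dose; a survivor of a high dose would survive lower ones.
    cum_dead = np.cumsum(n_dead)                # accumulate toward high doses
    cum_alive = np.cumsum(n_alive[::-1])[::-1]  # accumulate toward low doses
    mortality = cum_dead / (cum_dead + cum_alive)
    hi = int(np.argmax(mortality >= 0.5))       # first dose >= 50% mortality
    lo = hi - 1
    pd = (0.5 - mortality[lo]) / (mortality[hi] - mortality[lo])
    return 10 ** (log10_doses[lo] + pd * (log10_doses[hi] - log10_doses[lo]))

# Ten-fold dilutions (log10 dose 1..4), 10 mice per group (placeholders):
print(reed_muench_ld50([1, 2, 3, 4], n_dead=[1, 4, 8, 10], n_total=[10] * 4))
# -> ~173, i.e., LD50 ~ 10^2.24 in the same units as the doses
```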
The recombinant proteins rED3delA+ and rED3delD+, which included fragments of domain III of glycoprotein E (aa 301-359 and aa 356-397, respectively), were obtained in this study using the expression vector pHEN2. DNA fragments encoding rED3delA+ and rED3delD+ were amplified by PCR using cDNA of the same strain (Sofjin-Ru) as a template and two pairs of primers: 5′-GCGCCATGGCCGGCGGTGGCTCGGGTCTTACATACACAATGTGCG-3′ and 5′-TTAGCGGCCGCTGTTATCAACATGGCCACGTTCACATCCG-3′ for rED3delA+, and 5′-GCGCCATGGCCGGCGGTGGCTCGATGTTGATAACACCCAACCCC-3′ and 5′-TTAGCGGCCGCTTAGTGATGGTGATGATGATGACTCCCTTTTTGGAACCATTG-3′ for rED3delD+, respectively. The resulting PCR fragments were cleaved with NcoI and NotI and ligated independently into NcoI/NotI-digested pHEN2. E. coli HB2151 cells were transformed with the resulting plasmids, plated onto agar with 50 μg/mL ampicillin and 20 μg/mL isopropyl-β-D-1-thiogalactopyranoside, and cultivated. Individual colonies of E. coli producing rED3delA+ and rED3delD+ were screened by PCR using the same primers, and their ability to produce the recombinant proteins was confirmed using 12.5% SDS-PAGE.

Epitope mapping

Selection of specific peptides exposed on the surface of bacteriophages from the phage display libraries PhD-12 and PhD-C7C (New England Biolabs, Ipswich, MA) was carried out as described previously [21]. Briefly, approximately 10^11 phage particles from each library were pre-incubated with a non-specific mock mouse antibody, added to the wells of 96-well microtiter plates coated with 200 ng chFVN145 in PBS, pH 7.4, and incubated for one hour at 37°C. Then, unbound phage particles were washed away with PBST, and bound phages were eluted with 100 μg/mL chFVN145. The eluted phages were used for the second round of biopanning, which was carried out using plates coated with 20 ng chFVN145. Phage particles that were eluted after the second round were used to transfect E. coli ER2738 cells. Phages with the selected exposed peptides were isolated from individual plaques and assayed for their binding to chFVN145 by ELISA. For indirect ELISA, the wells of 96-well microtiter plates were coated with 20 ng of chFVN145 or 3% bovine serum albumin in PBS, pH 7.4. After blocking, individual selected phages were diluted in PBST to yield ~10^10 colony-forming units (CFU) and added to the wells. Indirect ELISA was performed with anti-M13 polyclonal rabbit antibodies followed by alkaline phosphatase-conjugated anti-rabbit IgG (ICN) and stained with p-nitrophenyl phosphate.

Statistics

The data were analyzed using Microsoft Excel software and are expressed as mean values ± standard error of the mean (SEM). Comparisons were performed using the log-rank test. Data were analyzed using the online service available at https://www.evanmiller.org/abtesting/survival-curves.html. P < 0.05 was considered to indicate statistical significance.

Production and characterization of chFVN145

To produce chFVN145, the stable clone CHO-S/FRT/FVN145 was developed based on the previously obtained cell line CHO-S/FRT [21] and the plasmid pBiFRT-145, in which the variable regions were connected to the constant regions of human IgG kappa. CHO-S/FRT/FVN145 cells were cultivated, and chFVN145 was purified following the procedures described in the Materials and Methods section. The purified chFVN145 was examined by SDS-PAGE and western blotting. To analyze the binding affinity of chFVN145, a label-free biosensor assay was used.
A global analysis of the interaction between chFVN145 and the recombinant protein rE demonstrated a good-quality fit, and the affinity constant was calculated as K_D = (1.5 ± 0.2) × 10^−9 M (Fig 1).

chFVN145 post-exposure prophylaxis of mice infected with TBEV-Eu, TBEV-FE, or TBEV-Sib strains

Six groups of BALB/c mice were challenged by intraperitoneal injection of a target dose of 159 LD50 of TBEV-Eu. One day after infection, the first group of mice received chFVN145 at a dose of 100 μg/mouse (high dose) and the second group at a dose of 10 μg/mouse (low dose), whereas two other groups of animals were treated with anti-TBE-Ig at doses of 100 μg and 10 μg per mouse, respectively. The fifth group of mice was administered 0.9% NaCl, while a group of non-treated mice was used as the control group. Post-exposure administration of chFVN145 at target doses of 100 μg and 10 μg per mouse resulted in 100% (8/8) and 50% (4/8) survival rates, respectively (Fig 2A). The protective efficacy of anti-TBE-Ig at the high dose was lower than that of chFVN145 (37.5%, 3/8); no protection was observed when anti-TBE-Ig was used at the low dose (Fig 2A).

To test the protective efficacy of chFVN145 against TBEV-FE and TBEV-Sib, similar experiments were carried out in which mice were challenged with the TBEV strains Sofjin and Vasilchenko at doses of 231 LD50 and 251 LD50, respectively. One day after infection, mice in each group were treated once with chFVN145 (100 μg or 10 μg per mouse), anti-TBE-Ig at a dose of 100 μg/mouse, or 0.9% NaCl. Clear protection against TBEV infection in the treated mice was observed (Fig 2B and 2C). One injection of 100 μg/mouse chFVN145 resulted in a 90% (9/10) survival rate in mice infected with TBEV-FE and 100% (8/8) in mice challenged with TBEV-Sib. When mice received 10 μg chFVN145, the survival rates were 60% (6/10) and 50% (4/8) for TBEV-FE and TBEV-Sib, respectively (Fig 2B and 2C). Administration of anti-TBE-Ig provided protection in 50% (5/10) of mice infected with TBEV-FE and was not effective in mice infected with TBEV-Sib (Fig 2C). However, the MST of animals from this group was not decreased compared to that of the controls, namely 9.8 ± 3.5 vs. 7.3 ± 2.1 in the non-treated mice.

Next, mice challenged with 501 LD50 TBEV-FE were administered chFVN145 at high and low doses on days +1, +2, and +3. A good therapeutic effect was observed on days +1 and +2 for both high and low doses, and only a weak improvement in the survival rate was recorded on day +3 for the high dose (Fig 3B). However, even when mice were treated with chFVN145 at a dose of 10 μg/mouse on day +3 (100% mortality rate), the MST was not lower than that of mice that received 0.9% NaCl and non-treated controls (7.4 ± 1.3 vs. 6.9 ± 1.0 and 6.6 ± 0.5, respectively), indicating that the course of the disease in this group of mice was not more severe compared to the controls.

Finally, the therapeutic potency of chFVN145 was examined when mice were infected with TBEV-Sib. In this experiment, mice were challenged with a high lethal dose of the TBEV strain Vasilchenko, 3981 LD50. Even in this case, administration of chFVN145 increased the MST of mice treated on days +1, +2, or +3 compared to that of the controls (Fig 3C). As expected, the survival rate of mice that received 100 μg of chFVN145 was higher than that of mice treated with the antibody at the low dose (Fig 3C).
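The survival-rate comparisons above rely on the log-rank test named in the Statistics section. A minimal sketch of such a comparison in Python is given below, assuming the lifelines package; the durations are hypothetical placeholders following the 21-day observation window, not the study's actual data.

```python
# Minimal sketch of a log-rank survival comparison (placeholder data).
from lifelines.statistics import logrank_test

treated_days = [21, 21, 21, 21, 9, 21, 21, 21]  # 21 = survived (censored)
control_days = [7, 8, 6, 9, 7, 8, 10, 7]
treated_event = [d < 21 for d in treated_days]  # True = death observed
control_event = [d < 21 for d in control_days]

res = logrank_test(treated_days, control_days,
                   event_observed_A=treated_event,
                   event_observed_B=control_event)
print(f"log-rank P = {res.p_value:.4f}")
```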
Determination of non-protective doses of chFVN145

To examine whether administration of chFVN145 can enhance TBEV infection in the mouse model, a non-protective dose of the antibody was determined when the mice were treated one day after infection. For this, mice infected with 199.5 LD50 TBEV-Eu were administered chFVN145 at doses of 40 μg, 4 μg, or 0.4 μg per mouse. The survival rate was improved when the mice were treated with 40 μg/mouse and 4 μg/mouse, but not when mice received 0.4 μg of chFVN145: in that case, a mortality rate of 100% was observed (Fig 4A). The experiment was repeated with mice challenged with 316 LD50 TBEV-FE (Fig 4B). Importantly, no significant difference was observed in the MST of mice that received non-protective doses of chFVN145, the non-treated controls, and mice administered 0.9% NaCl. Non-protective doses of chFVN145 were determined: approximately 0.5 μg/mouse for treatment on day +1, 10 μg/mouse on day +2, and 10-100 μg/mouse on day +3.

Summarized data on the MST of mice from the groups with a mortality rate of 100%, including non-treated mice and mice that received chFVN145 or 0.9% NaCl, showed that the MST of mice treated with chFVN145 at non-protective doses on days +1, +2, or +3 was comparable to or better than the MST of non-treated mice and mice that received 0.9% NaCl, indicating that chFVN145 did not enhance TBEV infection (Fig 5). Importantly, the results were independent of the TBEV strain used for infection.

To localize the neutralizing epitope, two random peptide libraries were screened using chFVN145. The PhD-12 library contains phages, each with a random 12-mer peptide inserted into the N-terminus of the phage minor protein p3 and exposed on the phage surface. In the PhD-C7C library, an exposed 7-mer random peptide is flanked by cysteine residues, which form a peptide loop. Three different peptides with DIII-like sequences were selected from the PhD-12 library along with six peptides from the PhD-C7C library (Fig 6B). All identified sequences were localized within the rED3delA+ sequence, between aa 307 and 348. Three possible sites responsible for binding with chFVN145 were predicted: aa 307-313, aa 333-338, and aa 341-348, of which site 1 is located on strand A, site 2 on loop BC, and site 3 on strand C and loop CD according to [25] (Fig 6C). Notably, sites 1 and 2 are located on the surface of domain III and available for binding, while site 3 is buried in the protein layer close to the viral lipid membrane.

Discussion

In this study, the therapeutic efficacy of the chimeric antibody chFVN145 was investigated in mice infected with various subtypes of TBEV. In previous experiments, ch14D5a purified from culture medium after transient expression in CHO-K1 cells showed protective efficacy against lethal infection of BALB/c mice with the TBEV strain Absettarov (TBEV-Eu) when ch14D5a was administered one day after infection [18]. A similar experiment was designed to test the ability of chFVN145 produced by the stable clone CHO-S/FRT/chFVN145 to protect mice challenged with the same TBEV strain. Commercially available anti-TBE-Ig, which is routinely applied for post-exposure prophylaxis and treatment of TBE in Russia [3, 16], was used as a positive control.
Since the antibody preparations used in this study were diluted in 0.9% NaCl, mice treated with 0.9% NaCl were used as one of the controls. Clear dose-dependent efficacy of chFVN145 was observed, and the dose of 100 μg/mouse provided substantially better protection than the dose of 10 μg/mouse. The protective efficacy of chFVN145 was significantly higher than that of the anti-TBE-Ig, which is probably due to the substantially lower proportion of specific anti-TBEV immunoglobulins in the total protein of this preparation. The anti-TBE-Ig was not effective at a dose of 10 μg/mouse, resulting in the exclusion of this dose from further experiments.

As different TBEV subtypes occur in Eurasia, the protective efficacy of chFVN145 against virus strains belonging to TBEV-FE and TBEV-Sib was examined. The results of these experiments were similar to those obtained in mice infected with TBEV-Eu: chFVN145 demonstrated dose-dependent efficacy against both TBEV-FE and TBEV-Sib and was significantly more effective than anti-TBE-Ig. Importantly, despite the different pathogenicity of TBEV-Eu, TBEV-Sib, and TBEV-FE strains for humans, no substantial difference in the protective efficacy of chFVN145 against the three subtypes was observed in the mouse model.

Given this, the therapeutic efficacy of chFVN145 was assessed when mice were challenged with variable doses of the target TBEV strains. The post-exposure window was expanded, and mice were treated one, two, or three days after infection. The results indicated that survival rates were TBEV dose-dependent. When mice were infected at a dose of 20 LD50 (TBEV-Eu strain), the survival rate of mice administered chFVN145 on days +1, +2, or +3 substantially increased compared to that of non-treated mice and mice that received 0.9% NaCl. When mice were infected at a dose of approximately 500 LD50 (TBEV-FE strain), good results were observed in mice that received chFVN145 on days +1 or +2. The survival rate of mice challenged with the TBEV-Sib strain at a dose of approximately 4000 LD50 was substantially improved only when mice were treated one day after infection. However, even in the case of such a high infection dose, an improvement in the survival rate or the MST was observed when mice were given chFVN145 on days +2 or +3.

The ability of chFVN145 to augment TBEV infection in mice was specifically investigated due to concerns that some flavivirus-specific antibodies can induce antibody-dependent enhancement (ADE) of infection. This phenomenon has been repeatedly observed for flaviviruses in in vitro and in vivo experiments [26-30]. Moreover, antibody-associated augmentation of disease has been recorded for dengue virus infection in humans [31-33]. ADE is mainly associated with the use of sub-neutralizing concentrations of antibodies [34, 35]. In this study, non-protective doses of chFVN145 were determined: approximately 0.5 μg/mouse for treatment on day +1, 10 μg/mouse on day +2, and 10-100 μg/mouse on day +3. Importantly, administration of non-protective doses did not decrease the survival rate and MST of treated mice compared to those of mice that received 0.9% NaCl and non-treated controls, regardless of the timing of chFVN145 injection and the TBEV strain used for infection. Only a single administration of chFVN145 was examined in this study, and further investigation is required to assess the effects of multiple administrations of the antibody.
Recently, a novel mechanism of antibody-induced enhancement of flavivirus infection has been described, which is based on antibody-promoted conformational changes in glycoprotein E that result in increased availability of the usually buried fusion loop (FL) [36]. In this regard, the neutralizing epitope for chFVN145 was examined, and potential sites on the surface of glycoprotein E responsible for binding with chFVN145 were determined using phage display. According to 3D modeling, these potential sites do not contact the FL, and, hence, chFVN145 binding cannot result in exposure of the FL that would lead to increased attachment of TBEV to target cells.

In conclusion, chFVN145 demonstrated clear therapeutic efficacy, which was TBEV dose-dependent. ChFVN145 protected mice infected with TBEV-Eu, TBEV-Sib, and TBEV-FE strains. Importantly, administration of chFVN145 did not enhance TBEV infection in the mouse model. The neutralizing epitope was localized in domain III of glycoprotein E; the sites responsible for binding with chFVN145 are distant from the FL, and their interaction with chFVN145 would not result in rearrangement of glycoprotein E and exposure of the FL, which would lead to enhanced TBEV infection. The obtained results indicate that chFVN145 would be of value in designing potential anti-TBE preparations; however, further in vivo efficacy studies using multiple administrations of the antibody are required.
ImpedanceVerif: On-Chip Impedance Sensing for System-Level Tampering Detection

Physical attacks can compromise the security of cryptographic devices. Depending on the attack's requirements, adversaries might need to (i) place probes in the proximity of the integrated circuit (IC) package, (ii) create physical connections between their probes/wires and the system's PCB, or (iii) physically tamper with the PCB's components or the chip's package, or substitute the entire PCB to prepare the device for the attack. While tamper-proof enclosures prevent and detect physical access to the system, their high manufacturing cost and incompatibility with legacy systems make them unattractive for many low-cost scenarios. In this paper, inspired by methods known from the field of power integrity analysis, we demonstrate how the impedance characterization of the system's power distribution network (PDN) using on-chip circuit-based network analyzers can detect various classes of tamper events. We explain how these embedded network analyzers, without any modifications to the system, can be deployed on FPGAs to extract the frequency response of the PDN. The analysis of these frequency responses reveals different classes of tamper events from board to chip level. To validate our claims, we run an embedded network analyzer on FPGAs of a family of commercial development kits and perform extensive measurements for various classes of PCB and IC package tampering required for conducting different side-channel or fault attacks. Using the Wasserstein distance as a statistical metric, we further show that we can confidently detect tamper events. Our results, interestingly, show that even environment-level tampering activities, such as the proximity of contactless EM probes to the IC package or a slightly polished IC package, can be detected using on-chip impedance sensing.

Figure 1: The concept of the ImpedanceVerif framework: (i) modeling the PDN impedance as an RLC circuit, (ii) characterizing the impedance of this RLC network using an embedded network analyzer on FPGAs, and (iii) performing statistical analysis for detecting tamper events using the Wasserstein metric.

Introduction

Strong cryptography is required to maintain the secrecy and integrity of processed data in embedded systems. However, even in the presence of such cryptographic primitives, attackers who obtain physical access to these devices can perform physical attacks to break the security of the entire system. Mounting physical attacks (e.g., Side-Channel Analysis (SCA) and Fault-Injection (FI)) usually requires adversaries to tamper with the system and prepare it for such attacks. Depending on the attack requirements, tampering at different abstraction levels of the system, from the printed circuit board (PCB) to integrated circuits (ICs), is desired by the attacker. For instance, in the case of power analysis attacks, an adversary might need to solder or replace a shunt resistor on the PCB's power rails or remove decoupling capacitors from the PCB to amplify leakage through power consumption [DCEM18, LBS19]. Another example includes polishing or removing the integrated circuit (IC) package to carry out semi- or fully-invasive
attacks [HMH+12, KGM+21, Sko17]. In extreme cases, the attacker might even desolder the chip from its original PCB and mount it on custom boards or sockets optimized for attacks. However, in most non-invasive attacks, the adversary might only create connections between her probes and the PCB or place electromagnetic probes in a contactless fashion close to the IC package without any physical modifications to the system. Several anti-tamper solutions are based on tamper-evident secure enclosures that cover the entire system and realize hardware security modules (HSMs). These envelopes detect changes in the physical characteristics of the environment, such as the enclosure's capacitance or the envelope's internal electromagnetic field.

While such secure enclosures shield the system against several classes of tampering and modifications, they are very costly and need a highly customized design, making them unusable for legacy systems. Therefore, we seek anti-tamper schemes which, on the one hand, cover different abstraction levels of the system from PCB to chip level and, on the other hand, require minimal changes to conventional electronic boards. Ideally, a security-critical chip holding secret keys and other assets should be able to physically sense its environment to detect any unauthorized changes in the system and react accordingly.

There have been a few attempts in the literature to include such self-contained sensors in security-critical ICs to detect physical anomalies, including PCB-level Trojans [FNH+18], counterfeit boards [ZRS21, WHT19, GXTF17], or the removal of components [BEG+21]. On digital ICs and field-programmable gate arrays (FPGAs), these sensors take the form of timing circuits such as on-chip [GXTF17, BEG+21] and PCB trace-based ring-oscillators [ZRS21]. If all goes well, any PCB anomalies will then affect the sensitive timing behavior of these sensors.

However, such passive sensing methods have led to low-precision measurements and noisy behavior. Therefore, advanced signal processing and machine learning methods are needed to obtain acceptable classification accuracy. On the other hand, active sensing methods, such as time-domain reflectometry (TDR), have shown high precision in detecting malicious implants on the I/O signal traces of a PCB. However, they need custom analog circuits inside the chips [FNH+18]. Moreover, all these solutions only demonstrate the detection of specific modifications to the system (e.g., removal/insertion of a component on an I/O signal trace). It remains open how these solutions apply to multiple classes of tampering. Therefore, we ask the following research question: Is it possible to have an on-chip circuit-based sensor that is capable of monitoring the physical integrity of its environment beyond its die, from its package to the PCB, in a unified manner?
Our Contributions: In this work, we indeed positively answer the above question and solve the puzzle of why existing on-chip sensors have limited applicability and poor performance when sensing tampering of the environment beyond the chip itself. Inspired by novel insights in power integrity analysis and its application to physical integrity, we introduce a self-contained tamper-evident sensor. The sensor characterizes the impedance of the power distribution network (PDN) and can verify the physical integrity of the system from board to chip level; see Figure 1. As any tampering attempt on the PCB will lead to changes in the equivalent impedance of the PDN, continuous physical scanning of the PDN will reveal whether the PCB's integrity has been violated [MGST22, ZSS+22]. In this regard, we will explain how the functionality of network analyzers, the traditional tools for performing such impedance characterizations, can be emulated on FPGAs without any extra components or modifications to the system [Ior18, ZAB+18]. We demonstrate that electrically stressing the PDN of the system at various frequencies and simultaneously measuring the voltage drop for impedance estimation is the key to detecting various tamper events. We further show that the impact of different classes of tampering on the magnitude of the PDN impedance is higher at certain frequency bands. After performing extensive experiments on a commercial FPGA board and deploying the Wasserstein distance as a metric, we discovered that such a wideband impedance characterization can surprisingly reveal very sophisticated tampering/modifications to the system, namely, (i) the addition/removal of PCB components, (ii) the connection of a probe/wires to the PCB, (iii) the presence of an EM probe close to the IC package, and (iv) modifications to the IC package.

Remark. In this work, we are taking only the first steps toward understanding the applicability of power integrity solutions to the problem of physical verification and tamper detection. Therefore, we do not claim that the proposed sensing mechanism provides a complete or error-free security solution. More research is required to explore the strengths and limitations of the proposed solution in real-world scenarios. FPGA-based network analyzers have been previously reported in the power integrity literature [Ior18, ZAB+18] for achieving system reliability during PCB design. Hence, their design is not the main contribution of this work. Our primary intention in this paper is to draw attention to the potential of this known but not well-researched sensing mechanism in hardware security.
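To illustrate the statistical step, the sketch below compares distributions of impedance-magnitude samples with the Wasserstein distance using SciPy. All traces here are synthetic placeholders, and the fixed offset modeling a tamper event (e.g., a removed capacitor) is an assumption for demonstration only.

```python
# Illustration of the Wasserstein-distance comparison between a golden
# impedance trace and fresh measurements (synthetic placeholder data).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
golden = 1.0 + 0.05 * rng.standard_normal(500)   # enrollment |Z| samples
fresh = 1.0 + 0.05 * rng.standard_normal(500)    # runtime, intact system
tampered = fresh + 0.2                           # runtime, tampered PDN

print(wasserstein_distance(golden, fresh))       # small: integrity intact
print(wasserstein_distance(golden, tampered))    # ~0.2: flag tamper event
```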
PDN Characterization

A PDN handles the system's power delivery from the external power regulator down to the transistors on the chip. Figure 2(a) demonstrates a PDN model for a typical PCB. The equivalent circuit of a PDN is an RLC network [ZHS+15, ZAB+18]. The PDN connects the VRM to every power-sinking component on the PCB, and every component has specific power and voltage requirements. The PDN covers off-chip components such as bulk capacitors, PCB routing, multilayered ceramic capacitors (MLCCs), spreading, and vias. The PDN also covers on-chip components such as the IC packaging, bonding wires, on-chip power grid, etc. The contribution of each component to the PDN's impedance is distinct at different frequencies; see Figure 2(b). While at lower frequencies the equivalent impedance of the PDN is dominated by the voltage regulator's characteristics, at higher frequencies the off-chip and on-chip components contribute most to the impedance [LFH10, ZAK+17, ZAB+18]. The main reason for this impedance behavior is the parasitic inductance present in every capacitor [FCAF12]. An ideal capacitor behaves as a short circuit at high frequencies. However, the parasitic inductance causes a real capacitor to resonate at a particular frequency depending on its capacitance and inductance values. Smaller capacitors have smaller parasitic inductance due to their smaller physical dimensions; therefore, they resonate at higher frequencies. The capacitor's impedance increases for frequencies above its resonance frequency, and thus, at very high frequencies, the capacitor behaves like an open circuit. Consequently, moving to higher frequencies, each set of capacitors, from the large to the small ones, becomes an open circuit, and their effect on the PDN impedance diminishes.
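This frequency behavior follows directly from the series RLC model of a real capacitor, Z(f) = ESR + j2πf·ESL + 1/(j2πf·C), with self-resonant frequency f_res = 1/(2π√(ESL·C)). The sketch below evaluates this model in Python; the component values are illustrative assumptions, not measurements from a specific board.

```python
# Series RLC model of real decoupling capacitors and the resulting
# board-level PDN impedance (illustrative component values).
import numpy as np

def z_cap(f, C, esl, esr):
    """Z(f) = ESR + j*2*pi*f*ESL + 1/(j*2*pi*f*C) for a real capacitor."""
    w = 2 * np.pi * f
    return esr + 1j * w * esl + 1 / (1j * w * C)

f = np.logspace(3, 9, 7)                            # 1 kHz ... 1 GHz
z_bulk = z_cap(f, C=100e-6, esl=5e-9, esr=20e-3)    # bulk capacitor
z_mlcc = z_cap(f, C=100e-9, esl=0.5e-9, esr=10e-3)  # small MLCC

# Self-resonant frequency f_res = 1/(2*pi*sqrt(ESL*C)); above it the
# branch turns inductive, so the capacitor looks like an open circuit.
for name, L, C in [("bulk", 5e-9, 100e-6), ("MLCC", 0.5e-9, 100e-9)]:
    print(name, "f_res =", 1 / (2 * np.pi * np.sqrt(L * C)), "Hz")

# The parallel combination approximates the PDN impedance at one node;
# the smaller capacitor dominates at higher frequencies, as in the text.
print(np.abs(1 / (1 / z_bulk + 1 / z_mlcc)))
```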
To characterize the PDN impedance over different frequency bands, Z (impedance) and S (scattering) parameters are used [Bog10, Pup20]. These parameters are usually employed in RF/microwave engineering and in power/signal integrity analysis of electronic systems to describe the electrical properties of linear RLC electrical networks. S- and Z-parameters are complex numbers (including the voltage amplitude and the phase of traveling waves) and are frequency-dependent. The number and organization of the parameters depend on the available electrical ports of the RLC circuit. S-parameters directly represent the signal's attenuation and reflection/transmission at each port of the network. On the other hand, Z-parameters can be used to derive the observed impedance at each port of the network. S-parameters and Z-parameters represent comparable data, but measuring one or the other may be more convenient depending on the measurement conditions. Network analyzers are the primary instruments for measuring these parameters. They contain both a source, used to stimulate the PDN of the system by injecting sine waves with different frequencies into the network, and receivers, used to measure the amount of signal reflection and transmission at each frequency. The resulting measurements reveal the system's frequency response. We distinguish scalar network analyzers (SNAs) and vector network analyzers (VNAs). While SNAs can only measure the magnitude of the signal, VNAs can measure both the amplitude and the phase of the traveling wave. VNAs have been utilized in the literature for detecting counterfeiting and tampering activities at the PCB level using both scattering [MGST22] and impedance parameters [ZSS+22]. There also exist similar commercial products that characterize the V/I signatures of ICs (e.g., SENTRY [ABIa]) and PCBs (e.g., AMS [ABIb]) over a frequency range.

Challenges with Current On-Chip Circuit-based Sensors

A few on-chip circuit-based sensors have been proposed in the literature to indirectly sense changes in the impedance of the PDN due to different anomalies. Virtually all these circuit-based sensors are based on various analog-to-digital converters (ADCs). Ring-oscillators (ROs) are a popular example of such ADCs due to their sensitivity, simplicity, and compatibility with digital circuits. The main hope has been that modifications in part of the system's board or package lead to changes in the behavior of these ROs, and consequently, the attacks would be detected.
Challenges with Current On-Chip Circuit-based Sensors

A few on-chip circuit-based sensors have been proposed in the literature to indirectly sense changes in the impedance of the PDN due to different anomalies. Virtually all these circuit-based sensors are based on various analog-to-digital converters (ADCs). Ring oscillators (ROs) are a popular example of such ADCs due to their sensitivity, simplicity, and compatibility with digital circuits. The main hope has been that modifications to part of the system's board or package lead to changes in the behavior of these ROs and, consequently, that the attacks would be detected. In more ambitious attempts, it has even been tried to detect the effect of non-invasive attacks on the system. A prime example is the integration of a custom LC oscillator into the front side of the IC package to detect the approach of an EM probe [HHM+14]. Another example is the measurement of the voltage drop caused by the inclusion of a shunt resistor into the PCB using an on-chip RO [LML12]. Taking a careful look at the outcomes of these sensors, it becomes evident that the changes in the behavior of RO sensors in some tampering scenarios are not distinguishable from changes resulting from noise (i.e., thermal noise, process variation, etc.). Moreover, they have been designed only for specific system modifications, and therefore do not cover a wide range of tampering scenarios. The main reason behind the weak performance of virtually all these solutions is their passive sensing mechanism. As described in the previous subsection, each component on an electronic board contributes to the overall PDN impedance in different frequency bands. To measure the PDN impedance at different frequencies, each discrete frequency needs to be probed by actively stimulating the system and measuring the response using the RO sensors. If the system is stimulated using always-on stressor circuits, or not stressed at all, the RO sensors can only measure the DC characteristics of the PDN, namely the resistance. Even if we consider the pulses generated by the switching transistors in an RO sensor as stressors, the switching frequency and its harmonics are constant, and thus the system is stimulated at only a few frequency points. Some PCB modifications might lead to resistance variations, and therefore such passive sensors can detect them. However, many tamper events (as will be shown in this paper) lead to changes in the capacitance and inductance at specific frequencies, which are not measurable using passive sensors.

Threat Model

This section reviews our threat model under the presumption of having an on-chip network analyzer capable of measuring the core and I/O PDN impedance profiles. For our threat model, we assume that the victim's electronic board is operated in an untrusted field and that the attacker has physical access to it. The goal is to detect the attacker's tampering attempt on the system before she can mount SCA or FI attacks. The attacker is interested in the secrets and assets stored on a security-critical IC (e.g., a root-of-trust, cryptographic chip, etc.) soldered on a PCB. We assume that this security-critical IC contains an embedded network analyzer circuit for impedance characterization of the PDN. If the security-critical IC is an FPGA, the network analyzer can be programmed as a soft IP into it along with other existing IP cores. Therefore, no additional modification is needed, and the golden impedance signature of the PCB remains intact. For non-FPGA systems, the system would have to be redesigned to include an FPGA-based or ASIC network analyzer. We assume that the PDN's impedance profiles of genuine samples have been collected in an enrollment phase in a trusted environment and stored on the same chip that performs the impedance characterization. Later, in the hostile environment, the impedance characterization can be performed before boot or during runtime to verify the system's integrity against possible tampering attempts. Upon detection of a discrepancy between the measured impedance profile and the golden impedance profile, an anti-tamper response (e.g., key zeroization) will be executed.
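The enrollment/verification flow just described can be summarized in the following minimal sketch. The measurement, metric, and response functions are hypothetical placeholders (the paper's IP exposes its own commands, which are not reproduced here); the stubs only make the sketch self-contained and runnable.

import numpy as np

rng = np.random.default_rng(0)

def measure_impedance_profile(n_points=152):
    # Placeholder for an embedded VNA sweep (Sect. 3.2); here we simply
    # simulate a noisy impedance trace so the sketch runs stand-alone.
    return 1.0 + 0.01 * rng.standard_normal(n_points)

def per_frequency_distance(golden, traces):
    # Placeholder metric; the paper uses the per-frequency Wasserstein
    # distance between the two sets of repeated measurements (Sect. 3.3.1).
    return np.abs(golden.mean(axis=0) - traces.mean(axis=0)) / golden.std(axis=0)

def enroll(n_reps=105):
    # Trusted environment: collect repeated golden impedance traces.
    return np.array([measure_impedance_profile() for _ in range(n_reps)])

def verify(golden, threshold=3.0, n_reps=105):
    # Untrusted field: re-measure, compare, and react on a mismatch.
    traces = np.array([measure_impedance_profile() for _ in range(n_reps)])
    tampered = per_frequency_distance(golden, traces).max() > threshold
    if tampered:
        print("discrepancy detected -> execute anti-tamper response (key zeroization)")
    return not tampered

golden = enroll()
print("system genuine:", verify(golden))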
We assume that the adversary can physically tamper with all components on the core and I/O PDNs of the board connected to the victim chip, including adding/removing/replacing components. Moreover, she can make physical connections to the core and I/O PDNs of the system to run measurements or communicate with the victim chip. She can also tamper with the IC package and pins and even place probes in the proximity of the package. However, we assume that she cannot tamper with the running impedance characterization IP during the design phase (e.g., using hardware Trojan insertion) or the operation phase (e.g., using remote fault injection attacks). Furthermore, the proposed sensing countermeasure only works on powered-on systems. Attacks on powered-off devices for netlist reverse-engineering purposes or non-volatile memory (NVM) content readouts using techniques such as scanning electron microscopy are out of the scope of this work.

Embedded Network Analyzers

The VNA functionality needs to be realized on one or multiple chips of an electronic board to enable self-contained monitoring of the physical integrity from the PCB to the chip level. In the fields of power integrity, RF, and microwave engineering, there have been a few attempts to miniaturize VNAs for embedded systems. One prominent example is the open-source NanoVNA kits [Edy], which utilize off-the-shelf analog and digital chips (e.g., audio codecs, microcontrollers, mixers, and oscillators) to realize small-size and portable VNAs. It is conceivable that small devices, such as the NanoVNA, can be integrated into large electronic boards to stimulate the PDN and collect scattering/impedance traces during runtime. A VNA can even be constructed from single-chip solutions such as the ADL5960 chip from Analog Devices, Inc. [Ana] or custom analog designed chips [YHN11]. While such analog technologies provide high-resolution PDN characterizations, they are incompatible with many legacy and low-cost systems. Therefore, there have been parallel efforts [Ior18, ZAB+18, GOKT18, OBDS+19] to emulate the functionality of VNAs using the available digital resources on FPGAs for power integrity purposes.

A VNA on an FPGA consists of an active and a passive module; see Figure 3. The active module stimulates the PDN of the system by drawing electrical current at different frequencies using power-waster circuits (e.g., an array of interconnected configurable logic blocks (CLBs) [Ior18, ZAB+18], ring oscillators (ROs) [GOKT18, PHT20], or dual RAM collisions [ATG+19]). A sinusoidal current modulator controls the activation frequency and the amount of current consumed by the power-waster circuits. The passive module, on the other hand, measures the voltage drops using on-die voltage sensors and other analog-to-digital converter (ADC) circuits, such as ROs or time-to-digital converters (TDCs), built from the available resources on FPGAs [MLS+20]. Knowing the amount of current consumption and the voltage drop reveals the approximate impedance value of the PDN seen by the logic circuits of the FPGA fabric at a specific frequency. Here we elaborate more on how such circuits approximate the impedance.
Currently, the two proposed FPGA-based VNA variants in the literature [Ior18, ZAB+18] use similar power-wasting circuits based on buffer/inverter chains, but they use different sensing circuits for measuring the voltage drop, namely TDC-based ADCs [ZAB+18] and RO-based ADCs [Ior18]. As we use an RO-based ADC in this paper, we focus on how the frequency changes of an RO, measured by on-chip binary counters, can be converted to impedance values. Activating the power-wasting circuit on the core voltage plane at frequency $f_i$ generates a sinusoidal current over time ($I = I_0 e^{j2\pi f_i t}$) through the PDN, which causes a sinusoidal voltage variation ($V = V_0 e^{j(2\pi f_i t + \phi)}$) on the PDN with a lagging phase. In this case, the impedance of the PDN at frequency $f_i$ in the polar representation is given by Ohm's law as

$$Z_{PDN}(f_i) = \frac{V}{I} = \frac{V_0}{I_0}\, e^{j\phi}.$$

Using the Cartesian representation, the impedance can be written as a complex number:

$$Z_{PDN} = R_{PDN} + jX_{PDN},$$

where the real part $R_{PDN}$ of the impedance is the resistance and the imaginary part $X_{PDN}$ is the reactance caused by the capacitance and inductance of the system. While $R_{PDN}$ is frequency-independent, $X_{PDN}$ is a function of frequency. The magnitude of the PDN impedance is

$$|Z_{PDN}| = \sqrt{R_{PDN}^2 + X_{PDN}^2}.$$

On FPGAs, $I_{ON}$ and $I_{OFF}$ are constants and can be estimated either during synthesis using the FPGA power estimators or using off-chip power monitoring modules. $V_{OFF}$ equals the supply voltage of the FPGA, $V_{SUPPLY}$. However, $V_{ON}$ is dynamic and is approximated using the frequency of the RO-based sensor during the measurement. The frequency of an RO is proportional to the voltage on the FPGA, i.e., $f_{RO}^{OFF} \approx k V_{OFF} = k V_{SUPPLY}$ and $f_{RO}^{ON} \approx k V_{ON}$, where $k$ is a constant. In this case, approximating the impedance magnitude by the difference in voltage and current between the activated and deactivated states of the power wasters, the impedance magnitude at a given frequency can be written as follows, cf. [Ior18]:

$$|Z_{PDN}(f_i)| \approx \frac{V_{OFF} - V_{ON}}{I_{ON} - I_{OFF}} \approx \frac{f_{RO}^{OFF} - f_{RO}^{ON}}{k\,(I_{ON} - I_{OFF})},$$

where $f_{RO}^{OFF}$ and $f_{RO}^{ON}$ are the RO frequencies when the power-waster circuits are deactivated and activated, respectively. To characterize the complete profile $|Z_{PDN}|$ over a frequency range, $f_{RO}^{ON}$ should be measured under different activation frequencies of the power-wasting circuits. Note that, in the ideal case, the activation signal for the power-wasting circuits should be a real sinusoidal wave, not a pulse wave. While a sinusoidal wave at a given frequency has a single harmonic, a pulse wave at the same frequency contains the sinusoidal frequency and several harmonics at higher frequencies. This phenomenon is called total harmonic distortion (THD). Sinusoidal activation waves can be approximated digitally, e.g., using lookup-table methods. Such techniques, unfortunately, cannot generate sinusoidal waves higher than a few tens of megahertz using the fastest clocks on modern FPGAs. Therefore, at higher frequencies (e.g., higher than 100 MHz), we inevitably have to use pulse waves to activate the power-wasting circuits. Naturally, this causes inaccuracies in the estimation of the impedance. However, for tampering detection purposes, we are only interested in detecting changes in impedance values, not their absolute physical values. Thus, as long as the measurements for a specific frequency are performed consistently using sinusoidal or pulse waves, we can still rely on the estimated $|Z_{PDN}|$ for tampering detection.
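A minimal numeric sketch of this RO-based estimation follows. The proportionality constant k and the activated/idle currents are illustrative assumptions; no calibration values are given here.

# Assumed calibration constants, chosen only for illustration.
k      = 2.0e8    # RO frequency/voltage slope in Hz per volt
I_on   = 0.250    # current drawn with power wasters active, in amperes
I_off  = 0.050    # idle current, in amperes

def z_magnitude(f_ro_off, f_ro_on):
    # |Z_PDN| ~ (f_RO_OFF - f_RO_ON) / (k * (I_ON - I_OFF))
    return (f_ro_off - f_ro_on) / (k * (I_on - I_off))

# Example: a 400 kHz RO slowdown under activation maps to ~10 mOhm.
print(f"{z_magnitude(200.0e6, 199.6e6) * 1e3:.1f} mOhm")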
Statistical Analysis on Noisy Measurements

Voltage and temperature variations can affect the behavior of the RO sensor on the chip, leading to noisy measurements. In this case, repeating the measurement at each frequency point provides us with a probability distribution of impedance values. Here, we provide the preliminary information required for the statistical analysis and comparison of these probability distributions. In our notation, $m$ is the number of frequency points at which the impedance of the PDN is measured using the embedded VNA. The number of measurement repetitions for frequency $f_i$ is denoted by $n$. We define $Z_i$ as a random variable corresponding to the impedance of the PDN at the frequency $f_i$. More precisely, we define $Z_i^G$ and $Z_i^T$ as random variables corresponding to the impedance of the PDN at the frequency $f_i$ in the genuine and tampered system, respectively. The realizations (i.e., the measured values) of $Z_i^G$ and $Z_i^T$ in the $j$-th measurement are denoted by $z_{ij}^G$ and $z_{ij}^T$, respectively. We denote the empirical cumulative distribution functions (ECDFs) of $Z_i^G$ and $Z_i^T$ by $G_i$ and $T_i$, respectively. Finally, we denote the probability density functions (PDFs) of $Z_i^G$ and $Z_i^T$ by $\gamma_i$ and $\tau_i$, respectively.

Wasserstein Metric

In order to quantify the difference between $Z_i^G$ and $Z_i^T$, we use the Wasserstein metric [ACB17]. The Wasserstein metric is a function that gives a distance between two probability distributions. The $p$-th ($p \geq 1$) Wasserstein distance between $\gamma_i$ and $\tau_i$ is given by

$$W_p(\gamma_i, \tau_i) = \left( \inf_{\pi} E\!\left[ d\!\left(Z_i^G, Z_i^T\right)^p \right] \right)^{1/p},$$

where $E(Z)$ is the expected value of a random variable $Z$, $d$ is the Euclidean distance between two points, and the infimum is taken over all joint distributions $\pi$ of the random variables $Z_i^G$ and $Z_i^T$ with PDFs $\gamma_i$ and $\tau_i$, respectively.

Empirical Distribution Function Tests

$Z_i^G$ follows a Gaussian distribution with mean $\mu_i^G$ and deviation $\sigma_i^G$ due to the existence of thermal noise, which has the characteristics of an additive white Gaussian process. However, $Z_i^T$ does not necessarily follow a Gaussian distribution. In some cases (as we will see in Sect. 5), the tampering can indirectly contribute to unknown disturbances in the impedance measurement, which makes $Z_i^T$ non-Gaussian. To compare $Z_i^T$ and $Z_i^G$, we can use empirical distribution function tests, which can also be applied to non-Gaussian distributions. We deploy two non-parametric statistical tests on samples $z_1, z_2, \ldots, z_n$ to detect tamper events, namely the Shapiro-Wilk (SW) and Kolmogorov-Smirnov (KS) tests. The SW test is a test for normality, testing whether measurement samples follow a normal distribution. The KS test is a supremum-based statistical test, based on the largest vertical difference between two ECDFs; it can be used to test whether two data samples come from the same distribution. We refer the reader to [RW11] for more information.
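Since the offline analyses in this work were carried out with MATLAB and Python's scipy.stats (see the experimental setup below), the three statistics above can be reproduced along the following lines. The simulated sample values are placeholders, and the exact invocation used for the paper's results is an assumption.

import numpy as np
from scipy.stats import shapiro, ks_2samp, wasserstein_distance

rng = np.random.default_rng(1)
# Simulated repeated measurements at one frequency point (n = 105 in this work);
# means and spreads are illustrative, not measured values.
z_genuine  = rng.normal(loc=50.00e-3, scale=1.0e-3, size=105)   # genuine board
z_tampered = rng.normal(loc=55.95e-3, scale=1.0e-3, size=105)   # tampered board

# SW test: do the tampered-board samples still look Gaussian?
_, p_sw = shapiro(z_tampered)

# KS test: do genuine and tampered samples come from the same distribution?
_, p_ks = ks_2samp(z_genuine, z_tampered)

# First Wasserstein distance between the two empirical distributions.
wd = wasserstein_distance(z_genuine, z_tampered)

print(f"SW p={p_sw:.3f}, KS p={p_ks:.3g}, WD={wd:.2e}")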
Experimental Setup

For our experiments, we used Digilent Arty S7 development boards [Diga], which contain AMD/Xilinx Spartan-7 FPGAs manufactured in a 28 nm technology. These boards have multiple power domains, namely a 1 V domain supplying the core ($V_{CCINT}$) and Block RAMs (BRAMs) ($V_{CCBRAM}$) of the FPGA, a 3.3 V domain supplying the FPGA I/O banks ($V_{CCO}$), and a 1.8 V domain as the auxiliary supply voltage ($V_{CCAUX}$). In this paper, we perform our measurements on the $V_{CCINT}$ and $V_{CCO}$ PDNs, as they are the main media for SCA/FI attacks. Figure 4 shows the front and back sides of the Arty S7 development boards used in our experiments. In Figure 4a, the area in red shows the jumper "JP3" used for bypassing the 10 mΩ shunt resistor, and in Figure 4b, the areas in red and blue show the 47 nF and 470 nF decoupling capacitors, respectively.

We have used the PIscanner IP [PIS, Ior18] for realizing a VNA on the FPGA. This IP generates current on the FPGA using configurable logic blocks (for the core voltage PDN) and I/O blocks (for the I/O voltage PDN) by sequencing multiple transient switching currents that superimpose to an overall constant current. The design can measure the impedance with a resolution of 1 mΩ over 0–588 MHz. However, we performed the impedance measurements within 100 Hz–588 MHz for this specific FPGA board, since there was not much useful information between 0–100 Hz for our experiments. Moreover, due to the large wavelengths between 0–100 Hz, the integration time increases significantly. Impedance characterization beyond 1 GHz requires analog VNAs. The IP generates sinusoidal activation waves using the lookup-table method for lower frequencies and pulse activation waves for higher frequencies using the Mixed-Mode Clock Manager (MMCM) of Xilinx FPGAs [Ior16, Ior20]. Moreover, it uses ROs and IOBUF ROs [BEG+21] for measuring the voltage drop on the core and I/O banks of the FPGA, respectively. The time needed to scan the entire frequency band is on the order of seconds. Such a resolution is sufficient to detect permanent tamper events (e.g., capacitor removal) and temporary tampering (e.g., connecting probes) occurring on the order of minutes/hours for a practical SCA/FI. More specifically, there is a trade-off between detection accuracy and scan time that can be controlled by tuning the number of measured frequency points. The IP occupies 963 FFs and 1459 LUTs. The cost of the IP is $2000; it is a one-time payment for all FPGAs of the same family. We communicated with the FPGA from our laptop using a UART communication link. After loading the VNA bitstream to the FPGA, we could send commands to the FPGA and receive measurement data from it using the same serial link. Finally, we carried out offline statistical analyses (see Sect. 3.3) on the impedance signatures collected from the experimental setup mentioned above using MATLAB [Mat22]. We calculated the Wasserstein distance in Sect. 5.3 using Python's scipy.stats library.
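For reference, a sweep grid matching the stated parameters (152 points over 100 Hz–588 MHz with logarithmic steps) can be generated as follows. Plain log spacing between the two endpoints is an assumption, as the exact grid is not specified.

import numpy as np

# 152 logarithmically spaced frequency points from 100 Hz to 588 MHz.
freqs = np.logspace(np.log10(100.0), np.log10(588e6), num=152)

print(freqs[0], freqs[-1])        # 100.0 ... 588000000.0
print(np.round(freqs[:4], 1))     # first few points of the sweep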
Results

This section presents our impedance characterization results for various classes of tamper events and physical conditions. There is a practically unlimited number of tamper events that can occur to a sample, and naturally, we cannot cover all of them. However, to show the sensor's capability in covering different threats, we select a set of tamper events that can represent virtually all possible modifications. We prioritize our experiments based on the physical distance of the tamper event to the chip's die, from the maximum to the minimum distance. Moreover, we make sure that we cover the entire frequency band by tampering with the resistive, capacitive, and inductive portions of the PDN impedance. Hence, we start by experimenting on a shunt resistor, which has the maximum distance from the FPGA chip, and we continue the experiments to observe the impact of tampering on the FPGA chip itself. Since some of our modifications to the FPGA boards are irreversible (e.g., polishing a package), we had to use different boards from the same family for each tampering experiment. The Digilent Arty S7 kits have a 10 mΩ shunt resistor [Diga, Digb] with the possibility of bypassing it using a jumper (see the JP3 jumper in Figure 4a). We used the boards in the bypassed mode for our reference measurements in all experiments, except for the experiment in Sect. 5.2.2. To have enough data for statistical analysis, the PDN impedance profiles of all FPGA boards were measured 105 times (for obtaining statistically significant results [DW09]) using the FPGA impedance characterization method described in Sect. 3.2 before any tampering. The measurements were carried out within the frequency band of 100 Hz–588 MHz with logarithmic steps. The total number of frequency points was 152. After changing the physical condition of the boards or tampering with them, we repeated the impedance characterization 105 times over the same frequency range. It should be noted that all experiments were conducted at room temperature. Finally, we calculated the Wasserstein distance (WD) between the genuine and tampered samples' impedance signatures over the entire frequency band (i.e., 100 Hz–588 MHz) to quantitatively distinguish between legitimate and tampered samples.

Tamper Events not Causing External Disturbances

Here, we consider tamper events that do not cause further external disturbances on the sensor; therefore, all the measurements on modified samples are mainly influenced by thermal noise.

Intra-genuine Signature Consistency

In the first step, we wanted to assess the consistency of PDN impedance signatures for the same board over time. This is of great importance, as it shows to what extent we can rely on a golden impedance signature. In this regard, we performed two sets of 105 measurements in two different trials for the same samples on different days and times. Figure 5a illustrates the mean of the collected impedance traces for the two trials of measurements on the same board, over the frequency band of 100 Hz–588 MHz. According to Figure 5a, the means of the impedance magnitudes are well matched to each other. Figure 11a shows the histograms for the intra-genuine measurement trials at f = 315.97 MHz, where the maximum distance between the means of the two trials of the genuine board's measurements occurs. From Figure 11a, it can be seen that the maximum deviation between the means of the measurements is 1.67 mΩ.
Adding a Shunt Resistor

First, we emulated the primary preparation step for power analysis attacks: adding a shunt resistor on the victim PCB's power rails to enable the measurement of current fluctuations. Even if such shunt resistors exist on a genuine PCB, adversaries might want to replace them with other resistors to externally amplify couplings of shares in protected cryptographic implementations [DCEM18, LBS19]. As we had bypassed the existing shunt resistor of the FPGA boards for all of our golden signature extractions, we just needed to remove the jumper to emulate the inclusion of a shunt resistor (see Figure 4a). The existing shunt resistor on the samples has a 10 mΩ resistance [Diga, Digb]. The means of the PDN impedance measurements for both genuine and tampered samples over 105 measurements are shown in Figure 5b. As can be observed, there is a considerable shift in the magnitude of the PDN impedance over the entire spectrum. This shift looks constant until the system's resonance frequency and starts to change at higher frequencies. For this case, the maximum deviation takes place at f = 587.80 MHz. From Figure 11b, it can be seen that the maximum deviation between the means of the measurements is 52.08 mΩ, which is 31.18 times the maximum deviation of the intra-genuine measurements (1.67 mΩ).

Interpretation. As described in Sect. 3.2, the impedance of the PDN can be written as $Z_{PDN} = R_{PDN} + jX_{PDN}$. The primary contributor to the shift in Figure 5b is the resistance portion of the impedance, $R_{PDN}$, which is independent of the frequency. However, resistors used on PCBs are not ideal; they contain parasitic capacitance and inductance and can show much higher impedance magnitudes at higher frequencies. Therefore, the $X_{PDN}$ part of the PDN impedance also plays a role in this shift.

Removing Decoupling Capacitors

We performed two independent sets of tampering on two different samples to observe the sensor's sensitivity to the removal of decoupling capacitors. First, we removed three 470 nF decoupling capacitors on one of the FPGA boards and measured the impedance profile over frequency. Second, we removed eight 47 nF decoupling capacitors on another FPGA board and measured the impedance profile to check their effect on the physical behavior of the system as well. The locations of these two sets of capacitors are shown in Figure 4b. As decoupling capacitors behave as low-pass filters, an adversary usually removes them in real-world scenarios to amplify leakage through power consumption. Figures 6 and 7 demonstrate the mean $|Z_{PDN}|$ profiles of 105 measurements for the genuine sample and the samples from which the 470 nF and 47 nF decoupling capacitors have been removed, respectively. According to these results, it is observable that each set of decoupling capacitors contributes to a particular portion of the impedance spectrum. In Figures 6 and 7, the right-side graphs show the zoomed-in view of the bandwidth where the most significant deviation from the genuine $|Z_{PDN}|$ signatures occurs. By removing the 47 nF and 470 nF decoupling capacitors, the maximum deviation happens at f = 39.90 MHz and f = 1.57 kHz, respectively. Based on Figures 11d and 11c, the maximum deviation between the means of the measurements is 5.95 mΩ and 3.83 mΩ for the 47 nF and 470 nF capacitor removals, respectively. The maximum deviation for the 47 nF and 470 nF removal cases is thus 3.5 and 2.29 times the maximum deviation of the intra-genuine measurements (1.67 mΩ).

Interpretation. While the 47 nF capacitors affect the frequency band of 3.69–60.36 MHz, the 470 nF capacitors directly impact the lower frequency band below 9.4 MHz. This is in line with the theoretical expectations presented in Sect. 2.1, where the effect of smaller capacitors on the impedance is significant at higher frequencies due to their smaller physical dimensions, causing a resonance at higher frequencies.
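As a rough sanity check of this interpretation, the self-resonance frequency of a capacitor with capacitance $C$ and parasitic inductance $L_p$ is $f_{res} = 1/(2\pi\sqrt{L_p C})$. Assuming an ESL on the order of 1 nH (an assumption; the boards' actual parasitics are not reported here), a 47 nF capacitor resonates around

$$f_{res} = \frac{1}{2\pi\sqrt{10^{-9} \cdot 47 \times 10^{-9}}} \approx 23\ \text{MHz},$$

which falls inside the observed 3.69–60.36 MHz band, while a 470 nF capacitor with the same assumed ESL resonates around 7.3 MHz, below the 9.4 MHz boundary.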
Proximity of an EM Probe

Our next experiment analyzes the influence of placing a high-precision EM probe in the vicinity of the FPGA surface. The interesting fact about this experiment is that there is no physical connection between the probe's tip and the system. The hope is that a coupling between the EM probe and the metal layers of the chip affects the PDN impedance at high frequencies [HHM+14]. We used an EM probe station from Riscure with an HP EM probe 125 SN126 0.2 mm to emulate this class of non-invasive attack [Ris]. We scanned the surface of the FPGA by placing the EM probe in the vicinity of the FPGA package surface at different locations to evaluate the effect of impedance coupling between the probe and the metal lines of the FPGA. The sensor and stressor circuits were implemented on the X0/Y0 bank of the Spartan-7 FPGA. While the physical location of the metal lines can differ from the location of the utilized logic blocks on the FPGA, we still found the strongest coupling effect when the probe was located on top of the X0/Y0 bank (Figure 8b). The mean of the 105 measured impedance signatures for this attack can be seen in Figure 9. The maximum deviation takes place at f = 284.92 MHz. From Figure 11e, the maximum deviation between the means of the measurements is 13.65 mΩ, which is 8.17 times the maximum deviation between the means of the intra-genuine measurements.

Interpretation. The observed effect can be explained using cavity perturbation theory [Poz11]. According to this theory, when a small sample (here, the small cross section of the EM probe) is exposed to the electric and magnetic fields of a structure (here, the FPGA), the sample perturbs the field distribution, which causes a change in the resonant frequency of the structure. This coupling perturbs the fields of the circuit and creates the added resonance frequency in the impedance profile seen in Figure 9.
IC Package Polishing

The goal of this experiment is to find out whether IC package polishing/removal has an impact on the PDN impedance. Package polishing is the main preparation step for carrying out semi- or fully-invasive attacks. To emulate this attack, we partially polished the surface of the FPGA package and measured the impedance profile magnitudes; see Figure 8a. During the polishing of the IC package on the IC frontside, a small area of the metal layers was exposed. The exposure of the die to ambient light photons can disturb the measurements. Therefore, we carried out these measurements in an isolated dark room. Figure 10 illustrates the mean of the 105 impedance profile signatures for the genuine sample and for the FPGA whose package was partially polished and which was placed in a dark room during the measurements. For this case, the maximum deviation of the mean graph occurs at f = 430.96 MHz. From Figure 11f, it can be seen that the maximum deviation between the means of the measurements is 18.49 mΩ, which is 11.07 times the maximum deviation of the intra-genuine measurements (1.67 mΩ).

Interpretation. The plastic package behaves as a dielectric material contributing to the on-chip PDN capacitance [Inc96]. Thus, polishing the package affects the system's PDN impedance. Its effect is observable at high frequencies due to the smaller dimensions of IC structures. Note that we did not de-solder the FPGA for polishing.

Tamper Events with External Disturbances

Here, we present two sets of experiments where the tampering causes extra environmental disturbances on the sensor. In these cases, thermal noise is not the only noise source for the measurements, and we observe an irregular, non-Gaussian impact on the impedance measurement distributions of the tampered samples.

IC Package Polishing with Exposure to Light

As mentioned in Sect. 5.1.5, the exposure of the die to ambient light photons can disturb the measurements. As such a scenario can still occur in a real-world attack, we decided to perform the impedance characterization under this physical condition as well. Therefore, we conducted the measurements in a room where the board was exposed to the room light. Figure 12a illustrates the mean of the 105 impedance profile signatures for this set of measurements. As seen in the results, there is an unusual behavior in the impedance signatures for this case. Careful investigation showed a significant fluctuation in the impedance signatures at frequencies above the resonance frequency, where the overall impedance is dictated by the IC package and die. For this case, the maximum deviation takes place at f = 388.61 MHz. From Figure 12b, the maximum deviation between the means of the measurements is 19.39 mΩ, which is 11.61 times the maximum deviation between the means of the intra-genuine measurements (1.67 mΩ). Figure 12b shows that after such tampering, the data distribution is divided into two bell curves. This was not the case for the previous results, where we investigated other classes of tamper events.

Interpretation. The unusual impedance signature in Figure 12a could be due to the interaction of photons with the metal layers and the active region of the chip, leading to local temperature variations, which affect both the sensor and the on-chip impedance.
Connecting an Oscilloscope Probe

In this experiment, we connected an oscilloscope probe to the jumper "JP3" pins to analyze the impact of connecting a measurement device without any other physical modifications to the system. Figure 12c shows the impedance profiles of the 105 measurements conducted using the proposed verification method for this case. Note that connecting an oscilloscope probe necessitates removing the jumper "JP3" from the PCB's PDN. Therefore, the golden sample for this experiment is the FPGA board without bypassing the shunt resistor. We did not have complete control over external disturbances during the measurements in this attack. An example of such disturbances is slight probe movements that can happen during the measurements. Moreover, the unshielded cable and probe connector can behave like an antenna, injecting EM interference into the power rails of the chip, which influences the sensor response. The maximum deviation between the means of the measurements takes place at f = 53.15 kHz. From Figure 12d, the maximum distance between the means of the measurements is 6.05 mΩ, which is 3.62 times the maximum distance for the intra-genuine measurements (1.67 mΩ).

Interpretation. Adding the oscilloscope probe caused a shift to lower impedance values, since the oscilloscope's output resistance is in parallel with the shunt resistor; as a result, the overall $Z_{PDN}$ is shifted to a lower value (the shift is around 4 mΩ). Moreover, the oscilloscope cable can create an inductance, which further affects the impedance at higher frequencies. Due to the slight local movements of the oscilloscope probe during the measurements, the shape of the distribution deviates from normal, as seen in the histogram representation in Figure 12d.

Statistical Analysis of Impedance Traces

In this section, we present the statistical analyses conducted for our experiments. According to the experimental results presented in subsections 5.1 and 5.2, a single measurement can successfully detect some tamper events (e.g., a shunt resistor addition or package polishing). However, for some other tamper events, the impedance measurements of genuine and tampered samples overlap. In these cases, integration is necessary to obtain the measurements' statistics and, thus, compare the statistical features of the data. Furthermore, based on the results presented in Figure 11, tamper events can change not only the mean of the measurements but also the variance or the entire shape of the PDF. Therefore, we deploy the Wasserstein distance (WD) metric (explained in Sect. 3.3.1) to quantify the dissimilarities between the collected impedance traces of genuine and tampered samples. We computed the WD for the intra-genuine measurements as well as for the tamper events performed throughout this work. As the disturbances in some tampering experiments are not systematic and repeatable, they cannot be directly compared with the other results. Hence, we excluded these particular experiments from the WD analysis.

The WD profile is given in Figure 13 within the frequency band of 100 Hz–588 MHz. As can be observed, the WD for the various tamper events and physical conditions is frequency-dependent. Note that the frequency axis of Figure 13 is scaled logarithmically; thus, the effects of tamper events on the WD in higher frequency bands appear as narrow peaks.
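A minimal sketch of this per-frequency WD analysis with a global threshold follows (the WD = 3 threshold is introduced below). The simulated traces and the normalization by the genuine spread are assumptions made to keep the example self-contained, not the paper's exact procedure.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
m, n = 152, 105                      # frequency points and repetitions, as above

# Simulated measurement matrices (rows: repetitions, columns: frequency points);
# the "tampered" board gets a small shift injected at a few high-frequency points.
genuine  = 0.05 + 0.001 * rng.standard_normal((n, m))
tampered = genuine + 0.001 * rng.standard_normal((n, m))
tampered[:, 140:145] += 0.013        # illustrative 13 mOhm local deviation

# Per-frequency Wasserstein distance between the two sets of repetitions,
# normalized by the genuine spread so one global threshold is meaningful.
wd = np.array([wasserstein_distance(genuine[:, i], tampered[:, i]) for i in range(m)])
wd_norm = wd / genuine.std(axis=0)

THRESHOLD = 3.0                      # global detection threshold from Sect. 5.3
print("tamper detected:", bool((wd_norm > THRESHOLD).any()),
      "at frequency index", int(wd_norm.argmax()))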
The magnitude of the WD metric shows that all the tamper events performed in this paper can be distinguished if we use the sensor at the right frequency. Moreover, the detection thresholds for different tamper events, and their corresponding frequency bands, can be selected differently. However, we can still define a global threshold at WD = 3 for the tampering experiments presented in this work without causing any false alarms. Naturally, this threshold should be adjusted for different systems and possible tamper events. Moreover, more research is probably required to consider a broader range of tamper events and their influence on the WD metric.

Coverage and Comparison of the Sensor

Spatial Coverage: A high spatial coverage of the sensor ensures that most physical locations of the system can be sensed. To increase the spatial coverage of the proposed on-chip impedance sensing method, one can realize multiple RO or TDC sensors and distribute them around the core and I/O PDNs of the chip. In this case, the sensed values of multiple sensors can be compared and analyzed. Distributing sensors on the FPGA has been shown to increase the spatial coverage of the embedded network analyzer for power integrity purposes [ZAB+18]. For instance, in our case, it can improve the detection of a nearby EM probe at various locations of the IC package. Note that on larger systems with multiple power domains, each domain requires such an embedded analyzer for impedance characterization. For instance, in the case of the FPGA board deployed in this work, the I/O PDN is separate from the core PDN of the FPGA. Therefore, any modification to the I/O PDN will be hidden from the core PDN sensor.

Realizing a network analyzer for other PDNs might be challenging, as the logic resources required to realize the sensor circuits might not be available. For I/O PDN sensing on FPGAs, IOBUF ROs [BEG+21] can be deployed as the sensor to monitor the voltage drop on the I/O banks. We performed an experiment using the IOBUF ROs to assess the feasibility of detecting a physical connection to the I/O ports. We connected an FTDI chip through three wires (TX, RX, and GND) to one of the I/O ports of the FPGA board (see Figure 4a) and measured the impedance profile of the I/O voltage PDN. The mean of the 105 measurements is depicted in Figure 14a. In this figure, the maximum deviation is detected at 169.85 MHz, where the shift in the impedance magnitude is 22.42 mΩ. There also exists a shift of 3-7 mΩ at lower and middle-range frequencies. More fluctuation in the impedance profile is observed at higher frequencies. However, these variations were consistent across all measurements, and the detected changes at different portions of the spectrum are reliable. Table 1 compares the proposed VNA sensing method in this work with other on-chip sensing methods in the literature in terms of FPGA compatibility and detection capability. The proposed method enables the detection of resistive, capacitive, and inductive tamper events on both the PCB/IC core PDNs and the I/O PDNs, which makes it superior to other related works. Unfortunately, the overhead of these solutions cannot be compared fairly due to the deployment of different IC platforms for sensing and the lack of reported resource utilization for some of these sensors.
Success Rate for Reversing Physical Tampering

In some attack scenarios, the adversary might physically tamper (i.e., add, remove, or replace components) with the powered-off device or during a period of time when the sensor is inactive. In such scenarios, a question arises about the feasibility of undoing the tampering effect on the impedance before the activation of the sensor. Here we should elaborate on a couple of points. First, note that the proposed sensor in this work senses a two-dimensional parameter, i.e., the magnitude of the impedance, which itself is a function of frequency. Each physical tamper event affects the PDN impedance magnitude over the entire spectrum. Therefore, the adversary would theoretically be required to equalize the impedance curve using electrical components, which is a hard, if not impossible, task for the following reasons. Components' parasitics cause most of the local maxima and minima of the impedance curve over frequency, and hence, even replacing components with identical samples results in different parasitics. Furthermore, even re-soldering the same removed component to the system, or de-soldering an added component, will not deliver the same signature, as the solder wire or flux characteristics will be different. However, such a reversal might still be feasible if the tampering impact on the entire spectrum is less than the detection threshold of the sensor.

Second, tamper events can cause changes to the shape of the impedance trace distributions, which are hard to hide or reverse. We performed empirical distribution function tests (see Sect. 3.3.2) on some of our tamper events to observe such changes. The first row of Table 2 (SW test) shows the percentage of frequency points where the impedance distributions deviate from the normal distribution. The second row (KS test) shows the percentage of frequency points where the impedance distribution of the modified sample differs from the impedance distribution of the genuine sample, relative to the total number of frequency points. As can be observed from the SW test, because of the external disturbances during or after tampering, the distributions deviate from normal (compare with Figures 12b and 12d). Also, from the KS test results, we can conclude that the distributions of the samples under attack differ from the genuine ones, depending on the type of modification. As a result, the attacker would need to correct all these discrepancies at all frequency points to be able to undo the tampering.

Third, in our threat model, the genuine impedance signature is stored on the chip, and thus the adversary does not have access to it for analysis and equalization. We assume that the signature can only be read out using semi- or fully-invasive techniques, which would already change the PDN characteristics. Even if the adversary recovers a genuine signature from another training sample, it will differ from the victim sample's signature due to process variations and existing parasitics. As a result, the attacker cannot observe the same impedance that was used during the enrollment phase.
Robustness to Environmental and Malicious Noise

Voltage and Temperature Variations: As mentioned in Sect. 3.3, the RO sensor's output (here, $f_{RO}^{ON}$) can suffer from temperature variations. Naturally, measurement repetition and averaging can minimize the effect of noise. In addition to integration, to compensate for thermal drift, we consider the relative (and not the absolute) frequency values of the RO in both the idle and active states of the measurement at each frequency point. Therefore, we always measure the current reference point (i.e., $f_{RO}^{OFF}$) in the idle state of the FPGA and then activate the stressor and measure $f_{RO}^{ON}$. In this case, the reference point is updated for each measurement, and the effect of the temperature on the sensor is minimized. Note that temperature can also have a direct systematic impact on the impedance of the PDN [BFGG16]. In this case, the impedance signatures for a few temperature points should be collected in the enrollment phase. In the verification phase in the field, based on the system's temperature (e.g., read from the die temperature sensor), the associated golden signature can be used for comparison with the measurements.

Voltage variations can also have an adverse impact on the RO sensor behavior. If other active ICs share the same PDN, their activity might cause severe voltage drops and thus distort the RO sensor inside the FPGA. In this case, the impedance can be measured when the other active components are idle (e.g., before booting). Another option is to take the maximum voltage drop of the other components into account as additive noise during the enrollment phase and later adjust the detection threshold accordingly. To be more specific, if the impedance characterization has to be performed while other components are switching, we should consider the worst-case scenario, i.e., the maximum possible voltage ripple of the different components and, consequently, its impact on the RO sensor frequency. An increased voltage ripple requires a higher detection threshold and hence can decrease the detection confidence of the system.
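A minimal sketch of this drift-compensated measurement loop follows. The control and counter functions are hypothetical stand-ins for the IP's actual commands, which are not documented here; the stubs only make the sketch self-contained.

import random

def read_ro_counter():
    # Stub: return a noisy RO frequency estimate from the on-chip counter.
    return 200e6 * (1 + 0.001 * random.gauss(0, 1))

def set_stressor(active, freq_hz=None):
    # Stub: (de)activate the power-waster circuits at a given frequency.
    pass

def measure_point(f_i):
    # Re-measure the idle reference right before each active measurement,
    # so slow thermal drift largely cancels out of the difference.
    set_stressor(False)
    f_off = read_ro_counter()          # fresh reference point at this instant
    set_stressor(True, freq_hz=f_i)
    f_on = read_ro_counter()
    set_stressor(False)
    return f_off - f_on                # proportional to |Z_PDN(f_i)| (Sect. 3.2)

print(measure_point(1e6))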
The changes in voltage and temperature can also be induced by the adversary to fool the sensor, with the intention of masking the effect of a tamper event. In this case, we elaborate on a couple of points. First, as mentioned in the previous subsection (Sect. 6.2), the adversary does not have access to the golden impedance signature of the device, and hence she does not know the exact amount of equalization required to reconstruct the golden impedance profile. Second, an adversary would have to bypass the voltage regulator and connect her own voltage supply or function generator to the board to change the voltage. As shown in Sect. 5, such connections can be detected by impedance sensing. Moreover, most voltage variations can also be detected using on-die voltage sensors. Finally, temperature has a global effect on the impedance curve, and its impact differs across frequency bands. Thus, it is challenging to mask the effect of a tamper event in a single frequency band using a global parameter such as temperature.

EM/RF Noise: There might be some confusion if one compares impedance sensing with side-channel sensing methods (e.g., [PKR20, PSK+22]) for verification purposes. One might think that, similar to the susceptibility of side-channel signals to EM/RF noise, impedance values might also suffer from the same adverse effect. However, in contrast to side-channel signals, the overall impedance of the system is constant and determined by the amount of material (i.e., the dielectric) and the geometry used in fabricating the PCBs and ICs. Naturally, while the impedance value might be affected under extreme mechanical stress or temperature/humidity variations, it is not impacted by electromagnetic interference. However, as mentioned earlier in this subsection, the behavior of the impedance sensing circuit (here, the on-chip RO) might be sensitive to such noise on the power line. Because of the voltage regulator on the board, such noise is usually filtered before reaching the IC. Furthermore, it is very unlikely that the radio waves present in a room are strong enough to cause any disturbance to the RO implemented on the chip. We conducted extra experiments to validate our assumption about the minimal impact of RF noise on our measurements. To make the comparison, we isolated the experimental setup using a Faraday cage and RF noise-suppressor chokes for the cables; see Figure 15a. As can be observed in Figure 15b, the maximum WD over the various frequencies for a genuine board between an unisolated and an isolated environment is less than 1.7, which is comparable to the measured WD between two genuine samples in an unisolated environment (see Figure 13). Moreover, WD = 1.7 is smaller than the threshold (i.e., WD = 3) required to detect tamper events. Figure 15b also shows the WD for the tampered PCBs (the 47 nF capacitor removal and the 10 mΩ shunt resistor addition experiments) between the unisolated and isolated environments. Similar to the WD obtained for the genuine sample between the unisolated and isolated environments, the maximum WD for the tamper events is very small and falls below the defined threshold of WD = 3. This confirms that there is no significant difference in the measurement outcomes, and hence, EM/RF noise is not a relevant factor for ImpedanceVerif.

ImpedanceVerif as PUF?
During our experiments, we observed that different genuine samples demonstrate slight PDN impedance differences compared to each other due to process variation. Therefore, there might be potential to deploy such impedance variations to construct seamless tamper-evident PUFs for the entire system. Moreover, it might be feasible to realize the concept of Virtual Proofs of Reality [RMHX+15] for the entire system. However, to meet the PUF requirements, the inter-distance of the impedance variations between genuine boards should be large enough to uniquely identify each board. Estimating the inter-distance requires extensive measurements on a large number of boards, which was beyond the scope of this work. If the inter-distance requirements are met, then it is also conceivable to apply the PUF key derivation techniques proposed in [IU19, GXKF22] to extract high-entropy keys from PDN impedance measurements.

Conclusion

In this work, we presented a self-contained physical verification framework, ImpedanceVerif, which is based on the impedance characterization of the system's power distribution network (PDN). We first explained that the various components of an electronic board make distinct contributions to the PDN's impedance signature in different frequency bands. Hence, any tampering activity on the board will lead to changes in the PDN's impedance. We further demonstrated that the functionality of embedded network analyzers can be realized on commercial FPGAs, without any extra components or modifications, to monitor the integrity of the PDN impedance. To experimentally validate our claim, we implemented an embedded network analyzer on commercial FPGAs and conducted extensive experiments for various classes of attack conditions on PCBs and IC packages required for conducting different physical attacks. We showed that by choosing the Wasserstein distance as a statistical metric, we were able to detect various classes of tamper events (e.g., the inclusion of a shunt resistor, the proximity of an EM probe to the IC, or a polished IC package) in distinct frequency bands with high confidence. Finally, we expect that embedded network analyzers on ASICs will achieve higher precision and higher bandwidth in the future using appropriate analog technologies (e.g., see [Ana]). Such future technologies will enable high-confidence detection of more sophisticated tamper events.

Figure 1: The concept of the ImpedanceVerif framework: (i) modeling the PDN impedance as an RLC circuit, (ii) characterizing the impedance of this RLC network using an embedded network analyzer on FPGAs, and (iii) performing statistical analysis for detecting tamper events using the Wasserstein metric.

Figure 2: (a) The equivalent RLC circuit of the system's PDN. (b) Contribution of different parts of the PDN to the magnitude of the PDN impedance over frequency. The magnitude of the PDN impedance can be approximated by considering only the difference in the values of voltage and current when the power wasters are activated ($V_{ON}$ and $I_{ON}$) or deactivated ($V_{OFF}$ and $I_{OFF}$), cf. [Ior18, ZS18].
Figure 3: Main building blocks of an embedded VNA on an FPGA.

Figure 4: Digilent FPGA development kit. (a) Front side of the board. The area in red shows the jumper used for bypassing the 10 mΩ shunt resistor. The highlighted area in green shows one of the I/O ports. The highlighted point in yellow on the FPGA shows the approximate location of the EM probe measurements with maximum coupling. (b) Backside of the board. The areas in red and blue show the 47 nF and 470 nF decoupling capacitors, respectively.

Figure 5: The mean of 105 impedance measurements over the frequency band of 100 Hz–588 MHz. (a) An untouched sample (intra-genuine measurements). (b) The board with the added 10 mΩ shunt resistor.

Figure 6: The mean of 105 impedance profile measurements after removing the 470 nF decoupling capacitors, over the frequency band of 100 Hz–588 MHz. The right-side figure shows a zoomed-in view of the bandwidth with the largest deviation from the mean graph.

Figure 7: The mean of 105 impedance profile measurements after removing the 47 nF capacitors from the board, over the frequency band of 100 Hz–588 MHz. The right-side figure shows a zoomed-in view of the bandwidth with the largest deviation from the mean graph.

Figure 8: Devices under attack. (a) Polished FPGA package. (b) Placing an EM probe on top of the FPGA package.

Figure 9: The mean of 105 impedance profile signatures when the EM probe is positioned over the left corner of the FPGA.

Figure 10: The mean of 105 impedance profile signatures when the surface of the FPGA package is polished.
Figure 11: Impedance histogram representations. All histograms are plotted at the frequency point where the distance between the means of the measurements is maximum. (a) Intra-genuine measurements at f = 315.97 MHz. (b) Adding the shunt resistor (f = 587.80 MHz). (c) 470 nF capacitors removed (f = 1.57 kHz). (d) 47 nF capacitors removed (f = 39.90 MHz). (e) EM probe at the location shown in yellow in Figure 4a (f = 284.92 MHz). (f) FPGA package polished (not exposed to light, f = 430.96 MHz).

Figure 12: (a) The mean of 105 impedance profile signatures for polishing the FPGA package and measuring the impedance profile in a room exposed to light. (b) Histograms at f = 388.61 MHz for polishing the FPGA package in a room exposed to light. (c) The mean of 105 impedance profile signatures for connecting the oscilloscope probe to the DUT. (d) Histograms at f = 53.15 kHz for connecting the oscilloscope probe to the DUT.

Figure 13: Wasserstein distance (WD) profile for the tamper events within the frequency band of 100 Hz–588 MHz.

Figure 14: Connecting the FTDI cable to one of the I/O ports. (a) The mean of 105 impedance profile signatures. (b) Histogram representation (f = 169.85 MHz).

Figure 15: (a) The Faraday cage setup used for isolated measurements. (b) Wasserstein distance between isolated (ISO) and unisolated (UNISO) measurements for the genuine board, the 47 nF capacitor removal, and the shunt resistor addition experiments.

Table 1: Comparison between embedded sensing methods.

Table 2: Dissimilarity ratio percentage based on statistical test results for different attacks.
Compliance and Knowledge of Healthcare Workers Regarding Hand Hygiene and Use of Disinfectants: A Study Based in Karachi

Background
Hand hygiene is the cardinal step in combating various healthcare-associated infections. These infections cause 37,000 deaths in Europe and 100,000 deaths in the United States annually; thus, prevention of their spread is of utmost importance today. A study conducted in a tertiary care center in Karachi found that 17% of medical professionals were aware of the World Health Organization (WHO) guidelines on hand hygiene, while only 4.9% followed these hand-washing techniques. The lack of hand hygiene practice and awareness has raised a need to reassess infection control in hospitals. There is currently undisputed proof that adherence to hand cleanliness diminishes the danger of transmission of various infections.

Methods
A questionnaire-based cross-sectional study was conducted at Dr. Ruth K.M. Pfau Civil Hospital, Karachi in January 2019. Data from 212 participants who met the inclusion criteria were analyzed. A three-part questionnaire was used for hospital staff who had been present at the hospital for at least six hours and had attended to patients during the last three continuous working days. Staff members who visited the hospital but did not attend to any patients, or those who had been present at the hospital for less than six hours, were excluded. Collected data were analyzed using Statistical Package for Social Science (SPSS) version 23.0 (IBM, Armonk, NY).

Results
A total of 212 individuals (74 doctors, 66 nurses, 52 technicians, and 20 ward assistants) agreed to participate in our study, of which 124 were females. Compliance with hand disinfectant use before and after every patient contact was found to be 12.3%. The use of disinfectant was higher among males than females (mean 7.88 times for males vs. 6.20 for females), and younger individuals were more compliant with hand hygiene practices; 62.73% of participants were aware of the WHO guidelines regarding hand hygiene and 65.56% were aware of hospital-acquired infections. However, nearly half of the participants (45.75%) had never attended a formal lecture on the subject, and more than half (62.26%) were unenlightened about the complications of hospital-acquired infections.

Conclusions
Hand hygiene is a basic requirement for every medic and paramedic in a hospital setting today. Keeping in mind the drastic consequences of the spread of hospital-associated infections, it is evident that hand hygiene should be stressed upon. The rising incidence of nosocomial infections and their complications can be prevented by raising awareness about hand hygiene practices. There is a need to further investigate the application of and adherence to the basic guidelines on hand hygiene. Our results indicate that this issue should be tackled through a multidimensional approach.
Introduction

Healthcare-associated infections are a major obstacle to achieving pinnacle healthcare. With a soaring number of 37,000 deaths from 4,544,100 infections in the European Union annually, and about 2,000,000 infections and 100,000 deaths annually in the United States, these infections pose a serious threat to millions of people worldwide [1]. An integral method of preventing the spread of nosocomial infections lies in our own hands. Hand hygiene is a simple and cost-effective method that plays a vital role in controlling the outbreak of infections. Lack of proper hand hygiene practices acts as a source of the spread of common healthcare-associated infections that may affect the urinary, respiratory, and gastrointestinal tracts, as well as surgical sites [2]. The significance of hand hygiene was brought to light again in 2002 through the revised guidelines published by the Centers for Disease Control and Prevention (CDC), which recommended the use of alcohol-based solutions for invisible hand decontamination and the use of soap and water for visible contamination [3]. In a study by Girou et al., alcohol-based hand rubs were found to be significantly more effective than washing hands with soap in reducing bacterial contamination [4]. Compliance with hand hygiene among healthcare workers, unfortunately, has been mediocre. According to the Society for Healthcare Epidemiology of America, only 31% of healthcare providers were well informed about proper hand hygiene practices. Apart from healthcare workers, medical students are also significantly involved in patient care. One might assume that medical students are aware of and compliant with these sanitation practices; yet, a study conducted during Objective Structured Clinical Examinations (OSCEs) in Saudi Arabia found that hand hygiene compliance among medical students was only 17% [5]. Factors leading to these unsatisfactory results include lack of knowledge and awareness, a high-stress work environment, misconceptions about hand hygiene, and poor practices by peers and mentors [6]. Pakistan is among the countries where infectious diseases are identified as a major threat and a leading cause of patient morbidity and mortality.
A study conducted among the doctors, nurses, and medical students of the Allied Hospitals of Rawalpindi Medical University revealed that, even though the medical students were well informed about hand hygiene, only 37% of healthcare professionals practiced hand washing, and the WHO technique of hand washing was followed by only 19% of that 37% [7]. Another study, by Anwar MM et al., found that among 211 physicians of a tertiary care hospital of Karachi, only 4.9% of the respondents practiced proper hand hygiene and only 17% were well informed about the WHO guidelines on hand hygiene [8]. Despite the high prevalence of infections in Pakistan, data regarding hand hygiene among healthcare workers are limited and mostly outdated. We therefore conducted a study with the primary aim of determining the frequency of utilization of alcohol disinfectant by hospital staff in a tertiary care hospital in Karachi, Pakistan. The secondary aim was to assess the knowledge of hospital staff regarding various aspects of hand hygiene.

Study duration and population
A questionnaire-based cross-sectional study was carried out over a period of one month, in January 2019, at Ruth K.M. Pfau Civil Hospital, a government-run tertiary care center based in Karachi. The population under study consisted of hospital staff members including doctors, nurses, technical staff, and ward assistants.

Inclusion and exclusion criteria
All hospital staff members who had been present in the hospital for a minimum of six hours and had attended to patients during the last three continuous working days were included. Staff members who had worked less than six hours on any day in the past three days were excluded, as were staff who were in the hospital but did not attend to patients. Based on these exclusion criteria, data from eight participants were excluded from the analysis.

Sample size and study design
We approached 304 staff members, of whom 220 agreed to take part in our study, giving a cooperation rate of 72.36%. The sample size was calculated (through OpenEpi.com) to be 207, with a 95% confidence interval (CI) and a 5% error margin (a worked sketch of this calculation is given below). Using convenience sampling, 212 participants were included in the study after giving informed consent. Any questions that the participants had regarding the study were addressed in detail. All participants were informed that their responses would remain confidential and that they had the right to withdraw from the study at any time. The questionnaire consisted of three parts: (1) demographic profile, (2) information regarding duty hours and hand disinfectant use during these hours, and (3) knowledge regarding the importance of hand disinfectant use in the prevention of various hospital-acquired infections.

Statistical analysis
All data were entered into Statistical Package for Social Science (SPSS) software version 23.0 (IBM, Armonk, NY) for analysis. Results are presented as means with standard deviations (for continuous variables) and as percentages and frequencies (for categorical variables). Chi-square and independent-sample t-tests were used to analyze relationships between variables, and p < 0.05 was considered statistically significant in all cases.

Results
A total of 220 participants agreed to take part in our study, of which 212 were included after applying the exclusion criteria. Among these, 88 (41.5%) were males and 124 (58.5%) were females.
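As a side note on the sample size quoted in the Methods above, the figure of 207 is consistent with the standard formula for estimating a proportion with a finite-population correction, which is what OpenEpi applies for population surveys. The sketch below is ours, not the authors': the staff population size N is not reported in the paper, so the value used here (N ≈ 450, which reproduces n ≈ 207) is purely an assumption, as is the worst-case prevalence p = 0.5.

```python
def sample_size(N, p=0.5, e=0.05, z=1.96):
    """Sample size for estimating a proportion with a finite-population
    correction. N is the population size (assumed here, since the paper
    does not report a staff census); p is the anticipated proportion
    (0.5 = worst case); e is the absolute error margin; z is the normal
    quantile for the confidence level (1.96 for a 95% CI)."""
    n0 = z**2 * p * (1 - p) / e**2        # infinite-population size, ~384
    return round(n0 * N / (n0 + N - 1))   # finite-population correction

print(sample_size(N=450))  # -> 207 for an assumed staff population of ~450
```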
More than half of the staff (n = 114; 53.8%) were from the internal medicine department, followed by 80 (37.7%) from the surgical and 18 (8.5%) from the dental department. The majority of the participants were doctors (34.90%), followed by nurses (31.13%), technical staff (24.53%), and female ward assistants (9.43%). Most of the staff worked 6-8 hours each day. The mean age of staff members was 30.82 ± 8.69 years. The demographic profile of the participants is summarized in Table 1.

Use of hand disinfectant
Males on average used disinfectant significantly more frequently than females (mean 7.88 for males vs. 6.20 for females; p < 0.001). Among doctors, males on average used hand disinfectants more frequently than females; however, the reverse was true in the remaining workgroups. A significant difference in hand disinfectant use was also seen between the two age groups (p = 0.008): the younger age group of 20-40 years used hand disinfectant more frequently than the older age group of 41-60 years (Table 2). The proportion of staff who consistently used hand disinfectant before and after attending to every patient was 12.3%. Over the course of the three days, disinfectant use was most frequent in the surgery department, followed by the medicine and then the dental department (7.19, 6.90, and 5.5 times respectively). Doctors and ward assistants of the surgery department used disinfectant more frequently than those of the medicine department, but nurses and technical staff of the medicine department reported a higher rate of disinfectant use than those in the surgical department (Figure 1).

FIGURE 1: Average hand disinfectant use (department- and workgroup-wise distribution). The values on top of the bars represent mean usage.

Lack of time and lack of disinfectant in close proximity were identified as the most common reasons for not using hand disinfectant (43% and 27.4% respectively). The responses of the 186 participants who denied using hand disinfectant before and after every patient contact are given in Table 3. Participants' knowledge regarding hand hygiene and hospital-acquired infections is summarized in Figure 2. It was observed that 134 (63.20%) participants were conscious of immunocompromised patients in their wards, and 55 (41.04%) said they made sure to take special care and precautions while handling them.

Discussion
Studies have shown that improved compliance with hand hygiene leads to a notable reduction in infection rates [9]. However, maintaining optimal hand hygiene poses a hurdle in most healthcare settings. Lack of adequate knowledge of guidelines, long working hours, depleted hand hygiene supplies, understaffing or overcrowding, skepticism regarding the value of hand hygiene, and a belief that glove use obviates the need for hand hygiene are some of the factors responsible for poor compliance [10]. The present study found that overall hand hygiene compliance was 12.3% among the study participants. This is higher than in studies conducted at Wachemo University Teaching Hospital, Ethiopia (9.2%) and at a teaching hospital in Ghana, Africa (12%) [11,12]. In contrast, studies conducted in Kuwait (33.4%), India (43.4%), and North West Nigeria (55.2%) have shown considerably higher compliance rates than ours [2,13,14]. This variation might be due to a higher prevalence of inadequate knowledge of nosocomial infections among healthcare providers, and to the inaccessibility of hand hygiene products and facilities in our setting. Despite a greater proportion of females in our study, we found that hand disinfectant use was approximately twice as frequent among males.
It must be noted, however, that the gender difference in hand hygiene measures in our study population was seen mainly among doctors; in the other workgroups, females were consistently found to have a higher rate of disinfectant use. This is in line with a study by White et al., which demonstrated that female healthcare providers wash their hands more than males [15]. This study also demonstrated that young individuals were more likely to follow hand hygiene protocols. The immense rise in hospital-acquired infections in recent years has put more emphasis on preventive measures. It has been observed that interactive educational programs combined with a free supply of resources significantly raise compliance [16]. Factors affecting compliance can be related to the healthcare staff, their clinical life, and the environment they are exposed to [17]. Lack of time and unavailability of disinfectant nearby were the dominant reasons documented by the individuals in this study. Due to overcrowding and long working hours in our setting, it is possible that staff do not have adequate time to use alcohol-based hand rubs after every patient contact. In addition, the government's role in ensuring the availability of resources has a primary influence on compliance. A low-middle-income country with inadequate resources faces great difficulty in meeting international standards, which could explain the reduced compliance. The department of surgery was found to be more compliant with hand hygiene in this study. This is likely due to the special emphasis placed on maintaining adequate hand hygiene in operating theatres. In the medicine department, technicians and nurses, rather than doctors, were found to be more in contact with patients; hence they are likely to use disinfectant more frequently, as seen in our study. This finding is supported by a study by Randle et al. [18]. The second part of this study investigated participants' knowledge of nosocomial infections and the WHO hand hygiene guidelines. In 2019 the WHO introduced its 'Clean care for all - it's in your hands' campaign for universal health coverage, which focuses on the urgent need for access to healthcare for all people worldwide [19]. In our study, approximately two-thirds of the population reported being aware of nosocomial infections, and more than half of the participants knew about the WHO hand hygiene guidelines. A study conducted in Faisalabad, Pakistan found that while the majority of the nursing staff had adequate knowledge, their practices were not in line with their knowledge scores [20]. Similarly, when the nursing staff of a tertiary care hospital in Rawalpindi were asked about nosocomial infections and their prevention, a significant gap between knowledge and practice was observed; education-based workshops and seminars were the major sources of awareness, while a few mentioned websites as their source [21]. Results from our study are in harmony with those from these previous studies. A vast majority of the participants were aware of the complications related to poor hand hygiene compliance, especially in immunocompromised patients. However, compliance with hand hygiene was still limited. Training can have a positive impact on the improvement of practices. In addition, the use of effective methods of disinfection, continuous timely training, and knowledge improvement can reduce the frequency of hospital-acquired infections [22].
In a tertiary healthcare setting where multiple patients are treated at the same time, people are highly susceptible to pathogens and can easily catch infections. Hence, a proper surveillance system with innovative educational training programs that can motivate healthcare workers should be introduced. There are some limitations to this study. First, our sample size was small and covered only one hospital in Karachi; a multi-centered study giving an overview of the practices of staff in different hospitals would provide better results. Second, the scope of this study was limited to inquiring about knowledge of and compliance with the use of hand disinfectants, and such interview-based studies can be susceptible to Hawthorne bias. A wider, more focused study should be conducted to examine the level of adherence to the WHO guidelines and their implementation in all hospitals of Pakistan.

Conclusions
Our study population had adequate knowledge about nosocomial infections and their relationship with hand hygiene compliance. However, a significant gap between the knowledge and the practices of participants was observed. Overall, male healthcare providers used disinfectant more frequently than female providers, especially in the surgical department. A lack of adherence to guidelines was noticed among nurses, ward assistants, and technicians. This study highlights the need for training on infection control and prevention in healthcare settings, emphasizing the maintenance of adequate hand hygiene through disinfectants under strict supervision.
Visible Light Spectroscopic Analysis of Methylene Blue in Water; What Comes after Dimer?

As noted in our previous work, most attempts to study the self-aggregation of methylene blue (MB) in water have been limited to the dimer. In the present work, we have analyzed the self-aggregation of MB in water beyond the dimeric form. For this purpose, the visible light absorption spectra of a large number of aqueous solutions of MB (1.1 × 10⁻⁶ to 3.4 × 10⁻³ M) and NaCl (0.0-0.15 M) at different temperatures (282-333 K) have been fed to a mathematical routine in order to determine the potential existence of a unique higher-order aggregate without any preconception about the aggregation order or about the need of counterions, such as chloride, for compensating the positive charge of the aggregates. Contrary to the common belief that the trimer is the dominant aggregate at high MB concentration, to our surprise we found that the tetramer acting alone, and without any counterion, is the higher-order aggregate that yields the best fitting to all the experimental absorbance spectra, with a very low average relative error of 0.04 ± 0.34%. Also contrary to previous assumptions, it has emerged quite evidently that this aggregate is present in the solution at MB concentrations below 3.4 × 10⁻⁵ M (11 ppm), though to a rather low extent. This has brought the need for the recalculation of the visible light absorption spectrum and the thermodynamic parameters for the dimer, which along with those for the tetramer are the main contributions of the present work.

INTRODUCTION
Methylene blue (MB) is a widely used dye in the textile industry in wool, cotton, and silk dyeing. 1 It is also used to dye specific tissues and fluids of the body before or during surgery and diagnostic examinations, 2 as an antiseptic and inner cicatrizer, 3 and as a staining agent for microscope analysis. 1 On the negative side, MB accounts for a significant part of the approximately 85,000 tons of dyes and pigments that are discharged into the rivers and lakes of the world each year, affecting the freshwater aspects of the water cycle. 4 The negative impacts of MB on human beings and animals comprise irritation of the mouth, throat, esophagus, and stomach, with symptoms of nausea, abdominal discomfort, vomiting, and diarrhea. 5 Thus, the removal of MB dye from wastewater is of great concern from both a human and an environmental point of view. For this reason, MB is commonly used as a model contaminant in adsorption or photocatalytic processes. 6,7 These processes may be greatly affected by the ability of the aqueous MB dye molecules to organize themselves into aggregates of different orders, depending on the total MB concentration and the temperature. Just as an example, an adsorbent that may prove efficient for adsorbing the MB monomer might, on the other hand, have a pore system unable to accommodate the aggregates. Furthermore, the evaluation of the MB concentration itself by means of spectroscopic techniques is greatly affected by the aggregation degree because all species in solution, whether or not aggregated, show fairly different optical spectra. 8
The self-aggregation of MB in water to form a dimeric species has been researched for many decades, resulting in several scientific works, 9-18 in which the evaluation of the visible light absorption spectra for the monomer and the dimer, as well as the thermodynamic parameters for the equilibrium between both species (in most cases the equilibrium constant at room temperature and, less often, the enthalpy and entropy of dimerization), has been claimed to be performed with greater or lesser success. However, in our previous work, 8 we proved that the molar attenuation coefficients and the thermodynamic parameters obtained in those works are subject to a considerable level of uncertainty. This is a consequence of considering that the spectroscopic behavior of the monomer is unaffected by the solution temperature. This universally employed assumption is, however, in contradiction with the long-known fact that the maximum attenuation coefficient of the monomer spectrum, obtained via extrapolation at a very low MB concentration (6.3 × 10⁻⁷ M), shows a slight decrease at increasing temperatures. 9 In our work, 8 we proved that the temperature-dependent absorption behavior of the monomer is provoked by the change in its electron charge distribution with the variation in the temperature-dependent dielectric constant of water. The monomer charge distribution stands between those of the virtual resonance forms, the absorption spectrum of the monomer being a composition of the theoretical spectra for the virtual mesomers, whose proportion is established by a temperature-dependent virtual equilibrium constant (resonance virtual equilibrium hypothesis). In practice, this means that the light absorption curve of MB in water, in the absence of aggregate forms beyond the dimer, is formed by the convolution of the dimer spectrum and the spectra for the two virtual monomeric mesomers, whose proportions are established by the thermodynamic parameters for the monomer/dimer equilibrium and for the resonance virtual equilibrium (Figure 1A). The application of this hypothesis at MB concentrations below 3.4 × 10⁻⁵ M (11 ppm) allowed the thermodynamic parameters for the equilibria displayed in Figure 1A and the spectra for the dimer and mesomers (Figure 1B) to be evaluated with remarkable accuracy (Figure 1C). 8 These spectra were obtained under the hypothesis that below 3.4 × 10⁻⁵ M the aggregates in solution beyond the dimer have a negligible presence, as has often been claimed. 19-22 As we will prove in the present work, this is not completely precise, and though the optical and thermodynamic parameters of the mesomers remain the same as in our previous work, 8 those of the dimer change somewhat. In the present work, we have analyzed the self-aggregation of MB in water beyond the dimeric form for highly concentrated solutions. At high MB concentrations, the existence of MB trimers and higher-order aggregates in water has long been assumed, 10,20,23-25 especially when they are adsorbed on solid surfaces. 12,26,27 According to different authors, 19-22 the threshold of trimer formation at room temperature lies in the range 10⁻⁵ to 5 × 10⁻⁵ mol L⁻¹. Since 1968, there have been a limited number of works attempting to determine the equilibrium constants for the formation of the higher-order aggregates, which are typically assumed to be trimers, alone or at most accompanied by tetramers. 11,19,21-23,28,29 The results of these works are summarized in Table 1.
The assumption that the higher-order aggregate at high MB concentrations is a trimer was first postulated by Braswell, 23 who became a major referent for all later authors. He assumed that the Debye-Hückel limiting law applies and that only the cation aggregates in a MB concentration range in which the monomer and dimer are absent (0.017-0.055 M), and concluded that the limiting form of aggregation is a trimer, which is in good agreement with the conclusions arrived at from his spectral studies. However, the limiting case of the Debye-Hückel law can hardly be applied at such high MB concentration levels, unless a negligible ionic radius of the aggregate is considered, a fact that can in no way be justified. In addition, his spectral studies were biased by the use of the mean activity coefficients instead of the individual activity coefficients in the equilibrium constants. 19

Figure 1. Summarized results of our previous work. 8 (A) The monomer charge distribution stands between those of the virtual resonance forms, the absorption spectrum of the monomer being a composition of the theoretical spectra for the virtual mesomers I and II, whose proportion is established by a temperature-dependent virtual equilibrium constant, K_m (resonance virtual equilibrium hypothesis). At low MB concentration (below ∼10 ppm) the only aggregate in solution is the dimer, whose concentration is determined by the equilibrium constant K_d. (B) Optical spectra for the mesomers and the dimer. (C) Experimental (symbols) vs calculated (lines) absorbance spectra at different MB concentrations and temperatures.

Few of the works included in Table 1 consider the need for chloride to be bonded to the trimer to compensate the excess positive charge. 19,21 The inclusion of counterions in the trimer composition has been an issue in scientific discussions for decades now. From the original works by McKay and Hillson, back in 1965, 17,30 the metachromatic effect of adding salts to the MB solutions became evident. However, the way the counterion acts in solution, whether by changing the aggregate structure or by affecting the activity coefficients, has never been fully clarified. The association of chloride with the aggregates is implicitly questioned by the analysis of the results provided by Rabinowitch and Epstein. 31 Ghosh and Mukerjee 28,32,33 also considered that the spectral changes provoked by the addition of salt were those expected from changes in activity coefficients, with a negligible counterion participation in association equilibria at low ionic strengths. At the other extreme, Braswell reported that, in the absence of NaCl, the aggregates formed are charged. 34 Zhao and Malinowski 19 and Hemmateenejad et al. 21 also assumed the association of chloride with the trimer in their calculations. In all cases, uncertainty hovers over the results. In the present work, we have measured the visible light absorption spectra of a large number of aqueous solutions of MB (1.1 × 10⁻⁶ to 3.4 × 10⁻³ M) and NaCl (0.0-0.15 M) at different temperatures (282-333 K). The spectra have been fed into a mathematical routine executed in a Microsoft Excel sheet in order to determine the potential existence of a unique higher-order aggregate without any preconception about the aggregation order or about the need of counterions such as chloride for compensating the positive charge of the aggregates.
The routine considers the nonideality of the activity coefficients as well as the ionic radii of the different charged species. The goodness of the results becomes patently clear through the direct comparison of experimental and calculated visible light absorption spectra, not only for all the solutions prepared in this work but also for a number of curves taken from the most cited literature works. The molar attenuation coefficients in the 500-700 nm range for all the species in solution, as well as their thermodynamic parameters of aggregation (entropy and enthalpy), have been determined. The results are, at the very least, surprising.

MATHEMATICAL ROUTINE
Following the resonance virtual equilibrium hypothesis 8 and assuming a unique higher-order aggregate, formed by the association of n monomers with c chloride ions [nMB⁺ + cCl⁻ ↔ (MB⁺)ₙ(Cl⁻)_c], coexisting in solution with the monomeric and dimeric forms of MB, the application of the Beer-Lambert equation at a given wavelength yields

A_λ = I_F · L · (ε_λ^mI C_mI + ε_λ^mII C_mII + ε_λ^d C_d + ε_λ^n&c C_n&c) = I_F · L · ε_λ · C_MB    (1)

where A_λ and A_λ^i are the absorbances at a given wavelength, λ, of the solution and of the i species, respectively, and i represents each of the different MB species (mI and mII: mesomers I and II; d: dimer; n&c: higher-order aggregate). The optical path length, L, is expressed in cm. ε_λ and ε_λ^i are the molar attenuation coefficients of the solution and of the i species, respectively (L mol⁻¹ cm⁻¹). C_MB and C_i are the total molar concentration of MB (expressed as monomeric units) and the molar concentration of species i, respectively (mol L⁻¹). As explained in the Experimental Section, I_F is the instrumental factor. First, the total MB and NaCl concentrations (mol L⁻¹) are evaluated from the MB and NaCl molalities (m_MB and m_NaCl, mol kg⁻¹) and the solution density (ρ_s, g cm⁻³), neglecting the influence of MB on the solution density [eq 2]. The ionic radii for chloride and sodium ions take values of 2 × 10⁻¹⁰ and 3 × 10⁻¹⁰ m, respectively. 35 For the MB ions, approximate radii can be calculated considering the dimensions of the MB molecule (17.0 × 7.6 × 3.3 Å). 22,38 For the sake of simplicity, we assume that an aggregate is a simple stack of MB molecules and that its ionic radius is that of the sphere with the same volume as the stack. This simplified picture yields the ionic radius of a MB aggregate formed by n monomers as

r_i = 4.67 × 10⁻¹⁰ · n^(1/3)    (5)

With this equation, the ionic radii for the monomer, dimer, trimer, and tetramer are estimated to be 4.7 × 10⁻¹⁰, 5.9 × 10⁻¹⁰, 6.7 × 10⁻¹⁰, and 7.4 × 10⁻¹⁰ m, respectively. Considering the mass and charge balances and neglecting the influence of hydrogen and hydroxyl ions, the ionic strength can be evaluated [eq 6], where K_m is the dimensionless equilibrium constant for the virtual equilibrium between mesomeric forms 8 [eq 7]. The values for ΔS_m and ΔH_m are indicated in Figure 1. Similar equations can be introduced for the cumulative formation constants of the dimer and the higher-order aggregate [eqs 8 and 9], where C_Cl is the chloride concentration (mol L⁻¹) and K_d (L mol⁻¹) and K_n&c (L^(n+c−1) mol^(1−n−c)) are the equilibrium constants for the formation of the dimer and the higher-order aggregate.
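Equation 3, the activity-coefficient expression, is not legible in our copy of the paper. Given the text's emphasis on nonideality and on species-specific ionic radii, a natural candidate is the extended Debye-Hückel law, and the sketch below uses it; however, both the functional form and the constants A and B are our assumption, not a statement of the authors' actual eq 3.

```python
import math

# Extended Debye-Hueckel law (assumed stand-in for the paper's eq 3):
#   log10(gamma_i) = -A * z_i^2 * sqrt(I) / (1 + B * a_i * sqrt(I))
A = 0.509      # (L/mol)^0.5 for water at 298 K
B = 0.328e10   # (L/mol)^0.5 per meter for water at 298 K

def gamma(z, a, I):
    """Activity coefficient of an ion of charge z and radius a (m)
    at ionic strength I (mol/L)."""
    if I <= 0:
        return 1.0
    sqrt_I = math.sqrt(I)
    return 10 ** (-A * z**2 * sqrt_I / (1 + B * a * sqrt_I))

# Ionic radii from eq 5: r = 4.67e-10 * n**(1/3); an n-mer carries charge +n.
for n, label in [(1, "monomer"), (2, "dimer"), (4, "tetramer")]:
    r = 4.67e-10 * n ** (1 / 3)
    print(f"{label}: gamma = {gamma(n, r, 0.15):.3f} at I = 0.15 M")
```

The strong depression of the aggregate coefficients at high ionic strength in this sketch is consistent with the paper's point that the coefficients enter the equilibrium constants raised to positive integer exponents, which magnifies their effect.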
From the mass and charge balances and the equilibrium constants, the concentrations of the mesomers and the dimer can be expressed in terms of C_mII and C_n&c [eqs 10-12], and an identity E(C_mII, C_n&c) = 0 can be arrived at [eq 13]. For given integer values of n and c, the Solver tool of Microsoft Excel was used to find the values of the molar attenuation coefficients at seven specific wavelengths for the mesomers, the dimer, and the higher-order aggregate, as well as their thermodynamic parameters of formation [entropy and enthalpy parameters in eqs 7-9], that minimize the average error (total error) [eq 14], where s is the counter for the 224 spectra taken at the different temperatures, MB concentrations, and NaCl concentrations, and w is the counter for the selected wavelengths (530, 560, 590, 610, 630, 660, and 680 nm) at which the fitting was performed. The reason for using a limited number of wavelengths is only a matter of the Solver capabilities, though the results will show the validity of this approach. A_λ^(s,w,exp) and A_λ^(s,w) are the experimental and calculated [eq 1] absorbances, respectively, for the s spectrum at the w wavelength. A_(s,exp)^max is the highest absorbance of the experimental s spectrum in the 500-700 nm wavelength range. By using this parameter in eq 14, all the spectra are equally weighted during the fitting procedure, regardless of the MB concentration or the cuvette used. An Excel function was designed to evaluate C_mII and C_n&c at each iteration step of the Solver tool. The values of these concentrations are used in eqs 10 and 12 to calculate C_mI and C_d, respectively, and ultimately A_λ^(s,w) [eq 1]. The function solves eq 13 by the Newton-Raphson method and comprises the following steps: [#1] the initial values of C_n&c and γ_i are set to 0.5 × C_MB/n and 1, respectively; [#2] C_mII is evaluated via eq 11; [#3] the ionic strength is evaluated via eq 6; [#4] the activity coefficients are calculated via eq 3; [#5] a new value of C_n&c is calculated as

C_n&c = C_n&c* − E(C_mII*, C_n&c*) / E′(C_mII*, C_n&c*)    (15)

where C_n&c* and C_mII* come from steps #1 and #2, respectively, and E′(C_mII*, C_n&c*) is the derivative of E(C_mII, C_n&c) with respect to C_n&c, evaluated at C_mII* and C_n&c*. This derivative is calculated numerically, by a finite difference with step h [eq 16]. To derive eq 16, it was assumed that ∂γ_i/∂C_n&c ≈ 0, which is essentially true at high ionic strength values. The evaluation of h must be performed as indicated above to avoid overflow errors. Convergence was considered sufficient when the relative difference between the values fed into and obtained from eq 15 was below 10⁻⁴%. The mass and charge balances were used to prove the viability of the function. With the thermodynamic values obtained by means of the Solver tool in combination with the Excel function described above, the rest of the attenuation coefficients needed to fill the spectrum in the whole wavelength range (500-700 nm) were obtained by repeating the routine at each wavelength value, in a process that was automated by a number of Excel macros.
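For readers who prefer to reproduce the routine outside Excel, the following is a minimal Python transcription of the iteration in steps #1-#5. Because eqs 6 and 10-13 are not legible in our copy, the residual function below uses a schematic mass balance with activity coefficients set to one (the paper updates them at every step, #3-#4), and the constants in the example call are made up. Only the Newton-Raphson structure, the initial guess of step #1, and the finite-difference derivative of eq 16 mirror the text.

```python
def solve_speciation(C_MB, n, K_d, K_n, tol=1e-9, max_iter=200):
    """Newton-Raphson iteration mirroring steps #1-#5 of the routine.
    C_MB: total MB as monomer units (mol/L); n: aggregation order;
    K_d, K_n: formation constants (activity coefficients taken as 1
    here for brevity -- a simplification of the paper's steps #3-#4)."""

    def residual(C_n):
        # Schematic stand-in for the paper's eq 13 (not the exact form):
        # free monomer from n M <-> M_n, dimer from 2 M <-> M_2,
        # and the monomer-unit mass balance against C_MB.
        C_m = (C_n / K_n) ** (1.0 / n)
        C_d = K_d * C_m**2
        return C_m + 2 * C_d + n * C_n - C_MB

    C_n = 0.5 * C_MB / n                      # step #1: initial guess
    for _ in range(max_iter):
        h = 1e-6 * C_n                        # finite-difference step (cf. eq 16)
        E = residual(C_n)
        dE = (residual(C_n + h) - residual(C_n)) / h
        C_new = C_n - E / dE                  # step #5: Newton update (cf. eq 15)
        if abs(C_new - C_n) <= tol * abs(C_n):
            return C_new
        C_n = max(C_new, 1e-30)               # keep the iterate physical
    return C_n

# Example with illustrative (made-up) constants: 1 mM MB, tetramer
print(solve_speciation(C_MB=1e-3, n=4, K_d=5e3, K_n=1e12))
```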
For a more complete understanding of the goodness of fit, the average relative error (%) was also evaluated [eq 20]. Finally, the molar fractions of the different species in solution were evaluated [eq 21].

RESULTS AND DISCUSSION
Three different scenarios have been analyzed: (A) the optical and thermodynamic parameters for the mesomers and the dimer are set to those evaluated in our former work 8 (Figure 1) and only the parameters of the higher-order aggregate are varied to minimize the total error (hereafter called the "n&c scenario"); (B) the optical and thermodynamic parameters for all MB species in solution are varied (hereafter the "m/d/n&c scenario"); and (C) the optical and thermodynamic parameters for the mesomers are set to those evaluated in our former work (Figure 1) and the parameters of both the dimer and the higher-order aggregate are varied (hereafter the "d/n&c scenario"). Figure 2 shows the errors obtained in the three scenarios for different values of n and c. In all cases, it was found that integer values of c over 1 (more than one chloride anion linked to the higher-order aggregate) produced significantly higher errors than those obtained for c = 0 or c = 1, so that the corresponding solutions were automatically dismissed. The n&c scenario yielded the errors displayed in Figure 2A. The best ⟨E_T⟩ value was obtained by considering that the higher-order aggregate is a tetramer without chloride, n = 4 and c = 0 (or 4&0), although the parameters obtained with this solution were dismissed for the following reasons: (i) the total error was significantly higher than those obtained in the other scenarios; (ii) the standard deviation of the relative error was too high to comply with a stringent standard of goodness of fit and reflected a trend to underestimate the values of absorbance (⟨E_r⟩ = 1.81 ± 2.75%); and (iii) the calculated fraction of tetramer (X_4&0) for the solution with 11 ppm of MB and no NaCl at 282 K was as high as 0.08, conflicting with our previous assumptions, 8 which included the absence of a higher-order aggregate at C_MB at or below 11 ppm. Therefore, it must be assumed that some changes to the previously reported visible light absorption spectra 8 must be introduced. In that work, the values of the activity coefficients were always taken as one. In the absence of a higher-order aggregate, this approximation is almost exact for C_NaCl = 0 M because the lowest value of the dimer activity coefficient evaluated with eq 3 is 0.97. Thus, the m/d/n&c scenario, in which the attenuation coefficients and the thermodynamic parameters for all the species in solution are simultaneously evaluated, should provide either a different set of parameters for the mesomers and the dimer, if the higher-order aggregate is present at low concentration (C_MB ≤ 11 ppm and C_NaCl = 0 M), or very similar spectra to those previously reported, 8 if the higher-order aggregate is absent in that concentration range. Any other combination should be considered the result of either a chaotic fit or noncompliance with the model premises (a unique higher-order aggregate). The fitting process under the m/d/n&c scenario yielded the errors displayed in Figure 2B. Interestingly, the lowest ⟨E_T⟩ error was obtained by considering that the higher-order aggregate is a hexamer with one chloride anion (6&1). This error was also the lowest of all the scenarios.
However, the relative error (upper plot in Figure 2B) still involved a significant standard deviation, though now with a certain overestimation of the absorbance values (⟨E_r⟩ = −0.55 ± 0.83%). Nevertheless, the reason that leads us, without a doubt, to dismiss this solution is the fact that, even though the calculated fraction of the hexamer (X_6&1) for the solution with 11 ppm of MB and no NaCl at 282 K was almost 0 (X_6&1 = 0.001), the new absorption spectra for the mesomers were very different from those previously reported (Figure 3). 8 In fact, the error in the low concentration zone (C_MB ≤ 11 ppm and C_NaCl = 0 M) for the m/d/n&c scenario was higher than that ultimately reached under the d/n&c scenario described below. Thus, either the unique higher-order aggregate premise is incorrect or the m/d/n&c scenario produces a chaotic fit. To check this second option, we have used a rational approach by which the optical spectra and the thermodynamic parameters for the mesomers, reported in our previous work, 8 were considered to be correct and the parameters of both the dimer and the higher-order aggregate were varied (the d/n&c scenario). This scenario is well in tune with the long-accepted principle that the extrapolation methods give good results for the monomer but are less reliable with respect to the dimer. 10,12,31 In fact, by applying the extrapolation principle, if a certain amount of the higher-order aggregate is present in the most concentrated MB solutions used in our previous work, 8 this would only have affected the evaluation of the dimer parameters. The errors produced by assuming the d/n&c scenario are displayed in Figure 2C. The lowest ⟨E_T⟩ value was obtained when the higher-order aggregate was set to be a tetramer without chloride (4&0). This error is somewhat higher than the lowest error obtained in the m/d/n&c scenario (4.2 × 10⁻⁴ vs 3.3 × 10⁻⁴), though the error in the low concentration zone (C_MB ≤ 11 ppm and C_NaCl = 0 M) reaches its lowest value (1.4 × 10⁻⁴). Furthermore, the relative error is by far the lowest of all scenarios (⟨E_r⟩ = 0.04 ± 0.34%). Naturally, the tetramer is also present in small amounts in the solutions at the upper end of the low concentration range (X_4&0 = 0.11 at C_MB = 11 ppm and C_NaCl = 0 M at 282 K), thus explaining the higher errors obtained in the n&c scenario. If the higher-order aggregate is considered to be a trimer (3&0) rather than a tetramer, as has always been believed, then the error values rise to ⟨E_T⟩ = 4.7 × 10⁻⁴ and ⟨E_r⟩ = 0.09 ± 0.45%, and the fraction of trimer at C_MB = 11 ppm and C_NaCl = 0 M at 282 K appears to be excessively high (X_3&0 = 0.24). The second most popular option, a trimer with a chloride anion, causes the errors to soar to ⟨E_T⟩ = 9.3 × 10⁻⁴ and ⟨E_r⟩ = 2.01 ± 1.43%. In conclusion, answering the question in the title, the tetramer is what comes after the dimer. Figure 4 shows the optical spectra of all (virtual and real) species in the MB solution. The main difference of the dimer spectrum, with respect to that evaluated in our previous work (Figure 1), 8 is the conspicuous growth of the shoulder at λ = 660 nm into a peak in its own right. The tetramer has a single maximum at 600 nm and is responsible for the blue shift at high values of the ionic strength. All the optical and thermodynamic parameters for the monomer and the aggregates are summarized in Table 2. The new thermodynamic parameters of the dimer do not differ substantially from those evaluated in our previous work. 8
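Although eqs 7-9 are not reproduced above, the text parameterizes each equilibrium constant through an entropy and an enthalpy term, which points to the standard van't Hoff form ln K = ΔS/R − ΔH/(RT). The snippet below assumes that form; the ΔH and ΔS values are placeholders carrying the signs the text describes (exothermic, negative entropy), not the fitted entries of Table 2.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def K_eq(dH, dS, T):
    """Equilibrium constant from a van't Hoff-type parameterization,
    ln K = dS/R - dH/(R*T). The form is our assumption for eqs 7-9."""
    return math.exp(dS / R - dH / (R * T))

# Placeholder values only (not the paper's Table 2 parameters):
# exothermic aggregation (dH < 0) with negative entropy change (dS < 0).
dH_d, dS_d = -30e3, -15.0  # J/mol and J/(mol K)
for T in (282, 298, 333):
    print(f"T = {T} K: K_d = {K_eq(dH_d, dS_d, T):.3g}")
# K decreases as T rises, matching the observed drop in aggregation
# with increasing temperature.
```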
With respect to the tetramer, the negative increment of enthalpy indicates, as in the case of the dimer, that aggregation is an exothermic process, whereas the negative entropy change is due to the association of similarly charged species, 39 which might be more reasonable than the positive values found by Klika et al. 22 Figures 5-7 show the experimental and calculated values of absorbance for all the solutions analyzed in this work. Beyond the low value of the relative error, these figures provide visual proof of the goodness of fit, which can be considered more than satisfactory. The molar fractions of the different species in solution were evaluated with eq 21 for different temperatures and NaCl concentrations in the 1 × 10⁻⁶ to 3.5 × 10⁻³ mol L⁻¹ range of MB concentration. The results are shown in Figure 8. The increase in the aggregation level with the decrease of the temperature, or with the increment of the NaCl concentration, is conspicuous. As observed in the figure, for C_MB = 3.5 × 10⁻³ M and C_NaCl = 0.15 M, most of the MB molecules in solution are associated as tetramers (X_4&0 = 0.93). As commented in the Introduction, Braswell 23 proceeded from the assumption that the Debye-Hückel limiting law is applicable in order to prove that the limiting form of aggregation is a trimer. As observed in Figure 9, the activity coefficients evaluated by eq 3 are comparable to those calculated with the Debye-Hückel limiting law only in the case of the monomer and, even then, only at low ionic strength values. It should be emphasized that these coefficients are raised to positive integer exponents in the expressions for the equilibrium constants [eqs 8 and 9], so that the differences observed in Figure 9 are in fact magnified in those expressions. The effect of adding salt on the activity coefficients is evident from the results shown in the figure. This settles the argument about the real effect of chloride on MB aggregation, proving correct the theory that favors the variation in the activity coefficients over structural changes in the aggregates as the consequence of adding salt, at least in the concentration ranges studied in this work. Finally, we have applied the optical (Figure 4) and thermodynamic (Table 2) parameters obtained in this work to a number of absorbance spectra reported in some of the oldest and most cited works dealing with the phenomenon of MB aggregation. 11,12,23,30 Their authors used optical cells made of different materials and with different path lengths, in one case achieved with the help of spacers. 11 The variability of cells results in unavoidable differences among spectra measured at the same concentration with different cuvettes. Added to this, there is a known wavelength sensitivity in the equipment used before 1975 that is assumed to be around ±3 nm. 40 These two issues were accounted for in the application of eq 1 to the experimental data by means of two parameters. The first is the instrumental factor, I_F, which eliminates differences between cells. The second parameter is the difference in sensitivity between the equipment used in this work and that employed in the works referred to above, expressed as a shift in the wavelength (Δλ). Both parameters have been optimized to minimize the error between the experimental attenuation coefficients and those calculated by eq 1.
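The two-parameter correction just described lends itself to an ordinary least-squares treatment. The sketch below shows one way to fit I_F and Δλ with SciPy against a literature spectrum; the objective, the optimizer, and the starting values are our choices (the authors worked in Excel), and the synthetic band in the example is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_instrument(lam, A_exp, model):
    """Fit the instrumental factor I_F and wavelength shift d_lam so that
    I_F * model(lam + d_lam) best matches a digitized literature spectrum
    A_exp. 'model' is any callable returning the absorbance predicted by
    eq 1 with the parameters of this work."""
    def sse(p):
        I_F, d_lam = p
        return np.sum((A_exp - I_F * model(lam + d_lam)) ** 2)
    return minimize(sse, x0=[1.0, 0.0], method="Nelder-Mead").x

# Toy check: a synthetic band shifted by 3 nm and scaled by 0.95
lam = np.linspace(500, 700, 201)
band = lambda x: np.exp(-(((x - 664.0) / 20.0) ** 2))
A_exp = 0.95 * band(lam + 3.0)
I_F, d_lam = fit_instrument(lam, A_exp, band)
print(round(I_F, 3), round(d_lam, 2))  # ~0.95 and ~3.0
```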
The instrumental factor was evaluated for each paper assuming that similar I_F values obtained in a first fitting for a given group of MB solutions implied that the corresponding spectra had been obtained with the same cell; thus, in a second fitting, a unique I_F value was optimized for the whole group of MB solutions. Naturally, a unique value of Δλ was optimized for each paper. The results are shown in Figures 10-13. The values of I_F and Δλ are indicated in the captions of the figures. As can be observed, the goodness of fit is rather satisfactory in all cases. As commented in the Experimental Section, at a high NaCl concentration (0.9 M) extensive precipitation of MB aggregates took place (Figure S2). This is consistent with the significant blueshift in the absorption spectrum that McKay and Hillson 30 found for solutions at C_NaCl = 0.9 M, although these authors did not report the precipitation of MB under such conditions. This blueshift could not be reproduced with the attenuation coefficients and thermodynamic parameters obtained in this work (Figure 14) with the same level of precision reached for lower NaCl concentrations (Figure 13). As the activity coefficients are independent of the ionic strength for values of I higher than around 0.1 M (Figure 9), it seems evident that at very high chloride concentrations the Cl⁻ anions must participate in the formation of over-aggregates of MB, possibly by linking together tetramers, which ultimately precipitate. However, this phenomenon does not occur in the MB/NaCl concentration and temperature ranges analyzed in this work.

CONCLUSIONS
The visible light absorption spectra of a large number of aqueous solutions of MB (1.1 × 10⁻⁶ to 3.4 × 10⁻³ M) and NaCl (0.0-0.15 M) at different temperatures (282-333 K) have been fed to a mathematical routine in order to determine the potential existence of a unique higher-order MB aggregate without any preconception about the aggregation order or about the need of counterions such as chloride for compensating the positive charge of the aggregates. The routine considers the nonideality of the solutions in the calculation of the activity coefficients. From the analysis of the different scenarios, it was found that the tetramer, acting alone and without any counterion, is the higher-order aggregate that yields the best fitting to all the experimental absorbance spectra, with a very low average relative error of 0.04 ± 0.34%. In the absence of NaCl, this aggregate is present in solution at MB concentrations below 3.4 × 10⁻⁵ M (11 ppm), though to a rather low extent. Because of this, the visible light absorption spectrum and the thermodynamic parameters for the dimer had to be recalculated with respect to those evaluated in our previous work. 8 The goodness of fit has been shown to be rather satisfactory by comparing experimental and calculated light absorption spectra obtained both in this work and from the literature.

EXPERIMENTAL SECTION
The absorption spectra (400-800 nm at a 1 nm step) of different MB (C.I. 52015; analytical grade) and NaCl (supplied by Sigma-Aldrich; analytical grade) solutions in deionized water were measured at temperatures in the 282-333 K range using a UV-vis spectrometer (Shimadzu UV-2401PC). The temperature of the optical cuvettes was kept constant using a LAUDA Alpha RA8 thermo-circulating bath. Every measurement was repeated three times, with thorough cleaning of the cuvettes (water, ethanol, and air drying) between measurements.
Four salt concentrations were employed: 0.00, 0.05, 0.10, and 0.15 mol L⁻¹. The solutions with MB concentrations in the 1.1 × 10⁻⁶ to 3.4 × 10⁻⁵ mol L⁻¹ (0.35-11 ppm) range were poured into UV quartz cuvettes of 700 μL volume and 1 cm path length. At higher MB concentrations [9.4 × 10⁻⁵ to 3.4 × 10⁻³ mol L⁻¹ (30-1100 ppm)], the visible light absorption spectra were obtained using a flow-through UV quartz cuvette of 6 μL volume and 0.01 cm path length. The MB and NaCl concentration ranges were selected to avoid deficiencies in the absorbance measurements due to the presence of dispersed particles and/or precipitates formed through the over-aggregation of MB molecules. As shown in Figure S2, solutions prepared at a high NaCl concentration (0.9 M) suffered from extensive precipitation. Before analysis, all solutions were allowed to stabilize under magnetic stirring in complete darkness overnight. A total of three optical cuvettes were used for all the analyses (two cuvettes of 1 cm and one cuvette of 0.01 cm). To take account of small variations in the quartz transmittance and path lengths with respect to the nominal values, an instrumental factor, I_F, was evaluated for each cuvette so that the molar attenuation factors evaluated with a given solution were independent of the cuvette used for the evaluation. For this to happen, the nominal path length, L, is multiplied by I_F.

SUPPORTING INFORMATION
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.0c03830: density of NaCl solutions and images of precipitated MB aggregates (PDF).
The internist in the surgical setting: results from the Italian FADOI-ER survey

More and more frequently, patients admitted to surgical wards present characteristics similar to those admitted to medical units. They are fragile patients, often elderly, with significant comorbidity. In recent years, to address these emerging clinical issues in a surgical setting, different organizational models involving specialists of different backgrounds were studied, in particular involving internists and geriatricians. To widen our current knowledge, in 2011 the Federation of Associations of Hospital Doctors on Internal Medicine of Emilia-Romagna, northern Italy (FADOI-ER), proposed a questionnaire to the public healthcare internal medicine departments of the Emilia-Romagna region to collect information on in what way and to what extent internists are involved in the management of surgical patients. In this article, we analyze the results of the questionnaire and make some organizational considerations and proposals. The questionnaire was very simple, consisting of 14 items. The survey was conducted from 1-28 February 2011. Replies were received from 20 internal medicine departments of a total of 75 in the Emilia-Romagna region. The FADOI-ER survey has some limitations, the first of which is that only just under 25% of internal medicine departments in the Emilia-Romagna region took part. However, the results are still interesting and seem to suggest that internists, because of their particular cultural background and training, could be the preferred partners for co-management within the context of inpatient surgical procedures. The results of the FADOI-ER questionnaire are also consistent with the data reported in the literature and with daily clinical experience, which highlight the need for a more multi-specialist approach to patient management involving internists. Further studies will help provide answers as to the best way to conduct this multidisciplinary approach, which could represent one of the future challenges for healthcare.

Correspondence: Fabio Gilioli, UO di Medicina Interna, Ospedale "B. Ramazzini", via Molinari 2, 41012 Carpi (MO), Italy. Tel. +39.59659309 +39.3496995740. E-mail: f.gilioli@ausl.mo.it

Introduction
More and more frequently, patients admitted to surgical wards present characteristics similar to those admitted to medical units. 1 As Mazzi reminded us, these are fragile patients, 2 often elderly with significant comorbidity. This is confirmed by the results of various studies. Jencks 3 reported that, in 2003 in the USA, among patients treated under the Medicare program who were sent home from surgery units and then readmitted within 30 days, 70.5% of cases were mainly characterized by medical pathologies such as heart failure, pneumonia, gastrointestinal pathologies and sepsis. 4-6 The main causes of death are heart failure and infective pulmonary pathologies. 7,8
It has also been observed that only 10% of orthopedic patients admitted are completely autonomous and without any comorbidity, while another 10% is made up of patients with motor disability ranging from serious to bed-ridden. 9-11 In recent years, in order to deal with these emerging clinical issues in a surgical setting, various organizational models have been studied that involve specialists from different fields, in particular internists and geriatricians. In order to explore the problems involved, the Federation of Associations of Hospital Doctors on Internal Medicine of Emilia-Romagna, northern Italy (FADOI-ER) proposed a questionnaire to the public healthcare internal medicine departments in the Emilia-Romagna region.

Materials and Methods
In 2011, the FADOI-ER proposed a very simple questionnaire consisting of 14 items (Table 1). The survey was carried out 1-28 February 2011 and aimed to collect information on the consultancy role of internists in a surgical setting. The questions were formulated to evaluate: i) the integrated healthcare models adopted; ii) the use of human resources in relation to the type of healthcare model proposed; iii) any possible involvement of other internists in the integrated healthcare model.
Results
Responses to the survey were received from 20 internal medicine departments of a total of 75 in Emilia-Romagna (Table 2). Of these, 75% had 30-60 beds (Table 1), and 70% belonged to hospitals with fewer than 300 beds (Table 1). In all hospitals whose internal medicine departments took part in the survey, the main surgical wards were: general surgery (100%), orthopedics (85%), gynecology (65%), urology (60%), otolaryngology (60%), and ophthalmology (60%). From the answers to Question 6, "How often are internal medicine specialists consulted about surgery?" (Figure 1), it can be seen that in 60% of cases an internist is available in the surgical setting on a weekly basis, and 2-3 times per week in 35% of cases. Also in relation to how internists are distributed around the hospital, the main requests for consultancy were made by the general surgery (45%), orthopedics (40%) and urology (10%) departments (Figure 2). Fifty percent of the internal medicine departments that responded to the survey have created, or are in the process of creating, programmed internal medicine services within their surgical units (Table 1). Among the internal medicine departments that incorporate daily structured consultancy (Figure 3), in most cases (60%) it is estimated that medical personnel are involved for 1-2 h a day. In cases in which consultancy services are not included in any codified program, medical personnel are involved for less than 60 min a day (60%) (Figure 4). Other medical specialties reported to provide consultancy services in the surgical units were cardiology (100%), nephrology (50%) and pneumology (20%). The geriatric department, available in 45% of the hospitals taking part in the questionnaire (Table 1), seems to be less involved in consultancy services within surgical units (10%). Interestingly, in 25% of cases, and particularly in the smaller hospitals, the internist was described as the only consultant available.

Data and trends
The FADOI-ER survey has some limitations, the first being that only just under 25% of internal medicine departments in the Emilia-Romagna region took part. Furthermore, it is probable that the hospital services that answered the questionnaire were those that already had internists involved in the surgical setting, and this could be considered a quite significant selection bias. In spite of the limited representativeness of the sample studied by the questionnaire, our results are similar to those reported in the literature. Since 2001, in the USA, requests for an approach involving surgeons and medical specialists have been constantly on the increase, with 35-40% of hospitalized patients managed in this way. 12 One of the most studied management models is co-management between the surgeon and the medicine physician, interpreted to mean an internist, geriatrician or internal medicine specialist. Such a specialist would describe his role as the daily management of chronic medical comorbidities and possible acute complications of the surgical patient. 12,13 Some years ago in the USA, the role of the hospitalist was created. These were physicians mainly specialized in internal medicine who, according to the original definition, should carry out at least 25% of their work in assisting hospitalized patients on internal medicine issues. 14
This role has progressively widened its scope to include surgical patients, which has had an extremely favorable impact on clinical practice, reducing the average length of hospital stay. 15-17 Furthermore, in the orthopedic setting, more complex care models have been studied in the light of this type of experience. These models are based on setting up a group made up of different professional figures (orthopedic surgeons, geriatricians, nurses, physiatrists) capable of creating a true orthogeriatric structure. 18,19 The results of our questionnaire obviously provide a best-case scenario of the most advanced strategies and most motivated staff in this context. Similar to our findings, the co-management of patients undergoing surgery is mostly centered on the hospitalist, internist and geriatrician, a model that has become more familiar over the last 15 years. 12,20 The move towards this kind of approach is probably also due to the opportunity it offers to simplify organizational issues.

Experiences and care settings proposed in the literature
An analysis of all the studies carried out so far shows there have been few randomized trials on this issue and that studies were for the most part conducted in an orthopedic setting. This makes it difficult to draw definitive conclusions concerning the efficacy of the different management models 21 (see also the interesting article recently published by Colombo in this Journal). 22 It is, in any case, useful to highlight the results obtained in the most important clinical studies.

Orthopedic surgery
In 2001, Marcantonio et al., 23 in a randomized study of patients with hip fracture, showed that, compared with traditional care, co-management of the geriatric patient significantly reduced the number and seriousness of episodes of delirium. In a more recent prospective observational study in Australia, 24 Fisher et al. compared 447 patients with hip fracture cared for under an orthogeriatric co-management program to 504 patients followed during the three years before the program was set up. Post-operative medical complications and re-admission rates at six months were significantly reduced in the orthogeriatric co-management group. In a retrospective study published in 2009, Friedman et al. 25 compared 163 patients over 60 years of age with fracture of the femur cared for under an orthogeriatric co-management program with 121 patients under standard care. The co-management group developed fewer post-operative infections and complex complications (delirium, heart problems, thromboembolism), along with a shorter hospital stay. There were no differences in mortality, either in hospital or at 30 days, or in hospital readmission rates. In 2004, Huddleston et al. 26
carried out a randomized controlled trial on 526 patients undergoing elective surgery for complete hip or knee replacement. They compared co-management by a hospitalist and an orthopedic surgeon with standard care based on consultancy intervention on request. Patients followed by a hospitalist had a higher probability of leaving hospital without post-operative complications. There was no difference in mortality rates or in the total cost of treatment between the two care models. In a second study, in 2005, Phy et al. 27 analyzed 466 patients over 65 years of age admitted for hip fracture. The patients in the co-management group underwent surgery sooner and had a shorter average hospital stay. There were no differences in hospital mortality or in readmission at 30 days.

Cardiac surgery and neurosurgery
In 1990, Macpherson et al. 28 evaluated internist co-management of 165 patients undergoing cardiothoracic surgery. The authors showed that, compared with the previous year, the setting up of the internist co-management program was associated with a reduction of six days in the length of hospital stay, fewer laboratory tests and radiological examinations, and a trend towards lower mortality. In 2010, Auerbach et al. 29 carried out an observational study on the effects of co-management of neurosurgical patients with a hospitalist, examining the level of professional satisfaction, length of hospital stay, readmission rates, mortality rates and cost. They concluded that health professionals expressed greater satisfaction with the care provided and that costs were reduced (approx. $1500 per patient), but there was no improvement in the other outcomes.

General considerations and future prospects
It seems clear that, in most cases, all these studies concern and involve co-management and an important contribution from physicians who, whatever the terminology used, can best be described as internists. In fact, it is problematic for the specialist, who is usually only involved in well-defined clinical situations, to take responsibility for an overall evaluation of surgical patients who, as we said earlier, present complex and multiple comorbidities. Epidemiological data reported by Gulsham et al. 12 seem to support these observations. In this century in the USA, generalist physicians, mostly internists, geriatricians or general medicine doctors, have become increasingly important in hospital co-management in the surgical setting, while specialists have a progressively smaller role. Scientific evidence and the results of the FADOI-ER survey seem to suggest that internists, by virtue of their particular cultural background, their wider general training, and their presence in even the smallest hospitals, have a privileged role to play in co-management programs on surgical wards. Such a role can be designed and adapted according to the characteristics and requirements of each hospital. It would, therefore, be useful to validate this new organizational approach, also in Italy and in Europe as a whole, in controlled clinical trials. Other data to emerge, in particular in Italy, show that structural and organizational changes of this type need to look at hospital staffing to ensure that more internists are available on the wards. 12,30-33
12,30 Another point to emerge from the literature is how any changes in a surgical setting need the full involvement of all healthcare professionals. 31-33 Given this, from our questionnaire it emerges that 60% of the internal medicine departments that responded to the survey confirmed that internists were involved in consultancy services for 1-2 h per day. It could, therefore, be hypothesized that in the future more time could be spent in providing these services. Such a commitment cannot be sustained unless resources are redistributed according to new organizational strategies that are not based on specialized expertise but focused on the patient and his or her needs. This could also eventually be applied to an organization structured to provide greater intensity of care. Another less costly method from an organizational point of view could be to identify surgical patients who require daily clinical evaluation using a score system based, for example, on risk factors that have already been partially recognized. 4,34 This could limit the bulk of the clinical workload to certain patient subgroups.

Models for the future

We can identify organizational models that could be applied in hospitals in the future. Although these models, however flexible they may be, will obviously be related to the different characteristics and requirements of each hospital, they should lead to a constant improvement in the synergy between professionals and an increased presence of internists in surgical units. A first model, called structured consultancy, could be applied to any hospital. This is very similar to some of the models that have also emerged from our survey. Structured consultancy involves establishing a timetable in which the internist is available for partial or complete medical examinations, either alone or together with a specialized surgeon. In this context, the internist, either independently or together with the surgeon, will be responsible for the management of issues that may not necessarily be related to the surgical intervention itself or its local consequences. On the other hand, a second model that could also be applied in hospitals of any size could be to put the management of patients admitted to surgical or polyspecialist units completely in the hands of an internal medicine department. This would leave the surgeon to deal only with consultancy services for the surgical intervention itself and wound management. Obviously, this type of model requires a huge step forward in the development of clinical governance. It would be a highly suitable approach in the context of hospital organization aimed at improving intensity of care. This is currently considered a particularly efficient and valid approach to overall care of elderly, complex patients with polypathologies. 35

Conclusions

Over recent years, the number of fragile patients has increased and this is changing the scene of the clinical and general care of these patients in surgical settings. The complexity of this epidemiological change will have a significant impact and, even though these changes are still as yet undefined, they can be expected to also affect surgical outcome. Results from the FADOI-ER questionnaire agree with data from the literature and from daily clinical experience. They underline the need for greater collaboration between specialist surgeons and internists in patient care.
We have proposed two models that are in line with this type of organization. One represents structured consultancy, a model that could be applied in any hospital, and one that could be integrated into hospital reorganization strategies that aim to increase intensity of care. This second model foresees management of patients in a surgical unit by internists. In this case, the specialist surgeon would provide only consultancy services relating to the surgical intervention itself and its local consequences. Further studies will be needed to identify which of these multidisciplinary healthcare models could best meet the challenge for the near future.
Table 2. List of public healthcare departments of Internal Medicine that completed the FADOI-ER questionnaire (entries include: UU.OO. Medicina Interna II Ospedale, Fidenza; Medicina Interna Ospedale, Bagno di Romagna (Forlì Cesena); Medicina Interna III Ospedale, Reggio Emilia).

Figure 4. Answers (%) to Question 12. If your unit does NOT have a consultancy program (listed in Question 9), how much time is staff to dedicate to such a service?
Figure 1. Answers (%) to Question 6. How often are internal medicine specialists consulted in a surgical setting?
Table 1. Results of the FADOI questionnaire carried out in Emilia Romagna, northern Italy, on internal medicine interventions in a surgical setting.
1) How many beds does your hospital have?
2) How many beds does the Internal Medicine Department have?
3) Does your hospital have a geriatric department?
5) Are internal medicine specialists consulted in a surgical setting?
6) If yes, how often? a) 2-3 times a week 35%; b) At least once a day 60%; c) More than once a day 5%
7) Which departments request internal medicine consultancy the most (list at least 3 departments and give an estimate of the number of requests)?
8) Have any surgical units incorporated programmed internal medicine services within their unit? a) Yes 50%; b) No 50%
9) If yes, which models were used? a) Programmed daily consultancy 40%; b) Pre- and postoperative services managed within the surgical unit 30%; c) Creation of multi-specialist teams 30%
2019-03-08T17:26:59.285Z
2013-03-04T00:00:00.000
{ "year": 2013, "sha1": "07826483aab860941c166160d8f644112dbbe0ea", "oa_license": "CCBYNC", "oa_url": "https://italjmed.org/index.php/ijm/article/download/itjm.2013.32/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "07826483aab860941c166160d8f644112dbbe0ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245958114
pes2o/s2orc
v3-fos-license
Frequency Improvement in Microgrids Through Battery Management System Control Supported by a Remedial Action Scheme

In this work, we propose a battery management system control (BMSC) for primary frequency regulation. In many operational scenarios, the microgrid (MG) exhibits weak frequency regulation due to the low inertia of the renewable energy sources and the highly dynamic loads. The proposed BMSC improves the operation and control of the MG by managing the energy stored in the battery storage systems (BESS) through the battery management system (BMS); continuous frequency control of the MG is achieved, preserving the energy availability of the BESS. The proposed system performs frequency control actions in real time in the MG operation through the BMSC, and it does not require insolation and wind speed forecasts, given the high uncertainty in such forecasts. Frequency regulation is achieved by evaluating the energy required by the MG, and by controlling the charging and discharging operations to ensure that the BESS resource remains available. Because MGs are highly dynamic, the contribution of the battery in each period is limited to a percentage of its capacity, avoiding deep discharges and the premature exhaustion of the energy provided by the battery. The control will apply a remedial action scheme to keep the frequency within the operating margins if the BMS cannot regulate the frequency. The system proposed here is evaluated using an MG system in island operation. The results show the feasibility of the proposed system under different operating conditions and its compliance with the technical operational specifications of the BESS.

I. INTRODUCTION

The increase in demand for energy, the integration of renewable energy sources (RES), and the depletion of fossil fuels have all contributed to the recent technological development of microgrids (MGs). This has enabled the rapid growth of RES and storage systems to improve the operation of MGs. However, this increase in non-controlled sources has made it necessary to use bidirectional active sources, such as storage systems and power control systems, to provide inertia, balance the total power of the system, and keep the frequency within the operating margins. Due to the high integration of RES in MGs, it is necessary to implement a battery management system control (BMSC) to meet these new criteria and requirements. The control criteria in a battery management system (BMS) must consider both the frequency and voltage controls to create a suitable communication system for the operation of the system in real time, whether in grid-connected or isolated mode operation. In addition, the communication and control structure must be held in an intelligent network to ensure that measurements and switch statuses are available.

A. LITERATURE REVIEW

The operation of electrical networks has been analyzed with heuristic algorithms to control variations in the voltage and frequency [1], [2]. Examinations of various storage technologies and control methods are presented in [3]-[5]. In [6], an experiment on primary frequency control systems and centralized control models is presented, taking into consideration the storage characteristics of the problem [7]. Metaheuristic methods have been used to address the issue of frequency control [8].
Alternative methods of frequency control have been proposed using a virtual synchronous generator [9], a stochastic non-integer controller [10], and robust control [11]. A battery energy storage system can support frequency control in microgrids with a high penetration of RES by combining a conventional droop control method with a virtual inertia function, shaping the active power response to frequency deviations and improving the system's stability [12]. Some investigations have shown that an MG central controller (MGCC) [13] can be used for voltage and frequency control when the electrical grid is in interconnected or island mode, to comply with the policies of the grid interconnection code and to optimize the loss of active power with linear algorithms [14]. Alternatively, where appropriate, it can be used with linear predictive control models to minimize operating costs. The authors of [15] proposed an MGCC for frequency control for domestic loads and distributed microgeneration. One of the issues that has been studied in recent years is the monitoring of residential demand response [16], [17] in relation to the interconnection of RES such as wind power, which can interact with storage systems and allow for frequency regulation through feedback control systems based on the battery charge and discharge states [18]. The coordination of suitable demand-response support (DRS) and virtual-inertia support (VIS) systems is used to mitigate the intermittency and low inertia of microgrids with renewable resources [19]. Since BESS have a relatively good response to frequency deviations [20], they have been helpful in supporting MGs [21], [22] and in combining with conventional and renewable sources [23]; battery-related technologies have advanced by giant strides. The use of batteries differs between transmission systems and MGs. In transmission networks, frequency disturbances are infrequent, and a large part or all of the battery energy is used for each contingency. On the other hand, in isolated MGs there is a greater incidence of energy imbalance due to RES, which can quickly discharge the batteries and, thus, exhaust the number of discharge operations. That is why, in this work, the battery control is established discretely: within each control interval, bounded variations of frequency are allowed, and control actions are executed at the end of each interval to avoid an energetic waste of the batteries and to increase availability. However, the frequency monitoring is continuous and, if an imbalance outside the tolerance defined in the control is detected at any moment, remedial actions are activated.

In this research, the battery control is established in 5-minute periods, according to the frequency limit for the interconnection of distributed generation (UL 1741). Furthermore, there is 24-hour access to the operation, control, and energy of the batteries, which allows for power regulation of the system. The constraints on the dynamics of the BESS are also considered, such as the power limits on each charge/discharge operation and the number of operations over a full cycle of one day. In addition, the time required for charge/discharge operations and the electrical power are evaluated over a complete cycle of one day, based on the specifications in the data sheet for the battery.
It is also essential to consider the behavior of all the nodes in the MG for frequency control when a BESS is interconnected at a specific node, since the frequency of the entire electrical system (MG) is being monitored. If a node requires more energy than the batteries have stored, and the frequency cannot be maintained within the limits allowed by the standard, the BMSC will decide on a corrective or remedial action scheme (RAS). The operation of the MG is carried out without considering a forecast of RES, due to the high intermittency, so the proposed control evaluates the energy deviation that occurs in each time interval.

B. CONTRIBUTIONS

Regardless of the different approaches and methods raised in the literature, the main contributions of this paper are enumerated below.
• A BMSC is proposed that considers the charge/discharge operations of the BESS over a full cycle of one day; continuous frequency control of the MG is carried out, preserving the energy availability of the BESS.
• The proposed system performs frequency control actions in the MG operation through BESS control; it does not require knowledge of insolation and wind speed forecasts. The proposed BMSC does not depend on virtual synchronous machines.
• The RAS is activated when the contribution of the BESS is insufficient for frequency control.

In this paper, a study of the optimal location and an economic analysis of the BESS are not carried out. With more BESS, the frequency control will be better, but the cost will be higher. The location of the BESS is already defined, and the proposed system manages the energy available at each node or at the point of common coupling (PCC), only evaluating operating conditions. Primary frequency control at remote nodes, including the sensitivity factors of the electrical network, can be achieved with the proposed method; however, it is necessary to evaluate the energy availability of the BESS when performing remote functions. The control scheme proposed in this work can be applied to any MG. This paper is presented as follows: Section II focuses on the problem statement of BMS control for microgrids. In Section III, the methodology of the proposed scheme is described in detail. In Section IV, the test systems used are described. In Section V, the simulation results and discussion are presented and analyzed. Finally, Section VI summarizes the general conclusions.

II. MICROGRID OPERATIONS WITH BMS CONTROL

The frequency and voltage in an MG are affected by the intermittency of the renewable sources, a high-impedance connection, or islanded operation, which result in more difficult conditions. Stand-alone operation requires an energy management system that can monitor and control the energy generated by the sources and the storage of this energy in batteries, to supply the power required in the network over the time intervals in which variations occur. Using a BESS to protect against variations in frequency is an option for minimizing the active power imbalance that originates mainly from the RES. During operation, the BMSC monitors the variables to be controlled, such as the power of the distributed generation (DG), the frequency of the power system, and any BMSs installed locally on optimal nodes. The BMSC actuates the BESSs every 5 minutes, according to the power requirements, while at the same time continuously monitoring the frequency every second within the interval defined above. In this research, it was considered that the microturbines (MT) will regulate the frequency within the five-minute intervals.
Communication between the BMSC and each node is carried out via a network that collects the data gathered by the phasor measurement units (PMU), Figure 1. For optimal operation, it is essential to use a telecontrol and telecommunications system that allows for dynamic interaction in real time with all the components of the MG. The communication technologies currently used in smart controllers for the operation of MGs are local area networks or ethernet networks. Their low acquisition cost and the expansion of their use in communication networks have been of great interest for control systems. However, since this type of communication technology was not designed for connectivity in real-time feedback control systems, it has been the subject of recent research due to its latency in the transmission of data packets between remote stations. This latency has been greatly improved of late, which has made the use of ethernet for smart grids possible [24]; otherwise, the operability of the system could be affected, depending on the type of control process in question. Another aspect that must be considered is data mining for processing in control systems. Dimensional reduction involves identifying the dominant operating variables; however, this condition is less critical in an MG, since the electrical network is generally small. In this work, real-time measurements of demand data, the energy generated by MTs, and the energy profiles of wind power (PW) and photovoltaic sources (PV) are used in a simulation. The proposed control system regulates the primary frequency of an MG by activating a BESS, based on the restrictions on battery dynamics and a 24-hour horizon. Renewable sources are modeled by considering energy losses and their efficiencies. In this paper, the converters are not modeled, since we assume that the system is not being evaluated in a transitory state, and only the power output is considered for simulation purposes. However, in future work, it will be important to consider the modeling of the inverters in detail, since one of the requirements considered in the IEEE 1547 and UL 1741 standards is that the converters must remain connected for a specific time and then trip after a contingency. In addition, converters also support the grid with active and reactive power to allow for frequency and voltage regulation [25]. An analysis of the performance of the proposed BMSC is carried out for its operation in real time. The insolation and wind curves are statistically processed to estimate the behavior in different periods in one year. In the performance evaluation of the proposed BMSC, only an injection of power into the electrical network is required; the specific behavior of the PV or PW is not of interest here. The models used to transfer insolation and wind speed to electric power are presented in the next section. The model of the MT is presented as well because, together with the BESS, it is an element that provides inertia to the MG.

A. WIND TURBINE MODEL

The electrical power of the wind system [26] is determined within the wind limits given by the power coefficient.
The expression used to convert the wind profile to the electrical power injected into the electrical network is of the standard form

P_w(t) = (1/2) ρ A C_pr η_mr η_gr v_r(t)^3, (1)

where C_pr is the coefficient of performance at the rated speed v_r(t), η_mr is the transmission efficiency at nominal power, η_gr is the generator efficiency at the rated power, ρ is the air density, and A is the swept area of the turbine blades.

B. PHOTOVOLTAIC MODEL

The power output of the mathematical model of the photovoltaic module [27] is expressed, in Eq. (2), in terms of the following quantities: n_PV is the number of PV modules, P_ratePV is the nominal power of the array, G(t) is the global irradiation falling on the arrangement of PV panels, G_o is the standard value for the insolation capacity of the photovoltaic modules, T_A is the ambient temperature, T_CO is the temperature coefficient of the maximum power of the PV, η_rel is the relative efficiency of the PV modules, and η_inv is the efficiency of the inverter.

C. MICROTURBINE MODEL

MTs have been classified as DG sources with lower pollution emissions than conventional, centralized generating plants, and have been used in the operation of MGs to support renewable sources [28]. A simplified model of an MT providing active power is given in Eqs. (3)-(5), in terms of the following quantities: P_gT is the output power of the turbine within the period t, Q_MT is the waste heat from the MT exhaust, η_e is the generation efficiency of the MT, η_l is the coefficient of heat loss, Q_he is the heat provided by the MT, and K_he is the heat coefficient of the cooler. Equation (5) is the equation of state of the rotor angle δ as a function of the kinetic inertia constant H of the MT rotor, of the classic swing-equation form (2H/ω_0)(d²δ/dt²) = ΔP(t) = P_m − P_e, where ΔP(t) is the difference between the mechanical and electrical power of the generator and ω_0 is the electrical rated speed. The dynamics of the RES, together with the MT, will be evaluated in the proposed control system because the RES is not controllable. The objective of the proposed system is to maintain frequency control in the first instance and to preserve the energy availability of the batteries.

III. PROPOSED BMS CONTROL

MGs are currently driving an increase in renewable and sustainable energy, meaning that conventional power generation contributes a lower percentage of the power supply to MGs. These MGs can operate in interconnected mode or island mode. In interconnected mode, they can operate in different areas where the utility handles the frequency and voltage regulation. In island mode, the MG operates without a connection to the utility. This means that the choice of RES in an MG depends on its geographical location, weather conditions, and availability of conventional sources. The operation of these MGs involves management of the energy supply, protection of the generation equipment, and security and continuity for users. This is achieved by monitoring the demand and generation through a BMSC, which controls the parameters and variables to maintain the stability of the network. An analysis of the system is carried out to establish the operability in different scenarios over time, considering the DG sources, intermittent generation, the number of operations, the battery charge/discharge limits, and the frequency limits allowed by the IEEE 1547.2-2008 standard [29]. The operation of an MG is proposed through a BMSC that controls the BESS located at the optimal nodes of the network; the injections of energy from the sources in each (five-minute) period are then the new inputs for the next period, and the actions of the BMSC are carried out.
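Concretely, the inputs for each five-minute period come from the source models above; the short Python sketch below shows how one sample of wind speed and insolation becomes injected power. It is a minimal sketch under stated assumptions rather than the authors' implementation: the wind expression uses the standard one-half ρ A Cp v³ relation implied by the variables listed for Eq. (1), the PV expression assumes a simple linear temperature derating in place of the unreproduced Eq. (2), and the operating envelope (cut-in, rated and cut-out speeds) and all numeric parameter values are hypothetical placeholders sized for the 40 kW sources of the test system.

# Minimal sketch of the renewable power models (assumed standard forms).
# All parameter values are hypothetical placeholders, not the paper's data.

def wind_power_kw(v, rho=1.225, area=177.0, c_pr=0.40, eta_mr=0.95,
                  eta_gr=0.96, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0,
                  p_rated_kw=40.0):
    # Electric power injected for wind speed v (m/s); zero outside the
    # assumed operating band, capped at rated power.
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    p_w = 0.5 * rho * area * c_pr * eta_mr * eta_gr * min(v, v_rated) ** 3
    return min(p_w / 1000.0, p_rated_kw)

def pv_power_kw(g, t_a, n_pv=160, p_rate_kw=0.25, g_o=1000.0, t_co=-0.004,
                t_ref=25.0, eta_rel=0.95, eta_inv=0.96):
    # PV array output for irradiance g (W/m^2) and ambient temperature t_a
    # (deg C), with an assumed linear temperature derating around t_ref.
    derate = 1.0 + t_co * (t_a - t_ref)
    return n_pv * p_rate_kw * (g / g_o) * derate * eta_rel * eta_inv

# Example: one 5-minute sample at v = 9 m/s, G = 800 W/m^2, T_A = 30 deg C.
print(wind_power_kw(9.0), pv_power_kw(800.0, 30.0))

A full run would evaluate these functions for every 5-minute interval of the 24-hour horizon and hand the resulting injections, together with the measured demand, to the BMSC.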
The energy reserve in the batteries within a 5-minute interval will then be available for the next interval. As shown in Figure 2, in the presence of a large disturbance, when the battery operations are limited, no power is available in the batteries, or a high continuous frequency imbalance occurs, the control activates the RAS. Otherwise, when the variation is within the parameters allowed by the standard, the BMSC activates the batteries to discharge or charge power. Each BESS at an optimal node is made up of a BMS and its storage in a battery. The BMS receives directions from the control for its operation at each instant of time, including the amount of power to discharge/charge, depending on the frequency state of the node at that moment. The energy state of the batteries is important data for each time interval, as it indicates the amount of stored energy. The energy of the batteries is, therefore, considered as a state variable and is calculated as in Eq. (6). The BESS model used in this investigation was adapted from [30] and [31]. The purpose of the model is to treat the dynamics of the batteries as a discretized event every five minutes since, during the continuous operation of the MG, it would not be advisable to simulate the continuous-time discharge dynamics of the battery in each interval, because the BESS contribution is controlled.

A. FREQUENCY LIMITS ACCORDING TO IEEE STD

The amount of power absorbed or released by each BESS is calculated based on the difference between the power stored in two consecutive intervals. The intervals used here are five minutes over a 24-hour horizon. Hence, in a state t, the amount of power in each defined time interval is given by Eq. (7), where τ = (t2 − t1) is the charge/discharge time to the grid within the time interval, P_Charge(τ) is the stored power and P_Discharge(τ) is the released power. The power available from the BESS at each instant of time is limited by Eq. (8):

P_BESS-min ≤ P_BESS(t) ≤ P_BESS-max, (8)

where P_BESS-min and P_BESS-max are the minimum and maximum power from the BESS at each instant of time t. In this paper, the charge/discharge power of the battery is limited to ensure the availability of this resource over 24 hours, as indicated by Eq. (9). With this restriction, deep discharges can be avoided and premature aging of the battery is reduced. In Eq. (9), η_D and η_C are the discharge and charge efficiencies, respectively, and P_BESS-SOC(k) is the state of charge or discharge of the batteries at each time k. The efficiencies are related to the depth of charge and discharge, the internal resistance of the batteries, and the ambient temperature. In this mathematical model, a percentage of the continuous charge/discharge power of the batteries was considered. Each charge (ramp up, Ec) / discharge (ramp down, Ed) count is associated with a time interval τ, as the energy obtained from the percentage of continuous power, as indicated in Eqs. (10) and (11). The charge and discharge times of a BESS in a 24-hour cycle follow from τ, the time (5 min) at each instant k within the 24-hour cycle, and from N_charge and N_discharge, the numbers of charge and discharge operations according to the C-rate stipulated by the manufacturer. The capacity of the BESS can be classified in two ways: either based on the nominal power capacity P_b or on the nominal energy capacity E_b. These are defined as the total power and energy that a battery can deliver or absorb, respectively, during a full charge/discharge cycle [32].
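The per-interval bookkeeping implied by Eqs. (6)-(12) can be sketched in Python as follows. This is one plausible reading of the constraints described in the text, not the published equations: it models the state-of-charge update with the charge/discharge efficiencies, the per-interval cap of 10% of the continuous power (6.8 kW of the 68 kW rating given in the Appendix), and the daily operation count (96 operations); the initial state of charge and the exact saturation rules are assumptions.

# Sketch of the discretized BESS bookkeeping applied once per 5-min interval.
# Parameter values follow the Appendix; everything else is an assumption.

TAU_H = 5.0 / 60.0  # interval length tau, in hours

class BESS:
    def __init__(self, e_nom_kwh=34.0, p_cont_kw=68.0, tol=0.10,
                 n_ops_max=96, eta_c=1.0, eta_d=1.0):
        self.e_kwh = e_nom_kwh / 2.0      # assumed initial state of charge
        self.e_nom = e_nom_kwh
        self.p_max = tol * p_cont_kw      # per-interval power cap (6.8 kW)
        self.n_charge = 0
        self.n_discharge = 0
        self.n_ops_max = n_ops_max
        self.eta_c, self.eta_d = eta_c, eta_d

    def step(self, p_request_kw):
        # Apply one charge (p > 0) or discharge (p < 0) request and return
        # the power actually served after all limits are enforced.
        if self.n_charge + self.n_discharge >= self.n_ops_max:
            return 0.0                    # daily operation count exhausted
        p = max(-self.p_max, min(self.p_max, p_request_kw))
        if p > 0:                         # limit charging by headroom
            p = min(p, (self.e_nom - self.e_kwh) / (self.eta_c * TAU_H))
        elif p < 0:                       # limit discharging by stored energy
            p = max(p, -self.e_kwh * self.eta_d / TAU_H)
        if p > 0:                         # charge (ramp up, Ec)
            self.e_kwh += self.eta_c * p * TAU_H
            self.n_charge += 1
        elif p < 0:                       # discharge (ramp down, Ed)
            self.e_kwh += p * TAU_H / self.eta_d
            self.n_discharge += 1
        return p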
The dynamic process of the BESS takes into consideration the different actions of the BMSC needed to balance the network, based on the allowed frequency limits, P_pv and P_w, the battery capacity, the battery operations (i.e., the number of operations in a full day), and the charge/discharge control of the BESS. The proposed BMSC is updated every 5 min and, within this period, the amount of energy that will be charged or discharged (without exceeding 10% of the battery power, tol) is established. The tolerance was determined as a basis for the control. It could be adapted, but the results obtained for the different tests carried out were satisfactory, because it was possible to stabilize the frequency under various disturbances and to maintain the availability of the battery resource.

B. REMEDIAL ACTION SCHEME

In this way, the energy provided is limited in each operation, thus avoiding the loss of battery availability. In the case of an abnormal frequency, a detector establishes the action that should be applied: if the imbalance is less than tol, the energy is provided or stored by the battery; otherwise, if the disturbance is greater, the measurement time is activated every 1 s, the operation of the battery is blocked to avoid a deep discharge, and remedial action is needed. The frequency is controlled over a wide operating range to ensure that the battery remains available. The purpose of the battery is not to solve every frequency problem; therefore, the contribution of the battery is limited to avoid the loss of availability of the resource. Likewise, a sensitivity analysis to establish how much energy is required to control the frequency is not included, since this mode of operation would quickly wear down the battery. The frequency limits adapted in this work are those stipulated in the IEEE 1547 standard. In the first two scenarios, the frequency limits used were between 59.8 and 60.5 Hz, where the control action for the frequency regulation in the MG was observed. In the third scenario, limits with wider ranges (between 57.0 and 58.9 Hz) were used, where energy from the batteries is supplied until exhausted. The control sends a remedial action to regulate the frequency within the allowed limits. This BMS control for frequency regulation is flexible and can easily be adapted to any n-node distribution system or any MG made up of industrial or commercial energy users. The following section describes the test system; in this paper, the evaluation is carried out on a networked multi-node system to achieve a greater interaction between sources and to include the losses and topology of the electrical network.

IV. TEST SYSTEM

As an implementation of our approach, a modified six-node system was used to create an island-mode MG with a voltage level of 15 kV. Two MTs of 35 kW and 50 kW, a 40 kW photovoltaic source, and a 40 kW wind source were included (Figure 3). The wind and solar irradiation profiles were taken from meteorological stations. Several data curves, representing a period of a year, were randomly averaged to obtain a single profile for each renewable source. In this work, the location and dimensioning of the batteries in the test system were already determined. A forecast of the RES was not considered, because the control measures monitor and evaluate the energy imbalance that occurs over time. Different demand curves were used for each node. The BESS were located at nodes 2 and 3, and each storage system had a capacity of 68 kW.
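With the test system defined, the decision the proposed BMSC takes at each 5-minute boundary, including the escalation to the RAS described above, can be summarized in the sketch below, reusing the BESS sketch given earlier. This is an illustrative reading of the control logic rather than the published algorithm: the linear gain KW_PER_HZ mapping frequency deviation to requested power is a hypothetical stand-in for the energy-deviation evaluation performed by the BMSC, and shed_load stands for whatever load or generation disconnection the RAS applies.

# Sketch of one BMSC decision at a 5-minute boundary, with RAS fallback.
# F_LO/F_HI follow the band used in scenarios I-II; KW_PER_HZ is hypothetical.

F_LO, F_HI = 59.8, 60.5   # allowed frequency band (Hz)
KW_PER_HZ = 100.0         # assumed mapping from frequency error to power

def bmsc_interval(freq_hz, bess_units, shed_load):
    # One control decision: do nothing, move power through the BESS, or
    # escalate the residual imbalance to the remedial action scheme.
    if F_LO <= freq_hz <= F_HI:
        return                                # inside the band: no action
    p_needed = (60.0 - freq_hz) * KW_PER_HZ   # >0 deficit, <0 surplus
    served = 0.0
    for b in bess_units:
        # negative request = discharge (deficit), positive = charge (surplus)
        served += -b.step(-(p_needed - served))
    residual = p_needed - served
    if abs(residual) > 1e-6:
        shed_load(residual)                   # RAS: disconnect load/generation

In a full simulation, this routine runs once per interval, while a separate 1-s monitor blocks battery operation and escalates directly to the RAS when the excursion persists well outside the band, as described above.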
In the tests carried out here, we used the 950 V HR model of lithium-ion-type batteries [31], and it was determined that a maximum limit of 10% on the continuous power gave satisfactory results in terms of controlling the frequency to within the margins established in [30], thus complying with the factory specifications of the BESS. The operation of the BMSC, installed in the experimental network, is analyzed for three different operational scenarios. In each case, the PMUs were used to measure the power injected into the MG. The BMSC places generation restrictions on the MT, activating the power and/or energy of the BESS based on the dynamic characteristics of each. When the BMSC detects the amount of active power from the BESS and RES, it controls the power injections for the next period. The demand data, wind profiles, and solar irradiation were the same for all scenarios, and the values of the sources were changed to represent the different modes of operation of the MG. The frequency limits considered in this simulation were those stipulated in the IEEE std 1547.2-2008 for the protection of RES connected to the MG. This allowed band is limited between 60.5 and 59.8 Hz. For the evaluation of the algorithm, three testing scenarios are presented; the regulatory action of the batteries is shown, as well as the way the proposed BMSC manages to keep the frequency within the operating limits. However, if the disturbance is very large or the number of battery operations is depleted, the proposed logic triggers a remedial action to maintain the energy balance.

A. SCENARIO I

The MT-1 was set at 35 kW and placed at node 1 in this scenario. A wind turbine system with 40 kW capacity was located at node 6, a 40 kW photovoltaic system at node 2, and the MT-2 at node 5 with operating limits between 35-50 kW. The power contribution of both BESSs is considered, connected at node 2 (PB1) and node 3 (PB2), with capacities of 15 kW each. The simulation time step was 5 min. Figure 4 shows the results for the micro-PMU measurements and energy management of the system after power injections, before entering the BMSC in each period. The restrictions on the MT-2 and the effect of the BESS on the power when entering the next period can be observed. Figure 5 displays the system frequency when MT-2 power generation is restricted. It is observed that, when the restrictions on the power of the MT-2 are greater, there is greater variability in the frequency. Figure 6 shows the state of charge and the dynamics of the batteries after the BMSC is triggered [33]. The upward slope of the curve indicates that the batteries have more charging operations. Figure 7 represents the contribution of active power from the DG and the charges/discharges of the BESS in each period. Since the MG is supported by two MTs, the energy contribution to the electrical network is observed. In Figure 8, the frequency attenuated by the action of the proposed BMSC is within the range allowed by IEEE std 1547.2. In this case, the BESS had 92 and 51 charge and discharge operations, respectively, during the 24-hour cycle.

B. SCENARIO II

To simulate a scenario with less energy support (a more critical case), the MT-1 was disconnected. The more controllable energy there is in the MG, the smaller the frequency problem.
The following power values were applied to the generation sources: for the uncontrollable generation (photovoltaic and wind), the power was set at 70 kW each; for the MT-2, a value of 70 kW (slack bus) was used. The charge/discharge power of the batteries was limited to 6.8 kW, which is 10% of the continuous charge power specified for the batteries used in this case. Figure 9 shows that the generation only depends on the DG and batteries, the MG being in island mode. The usable powers for the energy balance at each t are those available from the intermittent sources, the power available in the MT-2, and that available in the batteries. The variation in the frequency of the MG is observable along the 24-hour horizon. In Figure 10, it can be seen that the frequency is more often above the acceptable upper limit, i.e., there is more energy availability. The dynamics of the state of charge (SoC) of the BESS are depicted in Figure 11. Because the frequency variations that fall outside the acceptable range are small, the regulation of the BESS follows these actions. Figure 12 shows the charges and discharges of the BESS. The numbers of charge and discharge operations are 56 and 31, respectively, during the 24-hour cycle. In this case, the BMSC sends the control action to the BESSs to energetically balance the system in each time interval. In Figure 13, the frequency has been attenuated to within the specified limits through the proposed BMS control.

C. SCENARIO III

In this third case, the same data were used as in Scenario I, and the generation in the MT-2 was restricted to create a scenario in which, due to the low generation in the MG and the insufficient capacity of the battery, the BMSC activates load shedding under a RAS to reach the control frequency [30]. When a system suffers an abrupt change in demand or a disconnection of a generation source, the BMSC must decide on frequency control actions to avoid the unwanted tripping of the PV and PW converters. In Figure 14, the contribution of power generation to the MG can be observed, and the frequency (Figure 15) undergoes large changes caused by the insufficient capacity of the batteries (Figure 16). This activates the RAS for load disconnection until a balance of the active power is achieved. Figure 17 shows the frequency after the BESS has participated at the optimal nodes; it can be observed that the use of the batteries has reduced the frequency deviation in the system. However, after 20:40, it was not possible to balance the power of the electrical system. Figure 18 shows the charge/discharge power supplied by the BESS. When a very large frequency variation occurs, the RES will have to disconnect from the MG. The frequency protection ensures that the RES will stop feeding an unintentional island [32]. However, after the unwanted tripping of the RES, the power imbalance due to the loss of generation and load will leave the system in an uncertain operating condition. RES units with capacities of less than 30 kW may have a lower impact on the operation of the system and can generally disconnect from the electrical grid area within 10 cycles (0.16 s) of the offset time. On the other hand, units greater than 30 kW can positively affect the reliability of the MG. The IEEE 1547 requirement takes this into account by allowing the electrical grid area operator to specify the frequency settings and a time delay of up to 0.16 seconds for low frequencies (below 57 Hz).
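The under-frequency ride-through rule described here for units larger than 30 kW can be expressed as a simple timing check, sketched below. The sampling step and function shape are assumptions; only the 57 Hz threshold and the 0.16 s (ten-cycle) delay come from the text.

# Sketch of the IEEE 1547-style under-frequency trip check discussed above.

TRIP_FREQ_HZ = 57.0   # operator-set low-frequency threshold (from the text)
TRIP_DELAY_S = 0.16   # maximum clearing delay, ten 60-Hz cycles

def should_trip(freq_samples, dt_s=0.016):
    # Trip only when the frequency stays below the threshold for longer
    # than the allowed delay; freq_samples are spaced dt_s seconds apart.
    below_s = 0.0
    for f in freq_samples:
        below_s = below_s + dt_s if f < TRIP_FREQ_HZ else 0.0
        if below_s >= TRIP_DELAY_S:
            return True
    return False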
Based on the IEEE standard applied in this scenario, and noting from Figure 17 that there is an energy imbalance in the MG, we analyzed the curves inside and outside the frequency limits established for an RES unit greater than 30 kW. Figure 19 shows the remedial action that the BMSC activates when the frequency undergoes an abnormal change (Figure 17). When the frequency is outside the tolerance region, below 59.8 Hz, the BMSC actuates the BESS for frequency regulation. However, because the power in the batteries has run out, the compensation time (0.16 s) starts from 57 Hz and, as the imbalance lasts longer than this, the BMSC takes the corrective action of disconnecting load to regulate the frequency. This RAS results in recovery of the frequency of the network.

VI. CONCLUSION

With the proposed battery management system control, it was possible to efficiently manage the use of the batteries, thus increasing the availability of stored energy over a complete 24-hour cycle. Restrictions were formulated to drive the battery storage systems via the battery management system control to give an adequate response in terms of energy balancing of the network within each 5-min period. The battery management system control causes the battery management system to use the batteries to carry out a charge/discharge action when a power imbalance occurs within a given period. The battery power in each period is limited to a maximum of 10% of its capacity, to maintain the availability of the storage resource over a period of 24 hours. The amount of charging power, the minimum discharge, the maximum charge, and the number of operations are considered to prevent premature battery aging. In cases I and II, the frequency band is limited between 60.5 and 59.8 Hz. In case III, at 20:00 h, the battery runs out and the frequency variation is outside the operating limits; the remedial action scheme is then activated. In the event of a very large power imbalance, for which the battery storage systems cannot supply power, the battery management system control takes action by disconnecting the load and/or the generator. Therefore, in the future, it will be necessary to ensure that the BMSC can not only control the electrical variables but also perform control actions involving the disconnection of loads in blocks or areas classified as main and non-main, and to consider the economic aspects of the electricity market. A sensitivity analysis is not included because, for future work, it is desirable to carry out the tests in a real-time simulator.

APPENDIX

Parameters of the proposed system.

BESS SIZING
Location at nodes 2 and 3. 950 V HR model of lithium-ion. Charge and discharge power efficiencies: η_D = η_C = 1. The maximum battery capacity is 136 kW (1 full cycle/day). Continuous charge/discharge power is 68 kW.

BMSC
The charge/discharge power of each battery was limited to 6.8 kW, which is 10% of the continuous charge power specified for the batteries. Charge and discharge: the number of operations considered is 96. τ is the time (5 min). Nominal energy capacity per hour: E_b = 34 kWh.

MG SIZING
Island-mode operation, with a voltage level of 15 kV. The MT-1 was set at 35 kW and placed at node 1. The MT-2 is at node 5, with operating limits between 35-50 kW. The PW is at node 6, with 40 kW capacity.
2022-01-15T16:32:47.933Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "4e283879902892ed7fec3bc6a3c9b94e4f63da8d", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09680700.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "86e8b291ddcfd857e0a00dafa74ad708a9b472a8", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
52006225
pes2o/s2orc
v3-fos-license
WWOX Phosphorylation, Signaling, and Role in Neurodegeneration

Homozygous null mutation of the tumor suppressor WWOX/Wwox gene leads to severe neural diseases, metabolic disorders and early death in the newborns of humans, mice and rats. WWOX is frequently downregulated in the hippocampi of patients with Alzheimer's disease (AD). In vitro analysis revealed that knockdown of WWOX protein in neuroblastoma cells results in aggregation of TRAPPC6AΔ, TIAF1, amyloid β, and Tau in a sequential manner. Indeed, TRAPPC6AΔ and TIAF1, but not Tau and amyloid β, aggregates are present in the brains of healthy mid-aged individuals. It is reasonable to assume that a very slow protein aggregation cascade is activated sequentially, starting with TRAPPC6AΔ and TIAF1 aggregation at mid-age, followed by caspase activation and APP de-phosphorylation and degradation, and finally accumulation of amyloid β and Tau aggregates in the brains of individuals older than 70 years. WWOX binds Tau-hyperphosphorylating enzymes (e.g., GSK-3β) and blocks their functions, thereby supporting neuronal survival and differentiation. As a neuronal protective hormone, 17β-estradiol (E2) binds WWOX at an NSYK motif in the C-terminal SDR (short-chain alcohol dehydrogenase/reductase) domain. In this review, we discuss how WWOX and E2 block protein aggregation during neurodegeneration, and how a 31-amino-acid zinc finger-like Zfra peptide restores memory loss in mice.

The WW domain participates in protein/protein interactions for transducing signals (McDonald et al., 2012; Reuven et al., 2015). The first WW domain of WWOX binds PPxY or PPPY-containing proteins (e.g., p73, ErbB-4, SIMPLE, WWBP1, WWBP2, Ezrin, AP-2γ, Runx-2, and many others) (Ludes-Meyers et al., 2004; Jin et al., 2006; Chang et al., 2007; McDonald et al., 2012; Reuven et al., 2015). When Tyr33 in the first WW domain is phosphorylated, activated WWOX acquires an enhanced capability of binding a broad spectrum of proteins (Reuven et al., 2015), including p53 (Chang et al., 2001, 2003a,b, 2005a,b, 2007), c-Jun N-terminal kinase (JNK) (Chang et al., 2003a), zinc finger-like protein that regulates apoptosis (Zfra) (Hong et al., 2007), c-Jun and cAMP response element binding protein (CREB), and others. The second tandem WW domain assists synergistically with the first WW domain in enhancing the protein/protein binding (Farooq, 2015). Transiently overexpressed WWOX frequently sequesters transcription factors in the cytoplasm, thereby blocking their transcription of prosurvival proteins in the nucleus in cancer cells in vitro (Gaudio et al., 2006). In contrast, endogenous WWOX binds and co-translocates with many transcription factors to the nucleus to enhance or block neuronal survival under sciatic nerve dissection. Endogenous trafficking protein particle complex 6A (TRAPPC6A) acts as a carrier for WWOX to undergo nuclear translocation. Indeed, WWOX works together with many transcription factors to either support neuronal survival or death under physiological or pathological conditions. At temperatures lower than 37°C, WWOX is needed for a recently described type of cell death, designated bubbling cell death (BCD) (Chang, 2016). BCD is not apoptosis, necroptosis, or necrosis. When cells are subjected to UV irradiation and cold shock followed by culturing at 37°C, the cells undergo apoptosis (e.g., caspase activation, whole-cell and nuclear condensation, DNA fragmentation, etc.).
However, if the UV/cold shock-treated cells are incubated at a lower temperature (e.g., 4, 10, or 22°C), they generate, in most cases, one nuclear nitric oxide (NO)-containing bubble per cell. Some cells may generate 2-3 bubbles. The bubble continues to inflate and is finally released from the cell membrane. The cells die later on. Membrane phosphatidylserine flip-over, caspase activation and DNA fragmentation, which are found in apoptosis, are not involved in BCD. Raising the temperature back to 37°C returns the event to apoptosis. If cells are devoid of WWOX (e.g., Wwox−/− MEF), cell death is retarded (Chang, 2016). Overall, UV energy is absorbed by the nucleus, and cold shock assists the rapid relocation of cytosolic p53, WWOX, and NOS2 to the nucleus. Nitric oxide synthase NOS2 is responsible for the bubble generation that leads to cell death (Chang, 2016).

WWOX in Neuronal Injury

Constant light-induced retinal neural degeneration involves WWOX activation and pY33-WWOX accumulation in the mitochondria and nuclei to cause damage and death. The neurotoxin MPP+ (1-methyl-4-phenylpyridinium) also induces pY33-WWOX upregulation and nuclear accumulation to cause neuronal death in rats (Lo et al., 2008). During the acute phase of sciatic nerve dissection, pY33-WWOX, along with its interacting transcription factors, accumulates in the nucleus, which leads to the rapid death of the large-sized neurons in vivo. WWOX blocks the prosurvival function of CREB-, CRE-, and AP-1-mediated promoter activation in vitro. In stark contrast, WWOX enhances the promoter activation governed by c-Jun, Elk-1 and NF-κB. Apparently, a balance in the protein levels of WWOX and transcription factors is critical in determining the fate of dissected neurons. Bubbling cell death can also occur at 37°C. For example, when cells are transiently overexpressed with hyaluronidase Hyal-2 and WWOX, followed by treatment with high-molecular-weight hyaluronan, BCD occurs at 37°C (Hsu et al., 2017) (Figure 1B). Hyaluronan binds membrane Hyal-2 to initiate Hyal-2/WWOX signaling, and both Hyal-2 and WWOX accumulate in the nuclei. It is reasonable to assume that, during TBI, the nuclear Hyal-2 and WWOX may trigger BCD due to the production of NO. Formation of nuclear bubbles in the dying neurons in vivo is unknown. However, bubble formation in vivo is difficult to detect, because it is technically impossible to fix bubbles for microscopic examination. Reactive oxygen species (ROS) are rapidly upregulated during TBI (Bains and Hall, 2012). WWOX, via its C-terminal SDR domain, controls the generation of ROS in Drosophila (O'Keefe et al., 2011) and mammalian cells (Dayan et al., 2013). Also, the SDR domain of WWOX controls the cellular outgrowths caused by genetic deficiencies of the components of the mitochondrial respiratory complexes in Drosophila (Choo et al., 2015). Under physiological conditions, oxidative phosphorylation sustains WWOX expression (Dayan et al., 2013). However, when glycolysis (or Warburg metabolism) goes up in aberrant cells, WWOX expression is downregulated (Dayan et al., 2013). Reduced WWOX levels in Drosophila allow cellular outgrowths, to various extents, caused by genetic deficiencies of components of the mitochondrial respiratory complexes and aberrant ROS production (Choo et al., 2015). Together, WWOX participates in TBI, and this is related to ROS generation and brain tissue repair.
Pathological Features in Neurodegeneration

Neurodegenerative diseases (NDs) encompass a heterogeneous group of chronic progressive diseases, each affecting a specific central nervous system (CNS) compartment. The pathologies of NDs are not specific to each individual disease. Neurofibrillary tangles and Lewy bodies, for example, may appear in non-demented individuals and non-idiopathic Parkinson disease patients. Also, the pathological or clinical features may overlap. Over the past decades, many animal models have been established to seek potential propagation mechanisms and associated risk factors for NDs (Martin, 2012; Niccoli and Partridge, 2012; Dugger and Dickson, 2016; Hartl, 2017; Chételat, 2018). Aging-related stress, oxidative stress, reduced mitochondrial function, altered subcellular transport, and activation of the ER stress and unfolded protein response (UPR) pathways are considered important during neurodegeneration (Martin, 2012; Hartl, 2017). In the aging process, chaperones may become dysregulated and the degradation machineries may stop working properly, which leads to protein misfolding, aggregation, and accumulation, causing neuronal damage. Among these, the UPR exists in the mitochondria and the endoplasmic reticulum, along with disordered cytosolic heat shock response, the ubiquitin-proteasome system, and autophagy (Taylor and Dillin, 2013; Hartl, 2017). The presence of aberrant protein aggregates, inclusion bodies, and/or tangled fibrous proteins in aging neurons, glial cells, and the brain matrix is a pathological hallmark of neurodegeneration (Ross and Poirier, 2005; Richter-Landsberg and Leyk, 2013; Higuchi-Sanabria et al., 2018). Furthermore, formation and spread of prion-like Aβ aggregates occur during AD progression, and this is not due to overexpression of APP (amyloid precursor protein) (Ruiz-Riquelme et al., 2018). Prion protein in exosomes facilitates the spreading and aggregation of neurotoxic Aβ (Hartmann et al., 2017).

WWOX Deficiency Leads to Severe Neural Damage and Metabolic Disorders

The WWOX protein is heterogeneously expressed in the central nervous system. WWOX-positive stains are found in the human cerebrum, specifically in the pyramidal neurons and astrocytes of the frontal and occipital cortices, and in the caudate nucleus, pons, and olivary nuclei of the medulla. Neuropils and small neurons are also immunoreactive to WWOX antibody. However, the parietal, limbic, and temporal cortices and the substantia nigra are minimal or negative for WWOX immunoreactivity (Nunez et al., 2005). In the developing mouse brain, WWOX protein expression is essentially present in every brain region, and the expression level is reduced in newborns (Chen et al., 2004). In the adult brain, WWOX is abundant in the epithelial cells of the choroid plexus and in ependymal cells, while a low to moderate level of WWOX is observed within white matter tracts, such as axonal profiles of the corpus callosum, striatum, optic tract, and cerebral peduncle (Chen et al., 2004). Despite its role in cell death, WWOX is essential for homeostasis in vivo. WWOX/Wwox gene deficiency severely affects normal physiological functions, especially embryonic neural development (Chen et al., 2004; Aldaz et al., 2014; Chang et al., 2014; Tabarki et al., 2015). Deficiency of the WWOX/Wwox gene due to point mutations or homozygous nonsense mutation may result in childhood-onset autosomal recessive disorders.

FIGURE 1 | Potential role of WWOX and BCD in neuronal death during traumatic injury. (A) Needle insult to the brain was carried out in rats.
Post injury for 3 and 24 h, the animals were sacrificed. By immunoelectron microscopy, accumulation of Hyal-2 and WWOX is found in the nuclei of dying neurons in the brain cortex (Hsu et al., 2017). (B) Nuclear accumulation of Hyal-2 and WWOX leads to BCD. Both schematic graphs and a real-time image are shown. If p53 competes with Hyal-2 for complexing with WWOX, both p53 and WWOX are retained in the cytoplasm, the extent of Hyal-2/WWOX complex formation is reduced, and no BCD occurs (Hsu et al., 2017). (Data in A are adapted from Hsu et al., 2017, republished according to the guidelines of Oncotarget.)

The WWOX gene is involved in the regulation of lipid homeostasis and metabolism (Ludes-Meyers et al., 2004; Lee et al., 2008; Yang et al., 2012; Dayan et al., 2013; Iatan et al., 2014; Li et al., 2014). WWOX gene alteration is associated with low plasma high-density lipoprotein cholesterol (HDL-C) levels and aberrant HDL-C and triglyceride levels (Sáez et al., 2010). Furthermore, whole-body and liver conditional Wwox knockout mice revealed a significant role for Wwox in regulating HDL and lipid metabolism (Iatan et al., 2014). Interference with lipid metabolism may be a critical contributor to the pathogenesis of neurological diseases. For example, WWOX is not expressed in the lipid-rich myelin sheath of normal neurons, but activated pY33-WWOX accumulates in the myelin sheath during neurotoxin MPP+-induced neuronal death (Lo et al., 2008). While both apolipoprotein E (ApoE) and WWOX are involved in AD and TBI, the functional relationship between these two proteins (e.g., binding) needs further elucidation. Taken together, WWOX plays a crucial role in neural development and lipid metabolism. Without WWOX, severe neural diseases, metabolic disorders, and early death occur in humans and animals.

WWOX Gene Expression in the Brain

By analyzing the database of the Allen Brain Atlas, WWOX gene expression levels are shown to be significantly downregulated in the postmortem normal hippocampus, compared to those in the pons and white matter (n = 6; age 42.5 ± 13.4; 3 Caucasians, 2 blacks, 1 Hispanic) (Figure 2A). Only six normal brain samples exhibited detectable signals for WWOX gene expression (as shown in Supplementary Table S1). WWOX gene expression is upregulated in the cingulum bundle of the white matter by 2.31-fold, and in the central glial substance of the myelencephalon by 2.78-fold. In the "Possible AD" group (77 to 100+ years old), WWOX gene expression levels are barely changed in the hippocampus (Figure 2B). Also, compared to the hippocampus, WWOX gene expression is significantly downregulated in the parietal and temporal neocortex, but is significantly upregulated in the white matter of the forebrain (Figure 2B). Interestingly, similar expression profiles are observed in the "Traumatic Brain Injury (TBI)" group (77 to 100+ years old; Figure 2C). Also, in other gene databases (GTEx, Illumina, BioGPS, and CGAP SAGE, as summarized in GeneCards), WWOX gene expression levels in the brain, cerebellum, cortex, spinal cord, and tibial nerve are similar to those of other tissues and organs in normal humans. However, WWOX protein expression levels are significantly increased in the human fetal brain (GeneCards database shown above). This is in agreement with our observations using mouse fetal brains (Chen et al., 2004).

FIGURE 2 | WWOX gene and protein expression in human brain. (A) WWOX gene expression was analyzed using the database of the Allen Brain Atlas (http://www.brain-map.org). Detectable signals for WWOX gene expression were found in six postmortem normal individuals (age 42.5 ± 13.4; 3 Caucasians, 2 blacks, 1 Hispanic).
Representative WWOX gene expression levels in the brain white matter, pons, and hippocampus are shown. Also, see Supplementary Table S1 for WWOX gene expression in the normal brains (around one-fold changes for all indicated regions). (B,C) In the "Possible AD" and "Traumatic Brain Injury" groups (77–100+ years old), WWOX gene expression levels are shown in the indicated brain areas. (D,E) Expression of wild-type WWOX (46 kDa) and isoform WWOX2 (41 kDa) is downregulated in the neurons of AD hippocampi compared with normal controls (a representative set from five immunostains; magnification, 200×; data from Sze et al., 2004). (F) In AD patients, the protein levels of WWOX (n = 8), isoform WWOX2 (n = 8), and pY33-WWOX (n = 6) are significantly downregulated in the hippocampi as determined by Western blotting, compared to age-matched controls (∼32 ± 5% reduction, p < 0.005; data with minor revisions to the artwork are adapted from Sze et al., 2004, republished according to the guidelines of the Journal of Biological Chemistry).

WWOX Protein Downregulation in Alzheimer's Disease (AD)

It is generally agreed that gene expression does not always correlate with protein expression. The aforementioned WWOX gene expression levels do not correlate positively with the extent of WWOX protein expression. For example, downregulation of the WWOX gene occurs in the hippocampi of young adults (Figure 2A) and many other areas (Supplementary Table S1). However, WWOX protein expression levels are detectable in neurons of many regions in the brain (Nunez et al., 2005). Indeed, significant downregulation of the protein levels of WWOX, isoform WWOX2, and pY33-WWOX has been shown in the hippocampi of AD patients, compared to age-matched controls (Sze et al., 2004) (Figures 2D–F). However, during sciatic nerve injury, rapid upregulation of Wwox gene expression occurs in less than 30 min in the neurons of the dorsal root ganglion, followed by significant upregulation of WWOX protein and its Tyr33 phosphorylation in the damaged neurons within 24 h. Activated WWOX is needed to initiate neuronal death in the damaged tissue.

There is no positive correlation between WWOX/Wwox mRNA expression and protein expression. For example, translational blockade of Wwox mRNA has been shown in the development of skin squamous cell carcinoma (SCC) in hairless mice (Lai et al., 2005). During acute exposure of hairless mice to UVB, both WWOX and pY33-WWOX proteins are upregulated in epidermal cells within 24 h. SCCs then start to develop in 3 months. There are significant reductions in WWOX and pY33-WWOX proteins in the SCC cells. However, no downregulation of Wwox mRNA occurs (Lai et al., 2005). In SCC patients, significant reductions of WWOX and pY33-WWOX proteins are observed in non-metastatic and metastatic cutaneous SCCs, whereas no downregulation of WWOX mRNA occurs (Lai et al., 2005). Together, WWOX/Wwox mRNA is subjected to translational blockade in the skin and probably in other tissues and organs under pathological conditions.

WWOX also binds JNK via its Tyr33-phosphorylated first WW domain, and the binding results in neutralization of the functions of both proteins in a reciprocal manner (Chang et al., 2003a) (Figure 3).
Additionally, the first WW domain of WWOX physically interacts with ERK (extracellular signal-regulated kinase) (Huang and Chang, 2018). ERK has been implicated in Tau hyperphosphorylation (Augustinack et al., 2002) (Figure 3). Cyclin-dependent kinase 5 (Cdk5) hyperphosphorylates many substrates, such as amyloid precursor protein, tau, and many other proteins in the brain (Shah and Lahiri, 2015); however, a functional interaction between WWOX and Cdk5 has never been documented.

TIAF1 and TRAPPC6A Protein Aggregates in the Hippocampi of Mid-Aged Normal Individuals

In an inducible transgenic mouse model, long-term neuron-specific expression of TGF-β in the neocortex, hippocampus, and striatum results in deposition of amyloid fibrils in these brain areas (Ueberham et al., 2005). Deposits of apolipoprotein E (ApoE) are also found in perivascular areas (Ueberham et al., 2005). When TGF-β induction stops, the amyloid and ApoE aggregates stably remain in the brain and vascular lesions. We have discovered a few novel proteins whose aggregation is found in the hippocampal and cortical areas of both non-demented healthy individuals and demented AD patients. TGF-β1-induced antiapoptotic factor 1 (TIAF1; 12 kDa) is involved in the pathogenesis of AD and cancer, as well as in allograft rejection by activated T helper cells (van der Leij et al., 2003; Lee et al., 2010; Hong et al., 2013). The presence of aggregated TIAF1 protein in dead neurons is shown in the hippocampi of middle-aged normal humans (Lee et al., 2010). Notably, little or no Aβ aggregates are found in the TIAF1 plaques of mid-aged humans (Lee et al., 2010) (Figure 4A). For example, TIAF1 aggregation is detected in 59% of non-demented control hippocampi (age 59.0 ± 17.0, n = 41), and only 15% of the total samples have Aβ aggregates, as determined by filter retardation assay (Lee et al., 2010). However, 54% TIAF1 aggregation is shown in the hippocampi of older postmortem AD patients (age 80.0 ± 8.8, n = 97), in which 48% of the total AD samples possess Aβ aggregates. A representative TIAF1-containing plaque from the hippocampus of a 9-month-old APP/PS1 transgenic mouse is shown (Figure 4B). A minimal amount of Aβ aggregates is found within the center of the plaque. These observations imply that TIAF1 aggregates are difficult to remove with age by the ubiquitination/proteasomal degradation system. In vitro analysis revealed that TIAF1 undergoes self-polymerization, and this leads to amyloid β formation (Lee et al., 2010). Together, TIAF1 aggregation occurs in middle age, and this may result in slow formation of amyloid β in humans (Lee et al., 2010).

Under physiological conditions, the TGF-β1 signaling event does not cause protein aggregation. However, under aberrant signaling, TGF-β1 causes TIAF1 aggregation and reduces its binding to membrane APP, thus leading to APP de-phosphorylation at Thr668 and then to degradation and production of amyloid β monomer, the APP intracellular domain (AICD), and amyloid fibrils (Henriques et al., 2009; Lee et al., 2010; Chang et al., 2012; Hong et al., 2013). The presence of aggregated TIAF1 in the peritumor coats of metastatic brain tumor cells does not cause cancer cell death (Lee et al., 2010; Chang et al., 2012; Hong et al., 2013). However, the coat-associated TIAF1 aggregates are cytotoxic to neurons (Lee et al., 2010).
TRAPPC6A Protein Aggregation Is Upstream of TIAF1

We have identified a TGF-β-induced trafficking protein particle complex 6A (TRAPPC6A or TPC6A).

FIGURE 4 | TPC6AΔ and TIAF1 in a cascade of protein aggregation, which WWOX blocks. (A) Representative human AD hippocampal tissue sections were pre-stained with Bielschowsky stain, followed by staining with specific antibodies against TIAF1 (green) and Aβ (red), and with DAPI for nuclei. A representative confocal image of a plaque is shown (Lee et al., 2010). (B) Shown is a TIAF1-containing plaque from a hippocampal section of a 9-month-old APP/PS1 transgenic mouse, containing Aβ aggregates in the center (Lee et al., 2010). (C) In representative human brain cortical tissue sections from AD patients and age-matched controls, a pS35-TPC6AΔ-containing plaque is shown. In negative controls, the immunizing peptide blocks the immunoreactivity. (D,E) The presence of pS35-TPC6AΔ and pT181-Tau aggregates is shown in the cortex and hippocampus of 3-week-old Wwox knockout mice. (F) Endogenous TPC6A and TPC6AΔ shuttle between nucleoli and mitochondria. Ser35 phosphorylation supports shuttling from the nucleus to the nucleolus, and Tyr112 phosphorylation is needed for translocation from the nucleolus to the mitochondrion. (G) Upon WWOX downregulation, a sequential protein aggregation cascade occurs. When the WWOX level is reduced, pS35-TPC6AΔ starts to polymerize and recruits pS37-TIAF1 for further polymerization and accumulation in the outer membrane of mitochondria. The aggregated pS35-TPC6AΔ and pS37-TIAF1 cause caspase 3 activation and cytochrome c release. The activated caspase 3 leads to APP degradation and formation of Aβ, amyloid fibrils, and Tau tangles. SH3GLB2 aggregation (Lee et al., 2017) probably occurs right after that of pS37-TIAF1. (All data are adapted, with revisions to the artwork, from Lee et al., 2010, under the guidelines of the publishers.)

The TRAPPC6A/Trappc6a gene is associated with skin pigment formation in mice (Gwynn et al., 2006), AD in humans (Hamilton et al., 2011), and other neural diseases (Mohamoud et al., 2018). An intra-N-terminal deletion isoform of TRAPPC6A, designated TRAPPC6AΔ or TPC6AΔ, tends to spontaneously form aggregates or plaques in the extracellular matrix of the hippocampi of postmortem middle-aged normal humans and older AD patients (Figure 4C), and of 3-week-old Wwox gene knockout mice (Figure 4D). The presence of pT181-Tau, a marker for tau phosphorylation and aggregation in mice, is also shown in the cortex of Wwox knockout mice, but is barely detectable in wild-type and heterozygous Wwox mice (Figure 4D). Conceivably, without WWOX, cellular proteins tend to undergo aggregation. TPC6AΔ aggregates are also present in the human brain cortex and hippocampus, which are ∼50 and 40% positive, respectively, for both the control (59 ± 17 years old; n = 42) and AD (80 ± 8.8 years old; n = 96) groups, suggesting that the aggregated proteins are stable and hard to degrade with age. In comparison, protein aggregates of pY33-WWOX are significantly reduced, by ∼40%, in the AD samples compared to non-demented controls. Again, compared with the non-demented controls, tangled tau and Aβ aggregates are significantly increased in the AD samples. If our observations hold true, TPC6AΔ/TIAF1 polymerization starts in middle age and takes at least 10–40 years to generate significant amounts of tau and amyloid β protein aggregates for clinically defined AD symptoms.
We have recently determined that endogenous TPC6A undergoes a novel mitochondrion-nucleolus shuttling (Figure 4F). TGF-β1 causes nuclear TPC6A to undergo Ser35 phosphorylation, followed by entering the nucleoli and then relocating to the mitochondria as a dimer, which probably requires phosphorylation at Tyr112. The mitochondrial TPC6A shuttles back to the nucleolus. TPC6A carries WWOX to the nucleus. The TPC6AΔ protein possesses an internal in-frame deletion of amino acids #29–42 at the N-terminus. Wild-type TPC6A is less likely to undergo aggregation. Both TPC6A and TPC6AΔ proteins are able to shuttle between nuclei and mitochondria. Under aberrant signaling, TPC6AΔ molecules accumulate as aggregates in the mitochondria, where TIAF1 binds TPC6AΔ. Both proteins induce caspase activation and apoptosis (Figure 4G). A BAR domain-containing protein, SH3GLB2 (SH3 domain-containing GRB2-like protein, endophilin B2), is a potential downstream protein for aggregation via direct binding with TIAF1 (Pierrat et al., 2001) (Figure 4G). Aggregation of SH3GLB2 can be found in the brain cortex and hippocampus (Lee et al., 2017). Also, knockdown of WWOX by small interfering RNA (siRNA) induces spontaneous aggregation of TPC6AΔ and TIAF1 in vitro. Knockdown of TPC6AΔ fails to cause TIAF1 aggregation, suggesting that TPC6AΔ aggregates first, followed by TIAF1 aggregation. Collectively, when WWOX is significantly downregulated, TPC6AΔ becomes phosphorylated at Ser35 and forms aggregates in the nucleus, followed by relocation to the mitochondria to bind TIAF1, and both proteins become aggregated (Sze et al., 2015) (Figure 4G). Thus, one line of in vitro evidence reveals that without WWOX, the TPC6AΔ/TIAF1 aggregates cause formation of extracellular amyloid β and intracellular Tau aggregates (Lee et al., 2010). Further, in vivo evidence revealed that when the Wwox gene is knocked out in mice, aggregation of TIAF1, TPC6AΔ, amyloid β, Tau, and many other proteins occurs in the brains in less than 3 weeks (Figures 4A–E). Taken together, WWOX plays a role in limiting protein aggregation in vivo.

WWOX Phosphorylation at Ser14 and Its Potential Role in Neurodegeneration

Site-specific WWOX phosphorylation is associated with cell differentiation and many other events (Huang et al., 2016; Huang and Chang, 2018). During forced cell differentiation, WWOX rapidly undergoes phosphorylation at Ser14 in leukemia cells (Huang et al., 2016; Huang and Chang, 2018) and in diseased organs (Lee et al., 2017). pS14-WWOX does not cause apoptosis. In contrast, overexpressed pY33-WWOX induces apoptosis. This suggests that the levels of pS14-WWOX and pY33-WWOX must be well balanced in vivo. Under stress conditions, WWOX is phosphorylated at Tyr33 to induce apoptosis. During cell differentiation or disease progression (e.g., AD), WWOX is phosphorylated at Ser14 (Huang and Chang, 2018). Ten-month-old triple-transgenic (3xTg) mice for AD develop memory loss probably due, in part, to accumulated aggregates of TPC6AΔ, SH3GLB2, tau, and Aβ, along with inflammatory NF-κB activation, in the hippocampal and cortical areas (Lee et al., 2017). Notably, significantly increased phosphorylation of WWOX at Ser14, but not Tyr33, is shown in their brain lesions (Lee et al., 2017). Zfra blocks Ser14 phosphorylation of WWOX, significantly reduces accumulation of TPC6AΔ, SH3GLB2, tau, and Aβ aggregates, suppresses NF-κB activation, and restores memory in these mice (Lee et al., 2017).
In vitro analysis showed that Zfra binds cytosolic proteins and accelerates their degradation in a ubiquitin/proteasome-independent manner (Lee et al., 2017). B16F10 melanoma-growing nude mice develop neuronal death in the hippocampus, amyloid plaque formation in the cortex, and melanoma infiltration in the lung in less than 2 months (Lee et al., 2017). Zfra inhibits pS14-WWOX expression in the lung and brain lesions, clears up cortical plaques, and thereby suppresses cancer growth and neuronal death (Lee et al., 2017). Together, WWOX phosphorylation at Ser14 supports the progression of neurodegeneration in the hippocampus and plaque formation in the cortex, as well as cancer progression (Huang and Chang, 2018).

Is WWOX a Molecular Chaperone?

WWOX retards neurodegeneration pathology by binding and blocking GSK-3β, ERK, JNK, and probably other kinases, and by enhancing neurite outgrowth and neuronal differentiation (Sze et al., 2004; Wang et al., 2012). WWOX probably functions as a protein chaperone to prevent protein misfolding and degradation by the ubiquitin/proteasome system. Under stress conditions, activated WWOX with Tyr33 phosphorylation binds p53, and both proteins work synergistically to induce apoptosis (Chang et al., 2005a). Without this binding, p53 relocates to the cytoplasm and undergoes degradation (Chang et al., 2005a). It has been proposed that the second WW domain of WWOX is an orphan module devoid of ligand-binding function but is a chaperone necessary to stabilize the first WW domain in conducting protein/protein interactions (Farooq, 2015).

SEX STEROID HORMONES IN NEUROPROTECTION

Sex steroid hormones are decreased in menopausal women and aged men. Deficiency of 17β-estradiol (E2), a major form of estrogen, is implicated in age-related cognitive decline in humans and non-human primates. Estrogens modulate hippocampal synaptic spine growth, structural plasticity, and neuronal excitability, which affect long-term potentiation in learning and memory (Teyler et al., 1980; Brinton, 1993; Warren et al., 1995; Engler-Chiurazzi et al., 2016; Muñoz-Mayorga et al., 2018). Decreased serum sex steroid hormone levels in postmenopausal women or aged men increase the risk of developing NDs. Participation of sex steroid hormones in neuroprotection through the interaction of E2 and estrogen receptors (ERs) during brain injury and neurodegeneration has been extensively investigated and very well reviewed (Brann et al., 2007; Arevalo et al., 2015; Engler-Chiurazzi et al., 2016). There are two classes of ERs, namely nuclear and membrane receptors. Upon stimulation with estrogens, ERα and ERβ translocate to the nucleus, bind chromosomal DNA, and function as transcription factors (Shang et al., 2000; Safe and Kim, 2008; Carroll, 2016). Membrane estrogen receptors (mERs) are mostly G protein-coupled receptors and are responsible for transducing signals upon stimulation with an estrogen. Known mERs are GPR30, ER-X, and Gq-mER. During signaling, ERα and ERβ translocate to the nucleus and bind estrogen-responsive elements (EREs) in the promoter regions of specific genes to recruit transcriptional co-activators and co-repressors to control gene transcription (Shang et al., 2000; Safe and Kim, 2008; Carroll, 2016). Alternatively, ERs act as transcriptional partners at non-ERE sites.
ERs are also associated with plasma membrane lipid rafts to bind neurotransmitters and proteins, which drives growth factor receptor signaling to interact with other neuroprotective signaling pathways or to elicit redundant neuroprotection signaling (e.g., PI3K-AKT, ERK1-ERK2, and JAK-STAT3) (Ramírez et al., 2009; Arevalo et al., 2015).

WWOX AS A RECEPTOR FOR SEX STEROID HORMONES FOR SIGNALING

WWOX is a potential cytosolic or membrane receptor for sex steroid hormones (Chang et al., 2005b; Su et al., 2012). WWOX is highly expressed in hormone- or enzyme-secreting organs. WWOX is most abundant in ductal epithelial cells, such as those of the breast and prostate. WWOX controls the growth and progression of breast and prostate cancers (Bednarek et al., 2000; Chang et al., 2005b; Nunez et al., 2005; O'Keefe et al., 2011). Loss of WWOX accelerates cancer growth and metastasis. The SDR domain of WWOX is associated with aerobic metabolism and control of the generation of reactive oxygen species (O'Keefe et al., 2011; Choo et al., 2015), which is crucial in limiting the progression of neurodegeneration (Su et al., 2012; Chang et al., 2014).

Estrogens and Androgens Bind the SDR Domain of WWOX

Estrogens and androgens bind the NSYK (Asn-Ser-Tyr-Lys) motif in the C-terminal SDR domain of WWOX (Chang et al., 2005b; Su et al., 2012). This binding causes nuclear accumulation of activated, Tyr33-phosphorylated WWOX (pY33-WWOX) (Chang et al., 2005b; Su et al., 2012). Excessive accumulation of pY33-WWOX in the nucleus induces apoptosis. Notably, estrogen- or androgen-mediated WWOX activation is independent of ERs or the androgen receptor (AR), suggesting that WWOX by itself acts as a receptor (Chang et al., 2005b; Su et al., 2012). WWOX expression is significantly upregulated during the early stage of normal prostate and breast tissue progression toward hyperplasia and cancerous stages (Chang et al., 2005b). Upon reaching the metastatic stage, cancer cells do not express WWOX due, in part, to hypermethylation of the promoter region. Indeed, the expression levels of WWOX correlate positively with hormone receptor status, but negatively with the clinical stages of breast and ovarian cancers (Chang et al., 2005b; Guler et al., 2011). Loss of WWOX confers resistance to tamoxifen due to upregulation of ER and human epidermal growth factor receptor 2 (Her2) and their transcriptional activities (Guler et al., 2007; Salah et al., 2010). Tamoxifen is one of the estrogen receptor modulators, which regulates hormone-secreting tissue activities for the treatment and prevention of ER-positive cancers.

FIGURE 5 | Role of E2/ER/WWOX in initiating protective pathways. The pathways include: Route I, E2/ER-mediated upregulation of antiapoptotic Bcl-2 family proteins and downregulation of proapoptotic Bcl-2 family members (Yao et al., 2007) (route in yellow). Route II, activation of the pro-survival ERK/WWOX and PI3K/Akt signaling cascades to block pro-apoptotic JNK signaling and protect neural tissues from damage (Tang et al., 2014) (route in blue). Route III, E2 activates PI3K via ERα and mERs, followed by activating Akt to phosphorylate GSK-3β at Ser9 for functional inactivation (Ruiz-Palmero et al., 2013) (route in green). Route IV, the SDR domain of WWOX binds and limits GSK-3β activity for neuroprotection (Wang et al., 2012) (route in light blue).
Route V, suppression of GSK-3β (e.g., by WWOX) leads to reduced β-catenin degradation, which is regulated by E2 through the ERα/PI3K/AKT/GSK-3β signaling pathway (Perez-Alvarez et al., 2012) (route in purple). Route VI, in the Wnt/Frizzled signaling pathway, Wnt protein induces the activation of Dvl to block the activity of GSK-3β. Without Wnt, β-catenin is subjected to destruction by the complex of axin, APC, CK1α, and GSK-3β (Bouteille et al., 2009). Transiently overexpressed WWOX binds Dvl to suppress Wnt signaling (Bouteille et al., 2009) (route in orange).

Together, these observations suggest that WWOX functions as an enzyme or a receptor involved in sex steroid metabolism to modulate disease progression.

Crosstalk of ERs, WWOX, and Wnt Signaling

Table 1 shows a comparison between WWOX and ERs/mERs regarding their molecular structures, actions, and potential mechanisms. WWOX is involved in many signaling pathways (Chang et al., 2014; Chang, 2015; Huang and Chang, 2018), and this allows its crosstalk with the signaling from ERs and mERs. For example, ERs activate the ERK1/2 and PI3K signaling cascades (Mannella and Brinton, 2006; Jover-Mengual et al., 2010; Tang et al., 2014), and WWOX physically binds ERK1/2 to support cell survival (Lin et al., 2011) (Figure 5) and lymphocyte differentiation (Huang et al., 2016).

PERSPECTIVES

A Focus on WWOX and Protein Aggregation in Middle Age

Both aggregated tau and Aβ are considered key pathological markers of AD and have been the center of focus for drug development over the past several decades. Aggregated tau and Aβ are usually found in the brains of AD patients over 70 years old, while normal individuals aged 40–70 possess very low amounts of aggregated tau and Aβ. We have determined the presence of aggregated proteins such as TPC6AΔ and TIAF1 in approximately 50% of the brains of mid-aged normal humans (Sze et al., 2015). Indeed, WWOX downregulation causes self-aggregation of TPC6AΔ and TIAF1 in vitro (Lee et al., 2010). Wwox gene knockout mice rapidly exhibit aggregation of many proteins in the brains within just 15 days after birth. These proteins include TPC6AΔ, TIAF1, SH3GLB2, tau, and Aβ (Lee et al., 2017). Notably, human newborns with WWOX deficiency rapidly develop severe neural diseases, metabolic disorders, retarded growth, and early death. Because TRAPPC6AΔ and TIAF1 are starters of protein aggregation, these proteins are indeed potential targets for drug development. Development of therapeutic peptides and humanized monoclonal antibodies is under way.

Zfra Initiates a Novel Immune Response to Block Protein Aggregation and Restores Memory Loss

Zfra restores memory deficits in Alzheimer's disease triple-transgenic mice by blocking the aggregation of TPC6AΔ, SH3GLB2, Tau, and amyloid β, and by reducing inflammatory NF-κB activation (Lee et al., 2017). As a WWOX-binding protein, exogenous Zfra peptide, when introduced into the circulation in mice, is mainly deposited in the spleen. Zfra binds membrane hyaluronidase Hyal-2 on non-T/non-B Z lymphocytes. Z cells then become activated to suppress cancer growth. Intriguingly, Z cells exhibit a memory function in killing cancer cells, even though these cells have never been exposed to the cancer cells. Autologous Z cells, once activated by Zfra, are of great therapeutic use in treating cancer and probably neurodegeneration such as AD.
Both full-length Zfra and a truncated 7-amino-acid Zfra4-10 are effective in suppressing cancer growth and restoring memory loss (Lee et al., 2017). Since Zfra is stably retained on the Z cell surface, Zfra activates the Hyal-2/WWOX/Smad4 signaling in Z cells. Peptides and monoclonal antibodies are being developed to target membrane Hyal-2 as well as WWOX and to activate Z cells to block cancer and neurodegeneration.

A pTyr33-WWOX Peptide as an Agent for Blocking Neuronal Injury and Death

Finally, an 11-amino-acid phospho-Tyr33 WWOX peptide was developed to block neurotoxin MPP+-induced neuronal death in the brain (Lo et al., 2008). This phospho-peptide effectively suppresses neuronal death via inhibition of JNK1 activation. In controls, the non-phospho-WWOX peptide has no effect. The phospho-Tyr33 WWOX peptide is now being tested for its efficacy in blocking neuronal death in AD and traumatic brain injury.

AUTHOR CONTRIBUTIONS

C-CL and C-CT carried out the literature review. Y-AC, C-CL, P-CH, and N-SC prepared the schematic graphs. C-HC reviewed and revised the manuscript. C-IS and N-SC wrote the manuscript. N-SC completed the final version and provided rebuttal letters to all reviewers. All authors read and approved the final manuscript.
Linkage of OGC WPS 2.0 to the e-Government Standard Framework in Korea: An Implementation Case for Geo-Spatial Image Processing

There are many cases wherein services offered in geospatial sectors are integrated with other fields. In addition, services utilizing satellite data play important roles in daily life and in sectors such as environment and science. Therefore, a management structure appropriate to the scale of the system should be clearly defined. The motivation of this study is to resolve such issues, apply the standards related to a target system, and provide practical strategies with a technical basis. South Korea uses the e-Government Standard Framework, based on the Java-based Spring framework, to provide guidelines and environments with common configurations and functions for developing web-based information systems for public services. This web framework offers common sources and resources for data processing and interface connection to help developers focus on business logic in designing a web system. In this study, a geospatial image-processing system, linked with the Open Geospatial Consortium (OGC) Web Processing Service (WPS) 2.0 standard for real geospatial information processing and based on this standard framework, was designed and built utilizing fully open sources. This is the first implementation case based on WPS 2.0 running on the e-Government Standard Framework. Establishing a standard for its use will be important, and the system built in this study can serve as a reference for the foundational architecture in building geospatial web service systems with geodata-processing functionalities in government agencies.

Introduction

With advances in computer technology and growth in demand for services utilizing it, diverse information has become available. For example, geospatial information and geo-based satellite images are used as base maps in services such as route guidance navigation, portal map services, facility management, and site suitability analysis, and are used in environmental applications in nearly inaccessible areas and in regional analysis through geo-based image processing. In addition, remotely sensed data and information may be used by stakeholders to make effective decisions in managing disasters [1]. It is expected that services will emerge in much improved forms as better means and methods are applied with the development of information technology [2]. We should consider designs that are capable of managing complicated services systematically and expanding them easily. For engineering issues in geospatial applications, integration, customization, or optimization across multiple closely or loosely coupled technological components at mature or maturing stages is also important. Accordingly, the necessity of international standard interfaces and standardized frameworks has increased. This work presents an integrated application based on an open source strategy with heterogeneous standard sources, namely international standards for the geospatial area and a standard framework for so-called electronic government.

The International Organization for Standardization (ISO), which develops and distributes internationally accepted standards, and the Open Geospatial Consortium, Inc.
(OGC), which leads geospatial industry standards, are developing standards for geospatial information. ISO is in the process of standardizing content related to collecting, processing, analyzing, and presenting geospatial information via technical committee (ISO/TC) 211, applicable to Geographic Information System (GIS) standards [3]. OGC, an organization oriented toward open standards, researches and establishes technical standards for data compatibility and interoperability. These standards include Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), and Web Processing Service (WPS) [4]. A system developed in compliance with international standards shows many advantages, such as information integration, quality improvement, consistency, easy maintenance, and cost reduction in the utilization of geospatial information [5,6]. Additionally, it allows existing and new systems to share and distribute information, thereby improving compatibility and interoperability. Therefore, applying international standards should be considered in the system design stage to facilitate efficient operation and geospatial service management. Among the OGC standards, WPS, an interface and communication method through which geospatial processing can be defined and accessed from the Web, is compatible with other OGC standard web services [7].

Electronic government (e-Government) in the Web environment is currently an important technical theme in most countries [8][9][10]. In South Korea, the e-Government Standard Framework, based on the Spring Framework, is developed and distributed for free to improve web system quality and to help standardize and ease development. The Spring Framework provides a comprehensive programming and configuration model for Java-based applications and supports infrastructure at the application level [11]. The e-Government Standard Framework, which offers application architecture and the basic functions and common components necessary for web system development, provides features such as a Development Environment, Runtime Environment, Administration Environment, Operation Environment, and Common Components. Through these functions, the framework resolves dependency on specific vendor solutions and provides standards that can be linked with commercial solutions, ensuring their interoperability. In addition, it provides a function for hybrid app development. A hybrid app utilizes elements of both native applications for a specific platform on a mobile device and web applications available on multiple platforms over the Internet through any browser. Because premade common modules can be incorporated at the time of building a web system, rapid development and a reduction of quality gaps between different systems can be expected. There is also the advantage of increased reusability of developed modules.
As recent web services deal with multiple types of data and a large volume of content, distributed systems tend to be dominant over centralized ones. Furthermore, interoperable geoprocessing via the WPS interface standard allows a client to request the algorithms or functions a user wants on remote servers, without installing external tools. Meanwhile, the e-Government framework is a basic requirement for developing a public web service system. If geo-based service systems with geodata-processing functions are to be operated in distributed environments in public sectors, both WPS and the e-Government framework are crucial factors from the viewpoint of software engineering. This is the case for Korea, but the same situation may occur in other countries that already have an e-Government framework or plan to establish one. That is the main motivation of this study.

This study reflects some benefits of standards and technology that are deemed important as systems become more complicated, and constructs an integrated trial system in connection with these merits. Among the reflected standards and technologies are WPS, a geospatial standard, and the e-Government Standard Framework. The trial system performs satellite image-processing functionalities and uses a request interface based on the WPS 2.0 standard. The ZOO-Project [12], an open source framework to create and chain WPS-compliant web services, was used as a WPS platform conforming to WPS 2.0. The ZOO-Project provides various components to utilize WPS and comprises Server, Services, API (application programming interface), and Client. A geospatial information-processing interface can be used, and its function is provided via Server and Services. While the WPS interface offers ways to implement geospatial processing on the Web, appropriate functions should be developed separately for real processing. For this purpose, the Geospatial Data Abstraction Library (GDAL), a geospatial input/output library, and the Orfeo ToolBox (OTB), which focuses on satellite image-processing functions, were used. Both are open sources. The trial system was developed based on the e-Government Standard Framework. In South Korea, the e-Government Standard Framework is a set of specifications to guide the implementation of all types of web-based information systems for public services supported by governments and public enterprises. It defines basic requirements for developing web services. When government institutions or agencies develop a web-based system to serve geo-based data services and their derived contents, they should adopt this standard guide. When a development project is conducted based on this framework, complicated tasks, such as specific data processing in a certain application field, are partially supported, as is the case with Spring Framework features. Because common component functions are also available, the burden of development may be eased once the utilization method is understood. It is possible to deliver various kinds of information to the client, such as function lists, input variables, and status values, through the developed modules. This trial system is a prototype of a public service for interoperable processing of geo-based images among two or more remote servers managed by governments or public sectors. Therefore, it needs a WPS platform to receive satellite image-processing requests and execute them accordingly. The module in the trial system follows the WPS 2.0 communication method and is
able to implement communication with other WPS 2.0-compliant servers.

OGC WPS 2.0 and Open Source

The OGC, an international standardization body, has established several standards for providing geospatial services on the Web. These OGC standards, widely used in industry and academia, have also had great influence internationally [13][14][15]. Among them, WPS is the standard referred to when developing a web system that supports geospatial processing. If developed in compliance with the standard, web systems can interact with other servers and improve system reusability. In addition, geospatial-processing applications can be improved by enhancing the interoperability and accessibility of geospatial information [16]. WPS 2.0 is currently the latest version, and it has been modified with added functions to meet the requirements of enhanced web technology. One of the major changes is that WPS 2.0 supports both synchronous and asynchronous processing, while WPS 1.0 supports synchronous processing only. When conducting geospatial processing on the Web with the support of asynchronous processing, it is possible to build a system in which a new or subsequent geospatial-processing service can be initiated without waiting for the completion of a previously executed geospatial process. WPS 2.0 has six interface definitions covering flows, usages, requests, and responses: GetCapabilities, DescribeProcess, Execute, GetStatus, GetResult, and Dismiss [17].

GetCapabilities is an interface that returns metadata on WPS servers in the form of XML documents. The metadata include the list of geospatial-processing functions and the WPS interface communication methods. The DescribeProcess interface returns detailed information on geospatial-processing functions in XML document form. When requesting DescribeProcess, the identifiers of the geospatial-processing functions for which details are desired should be sent with the request. The returned detailed information includes descriptions of the function, input values, and result values. Execute is an interface that requests the execution of geospatial function processing. This interface does not wait for the completion of the execution request but immediately returns a JobID (a job identifier) in an XML document. The processing progress status and result values can be identified through the JobID. Several JobIDs can be created for one function, and each JobID performs geospatial processing individually. This new feature was added with the structural change that supports asynchronous processing. GetStatus, an interface that shows the progress status, returns one of four statuses in XML documents: Running, Succeeded, Failed, and Accepted. GetResult is an interface that returns the geospatial-processing function result values corresponding to a JobID. If the progress status obtained through GetStatus is "Succeeded", XML documents containing the result values can be created on request. Dismiss is an interface that terminates the process corresponding to a JobID. Examples of WPS standard-compliant open sources include 52°North, Deegree, GeoServer, and PyWPS [18].
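To make these request patterns concrete, the minimal Java sketch below issues a KVP-encoded GetCapabilities request with the GET method and an asynchronous Execute request as a POSTed XML document, following the interface descriptions above. The endpoint URL and process identifier are hypothetical placeholders (zoo_loader.cgi is the customary ZOO-Kernel CGI name), the input fragment is assumed to be prepared by the caller, and the code is an illustrative sketch rather than the trial system's actual implementation.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class WpsRequestSketch {

        // Hypothetical WPS 2.0 endpoint; replace with a real server URL.
        static final String WPS = "http://example.org/cgi-bin/zoo_loader.cgi";

        // GetCapabilities via the KVP (GET) binding: returns server metadata as XML.
        static String getCapabilities() throws Exception {
            URL url = new URL(WPS + "?service=WPS&version=2.0.0&request=GetCapabilities");
            return read((HttpURLConnection) url.openConnection());
        }

        // Asynchronous Execute via POST: the response is a StatusInfo XML document
        // carrying a JobID rather than the processing result itself.
        static String executeAsync(String processId, String inputsXml) throws Exception {
            String body =
                "<wps:Execute service=\"WPS\" version=\"2.0.0\" response=\"document\" mode=\"async\""
              + " xmlns:wps=\"http://www.opengis.net/wps/2.0\""
              + " xmlns:ows=\"http://www.opengis.net/ows/2.0\">"
              + "<ows:Identifier>" + processId + "</ows:Identifier>"
              + inputsXml   // wps:Input/wps:Output elements prepared by the caller
              + "</wps:Execute>";
            HttpURLConnection con = (HttpURLConnection) new URL(WPS).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "text/xml");
            con.setDoOutput(true);
            try (OutputStream os = con.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
            return read(con);
        }

        // Reads an HTTP response body as text.
        static String read(HttpURLConnection con) throws Exception {
            StringBuilder sb = new StringBuilder();
            try (BufferedReader br = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = br.readLine()) != null) sb.append(line).append('\n');
            }
            return sb.toString();
        }
    }

The JobID contained in the returned StatusInfo document is what the subsequent GetStatus and GetResult requests require.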
The ZOO-Project, an open source platform that supports both WPS 1.0 and WPS 2.0, was applied to the trial system for WPS 2.0 application. The ZOO-Project is developed in C, Python, and JavaScript and provides a developer-friendly framework for building WPS servers. For this purpose, it offers components to build and utilize WPS: ZOO-Kernel, ZOO-Services, ZOO-API, and ZOO-Client. ZOO-Kernel, a CGI (Common Gateway Interface) program, implements interfaces and communication methods following the WPS standard, with requests and responses conforming to the standard. ZOO-Services is a component compatible with ZOO-Kernel, and its main role is to manage and provide geospatial-processing functions as services. The services offered by ZOO-Services are composed of configuration files and the source code to be executed. These services can be developed and configured independently, or they can support interworking with geospatial open sources such as GDAL, OTB, the Computational Geometry Algorithms Library (CGAL) [19], Geographic Resources Analysis Support System (GRASS) GIS [20], and System for Automated Geoscientific Analyses (SAGA) GIS [21]. In the case of separate development, interoperability is ensured across diverse environments because many development languages (including Python, PHP, Java, and JavaScript) are supported. When a developed service is registered on ZOO-Services, ZOO-Kernel provides the corresponding service via WPS requests. A WPS server can be built using these two components as the essential parts of the ZOO-Project. ZOO-API, a library written in JavaScript, provides an API that creates or executes services to be registered on ZOO-Services on servers. ZOO-Kernel and the JavaScript engine SpiderMonkey are required on the server side to use ZOO-API. Lastly, ZOO-Client is a JavaScript API that can be used on the client side. It offers ways to interact with other WPS servers, including the ZOO-Project. ZOO-API and ZOO-Client, which are optional, provide web system development methods following WPS. With all the components provided by the ZOO-Project and additional client development, it is possible to build a geospatial-processing web system. However, the main focus of this study is building a system following and utilizing WPS 2.0 linked to the e-Government Standard Framework. Furthermore, the system was built considering linkages to open sources that will support WPS 2.0 in the future. Therefore, only the essential components of the ZOO-Project were used, excluding ZOO-API and ZOO-Client, which are optional components.

Using the interfaces and communication methods defined in WPS 2.0 makes it possible to show process information and build process-handling user interfaces. The process progress status can be checked in real time after a process execution request, and functions such as killing processes during processing can be implemented. This functionality is possible because the WPS 2.0 standard supports asynchronous processing when handling geospatial information, and a system structure capable of multiprocessing can be built using this feature.
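As a minimal sketch of such status tracking, assuming a hypothetical ZOO-Kernel endpoint and an arbitrary two-second polling interval, the following Java fragment polls GetStatus through the KVP binding until the job reaches a terminal state and then fetches the result document:

    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class WpsJobPolling {

        static final String WPS = "http://example.org/cgi-bin/zoo_loader.cgi"; // hypothetical

        // Reads the <wps:Status> value (Accepted, Running, Succeeded, or Failed)
        // from a GetStatus response for the given JobID.
        static String getStatus(String jobId) throws Exception {
            URL url = new URL(WPS + "?service=WPS&version=2.0.0&request=GetStatus&jobid=" + jobId);
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder().parse(url.openStream());
            return doc.getElementsByTagNameNS("http://www.opengis.net/wps/2.0", "Status")
                      .item(0).getTextContent().trim();
        }

        // Polls until the job reaches a terminal state, then fetches the result document.
        static String waitForResult(String jobId) throws Exception {
            String status;
            while (true) {
                status = getStatus(jobId);
                if (status.equals("Succeeded") || status.equals("Failed")) break;
                Thread.sleep(2000); // Accepted or Running: check again later
            }
            if (status.equals("Failed")) {
                throw new IllegalStateException("WPS job " + jobId + " failed");
            }
            URL url = new URL(WPS + "?service=WPS&version=2.0.0&request=GetResult&jobid=" + jobId);
            return new String(url.openStream().readAllBytes(), StandardCharsets.UTF_8);
        }
    }

Because each JobID is independent, several such polls can run side by side, which is exactly the multiprocessing structure the asynchronous interface allows.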
The e-Government Standard Framework in South Korea

In South Korea, the e-Government Standard Framework (eGovframework) has been developed and applied to establish the standard basic environment required for the development of web service systems in public projects. The eGovframework aims to standardize software and improve the quality and reusability of web services. It also intends to reduce product quality gaps between businesses and improve investment efficiency. The major features of the eGovframework include open standard conformance via open source utilization, provision of standards linkable to commercial solutions, implementation of nationwide standardization of web system development, flexibility and ease of replacement through the modularization of each service, support for mobile web and hybrid apps, and the provision of environments for web system development. Using the eGovframework for web system development offers many advantages, such as cost reduction through the reuse of common components, resolution of dependency on specific vendor solutions, improved interoperability with commercial solutions, and easy maintenance. Furthermore, the eGovframework has been distributed for free to promote its usage in private sectors as well as in public projects, and as of June 2016, it had been applied to 649 public and private information system projects in areas including administration, housing, disaster prevention, and statistics. As proof of its practicality, the eGovframework has been rapidly spreading across South Korea and has been applied to government information systems in various countries: the e-learning system in Saudi Arabia, the urban administration system of Da Nang city in Vietnam, the medical information platform in Mexico, the electronic customs system in Ecuador, the e-bidding system in Tunisia, and so forth [22].

The common components of the eGovframework are classified considering the frequency of redundant development, reusability, and the application of standardization. This classification also elicits the functions with high development productivity and efficiency required for building web systems. Explanations of each category are as follows: the Common Technological Service, a common component that runs on the eGovframework, comprises user directory/authentication, security, statistics/reporting, collaboration, user support, system management, system/service integration, and digital asset management, providing 136 components in total. The Elementary Technological Service is a common component that works in a normal Java environment, regardless of the eGovframework. The Elementary Technological Service provides 104 components, including utilities such as calendar and format/calculation/conversion functions. The New Mobile Common Component offers functions optimized for mobile devices, utilizing the User Experience (UX) support function. Additionally, the New Mobile Common Component, which includes general common components, provides 11 components, including mobile common technology, support services, and mobile device support components.
Linkage of WPS and the eGovframework

Various model studies based on WPS were reviewed for reference in designing an integrated trial system linking WPS 2.0 to the eGovframework. The system design and WPS utilization case studies include the development of geospatial-processing workflow design tools: first, the Open Modeling Interface (MI), WPS, and Sensor Web Enablement (SWE) were implemented; second, various geospatial information analysis model encapsulation methods were developed following the WPS standard; third, a web service model was formed by combining Open MI and WPS; finally, a web service design for automatic quality evaluation was applied [23][24][25][26]. These case studies used WPS-related information as references in the stages of designing and building systems. Research cases on developing open sources, such as 52°North [27] and PyWPS [28], were also consulted. The subjects of other open source case studies included models capable of creating thematic maps on the Web by linking WMS, WFS, and WPS; a distributed geospatial information-processing implementation utilizing WPS; and the design and development of geospatial automatic interpolation web services [29][30][31]. Reference cases also included the use of WPS 1.0 and WPS 2.0 together, the linking of the Spring Framework to WPS, and the visualization of public data and geospatial data based on the eGovframework [32][33][34].

The integrated trial system, designed by linking WPS 2.0 with the eGovframework and comprising a server and a client, was built using a number of open sources. Table 2 shows the environments and open sources used to build the trial system. The Web environment was constructed using Ubuntu, Apache, and Tomcat, on which the eGovframework-based web system was built. The WPS standard application and satellite image processing were implemented using open sources, without implementing them directly. The ZOO-Project was utilized for the WPS standard application, and GDAL and OTB were used for satellite image processing. GeoServer [35], a geospatial data server, was employed to manage satellite images and processing results, so that the client can easily call geospatial data and visualize them. On the client side, jQuery [36], a JavaScript library, and OpenLayers 3 [37], a web-mapping library, were used to compose the user interface (UI) and to visualize processing results based on the data returned by the WPS interface.
Figure 1 shows the design diagram of the integrated trial system. The client is composed of WPS 2.0 request modules, modules composing the UI based on the returned data, and modules visualizing satellite images and processing results. The request modules conduct requests through XML binding in accordance with the request method of the WPS interface. For this purpose, a request schema appropriate to each interface needs to be built for each module. Among the request modules, the GetStatus and GetResult request modules can be used only after an Execute request, because they require JobID values. The UI composition module composes the satellite image-processing lists, processing functions, and progress status on the client screen. The visualization module visualizes background maps, satellite images registered on GeoServer, and processing results. Processing results are visualized using the GeoServer layer names returned from GetResult requests. All client-side modules were implemented using jQuery, and, in the case of the visualization module, OpenLayers 3 was utilized to visualize the required geospatial information.

The server is composed of a web system based on the eGovframework, the ZOO-Project, and GeoServer. The Rest Controller receives WPS 2.0 requests from the client on the Web system; GetCapabilities, DescribeProcess, GetStatus, and GetResult requests are made with the GET method, and Execute requests with the POST method. The Rest Controller runs the services corresponding to the received requests, connects to the ZOO-Project via the Data Access Object (DAO), and retrieves the XML documents matching the request. The service extracts only the necessary information from the XML documents, sends it to the Rest Controller, and returns it to the client. The information extracted per request includes the satellite image-processing function list, detailed information on the satellite image-processing functions, processing results, and JobIDs, which are contained in the metadata. The ZOO-Project is designed to return the information requested from the DAO or to carry out an Execute request. This works because the DAO conducts connection requests to the ZOO-Kernel through the ZOO-Project CGI Connector. The ZOO-Kernel plays a server role through interaction with ZOO-Services, and ZOO-Services enables responses corresponding to WPS requests through its registered services. While the services included in ZOO-Services can be developed either independently or with open sources capable of interworking, this study provides services linking GDAL, OTB, and GeoServer. Because only open source linkage was considered in service development, Python was used among the development languages supported by the ZOO-Project.
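The proxy role of the Rest Controller can be sketched in eGovframework-style Spring code as follows. The ZOO-Kernel URL, the mapping paths, and the extract() placeholder are assumptions for illustration; in the real trial system, the extraction is performed by separate service classes and the connection by a DAO, as described above.

    import org.springframework.http.HttpEntity;
    import org.springframework.http.HttpHeaders;
    import org.springframework.http.MediaType;
    import org.springframework.web.bind.annotation.*;
    import org.springframework.web.client.RestTemplate;

    @RestController
    @RequestMapping("/wps")
    public class WpsProxyController {

        private static final String ZOO_KERNEL = "http://localhost/cgi-bin/zoo_loader.cgi"; // hypothetical
        private final RestTemplate rest = new RestTemplate();

        // GET-style operations: GetCapabilities, DescribeProcess, GetStatus, GetResult.
        @GetMapping("/{operation}")
        public String forwardGet(@PathVariable String operation,
                                 @RequestParam(required = false) String identifier,
                                 @RequestParam(required = false) String jobid) {
            StringBuilder url = new StringBuilder(ZOO_KERNEL)
                    .append("?service=WPS&version=2.0.0&request=").append(operation);
            if (identifier != null) url.append("&identifier=").append(identifier);
            if (jobid != null) url.append("&jobid=").append(jobid);
            return extract(operation, rest.getForObject(url.toString(), String.class));
        }

        // Execute is forwarded as an XML document with the POST method.
        @PostMapping("/Execute")
        public String forwardExecute(@RequestBody String executeXml) {
            HttpHeaders headers = new HttpHeaders();
            headers.setContentType(MediaType.TEXT_XML);
            String xml = rest.postForObject(ZOO_KERNEL,
                    new HttpEntity<>(executeXml, headers), String.class);
            return extract("Execute", xml); // e.g., pull the JobID out of the StatusInfo document
        }

        // Placeholder for the service layer that refines ZOO-Kernel responses
        // (function lists, input details, status values, JobIDs) for the client.
        private String extract(String operation, String xml) {
            return xml;
        }
    }

In the trial system itself, these responsibilities are split across the Rest Controller, service, and DAO layers rather than collapsed into a single class.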
As open sources utilized in the linkage services, GDAL converts stored satellite images into a processable format, and OTB conducts satellite image processing on the converted images. Finally, the processing result is registered on GeoServer, and the client visualizes it. The DAO of the Web system can carry out information and Execute requests on the satellite image-processing functions by linking to the ZOO-Project. On an Execute request, the ZOO-Project creates a JOB and executes the registered service. The JOB has a JobID, the progress status of the service, and result values, and can respond to status or result requests.

A Trial System with Geospatial Image-Processing Function

A trial satellite image-processing system was built based on the diagram designed in Figure 1. The eGovframework-based web system, which can be provided with a common component function, offers a development environment based on Eclipse [38]. In this development environment, it is possible to build an eGovframework-based web project. The project can add a common component function, as shown in Figure 2. When common components are added, the required functions can be selected and added; in the figure, the mobile common function and user authentication functions were added. When the common component function is added, the eGovframework package is created in the Java package, which includes packages related to mobile common functions and the eGovframework. By utilizing the added packages, the ability to construct a web environment capable of running on mobile devices and an authentication function, such as login, can be applied to the Web system. As the result of this application, Figure 3 shows the login box of the Web system and the entry panel accessed from a mobile device. Because of the user authentication common component, the login page is loaded when the Web system is accessed. Likewise, adding the mobile common component function causes a login page dedicated to mobile devices to be loaded when the system is accessed from the mobile web. Therefore, the common component function enables developers to add necessary functions immediately, without having to develop them for the web system themselves.

Figure 4 shows the client UI screen constructed through WPS 2.0 requests. Figure 4a presents the satellite image-processing function list. When the client requests a function list, the Web system conducts the GetCapabilities service and receives XML documents containing metadata. The Web system refines the documents, extracts the list of satellite-processing functions, and returns it to the client, based on which the satellite image-processing function list is visualized. Figure 4b is a modal view used to run a satellite image-processing function. When the data required to compose the modal view are requested, the Web system runs the DescribeProcess service. After running the service, XML documents containing detailed information on the satellite image-processing function are received. The Web system extracts only the information necessary to conduct the satellite image-processing function and transfers it to the client. The client creates a modal view based on the information received.
Figure 5 shows the progress status and results of the satellite image-processing function. A click event on the Execute button in the satellite image-processing modal view initiates an Execute request. The WPS server then returns a JobID value and implements satellite image processing based on the input values in the modal view. The received JobID value is used to retrieve the progress status and results of the satellite image-processing function. Figure 5a shows the current progress status of the satellite image-processing function. When progress status information is requested from the Web system, it is retrieved and returned through the GetStatus service. To show the processing status continuously, progress status information is requested periodically until the satellite image-processing function is completed. Upon completion of processing, the status is indicated as complete, and the processing result information is requested. In the Web system, the GetResult service is carried out to fetch the processing result, which is a layer name registered on GeoServer. Using the GeoServer layer name, the client fetches the processing result and visualizes it, as shown in Figure 5b.
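The asynchronous request flow just described can be summarized in a short client-side sketch. The sketch below uses Python with the requests library against a generic WPS 2.0 endpoint; the endpoint URL, process identifier, and input name are illustrative assumptions, and a real WPS 2.0 Execute request typically carries a fuller XML body than the simplified one shown here.

```python
# Hedged sketch of the asynchronous WPS 2.0 flow described above:
# Execute (POST) -> JobID -> GetStatus polling (GET) -> GetResult (GET).
import time
import requests
import xml.etree.ElementTree as ET

WPS = "http://localhost/cgi-bin/zoo_loader.cgi"   # placeholder endpoint
NS = {"wps": "http://www.opengis.net/wps/2.0"}

# 1. Execute request (POST, XML body, asynchronous response mode).
execute_xml = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="2.0.0" response="document" mode="async"
    xmlns:wps="http://www.opengis.net/wps/2.0"
    xmlns:ows="http://www.opengis.net/ows/2.0">
  <ows:Identifier>CloudDetection</ows:Identifier>
  <wps:Input id="InputImage">
    <wps:Data>scene_001.tif</wps:Data>
  </wps:Input>
  <wps:Output id="Result" transmission="value"/>
</wps:Execute>"""
resp = requests.post(WPS, data=execute_xml,
                     headers={"Content-Type": "text/xml"})
job_id = ET.fromstring(resp.content).find("wps:JobID", NS).text

# 2. Poll GetStatus (GET) with the JobID until the job finishes; the trial
#    system's client polls periodically in the same way.
while True:
    status = requests.get(WPS, params={
        "service": "WPS", "version": "2.0.0",
        "request": "GetStatus", "jobid": job_id})
    state = ET.fromstring(status.content).find("wps:Status", NS).text
    if state in ("Succeeded", "Failed"):
        break
    time.sleep(2)

# 3. GetResult (GET) returns the processing result; in the trial system this
#    contains the GeoServer layer name the client loads via OpenLayers 3.
result = requests.get(WPS, params={
    "service": "WPS", "version": "2.0.0",
    "request": "GetResult", "jobid": job_id})
print(result.text)
```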
Figure 5c indicates the satellite image multiprocessing stages and processing results. The screen image on the left, which displays the satellite image-processing progress status, shows that two processing functions have been activated: the Cloud Detection [39] and Gradient Magnitude [40] functions. In the UI showing processing progress, the status at the top indicates the processing function executed first. The UI shows that the Cloud Detection function is still running while the Gradient Magnitude function has completed, displaying its processing result on the client. The image on the right indicates that the Cloud Detection function has also completed and shows its processing result. This demonstrates that support for asynchronous processing enables such a multiprocessing capability: one function can be executed without having to wait for the completion of a previously executed processing function.

Discussion

In comparison with WPS 1.0, improved features of the WPS 2.0 standard interface were observed when the processing system using WPS 2.0 was implemented. The ZOO platform offers WPS communication through CGI. When a service is provided using CGI, the server may become vulnerable to excessive loads caused by multiple connections. Because this can cause trouble in real services, other communication methods should be considered. Existing open sources, excluding the ZOO platform, comply with WPS 1.0, while technical implementations of WPS 2.0 are underway or not yet planned. In addition, research on WPS 2.0 trails far behind that on WPS 1.0, so performance studies on WPS 2.0 are required. The information and communication sectors provide a variety of technologies and platforms; understanding current trends and measuring performance on various infrastructures can maximize utilization.

The link between WPS 2.0 and the user authentication function, one of the common component functions provided by the eGovframework-based system, shows that functions required for a web system can be provided without independent development, enabling a highly scalable structure to be built. This structure also enables system development, testing, and management through the various components offered by the eGovframework. Because the eGovframework runs on a virtual machine, a feature of Java, the underlying hardware specifications should be good enough to implement the services. It is convenient that the standard framework's common components can be added immediately according to the specific functions required. However, some unwanted components can be added along with the desired common components due to their interdependency, which affects system quality and maintenance. The organization behind the eGovframework provides a lightweight framework; a light version could therefore be considered to solve this issue. When the eGovframework is utilized, areas for improvement need to be verified through performance and code quality tests.
Conclusions

Information and communication technologies will continue to progress, promising users more convenient services in various sectors. However, more time must be spent on management as systems grow more complicated. The same applies to systems in the geospatial sector, and solutions should be sought to provide and manage geospatial services effectively. For this purpose, in this study, a trial system was built linking WPS 2.0, an international standard for geospatial processing, to the eGovframework developed in South Korea. The objective was to suggest ways to provide geospatial services effectively. The trial system, a web system capable of online satellite image processing, was built entirely from open sources. Because WPS 2.0 was applied in building the trial system, the system has a structure capable of implementing consistent interfaces and sharing functions with other systems when providing geospatial processing functions on the Web. In addition, its asynchronous processing capability allows flexible execution of processing functions. The ability to comply with international standards when providing geospatial processing on a Web system was confirmed by using the ZOO-Project platform for WPS 2.0 compliance. The system has a structure that can be modified to link other open sources once WPS-related open sources support WPS 2.0 in the future. At present, many geospatial service systems are available on the Web, but this trial system is the first to combine WPS 2.0 and the eGovframework based entirely on no-cost open sources. The geo-based image operations in this system are one example demonstrating the linkage of the applied technologies; other functionalities or algorithms for further geodata processing can be added to this design and architecture. This is an implementation case for Korea. However, it can be a useful example for other countries that already have an e-Government framework or plan to establish one, because WPS and an e-Government framework are crucial elements if geo-based service systems with geodata-processing functions are to be operated in distributed environments and in the public sector.

Figure 1. System design based on the eGovframework using open sources for geoprocessing and manipulation of geo-based images, including the ZOO-Project.

Figure 2. Application of common components for the eGovframework in South Korea.

Figure 3. Automatic loading of additional common components by user authentication.

Figure 4. Configuration of the user interface on the client side: (a) select algorithm (GetCapabilities request); (b) algorithm modal view (DescribeProcess request).

Figure 5. Application of the processing algorithm: (a) running process (GetStatus request); (b) completed process and result (GetResult request); (c) multiprocessing and results.

The eGovframework [22] is composed of the application architecture, Runtime Environment, Development Environment, Operation Environment, Management Environment, Mobile Device API, and Common Components, which are required for building web systems. The Runtime Environment, based on the Spring Framework, is an application environment that provides the common modules necessary for execution. The Spring Framework, an open source Java-based web framework, offers various services for dynamic web system development. The Runtime Environment comprises seven service groups, including common foundation, display processing, mobile display processing, and data processing, and provides 38 services in total. The Development Environment is a component offering the environment required for web system development; a host of tools, including the Data Development Tool, Test Automize Tool, Code Inspection Tool, Template Project Generation Tool, and Common Component Tool, can facilitate the building of an automated and optimized development environment. The Operation Environment provides a monitoring tool, a communication tool, and a batch operation tool for the Runtime Environment. The Management Environment manages the version and status of the eGovframework. The Mobile Device API offers various APIs capable of directly accessing and using mobile device resources in mobile hybrid apps; in addition, it provides Runtime Environment APIs that support the implementation and execution of device applications based on web resources, and Development Environment APIs that facilitate device application development in the Android-based environment. Lastly, the Common Components are a collection of developed components focusing on common reusable functions for building web systems. The Common Components are designed and developed in accordance with the Model, View, and Controller (MVC) architecture, based on the eGovframework. Table 1 shows the composition and types of the Common Components, which consist of the Common Technological Service, Elementary Technological Service, and New Mobile Common Component.

Table 1. South Korean e-Government Standard Framework (eGovframework) component list.

Table 2. Web Processing Service (WPS) 2.0 processing system based on the eGovframework using open sources.
Novel Affibody Molecules Targeting the HPV16 E6 Oncoprotein Inhibited the Proliferation of Cervical Cancer Cells

Despite prophylactic vaccination campaigns, high-risk human papillomavirus (HPV)-induced cervical cancer remains a significant health threat among women, especially in developing countries. The initial occurrence and subsequent progression of this cancer type rely primarily on E6 and E7, two key viral oncogenes that are constitutively expressed and drive carcinogenesis. Thus, E6/E7 have been proposed as ideal targets for HPV-related cancer diagnosis and treatment. In this study, three novel HPV16 E6-binding affibody molecules (Z HPV16E6 1115, Z HPV16E6 1171, and Z HPV16E6 1235) were isolated from a randomized phage display library and cloned for bacterial production. These affibody molecules showed high binding affinity and specificity for recombinant and native HPV16 E6, as determined by surface plasmon resonance, indirect immunofluorescence, immunohistochemistry, and near-infrared small-animal optical imaging in vitro and in vivo. Moreover, by binding to the HPV16 E6 protein, Z HPV16E6 1235 blocked E6-mediated p53 degradation, which increased the expression of key p53 target genes, including BAX, PUMA, and p21, and thereby selectively reduced the viability and proliferation of HPV16-positive cells. Importantly, Z HPV16E6 1235 was applied in combination with the HPV16 E7-binding affibody Z HPV16E7 384 to simultaneously target the HPV16 E6/E7 oncoproteins, and this combination inhibited cell proliferation more potently than either modality alone. Mechanistic studies revealed that the synergistic antiproliferative activity depends primarily on the induction of cell apoptosis and senescence but not on cell cycle arrest. Our findings provide strong evidence that these three novel HPV16 E6-binding affibody molecules could form the basis for rational strategies for molecular imaging and targeted therapy in HPV16-positive preneoplastic and neoplastic lesions.

INTRODUCTION

Cervical cancer is the second most common cause of cancer-related mortality among females worldwide, with approximately 570,000 new cases and 311,000 deaths annually (Serrano et al., 2018). Compelling evidence suggests that persistent infection with high-risk human papillomaviruses (HR HPVs), including HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59, is the major etiological agent in cervical carcinogenesis (zur Hausen and de Villiers, 1994). Among these, HPV16 and HPV18 are the most prevalent genotypes, accounting for approximately 62.6 and 15.7% of cervical neoplasias, respectively (Walboomers et al., 1999; Tommasino, 2014). Concurrent chemoradiation is the main treatment for locally advanced cervical cancer, and this approach yields a 5-year disease-free survival rate of 65 to 78%, indicating that there is still ample room for improvement (Cohen et al., 2019). Diagnostically, HPV DNA tests are very sensitive for the diagnosis of HPV infection; however, their specificity is limited for cervical precancer and early cancers, as they detect the many benign HPV infections in addition to the less frequent, clinically important infections linked to disease (Simon et al., 2011; Schmitt et al., 2013). Therefore, the development of new diagnostic and treatment strategies is urgently required to improve the diagnosis and clinical outcomes of patients with cervical cancer. The HPV genome contains six "early" (E1, E2, E4, E5, E6, and E7) genes and two "late" (L1 and L2) genes and is actively transcribed in infected cells (Estêvão et al., 2019).
In persistent infections, the episomal viral genome integrates into the host chromosome, so that the E6 and E7 oncoproteins are invariably retained and expressed. In contrast to E6, E7 plays a crucial role in the early stage of carcinogenesis by stimulating proliferation. The E6 protein primarily promotes malignant progression: by recruiting a cellular ubiquitin ligase (E6AP) and degrading the tumor suppressor p53, it overcomes cell cycle arrest and/or apoptosis, allowing increased DNA damage, and its induction of telomerase contributes to immortalization and cancer development. The HPV16 E6 and E7 oncoproteins also target multiple signaling proteins, perturbing several signaling pathways that are vital for malignant transformation and for retaining the malignant phenotype of cervical cancers (Tan et al., 2012; Almeida et al., 2019; Estêvão et al., 2019; Pal and Kundu, 2020). Indeed, interference with HPV16 E6/E7 activity by siRNA (Jung et al., 2012, 2015) or small-molecule inhibitors (Dymalla et al., 2009; Celegato et al., 2020) has been found to exert strong antioncogenic effects on HPV16-positive cancer cells in vitro and in vivo. It is also worth noting that HPV16 E6 and E7, as two non-cellular oncoproteins, are not expressed in normal cells (Dymalla et al., 2009). Therefore, the HPV16 E6 and E7 oncoproteins are ideal targets for molecular diagnosis and targeted therapy of HPV16-related malignancies.

Affibody molecules, a newly emerging class of affinity proteins based on scaffolds other than the immunoglobulin fold, are derived from one of the IgG-binding domains of staphylococcal protein A (Nilsson et al., 1987). By randomly mutating thirteen specific amino acid residues in the three α-helix regions, large affibody libraries can be constructed, from which potent binders for theoretically any desired target molecule can be selected using various display technologies, e.g., phage display (Löfblom et al., 2010; Gebauer and Skerra, 2020). With their simple, robust structure, small molecular size (58 amino acids, 6.5 kDa), and cost-efficient production, affibody molecules are widely applied, for example, as in vivo molecular imaging reagents and as blockers of receptor signaling (Ståhl et al., 2017; Tolmachev and Orlova, 2020). To date, over 500 studies have been published in which affibody molecules targeting approximately 50 different proteins have been isolated and serve as high-affinity ligands in a variety of applications. The affibody-targeted proteins include HER2 (human epidermal growth factor receptor 2; Orlova et al., 2006), EGFR (epidermal growth factor receptor; Wu et al., 2020), TNF-α (tumor necrosis factor-α; Löfdahl et al., 2009), and the transcription factor c-Jun (Lundberg et al., 2009), as well as EBV LMP2 (Epstein-Barr virus latent membrane protein 2) and HPV16 E7 (human papillomavirus type 16 E7; Xue et al., 2016; Jiang et al., 2018), which have been reported by our research team.

Herein, we report the selection and characterization of three HPV16 E6-binding affibody molecules (Z HPV16E6 affibodies), their binding to recombinant and native HPV16 E6 protein in vitro, and their use for molecular imaging in tumor-bearing mice. Further investigations showed that by binding to HPV16 E6, affibody Z HPV16E6 1235 blocked E6-mediated p53 degradation and specifically inhibited the viability and proliferation of HPV16-positive cancer cells.
Moreover, we also show that the combination of Z HPV16E6 1235 with Z HPV16E7 384, simultaneously targeting the HPV16 E6 and E7 oncoproteins, had greater efficacy than either modality alone. Mechanistically, our data revealed that the synergistic antiproliferative activity depends primarily on the induction of cell apoptosis and senescence but is not related to cell cycle arrest. To our knowledge, this is the first report of HPV16 E6-binding affibody molecules as novel probes for the in vivo imaging diagnosis of HPV16-positive tumors. Most importantly, our study provides the first evidence that simultaneous targeting of HPV16 E6 and E7 with affibodies Z HPV16E6 1235 and Z HPV16E7 384 can significantly enhance antiproliferative activity in HPV16-positive cancer cells.

Animals, Cells and Vectors

Female BALB/c-nude mice, 4 to 6 weeks old, were purchased from Shanghai SLAC Laboratory Animal Co., Ltd. (Shanghai, China) and kept at the animal facility of Wenzhou Medical University, China, under specific pathogen-free (SPF) conditions. The near-infrared (NIR) small-animal optical imaging experiment was approved by the Ethical Committee of Wenzhou Medical University. Murine HPV16-positive TC-1 cells, derived from primary epithelial cells of C57BL/6 mice cotransformed with c-Ha-ras and the HPV16 E6/E7 oncogenes, were kindly provided by Xuemei Xu (Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China). Human HPV-positive CaSki (HPV16) and HeLa229 (HPV18, used as an HPV16-negative control) cells and nasopharyngeal carcinoma C666-1 cells (HPV-negative control cell line) were bought from the ATCC (American Type Culture Collection) and cultured according to the supplier's instructions. The Escherichia coli BL21(DE3) strain and the pET21a(+) vector were purchased from ATCC and Novagen, respectively.

Selection of Affibody Ligands

In our previous study, a 14-kDa high-purity HPV16 E6 recombinant protein was prepared (Zhang et al., 2020) and used as the target protein during selection. Screening for potential HPV16 E6-binding affibodies was conducted following protocols established in our laboratory, described in detail in previous studies (Xue et al., 2016). After biopanning under selection pressure, phage-based ELISA screening, and DNA sequencing, the sequences derived from the inserted fragments of the selected phage clones were considered potential affibody molecules with elevated affinity for HPV16 E6.

Subcloning and Production of Affibody Molecules

The genes encoding the selected affibody molecules were subcloned into the NdeI and XhoI sites of pET21a(+), generating affibody constructs fused with a C-terminal 6 × His-tag. The resulting plasmids encoding each affibody molecule were prepared and transformed into E. coli BL21(DE3). After 6 h of isopropyl β-D-1-thiogalactopyranoside (IPTG, Sigma-Aldrich Co., St. Louis, MO, United States) induction, the proteins were purified with Ni-nitrilotriacetic acid (NTA) agarose resin according to the manufacturer's recommendations (Qiagen, Hilden, Germany). Purified proteins of the correct molecular mass were confirmed by SDS-PAGE and Western blotting using an anti-His-tag monoclonal antibody (mAb, MultiSciences, Hangzhou, China).

SPR-Based Binding Assay

A surface plasmon resonance (SPR) assay was conducted on a BIAcore T200 (GE Healthcare, Uppsala, Sweden) to assess the binding of the Z HPV16E6 affibodies to recombinant HPV16 E6, as previously described. Affibody Z WT was used as a negative control.
Using a 1:1 Langmuir binding model, the SPR data were fit globally and evaluated with BIA evaluation 3.0.2 software.

Immunofluorescence Staining

An indirect immunofluorescence assay (IFA) was performed following previously described protocols. Briefly, TC-1, CaSki, HeLa229, and C666-1 cells were cultured evenly on coverslips in 6-well cell culture plates and incubated for 6 h with the Z HPV16E6 affibodies or the Z WT affibody control at a final concentration of 100 µg/ml. After five PBS washes to eliminate free molecules, the cells were fixed with 4% paraformaldehyde for 20 min at room temperature and permeabilized to promote the binding of the primary (mouse anti-His-tag mAb) and secondary [fluorescein isothiocyanate (FITC)-conjugated goat anti-mouse IgG (H + L)] antibodies (Life Technologies, Carlsbad, CA, United States). Cell nuclei were stained with 50 µM propidium iodide (PI; Beyotime Biotech Co., Ltd., China) at 37 °C for 10 min, and images were obtained with a confocal fluorescence microscope (Nikon C1i, Japan).

Cervical Cancer Tissue Sample Collection

Eight cervical cancer tissue specimens were obtained from HPV16-positive cervical cancer patients in the Department of Pathology, The First Affiliated Hospital of Wenzhou Medical University. In addition, 5 samples collected from normal cervixes, proven HPV-negative by PCR, were used as negative controls. The project was approved by the Scientific and Ethical Committee of The First Affiliated Hospital of Wenzhou Medical University.

Immunohistochemical Staining

Three-micrometer sections were cut from the tissue blocks, placed on glass slides, baked at 60 °C for 60 min, deparaffinized, and rehydrated through graded alcohol rinses. The slides were then immersed in Tris/EDTA pH 9.0 buffer (10 mM Tris, 1 mM EDTA) and subjected to high-pressure processing for 3 min for heat-induced antigen retrieval. After washing in tap water, non-specific sites were blocked with 10% normal goat serum in PBS buffer. The slides were then treated with 0.3% H2O2 in methanol for 10 min to inactivate endogenous peroxidase activity. Subsequently, the tissues were incubated with the Z HPV16E6 affibodies (100 µg/ml) for 1.5 h at 37 °C, then with mouse anti-His-tag mAb overnight at 4 °C, and finally with HRP-conjugated goat anti-mouse antibody at 37 °C for 1.5 h. A rabbit anti-HPV16 E6 polyclonal antibody (prepared in-house) was used as a positive control, and Z WT and PBS were used as negative controls.

Labeling of Affibody Molecules With DyLight 755

The Z HPV16E6 affibodies were labeled with DyLight 755 (Thermo Fisher Scientific, United States) in accordance with the manufacturer's instructions. The labeled affibody molecules were confirmed by SDS-PAGE and further detected at wavelengths of 730-950 nm with an in vivo fluorescence imaging system (CRi Maestro 2.10, United States; Supplementary Figure 3A).

Biodistribution in Tumor-Bearing Mice

The tumor targeting ability and dynamic distribution of the Z HPV16E6 affibodies were investigated in nude mice using NIR optical imaging. In brief, 1 × 10⁶ TC-1 or HeLa229 cells were injected subcutaneously into the upper axillary fossa of nude mice (at least three mice per group). When the tumor volume reached 300-500 mm³, mice received an intravenous injection of DyLight 755-conjugated Z HPV16E6 affibodies (100 µg; 150 µL per mouse). Imaging was carried out at different time points post-injection (pi) using the NIR imaging system (CRi Maestro 2.10, United States).
To confirm whether uptake was mediated by specific targeting of HPV16 E6, HPV16-negative xenografts treated with the Z HPV16E6 affibodies and HPV16-positive xenografts treated with the Z WT affibody were used as negative controls. In addition, the tumor/skin ratio, defined as (tumor signal − background signal)/(skin signal − background signal) × 100%, was analyzed at different pi time points.

Western Blotting Analysis

CaSki cells seeded at 1 × 10⁵ per well in 6-well plates were incubated with medium containing either Z HPV16E6 1235 or Z WT, or with medium alone, for the indicated time. Equal amounts of protein (30 µg) were evaluated by Western blotting with reference to a previously described protocol. All primary antibodies are listed in Supplementary Table 1.

Immunoprecipitation

CaSki cells (6 × 10⁵) were plated in 10-cm tissue culture dishes and treated with 10 µM Z HPV16E6 1235 or the Z WT control for 24 h. Following washes with PBS, pellets were lysed on ice for 20 min with cell lysis buffer (Beyotime Biotechnology, Shanghai, China) supplemented with protease inhibitors (Roche Molecular Biochemicals, Indianapolis, IN, United States) and cleared by centrifugation at 12,000 × g for 20 min at 4 °C. Then, 5 µg anti-p53 antibody was added to 400 µg whole-cell lysate and gently rotated at 4 °C overnight. The immunocomplex was collected with Protein A/G agarose (Beyotime Biotechnology, Shanghai, China) according to the manufacturer's instructions. Finally, proteins were released by boiling in reducing SDS sample buffer and analyzed by Western blotting.

Efficacy of the Combination of Z HPV16E6 1235 and Z HPV16E7 384 in vitro

The Cell Counting Kit-8 (CCK-8) assay was performed to evaluate the efficacy of Z HPV16E6 1235 and Z HPV16E7 384, alone and in combination, in TC-1 and CaSki (HPV16-positive) cells. Briefly, 5 × 10³ cells were seeded onto 96-well plates and incubated for 2 days with the indicated agents at increasing concentrations (0.5, 1, 2.5, 5, 10, and 20 µM). HPV16-positive cells treated with the Z WT affibody and HPV16-negative cells (HPV18-positive HeLa229 cells and HPV-negative C666-1 cells) treated with the indicated agents were used as negative controls. Next, CCK-8 solution (10 µL, Dojindo, Japan) was added to every well, followed by another 30-min incubation. Absorbance at 450 nm was measured with a microplate reader, from which cell viability was determined. Half-maximal inhibitory concentration (IC50) values were calculated using GraphPad Prism software (GraphPad Software, Inc.). The CCK-8 assay was performed at least three times.

Plate Colony Formation Assay

Plate colony formation experiments were also conducted to analyze cell proliferation ability. Briefly, cells (5 × 10³) were seeded in 6-well culture plates. Cultures were maintained in medium containing the indicated agents, or medium alone, for 14 days. After cell fixation, cells were stained for 20 min with 0.1% crystal violet (Amresco, Solon, OH, United States). Images of the stained colonies were taken, and the colonies were counted.

Chou-Talalay Analysis

The pharmacological interaction between Z HPV16E6 1235 and Z HPV16E7 384 was determined using Chou-Talalay analysis (Chou, 2010). Briefly, 5 × 10³ cells were seeded onto 96-well plates and incubated for 2 days with the indicated agents at different concentration combinations. A CCK-8 assay was then used to measure the combined effect of the two therapeutic agents on TC-1 and CaSki cells.
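For reference, the combination index (CI) evaluated below follows the standard Chou-Talalay definition for a two-drug combination (Chou, 2010), where D1 and D2 are the doses of the two agents used together that produce a given effect, and (Dx)1 and (Dx)2 are the doses of each agent alone producing that same effect:

```latex
% Chou-Talalay combination index for a two-drug combination
\[
  \mathrm{CI} = \frac{D_1}{(D_x)_1} + \frac{D_2}{(D_x)_2}
\]
```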
Using CompuSyn software, dose-effect curves for each agent and for the combination were plotted, and an estimate of the combination index (CI) was obtained. A CI of >1, <1, or =1 denotes antagonism, synergy, or an additive effect, respectively.

Flow Cytometric Analysis and Senescence-Associated β-Galactosidase Assay

Cell cycle alterations and apoptosis induction were evaluated by flow cytometry in HPV16-positive cell lines following exposure to Z HPV16E6 1235 and Z HPV16E7 384, alone and in combination. For cell cycle analysis, cells were stained with PI (MultiSciences, Hangzhou, China) according to the manufacturer's instructions and analyzed by flow cytometry (BD Biosciences, San Jose, CA, United States). The percentages of cells in the G0/G1, S, and G2/M phases were calculated and compared using ModFit LT 3.0 software. For apoptosis analysis, cells were stained with PI and Annexin V-FITC (Invitrogen, Carlsbad, CA, United States) according to the manufacturer's recommendations and quantified by flow cytometry (CytoFLEX, Beckman Coulter, United States). Cells were categorized as viable, dead, early apoptotic, or apoptotic, and the ratio of apoptotic (including early apoptotic) cells was compared with the control for each experiment.

Statistical Analysis

Data are presented as mean ± standard deviation (SD). The significance of differences between groups was assessed using a two-tailed unpaired Student's t-test, and P < 0.05 was considered statistically significant. All calculations were performed with SPSS 16.0 software.

Generation and Purification of Z HPV16E6 Affibodies

DNA sequencing was performed on 66 ELISA-positive clones (OD > 0.5, Supplementary Figure 1) after three rounds of screening of the combinatorial affibody library and identified 42 unique phagemid inserts, each occurring one to seven times. As shown in Supplementary Figure 2, these 42 clones showed high homology to the original affibody scaffold molecule Z WT in the framework region but were highly diverse in helices 1 and 2. Three potential affibodies, Z HPV16E6 1115, Z HPV16E6 1171, and Z HPV16E6 1235, were selected for further analysis on the basis of the following criteria: (a) high binding affinity in the ELISA screening, (b) high appearance frequency of the particular clone, and (c) relatively high-yield expression and purification as recombinant proteins in E. coli BL21(DE3). The three affibody genes were then inserted into pET21a(+) using the NdeI and XhoI restriction sites to generate the recombinant plasmids pET21a(+)/Z HPV16E6 (Figure 1A). The resulting plasmids were transformed into E. coli BL21(DE3), and expression of the recombinant His-tag fusion affibody proteins was induced by the addition of 1 mM IPTG for 6 h at 37 °C. As shown in Figure 1B, lanes 3-6, a band of 6.5 kDa, consistent with the expected size of the affibody molecules, was detected in E. coli BL21(DE3) transformed with pET21a(+)/affibody after induction. In contrast, E. coli BL21(DE3) alone and E. coli BL21(DE3) transformed with the empty pET21a(+) vector did not show the 6.5-kDa band, indicating that neither the bacteria themselves nor the empty vector expressed the protein (Figure 1B, lanes 1-2). After the successful induction of pET21a(+)/Z HPV16E6 expression, the His-tag fusion affibodies were purified by affinity chromatography using Ni-NTA agarose resin.
SDS-PAGE analysis showed a distinct band of the expected molecular mass, indicating that the final affibody preparations were pure and stable (Figure 1C). In addition, Western blotting showed that these purified proteins reacted specifically with the mouse anti-His-tag mAb (Figure 1D).

Biosensor Binding Analyses of Z HPV16E6 Affibodies

The real-time biospecific interaction of the selected affibody molecules with the target protein was investigated using a BIAcore T200 instrument. The target protein, HPV16 E6, was immobilized on the carboxylated dextran surface of a CM5 chip, and different amounts of Z HPV16E6 (0.8 to 6.4 µM) were passed over the chip at 30 µL/min at 25 °C. As shown in Figures 2A-C, concentration-dependent increases in resonance signals were detected, indicating that the three affibodies bound well to recombinant HPV16 E6. As expected, the Z WT control produced no effective resonance signal (Figure 2D). To determine the equilibrium dissociation constant (KD, i.e., k_off/k_on), the association rate constant (k_on), and the dissociation rate constant (k_off), kinetic BIAcore analysis was performed using BIA evaluation 3.0.2 software (Biacore) with a one-to-one Langmuir binding model. The analysis showed that the KD values of Z HPV16E6 1115, Z HPV16E6 1171, and Z HPV16E6 1235 were 5.475 × 10⁻⁶ mol/L, 6.229 × 10⁻⁵ mol/L, and 1.280 × 10⁻⁷ mol/L, respectively, significantly lower than that of the Z WT affibody (1.404 mol/L). Conversely, the k_on values of the three affibody molecules were significantly higher than that of the Z WT affibody (Table 1). The SPR data demonstrated very clearly that all three selected Z HPV16E6 affibodies had high binding affinity for recombinant HPV16 E6.

Specificity Analysis of Z HPV16E6 Affibodies

After confirming the binding of Z HPV16E6 to recombinant HPV16 E6, we next investigated whether the Z HPV16E6 affibodies could specifically bind native HPV16 E6 expressed in HPV16-positive cells. We therefore incubated the Z HPV16E6 affibodies or the Z WT control with live cells for 6 h and analyzed them by IFA using confocal microscopy (400 × magnification). The results showed a large number of green dots in a punctate pattern inside HPV16-positive TC-1 and CaSki cells, suggesting efficient internalization of the 6.5-kDa affibody molecules and targeting of HPV16-positive cells. In contrast, HPV16-negative cells (HPV18-positive HeLa229 cells and HPV-negative C666-1 cells) treated with the Z HPV16E6 affibodies and HPV16-positive cells treated with Z WT did not produce any fluorescent signal after the same incubation (Figure 3A). Moreover, immunohistochemistry (IHC) offered additional evidence of a specific interaction between the Z HPV16E6 affibodies and HPV16 E6. All three Z HPV16E6 affibodies functioned very well as detection reagents, producing brown signals in HPV16-positive human cervical cancer tissue specimens but not in HPV-negative normal human tissue specimens, in agreement with the staining pattern of the anti-HPV16 E6 polyclonal antibody (Figure 3B). These results revealed that the Z HPV16E6 affibodies exhibit strong, specific binding to native HPV16 E6 expressed in HPV16-positive cell lines and tissues.
Tumor Targeting Ability of Z HPV16E6 Affibodies in vivo

Encouraged by the impressive results obtained in vitro, we further investigated whether the Z HPV16E6 affibodies could efficaciously and specifically accumulate in HPV16-positive tumor xenografts in vivo, using DyLight 755-labeled affibody molecules. Athymic nude mice carrying TC-1 (HPV16-positive) or HeLa229 (HPV18-positive) xenografts were injected with DyLight 755-conjugated Z HPV16E6 affibodies or Z WT. A near-infrared (NIR) optical imaging system was used to determine the in vivo biodistribution and tumor uptake of the Z HPV16E6 affibodies over a time course of 5 min to 72 h. As shown in Figure 4A, the fluorescence signal of the DyLight 755-Z HPV16E6 affibodies in the TC-1 xenograft model was detectable as early as 30 min post injection. High-contrast fluorescent signals were obtained 1 h post injection (hpi), peaked at 2 hpi, and remained steady for over 8 hpi with DyLight 755-Z HPV16E6 1115 and DyLight 755-Z HPV16E6 1171 and for over 12 hpi with DyLight 755-Z HPV16E6 1235 (Figures 4A,B). In the HeLa229 xenograft model, however, a non-specific fluorescent signal of the DyLight 755-labeled Z HPV16E6 affibodies was observed in the tumor at 30 min pi and cleared within 1-2 h, similar to the results in the xenograft models (both TC-1 and HeLa229) treated with the DyLight 755-Z WT control (Figure 4A, Z WT panel, and Figure 4C). In addition, affibody accumulation in the kidneys was observed in athymic nude mice with or without tumor xenografts, indicating that the small DyLight 755-labeled affibody proteins were cleared by kidney filtration (Figure 4 and Supplementary Figures 3B,C). Because Z HPV16E6 1235 showed better affinity in the SPR analysis and a longer residence time in mice than the other two Z HPV16E6 affibodies, it was selected for further research.

Affibody Z HPV16E6 1235 Restores the Intracellular Expression and Transcriptional Activity of p53 in HPV16-Positive Cells

Given that the investigated affibody molecule bound HPV16 E6 with high affinity and specificity, we next asked whether targeting HPV16 E6 with Z HPV16E6 1235 could protect p53 from HPV16 E6-mediated degradation in cells endogenously expressing HPV16 E6 (i.e., HPV16-positive CaSki cells). As shown in Figure 5A, Western blotting showed that incubation with Z HPV16E6 1235 for 24 h was indeed capable of impeding the degradation of p53 induced by HPV16 E6, as indicated by a marked, concentration-dependent increase in the p53 expression level in HPV16-positive CaSki cells. In contrast, treatment with Z WT failed to result in the accumulation of p53, further underlining the specific activity of Z HPV16E6 1235 in HPV16-positive cells. To further validate that Z HPV16E6 1235 exerts its biological activity in HPV16-positive cells by impeding the physical interaction between HPV16 E6 and p53, we performed coimmunoprecipitation (IP) experiments and analyzed the relative amounts of HPV16 E6 and E6AP bound to p53 by Western blotting. For this purpose, CaSki cells were treated with 10 µM Z HPV16E6 1235 or the Z WT control for 24 h, followed by IP with an anti-p53 antibody. As shown in Figure 5B, Z HPV16E6 1235 significantly decreased the amounts of HPV16 E6 and E6AP that coimmunoprecipitated with p53 compared to the control treatments (mock or Z WT), suggesting that Z HPV16E6 1235 can directly block HPV16 E6/p53 binding.
We then focused on whether Z HPV16E6 1235 might also be able to restore the transcriptional activity of p53. Toward this aim, Western blotting assays were performed to detect the expression of known p53 target genes, including BAX, BBC3 (PUMA), and CDKN1A (p21), which are closely related to apoptosis and cell cycle arrest. The results showed that, compared to the controls (mock or Z WT), Z HPV16E6 1235 significantly upregulated the expression of PUMA, BAX, and p21 in a time-dependent manner, suggesting that the restored p53 protein is functionally active (Figure 5C). Taken together, these results demonstrate that affibody Z HPV16E6 1235 can rescue both the expression and the transcriptional activity of p53 in HPV16-positive cancer cells.

Synergistic Inhibition of HPV16-Positive Cell Growth by Combination Treatment With Z HPV16E6 1235 and Z HPV16E7 384

The high level of p53 and the activation of its target genes (BAX, PUMA, and p21) implied a pro-death program in HPV16-positive cells treated with Z HPV16E6 1235, which prompted us to verify whether Z HPV16E6 1235 affects the proliferation of HPV16-positive cells. Additionally, we previously reported that targeting HPV16 E7 with affibody Z HPV16E7 384 had significant in vivo antitumor efficacy (Jiang et al., 2018). Therefore, we were interested in whether targeting E6, or simultaneously targeting E6 and E7, could inhibit the proliferation of HPV16-positive cells more effectively than targeting the E7 oncoprotein alone. CCK-8 assays showed that the viability of two HPV16-positive tumor cell lines was inhibited by Z HPV16E6 1235 over the range of 0.5 to 20 µM compared to control cells, similar to the effect of Z HPV16E7 384 treatment (Figure 5D).

FIGURE 5 | (B) The effect of Z HPV16E6 1235 on the intracellular binding between HPV16 E6 and p53 was assessed by immunoprecipitating the E6/E6AP/p53 trimeric complex with an anti-p53 antibody bound to Protein A/G agarose beads from CaSki cells treated for 24 h. A parallel negative control assay was run for each group by incubating cell lysates with control IgG. The bar graph represents the amount of E6 bound relative to the amount of immunoprecipitated p53 after quantification of the p53 and E6 protein bands with ImageJ software. Data are presented as the mean ± SD of three independent experiments. **P < 0.01. (C) CaSki cells were treated with 10 µM Z HPV16E6 1235 for the indicated periods, and the expression of the p53 target genes PUMA, BAX, and p21 was evaluated by Western blotting. Cells without any treatment (mock) or treated with 10 µM Z WT for 48 h were used as negative controls. GAPDH served as an internal reference standard. (D) The effects of Z HPV16E6 1235 and Z HPV16E7 384, alone or in combination, on the viability of HPV16-positive cancer cells (TC-1, CaSki), HPV18-positive cervical cancer cells (HeLa229), and HPV-negative cancer cells (C666-1) were assessed by CCK-8 assay after 48 h of treatment with the indicated concentrations; these cells were compared to Z WT-treated cells. Data are shown as the mean ± SD of three independent experiments. (E,F) Colony formation assays of HPV16-positive TC-1 and CaSki cells and HPV18-positive HeLa229 cells following treatment with 2.5 µM of the test affibody molecules for 14 days. The Z WT affibody and medium-only groups served as controls. **P < 0.01, ***P < 0.001 vs. the control group. #P < 0.05 vs. the Z HPV16E6 1235 or Z HPV16E7 384 single-treatment group.
Of note, we also found that combination therapy with Z HPV16E6 1235 and Z HPV16E7 384 was significantly superior to either agent used alone at the same concentration in HPV16-positive cell lines (Figure 5D). These results were further confirmed by long-term colony formation assays (Figures 5E,F), which are regarded as the "gold standard" for measuring cellular sensitivity to drug treatment. As expected, HPV16-positive cells treated with the Z WT affibody and HPV18-positive cells treated with Z HPV16E6 1235 and Z HPV16E7 384, alone or in combination, remained fully viable (Figures 5D-F). By statistical analysis, the half-maximal inhibitory concentration (IC50) values for Z HPV16E6 1235 alone, Z HPV16E7 384 alone, and the combination in TC-1 cells were 7.202, 11.460, and 3.071 µM, respectively; in CaSki cells, these values were 9.975, 14.480, and 4.843 µM. To further determine whether the Z HPV16E6 1235 and Z HPV16E7 384 combination had synergistic, additive, or antagonistic effects in TC-1 and CaSki cells, the Chou-Talalay method was used. Based on the IC50 values, the synergistic activity of Z HPV16E6 1235 (1, 5, or 10 µM) and Z HPV16E7 384 (10 µM) was shown to be statistically significant (Table 2). These findings thus validate that Z HPV16E6 1235 and Z HPV16E7 384 act synergistically in HPV16-positive cells.

Effects of Z HPV16E6 1235 in Combination With Z HPV16E7 384 on the Cell Cycle, Apoptosis, and Cellular Senescence

To uncover the potential mechanism of the inhibitory effect of targeting HPV16 E6 and/or E7 on HPV16-positive cancer cell proliferation, flow cytometric analysis and senescence-associated β-galactosidase (SA-β-Gal) assays were performed. As shown in Figures 6A,B, HPV16-positive (TC-1 and CaSki) cells treated with Z HPV16E6 1235 and Z HPV16E7 384, alone or in combination, all showed significant cell cycle arrest at the G0/G1 phase compared to cells treated with Z WT. However, there was no obvious difference between the combination treatment group and either single-agent group in the number of G0/G1 phase cells, indicating that the synergistic antiproliferative effect may not be related to cell cycle arrest. Subsequent experiments evaluated cell apoptosis in vitro. Figures 6C,D show that treatment with Z HPV16E6 1235 resulted in an approximately 25% increase in apoptotic HPV16-positive cells, similar to treatment with Z HPV16E7 384. Notably, Z HPV16E6 1235 in combination with Z HPV16E7 384 elevated cell apoptosis to levels significantly greater than those observed with the single-agent treatments (Figures 6C,D). Next, we examined whether the combination of Z HPV16E6 1235 and Z HPV16E7 384 could more effectively induce senescence in TC-1 and CaSki cells. SA-β-Gal activity was assessed 24 h after exposure to the combination of Z HPV16E6 1235 and Z HPV16E7 384. Significantly higher SA-β-Gal activity, indicated by strong blue staining, was observed after treatment with the combination than with either agent alone, whereas exposure to Z HPV16E6 1235 or Z HPV16E7 384 alone resulted in moderate SA-β-Gal activity. The ratios of SA-β-Gal-positive cells are summarized in Figures 6E,F. Taken together, these data revealed that the synergistic antitumor activity may be mainly attributed to the induction of cellular senescence and apoptosis but is not related to cell cycle arrest.

DISCUSSION

Although HPV-related cancers can be prevented to a great extent by the commercially available prophylactic HPV vaccines, these vaccines have little preventive or therapeutic effect against pre-existing HPV infections.
Additionally, a considerably long time would be needed for preventive vaccines to lower the incidence of cervical cancer, owing to the limited use of prophylactic HPV vaccines attributable to high costs and medical infrastructure challenges (Herrero et al., 2015). Therefore, the research and development of effective diagnostic and therapeutic strategies for HPV-related cancer, such as molecular imaging and targeted tumor therapy, are urgently needed. Affinity proteins are invaluable tools in the advancement of next-generation imaging and therapeutic agents (Löfblom et al., 2010). To date, monoclonal antibodies (mAbs) are the most widespread and successful affinity proteins for life science applications. However, due to their large mass (∼150 kDa), mAbs have several intrinsic drawbacks, including poor tissue-penetrating ability and a long circulation residence time, which lead to poor imaging quality. In comparison to mAbs, affibody molecules, a novel category of affinity proteins, are very small (∼6.5 kDa) and hence have favorable properties for imaging diagnostics and various biological applications (Löfblom et al., 2010; Ståhl et al., 2017; Gebauer and Skerra, 2020; Tolmachev and Orlova, 2020). Recently, human clinical trials strikingly confirmed that HER2-specific affibody molecules labeled with 111In can be used for targeted detection of HER2 overexpression in metastatic breast cancer using single-photon emission computed tomography (SPECT; Baum et al., 2010). Because of their simple structure and small size, affibody molecules are readily produced by conventional peptide synthesis or bacterial fermentation, which greatly facilitates manufacturing and clinical application. In the present study, we conducted biopanning, phage-ELISA screening, and DNA sequencing of a combinatorial phage library to obtain three potential HPV16 E6-binding affibody molecules (Z HPV16E6 1115, Z HPV16E6 1171, and Z HPV16E6 1235). We then successfully produced these affibody molecules with high purity and solubility in a prokaryotic expression system. High target-binding affinity is an important feature for the successful application of a novel affinity protein candidate in tumor diagnosis and therapy. Our work showed that, by SPR, the binding affinity of all three selected Z HPV16E6 affibodies for HPV16 E6 was approximately 10⁶ times higher than that of Z WT. Moreover, indirect IFAs showed bright punctate or patchy fluorescence in HPV16-positive cell lines only, and the specificity was further supported by immunohistochemical staining. Of note, in tumor-bearing nude mice, the Z HPV16E6 affibodies were capable of target-specific accumulation in HPV16-positive xenografts, highlighting that they may be promising candidates for molecular imaging. In tumor imaging and diagnosis, the rapid internalization of imaging tracers by cancer cells and the efficient clearance of unbound tracers by excretory organs are other important properties of ideal probes for high-contrast tumor imaging.

FIGURE 6 | Effect of Z HPV16E6 1235 in combination with Z HPV16E7 384 on the cell cycle, apoptosis, and cellular senescence. TC-1 and CaSki cells were treated with 10 µM of the test affibody molecules for 24 h and analyzed by flow cytometry for the cell cycle (A,B) and apoptosis (C,D) with PI and Annexin V/PI. Data are presented as the mean ± SD of three independent experiments. (E,F) SA-β-Gal staining of TC-1 and CaSki cells treated with 10 µM of the test affibody molecules for 24 h. Representative images taken using a bright-field inverted microscope (100 × magnification) are shown. In all panels, TC-1 and CaSki (HPV16-positive) cells treated with Z WT and HeLa229 (HPV18-positive) cells treated with the selected affibodies were used as negative controls. Significance: *P < 0.05, **P < 0.01 vs. the control group. #P < 0.05 vs. the Z HPV16E6 1235 or Z HPV16E7 384 single-treatment group.
Dynamic optical imaging showed that the DyLight 755-labeled Z HPV16E6 affibodies circulated to tumor tissues as early as 30 min pi. These affibodies quickly and specifically accumulated, giving clear, high-contrast tumor imaging within 2 h, and were retained in tumors for over 8-12 h. In addition, similar to the observations of several previous studies (Orlova et al., 2006; Xue et al., 2016; Jiang et al., 2018), the Z HPV16E6 affibodies were also detectable in the kidneys. This can be explained by the passage of small proteins through the glomerular membrane and their eventual absorption by the proximal tubules (Behr et al., 1998; Vegt et al., 2010; Wang et al., 2019). The high affibody levels in the kidney are also consistent with the fact that proteins smaller than 60 kDa are typically cleared renally (Wang et al., 2019). Taken together, these characteristics strongly support the suitability of the Z HPV16E6 affibodies for molecular imaging and may improve the early diagnosis of HPV-related cancer, informing appropriate treatment choices.

Thus far, we have demonstrated that the Z HPV16E6 affibodies can specifically target the HPV16 E6 oncoprotein with high affinity; it is also necessary to discuss whether affibody molecules targeting HPV16 E6 can block its intracellular activity. In epithelial tumors induced by HR-HPV, including cervical carcinoma and head and neck tumors, p53 is degraded by the E6 viral oncoprotein (Scheffner et al., 1990). In this process, E6 binds to a short leucine (L)-rich LxxLL consensus sequence within the cellular ubiquitin ligase E6AP. Subsequently, the E6/E6AP heterodimer recruits p53 for proteasome-mediated degradation, ultimately leading to cell immortalization and cancer development (Scheffner et al., 1990; Martinez-Zapien et al., 2016; Li et al., 2019). Therefore, interrupting E6/E6AP/p53 trimeric complex formation and impeding p53 degradation by E6 offers an interesting therapeutic option for HPV-related tumors. A recent study reported that a short linear peptide that selectively binds the HPV16 E6 oncoprotein restored the expression of functional p53 protein and specifically killed HPV16-positive cervical cancer cells by inducing apoptotic cell death (Celegato et al., 2020). Similar to previous reports (Dymalla et al., 2009; Celegato et al., 2020), treatment with Z HPV16E6 1235 significantly elevated the expression of p53 in HPV16-positive cancer cells compared with Z WT treatment. Subsequent studies verified that the p53 restored by Z HPV16E6 1235 treatment is transcriptionally active, leading to an obvious upregulation of p53 target genes, in particular the proapoptotic genes BAX and PUMA and genes related to cell cycle arrest and senescence, such as p21. Given the accumulation of p53 in cancer cells, we further explored the influence of Z HPV16E6 1235 on the phenotype of HPV16-positive cell lines.
Consistent with the accumulation of p53 in cancer cells, treatment with Z HPV16E6 1235 specifically inhibited cell viability and proliferation without causing cytotoxicity in other, unrelated cells. Moreover, cell-cycle distribution analysis showed that treatment with Z HPV16E6 1235 led to an increased accumulation of G0/G1 cells, together with a remarkable decrease in G2/M cells, compared with the controls (Mock and Z WT). Subsequent studies also showed that more apoptotic and senescent tumor cells were observed in the Z HPV16E6 1235 treatment group than in the control group. Therefore, we suggest that the reduction in cell proliferation induced by treatment with Z HPV16E6 1235 is potentially associated with cell cycle G0/G1 phase arrest, apoptosis and senescence in HPV16-positive cell lines. It is well accepted that the functional inactivation of the p53 and Rb tumor suppressor proteins by the HPV E6 and E7 oncoproteins is a crucial mechanism in the carcinogenesis of cervical cancer (Yim and Park, 2005; Jiang et al., 2019; Gutiérrez-Hoya and Soto-Cruz, 2020). Therefore, in subsequent studies, we investigated the potential of the HPV16 E6-binding affibody as a synergistic agent to enhance the antitumor effect of Z HPV16E7 384, an HPV16 E7-binding affibody molecule previously reported to show promising therapeutic value in HPV16-positive tumors. Both CCK-8 and plate colony formation assays showed that combination therapy with Z HPV16E6 1235 and Z HPV16E7 384 was significantly superior to either modality alone at the same concentration in HPV16-positive cancer cell lines. In addition, the synergistic inhibitory effect of combination therapy with Z HPV16E6 1235 and Z HPV16E7 384 on HPV16-positive cell growth was further confirmed by Chou-Talalay analysis. Further mechanistic analyses demonstrated that the synergistic antiproliferative activity mainly depends on the induction of cell apoptosis and senescence but is not related to cell cycle arrest. In summary, we successfully screened three novel affibody molecules and confirmed their high affinity and specificity for the HPV16 E6 oncoprotein through SPR, indirect immunofluorescence, IHC and near-infrared small-animal optical imaging in vitro and in vivo. The detailed mechanism underlying the cell penetration of Z HPV16E6 affibodies and their binding to the intracellular target in vitro and in vivo remains to be further investigated. Nevertheless, our data showed that treatment with affibody Z HPV16E6 1235 could block the degradation of p53 by E6 and rescue the expression of p53, which in turn activated a robust p53-mediated transcriptional program and inhibited the proliferation of HPV16-positive cell lines. More importantly, our data indicated that the combined use of Z HPV16E6 1235 and Z HPV16E7 384 to simultaneously target the HPV16 E6 and E7 oncoproteins can significantly enhance antiproliferative activity by inducing increased cell apoptosis and senescence. Therefore, we envisage that Z HPV16E6 1235 could be utilized as a promising starting point for developing rational strategies for both targeted therapy and molecular imaging in HPV16-positive patients. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors. In addition, the DNA sequences of the three selected HPV16 E6-binding affibody molecules have been deposited in the GenBank database (https://www.ncbi.nlm.nih.gov/genbank/) under accession numbers MW888864, MW888865, and MW888866, respectively. ETHICS STATEMENT The animal study was reviewed and approved by the Ethical Committee of Wenzhou Medical University.
2021-05-24T13:18:57.156Z
2021-05-24T00:00:00.000
{ "year": 2021, "sha1": "b294fc020a5603d638ca20bd727b53ec1f594a01", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.677867/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b294fc020a5603d638ca20bd727b53ec1f594a01", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
12518812
pes2o/s2orc
v3-fos-license
Usability Engineering of Games: A Comparative Analysis of Measuring Excitement Using Sensors, Direct Observations and Self-Reported Data Usability engineering and usability testing are concepts that continue to evolve, and interesting research studies and new ideas come up regularly. This paper tests the hypothesis of using EDA-based physiological measurements as a usability testing tool by considering three measures: observers' opinions, self-reported data, and EDA-based physiological sensor data. These data were analyzed comparatively and statistically. The paper concludes by discussing the findings obtained from those subjective and objective measures, which partially support the hypothesis. INTRODUCTION Usability testing has emerged as an essential phase in software development; it examines the property of systems being usable. This paper concentrates on taking advantage of EDA-based physiological measurements, which can gather data from the physiological responses of the body. Consequently, the main idea behind this study is to test the hypothesis of using EDA-based sensors as a usability testing tool. An experiment has been conducted to see whether the findings can support this assumption. In this experiment, the participant wears a sensor bracelet while playing a game in order to perform physical activities and thus increase the arousal level of the player. Three usability measures were considered in this experiment: external observation, external self-reporting, and internal reactions measured by the sensor. This paper is organized into six sections. The introduction is given in Section I. In Section II, the literature review is presented. Then, the experiment's design is discussed. The results obtained in this research study are provided in Section IV. After that, the discussion of the results is given in Section V. Lastly, Section VI concludes the paper. Usability Testing Usability testing is the process of measuring the quality of the system being tested by gaining the users' feedback while they are interacting with the system and using its services [1]. The purpose of this testing is to understand and uncover the defects that might be encountered by users after releasing the system and to gauge user satisfaction. This can be done by preparing the test plan, setting up the environment properly, and using either traditional usability testing methods (for example, observing the participants and asking them to fill out a survey on site) or software and hardware tools developed specifically for this purpose that can help in performing the usability tests. Moreover, some of these tools automate the external observation process while others deal with the physiological changes that occur inside the bodies of the participants. Choosing the usability testing techniques and tools depends on the type of system being tested and the type of information that needs to be gathered. For instance, nowadays mobile applications are used frequently without being restricted to any place such as offices or labs. In this case, usability testing practitioners cannot use the observation method, because they need to test the device or application with a data collection method that can work properly in a non-controlled environment. Consequently, the different states that the user might be in when using the mobile device, such as walking on the street or driving a car, need to be considered [11].
Researchers have proposed ways to address this mobile context; [10] evaluated six situations by conducting two experiments on people while they were using their smartphones. Hence, by considering EDA-based sensors as a usability testing tool, the goal of this research is to know whether EDA sensors are as effective as external observation. As a result, the following sections examine the literature from three angles: external observation, affective computing, and internal body changes. Many studies have been concerned with comparing usability assessment tools and measuring their accuracy. For example, [1] presents a usability testing study that was performed to benchmark a developed system called Ultimate Reliable and Native Usability System (URANUS) against other software tools, namely Morae, Tobii, Userfeel and Loop11. By comparing their features, the study reports that Morae achieves the highest score. On the other hand, the authors state that Tobii Studio has the advantage of an eye-tracking feature, which helps capture the eye movements of the participant [1]. Another user experience study was conducted to compare the interface designs of two portable ultrasound scanners [7]. In this study, cameras and an eye-tracking device were used to observe the sonographers' eyes and upper-body kinematics while they were using the scanners. The authors were able to conclude that there was a difference between the left-hand movements of the two participants. In addition, the eye-tracking analysis showed that there were no differences between the participants' eye movements. Clearly, the findings were based on the analysis of the eye-tracker data and the cameras' photos without any supporting information from the participants, which means that the eye tracker was used as a usability testing tool. Another study [8] combined data from an eye tracker with subjective satisfaction measures, using the eye tracker to measure the fixation frequency and fixation duration of participants while they were performing tasks on the Web. After that, a comparison between the data generated from the eye tracker and the self-reported data from the collected surveys confirmed that there was a correlation between these objective and subjective measures [13]. In usability testing sessions, the presence of facilitators who observe the participants and collect information about their performance manually by filling out pre-designed forms has proven useful to usability testing practitioners. For instance, the observation technique was used in [9] to evaluate the usability of an educational game designed for children. In this experiment, the children were observed by a facilitator to record their body language, facial expressions and any other feedback that they showed while playing the game. The results were therefore derived from the information collected through observation and from the game experience questionnaires filled out by the participants. Using objective measures in user experience testing (UET) involves studying the changes in the physiological factors of the participants' bodies. Evidence suggests that biofeedback data analysis yields results comparable to the subjective analysis of paper surveys.
The objective of the experiment carried out in that research was to see whether the effects of different color combinations of texts and backgrounds of a website, and the mental stress occurring in each case, could be reflected by the physiological analysis [12]. Moreover, [13] suggested using a triangular approach in UET based on traditional usability testing methods and self-reported data combined with physiological and neurological measurements. Affective Computing Examining the usability of interface designs through measuring the emotions that users show toward computers is a young field of research. The ability to extract the emotions of users in certain situations falls under the field of affective computing. Affective computing can help human-computer interaction specialists, particularly in the usability and intuitive interface areas, in which machines and software could change their behavior based on users' responses [14,15]. Affective computing is an emergent area of research which deals with the relationship between emotions and computers [15]. It is a branch of computer science that carries out the process of designing and developing technologies that recognize, express and understand human emotions to better serve people's desires and to improve communication between people, especially those with special needs [14][16][17]. The main question of affective computing, as Picard said, is the feelings that machines might actually "have" [14]. Psychology, cognitive science, physiology and computer science are the disciplines associated with affective computing [18]. Therefore, emotions and related human behaviors are indispensable backgrounds of affective computing [15]. Emotions are physiological changes that take place in the body. Understanding others' emotions is often achieved by knowing them, making conversation and sharing experiences and feelings with each other [15]. Obviously, individuals are more aware of the emotions of their relatives and friends than of strangers. In 2003, Picard linked emotions with weather metaphors in her study: emotions are as hard to measure and predict as weather conditions. For instance, weather forecasting helps people avoid getting wet on a rainy day because they can know beforehand that they have to carry their umbrellas [14]. Therefore, knowing users' emotions can help in predicting certain situations. Given the different emotions that humans encounter every day, individuals' ability to report those momentary feelings differs according to personal characteristics and their understanding of how to describe what they are feeling, and whether or not the description is accurate. Collecting these emotions has been studied for more than a hundred years [16]. Skin conductivity, electrodermal activity (EDA), body temperature and heart rate (HR) are all considered physiological measures that can be used to detect the emotional responses of individuals. The proliferation of technologies that can measure, communicate and transform emotions, such as computers, sensors and smartphones, has been addressed in several research studies [e.g. 22, 27, and 28]. A recent research study used the Q Sensor on people with severe mental disabilities who could not express their feelings verbally, to understand their variations in emotional reactivity [23]. Besides, [26] showed that people can express what they feel with some distinct levels of variation in the expression. Table 1 illustrates additional technologies used in other experimental studies to detect different emotions. One entry is the Galvactivator, a skin-conductivity glove: participants listened to a session while wearing Galvactivator gloves, and the light emitted from each participant's device was used as a measure of the person's emotions [19]; the light was highly bright at the beginning of the presentation and in interactive sessions, whereas its brightness was low at the end of the session. Another entry is an approach proposed for recognizing emotions based on physiological signals [20], which detected many emotional arousals for each of the three subjects. Participants This research involved gathering data from 30 female adults with ages ranging between 19 and 27 years (Mean = 21.4 years, Standard deviation (SD) = 2.04 years). The detailed age groups of the participants are depicted in Table 2. Participants were faculty and students in the university environment, and they were informed about the experiment through announcements posted on campus; online announcements were also posted to reach a wider audience. The study was conducted over three days, and each participant was allocated a 30-minute session.
In another study, a remote millimeter-wave I-Q sensor was used to analyze real-time heart beats; the results revealed that the tool and the method were both successful in gathering and detecting heart beats and beat-to-beat heart rate, even in places with different levels of noise. Thus, it can assist other applications in detecting many kinds of heart disease [24]. In addition, with technology changing rapidly, it is now possible to assess a person's HR using a smartphone camera. What is more, Lakens has shown that the method was successful in determining the anger and happiness that individuals express during experiments [22]. Apparatus Several software and hardware devices were used in this study, including an LED TV. Firstly, the Q Sensor (Figure 1), a non-obtrusive bracelet worn on the wrist: it is a biosensor that wirelessly measures emotional arousal via skin conductance, a form of Electrodermal Activity (EDA). EDA consists of electrical changes measured at the surface of the skin that arise when the skin receives innervating signals from the brain [2]; it increases or decreases during situations such as excitement, attention, anxiety, boredom or relaxation. The Q Sensor also measures the temperature and activity of the wearer's body [6]. Secondly, the Xbox Kinect (Figure 2), a controller-free, full-body gaming device whose sensors capture a person's gestures and respond to them. Lastly, QLive was used for viewing, tracking and recording a live stream of data from the Q Sensor, which appears as graphs, and to ensure that there were no connection errors. Stimuli A game named "Adventure Game" was used as the stimulus for this experiment [25]. This game consists of two levels with an approximate duration of five minutes. Procedure The experiment was conducted in a room with sufficient space for capturing the participant's movement (Figure 3). Figure 3 Experiment Room In this experiment, each participant was requested to perform a task consisting of two phases. In the first phase, the participant was asked to warm up by walking up and down the stairs for three minutes to ensure that skin conductance responses would be detected by the Q Sensor device. After that, she was requested to relax for six minutes.
In the second phase, the participant was asked to wear the Q Sensor bracelet on the wrist of the right hand and stand in front of the Xbox Kinect to play a game. The "Adventure Game" was presented to the user as shown in Figure 4. Figure 4 Adventure game [30] Lastly, a game experience survey was presented to the participant to be filled out. During the second phase, two observers were tasked with rating the level of excitement of the player each minute on a pre-designed scale, and a dedicated facilitator operated the QLive software, which was activated during the sessions. As an incentive, a fifty Saudi riyal coupon was presented to each participant, along with a certificate of participation. Q Sensor's Data Analysis The Q Sensor outputs its data as graphs, as shown in Figure 5. These graphs were divided into four or five segments based on the participant's playing time; each segment represents the participant's interaction with the game over a duration of one minute. The highest peak value of each segment was identified and the average of these peaks was calculated, as depicted in Table 3. To determine the participant's level of excitement, the mean EDA over the specified segments was mapped to excited, moderate or not excited. The obtained averages ranged between 0.11 µS and 7.776 µS, with an overall mean EDA of 3.7 µS. This range between the minimum and the maximum values was divided equally into three sub-ranges representing the three levels of excitement during the game. As a result, an average between 5.34 µS and 7.776 µS was mapped to excited, the range between 2.67 µS and 5.33 µS indicates that the participant was moderately excited, and the not-excited level lies between 0.11 µS and 2.665 µS (Table 3). Observation Direct observation was used as it has the highest degree of 'ecological' validity [29]. In each session, two observers simultaneously observed the participant and specified the level of excitement for each minute of play; across the sessions, the two observers covered all 30 participants. Table 5 shows a sample of the data recorded by one observer. The observers agreed beforehand on the criteria for determining the excitement level, based on facial expressions such as smiling and eye focus, and body movements such as feet bouncing. According to these criteria, big smiles and laughing were taken to indicate that the participant was excited, slight smiling and slow body movements indicated that the excitement level was moderate, and an appearance of boredom or lethargy in the participant's facial expressions or body movements was mapped to the not-excited level. The data from the two observers for each session were compared. After indicating the excitement level for each minute, the level of excitement for the whole activity was determined by calculating the median. To estimate the median, number one was assigned to "Not excited", number two to "Moderate" and number three to "Excited". The four or five assigned numbers were sorted in ascending order, and the median was taken as the average of the two middle numbers if there were four values, or the third number if there were five. This result was then matched to the appropriate excitement level to obtain the overall scale of enjoyment (Table 6).
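To make the two mappings just described concrete, the following minimal Python sketch (not from the paper) maps a segment's mean EDA to an excitement level using equal thirds of the observed range, and collapses per-minute ratings into a session-level median. The boundaries come from the reported range (0.11-7.776 µS), but the equal-thirds cut-offs and the tie-handling convention for a median of 2.5 are assumptions; the paper's published cut-offs (2.665 µS and 5.33 µS) deviate slightly from exact thirds.

```python
import statistics

def eda_level(mean_eda_us, lo=0.11, hi=7.776):
    """Map a segment's mean EDA (microsiemens) to an excitement level by
    splitting the observed [lo, hi] range into three equal sub-ranges."""
    step = (hi - lo) / 3.0
    if mean_eda_us < lo + step:
        return "Not excited"
    elif mean_eda_us < lo + 2 * step:
        return "Moderate"
    return "Excited"

def session_level(per_minute_levels):
    """Collapse per-minute ratings into one session rating via the median,
    after coding Not excited=1, Moderate=2, Excited=3."""
    code = {"Not excited": 1, "Moderate": 2, "Excited": 3}
    label = {1: "Not excited", 2: "Moderate", 3: "Excited"}
    med = statistics.median(code[l] for l in per_minute_levels)
    # A median of 2.5 (possible with four ratings) falls between levels;
    # rounding half up is one plausible convention for such ties.
    return label[int(med + 0.5)]

print(eda_level(3.1))  # Moderate
print(session_level(["Excited", "Moderate", "Excited", "Moderate"]))  # Excited
```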
In addition, in order to reduce the participants' nervousness about the observers' presence, the six-minute relaxation period was used; the goal of this period was to let the participants become less conscious of the observers [29]. The participants were informed at the beginning of the session about the purpose of the study, the reason for documenting their activities, and the investigation that took place while they were playing. Survey At the end of each session, a game experience survey was given to the participant to evaluate her excitement level. This survey contains seven questions which gather information about demographics, excitement level and experience with the game; the latter two points are illustrated below. The participant's level of excitement was determined from what she reported about herself in the survey and used as a subjective measure (Table 7). The answers were mapped to extremely excited, moderately excited or slightly excited. Table 9 Sample of survey results for 5 participants Demographic Survey Related to Gaming As for the participants' previous experience with the game, nineteen participants had not played the game before, while ten of them had. Those ten participants were divided into three categories: six had played the game once before, two had played it two or three times, and the remaining two had played it five times or more (Mean = 0.655, SD = 3.566). Results Different kinds of data for determining the excitement level were gathered from multiple sources in the previous sections: the EDA-based physiological sensor graphs, the observers' filled forms and the game experience surveys. The collected data were studied from two angles. First, the relationship between the previous playing times collected from the game experience surveys and the excitement level specified by either the self-reported data or the EDA-based physiological sensor data was examined. Second, statistical analyses were performed using IBM SPSS predictive analytics software. Survey Results According to the surveys' results, Figure 6 shows the relationship between the participants' number of previous playing times and their excitement level based on their self-reported data; this was mainly done to learn how they felt during the game. The chart demonstrates that, of the nineteen participants who had not played the game before, 47.36% reported that they were excited, only 5.26% indicated that they were not, and the remaining participants reported that they were moderately excited. Moreover, of the six participants who had played the game once before, 83.33% stated that they were excited while 16.67% were moderately excited. The last two categories contained two participants each: of those who had played the game two to three times before, one reported being extremely excited while the other was moderately excited; all of the participants who had played the game five or more times before were moderately excited. The survey question used to indicate the participant's previous experience (asked in both English and Arabic) was: "7 - Did you play the game before?"
On the other hand, Figure 7 illustrates the relationship between the number of previous plays collected from the surveys and the excitement levels obtained from the EDA-based physiological sensor results. For the nineteen participants who had not played the game before, the sensor results indicated that 16.7% (of the full sample) were excited, 20% were not, and 26.7% were in the middle. In addition, for the participants who had played the game once before, the EDA-based physiological sensor analysis indicated that the excitement level was moderate for two participants, two were not excited, and two were extremely excited. What is more, all of the participants who had played the game two to three times before were not excited, and all of those who had played it five or more times were moderately excited. Tests Based Analysis In order to examine whether our findings, as presented in Table 11, can support our hypothesis of using the EDA-based physiological sensor device as a usability testing tool, manual calculations along with statistical tests were used. Table 11 Sample of observers' data, surveys' data and Q Sensor's results for 5 participants Comparative analysis of results: In this section, the level of agreement between the outcomes of the observers, the EDA sensor and the game experience survey results is presented. The following sections address the way in which each comparison was performed. External Observation The percentage of agreement between the two observers' results is 79.31%, reflecting twenty-three matching results out of twenty-nine; this percentage of agreement is relatively high. External Observation and the Survey Results The percentage of agreement between the two observers' results and the surveys' results is 44.83%, reflecting thirteen matching results out of twenty-nine; this percentage of agreement seems fair. External Observation and the EDA-Based Sensor Results The percentage of agreement between the two observers' and the EDA-based sensor findings is 31.03%, reflecting nine matching results out of twenty-nine; this finding is relatively low. The Survey and the EDA-Based Sensor Results The percentage of agreement between the survey results and the EDA-based sensor findings is 34.48%, reflecting ten matching results out of twenty-nine; similar to the comparison between external observation and the EDA-based sensor, this finding is relatively low. External Observation, Survey and EDA-Based Sensor Results The percentage of agreement among all of the subjective and objective measures is 10.34%, reflecting three matching results out of twenty-nine; this percentage of agreement is very low. Statistical Tests Two statistical tests were conducted: the Wilcoxon signed-ranks test and the Friedman test. In both tests, a P-value is generated to check whether there is a significant difference between the entities, by comparing the P-value with the constant 0.05: if the P-value is greater than or equal to 0.05, there is no significant difference between the entities; otherwise there is a difference. External Observation Since the two observers act as one paired group, the Wilcoxon signed-ranks test was used, and the measurement was ranked as excited, moderate and not excited.
The Wilcoxon signed-ranks test revealed that there is no statistical difference between the two observers (P-value = 0.102, which is greater than 0.05). External Observation and Survey Results The Friedman test was used because it is normally applied when the same sample of participants is measured at three or more points at the same time, here the two observers and the survey. The results obtained from this test indicated that no significant difference was evident between the two observers and the survey results (P-value = 0.199, which is greater than 0.05). External Observation and the EDA-Based Sensor Results There is a significant difference between the observers and the EDA-based sensor results when using the Friedman test (P-value = 0.003, which is less than 0.05). Survey and the EDA-Based Sensor Results Using the Wilcoxon signed-ranks test, the results clearly showed a significant difference between the survey and the EDA-based sensor results (P-value = 0.007, which is less than 0.05). External Observation, Survey and the EDA-Based Sensor Results A significant difference was revealed between the observers, the survey and the EDA-based sensor results when using the Friedman test (P-value = 0.001, which is less than 0.05). DISCUSSION To find a relationship between the excitement level of the participants and their previous experience in playing the "Adventure Game", the EDA-based sensor data and the self-reported data were examined. Those who had not played the game before showed 31.58% matching between the subjective and objective measure sources: six participants out of nineteen had results in both sources indicating the same category (extremely excited or moderately excited). Moreover, two of the six participants who had played the game once before matched, a matching percentage of 33.33%. There was no match between the EDA-based sensor results and the surveys' results for those who had played the game two to three times before. In contrast, the results for the participants who had played the game five or more times matched perfectly between the two measures. Measuring the user experience of participants who were familiar with the game was effective when captured with the EDA-based sensor data, because it was in line with both the observation and the self-reported data. On the contrary, the results for users who were unfamiliar with the game varied. Generally, without considering the external observation, the agreement percentage between the Q Sensor's results and the surveys' results was 34.48%. According to the test-based analysis, the agreement percentage between the two observers was 79%, which drives us to believe that they had almost the same view, and the Wilcoxon signed-ranks test supports this finding by revealing no statistical difference between them. After that, the agreement percentage between the observers and the surveys' results was calculated and found to be 44%, while the Friedman test indicated no significant difference between them. Consequently, we found that the observers were partially able to sense the participants' feelings, and therefore the two subjective measures used in this research study agreed with each other to a large extent.
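For readers reproducing this kind of analysis outside SPSS, the sketch below shows how the percent-agreement figures and the two tests could be computed with scipy; the ratings here are synthetic stand-ins (coded 1 = not excited, 2 = moderate, 3 = excited), not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical per-participant ratings for the 29 usable sessions.
rng = np.random.default_rng(0)
obs1 = rng.integers(1, 4, size=29)
obs2 = rng.integers(1, 4, size=29)
survey = rng.integers(1, 4, size=29)

def percent_agreement(a, b):
    """Share of sessions on which two measures give the same level."""
    return 100.0 * np.mean(np.asarray(a) == np.asarray(b))

print(f"observer agreement: {percent_agreement(obs1, obs2):.2f}%")

# Two paired measures -> Wilcoxon signed-ranks test (zero differences
# are dropped under the default zero_method).
stat, p = wilcoxon(obs1, obs2)
print(f"Wilcoxon, observer 1 vs. observer 2: p = {p:.3f}")

# Three or more related measures -> Friedman test.
stat, p = friedmanchisquare(obs1, obs2, survey)
print(f"Friedman, observers vs. survey: p = {p:.3f}")
```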
Turning to the objective measure used in this study, the EDA-based sensor data: when comparing it with the observers' data, we found that the percentage of agreement between them was low, in line with the Friedman test, which showed a significant difference between them. Since this percentage is relatively low, we then considered the second subjective measure, the self-reported data, and the agreement percentage was also low; furthermore, the Wilcoxon signed-ranks test revealed a significant difference between them. The findings suggest that the EDA-based sensor did not effectively measure the level of excitement of the user when compared with the self-reported data and the human observers' ratings. From this experience it would be interesting to consider the overall agreement in a triangular approach similar to the one presented in [13]. This was adopted to take into consideration three measures: the external observation, the self-reported data from the collected surveys, and the objective measure, the EDA-based sensor data. The agreement percentage between external observation, self-reported data and EDA-based sensor data was very low, as the Friedman test showed. These findings partially supported our hypothesis, in that the EDA-based sensor was able to add insights to the user experience, especially with people who are familiar with the system. However, the hypothesis was not supported when the EDA-based sensor was compared with the observers' data and the self-reported data, because of the discrepancy between the human-reported data and self-reported data on one side and the EDA-based sensor on the other. Another reason could be the individual differences between people and how the EDA-based sensor detects this arousal level: an individual's reactions in a specific situation differ from the reactions of another individual under the same circumstances. CONCLUSION This paper examined the use of an EDA-based sensor as a usability testing tool by performing an experiment on female adults. The experiment involved subjective and objective observations of subjects while they were wearing an EDA-based sensor and playing a game named "Adventure Game" on the Xbox Kinect device. Two perspectives were considered when analyzing the data gathered from the observers, the EDA-based sensor and the surveys. First, finding a relationship between previous experience with the game and the excitement level specified by one of two sources, the EDA-based sensor data or the self-reported data. Second, analyzing the collected data while taking into consideration the different combinations of the subjective and objective measures used in this study. As a result, the findings partially supported the hypothesis. FUTURE WORK Future work would examine the findings on a larger data set, and perhaps with different biosensors, to see what other tools might be effective in usability testing in the context of game design and development. LIMITATIONS 1. Fifteen minutes were set between the sessions to solve the Bluetooth connection problems of the Q Sensor with the laptop. 2. Due to the Q Sensor's discontinuation, the analysis feature was no longer supported, and we were forced to analyze the data manually and using IBM SPSS predictive analytics software.
2014-08-29T15:37:28.000Z
2014-07-31T00:00:00.000
{ "year": 2014, "sha1": "a14ac35dca53c6a01a90237cd2048ee8b423749e", "oa_license": null, "oa_url": "https://doi.org/10.5121/iju.2014.5301", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "a14ac35dca53c6a01a90237cd2048ee8b423749e", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
247901594
pes2o/s2orc
v3-fos-license
Demand Response Program Implementation Methodology: A Colombian Study Case: Industrialization and urbanization are responsible for Greenhouse Gas (GHG) emissions and could generate energy shortage problems. The application of Demand Response (DR) programs enables the user to be empowered towards a conscious consumption of energy, allowing the reduction or displacement of the demand for electrical energy and contributing, among other things, to the sustainable development of the sector and the operational efficiency of the electrical system. A reference framework for this type of program is detailed, along with a literature survey applied to the Colombian case. The considerations in designing a methodology for the implementation of the DR pilot are presented, including whether the pilot is located in an interconnected system zone or a non-interconnected system zone, together with the application of the design methodology in the modeling of three DR pilots in Colombia. For the modeling of the pilots, the characteristics of the area and the base consumption of the users are considered, and the characteristics and assumptions of the pilot are also defined. Furthermore, the DR pilot in each zone is detailed considering four types of users. The results show the potential for energy reduction and displacement in different time bands for each zone, which allows the benefits to be assessed from a technical, financial, and environmental point of view, along with the costs of each pilot in monetary terms; the aim is not to compare the pilots with each other, but to illustrate the values that must be taken into account in those analyses. A sensitivity analysis of each pilot was also carried out, considering the variation of the benefit/cost ratio with the energy rate in peak hours vs. off-peak hours and the base energy rate in the area. The sensitivity analysis shows that, when varying the level of energy demand response and the number of pilot participants, the cases in which the benefit/cost ratio remains greater than 1 can be identified. Introduction The global electricity demand is gradually increasing over time, which implies an increase in greenhouse gas (GHG) emissions and their related effects on the environment. Factors such as the daily and seasonal variability of demand and the availability of primary energy resources represent new challenges for energy security [1]. The challenge is therefore to continue economic growth while ensuring efficient energy consumption in a context of environmental responsibility. One of the strategies developed to support the previously stated objectives is efficient demand management, which includes different measures to achieve the active participation of energy users through DR programs [2,3]. The application of DR programs aims to empower users to be aware of their consumption and to contribute to the reduction or displacement of electricity demand, bringing benefits to themselves and to the sustainable and resilient development of the sector [4]. Highlights include the increase in the operational efficiency of the electricity system, the optimization of investments in electricity infrastructure for the provision of the service, and the integration of technology by the user.
Each country or region has particular characteristics that need to be considered in DR programs [5]. Colombia has an annual per capita consumption of 1,312 kWh/inhabitant, a value that can be considered low when compared to the OECD value of 8,009 kWh/inhabitant [6]. The Colombian electricity sector has implemented DR programs that have demonstrated the ability of users to participate in the system. For example, the Apagar-Paga program achieved savings of 500 GWh and 170 MW in one month [7]; the Voluntary Disconnectable Demand (DDV) mechanism and the Demand Response in critical conditions mechanism registered availabilities of more than 171 MW and 76 MW, respectively. In addition, studies show a DR potential of up to 2,500 GWh in 2030, as well as an estimated mitigation potential of up to 2 MtCO2 for non-critical system conditions [8]. Therefore, mechanisms and programs are required to achieve this potential. The diversity of criteria, requirements, and information needed to design a DR program is wide and depends on the local context [9]. It is necessary to consider, from the design stage of the pilot, aspects related to the analysis and evaluation of the DR program and communication strategies with the client, among others. Given the limited experience in Colombia in the design and implementation of DR programs, it is necessary to create a methodology that helps supplying companies go through all the associated processes and that allows the empowerment of the client, guaranteeing the achievement of results. This paper presents the design and development of a methodology for the implementation of DR programs and its application to the Colombian case. The objective of this paper is to present the design of a methodology for the implementation of DR programs and to develop the simulation of a pilot for its application to the Colombian case. Three types of DR pilots are considered: two for the zones located in the National Interconnected System (NIS) and one for those located in the Non-Interconnected Zones (NIZ). In each pilot, the characteristics of the area and the base consumption of the users are considered, and the characteristics and assumptions of the pilot are also defined. REFERENCE FRAMEWORK This section presents the fundamental concepts related to DR, such as its definition, the stages and components of a DR program, and the aspects involved in empowering the user participating in this type of program. Finally, a state of the art of this topic in the Colombian context is presented. Fundamental concepts DR is a mechanism used to manage the consumption of energy demand through actions that allow the reduction or displacement of that demand. This is achieved by shifting energy consumption from peak periods, where consumption is higher, to off-peak periods, where consumption is lower, motivated by the increase in the price of energy at times of high demand, thus optimizing the use of electricity infrastructure [10]. However, encouraging active demand-side participation depends on economic incentives offered for participants' contributions, or penalties for non-compliance with commitments.
A DR program can be defined as a set of criteria and requirements or attributes to plan, incentivize, activate, measure, verify and report the response of an individual or aggregated electricity demand in the technical-economic and environmental operation processes of the electricity system [11]. It is essential to know the consumption patterns of users; these vary according to the type of user, the region, and the type of activity they carry out. The United States Department of Energy (DOE) proposes two types of mechanisms to incentivize changes in consumption patterns: price-based programs and incentive-based programs [12]. To address all aspects associated with a DR program, the stages and components shown in Figure 1 are outlined. In the formulation stage, the type of program is identified, either incentive-based or price-based. For the Colombian case, the Intraday Tariffs, Load Management, and Market Demand programs are established [13]. For each one, the objectives and goals to be achieved with its implementation are defined, as well as the actions that promote the program and its limitations. The attributes of the program are established, which depend on the type of market, the available resources, the type of incentive received for effective participation, and the period in which it is executed, among others. In addition, the technological architecture and standards of the DR program, associated with the use of advanced measurement technologies, communication systems, and real-time management, are defined. For the characterization and monitoring of users participating in DR programs, it is necessary to know and standardize their energy consumption behavior, since this is one of the most important inputs for the electricity consumption baseline (CBL) [14]. To determine it, a variety of mathematical and statistical methodologies must be combined. The CBL must be calculated accurately to avoid bias, since it will be used to compare the consumption pattern had the event not materialized with the consumption pattern when the event does occur. Criteria of quality, accuracy, completeness, simplicity, and alignment should be included. An important component is communication with the user, which should include the awareness and data reporting stages of the program operation. In the final evaluation stage, a methodology for monitoring the DR programs is established to quantify the economic, energy, and environmental impacts of their implementation. The comparison of ex-ante and ex-post conditions allows analyzing the programs' results and establishing the necessary improvements and corrections. Technical indicators refer to energy demand, such as average energy reduction levels or reduction per event, in peak or off-peak hours, per user and per economic sector, among others. Economic indicators refer to the increase or reduction of money for all those involved in the system. Environmental indicators are intended to reflect the state of the environment as a result of the application of DR programs, specifically to evaluate greenhouse gas (GHG) emissions and other parameters. Social indicators refer to users and are defined in terms of those who participate in DR programs.
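The text does not prescribe a specific CBL formula. One common industry approach, assumed here purely for illustration, is a "high X of Y" baseline: among the last Y eligible (non-event) days, average the hourly profiles of the X days with the highest total consumption.

```python
import numpy as np

def high_x_of_y_baseline(daily_profiles, x=5, y=10):
    """Illustrative 'high X of Y' consumption baseline (CBL):
    among the last y non-event days, pick the x days with the highest
    total consumption and average them hour by hour.

    daily_profiles: array of shape (n_days, 24), most recent day last.
    Returns a 24-value hourly baseline."""
    recent = np.asarray(daily_profiles, dtype=float)[-y:]  # last y eligible days
    totals = recent.sum(axis=1)                            # daily energy per day
    top = recent[np.argsort(totals)[-x:]]                  # x highest-usage days
    return top.mean(axis=0)                                # hourly average

# Synthetic example: 15 days of a roughly flat 1 kW load with noise.
rng = np.random.default_rng(1)
profiles = 1.0 + 0.1 * rng.standard_normal((15, 24))
cbl = high_x_of_y_baseline(profiles)
deviation = cbl - profiles[-1]   # event-day deviation vs. the baseline
print(cbl.round(2))
```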
Additionally, aspects such as difficulty of access to information, speculation in technology prices, and the users' levels of energy culture should be considered as risks in the implementation of the program. A fundamental aspect of the success of DR programs is the empowerment, motivation, and loyalty of active users. For this, it is necessary to consider the aspects shown in Figure 2. In each DR program, the reporting of data to the user is fundamental for their empowerment: detailed information on consumption is one of the bases for promoting energy culture and obtaining responses from the user. Considering the segmentation of users according to different socio-cultural and demographic conditions is vital to increase the effectiveness of the programs and of the communication strategies needed to make users aware of the different DR programs and their operating principles: generating expectation campaigns, digital marketing, campaigns through leaders, and experiential activities. For DR programs to be successfully implemented, it is necessary to raise user awareness beforehand by informing them of the main aspects of the respective program. It is also essential to have different means of two-way communication between the end user and the company offering the program. Furthermore, it is important to identify user preferences to learn about the different perceptions people have of the DR programs, the status of their implementation, and the main barriers; the development and application of research tools, such as surveys and targeted polls, is suggested. State of the Art To implement DR programs in the country, the public policy, regulatory, and normative background is taken into account: the guidelines on efficient energy management dictated by Law 1715 of 2014 [15], the CREG resolutions associated with the mechanisms that have been enabled in the country (such as Voluntary Disconnectable Demand [16] and Apagar Paga [17,18,19]), and other documents issued by the Ministry of Mines and Energy. It is necessary to highlight that a fundamental element is the tariff structure, which establishes the unit cost (UC) of providing the service and allows the aggregation of cost components, using tariff formulas defined by the Commission or by market mechanisms. Currently, the participation of demand in the Colombian context is limited to programs designed to support the operation of the electricity system under critical energy supply conditions, especially those associated with extreme weather phenomena such as El Niño. It is necessary to enable demand with participation tools under normal conditions of the electric system, taking into account the particularities of the Colombian electricity market.
DESIGN OF THE METHODOLOGY The implementation of DR program pilots will make it possible to test the validity of the assumptions considered in the design of the programs, identify the components of successes and failures to intervene, test the effectiveness and relevance of the instruments, methodology, and protocols designed for the programs, and identify the variables of interest and how to measure them conveniently. In this work, two types of DR pilots were considered: one for the zones located in the National Interconnected System (NIS) and the other for those located in the Non-Interconnected Zones (NIZ). This main differentiation is made taking into account that the electricity supply in each of these cases differs in the characteristics of the electricity system, costs, and environmental impacts, which influences the evaluation and monitoring methodology. For the design of the pilot, five stages are defined, together with their corresponding activities, as shown in Table 1. These stages must be executed sequentially, verifying compliance with each activity. In each phase, it is necessary to define the execution schedule, the responsibilities matrix, and the risk matrix. Each of the activities that make up the stages involves gathering and analyzing information and executing a systematic procedure. As an example of one of the activities, Figure 3 shows the flowchart for site selection for the NIS. As a result of this analysis, the DR pilot was designed for three zones: one located in a NIZ and two locations associated with the NIS; the particular conditions of each zone differ in demographics, climate, socioeconomic conditions, and productive activities. The pilot design aims to establish the order of magnitude of potential benefits and costs in order to estimate funding requirements. For the three zones, the DR program considers that the level of demand response is given by a price signal or incentive that modifies electricity consumption through displacement and reduction. For the valuation of the DR pilot, an analysis of its potential benefits and estimated costs is carried out. Some of the considerations are: • In the absence of tariff mechanisms, the valuation of benefits considers equivalent values that emulate an hourly/slot rate and a number of events that reflect a reduction or displacement at the hourly level. • The analysis of energy consumption is based on public information. • Only those benefits that could be evaluated within the execution of a 6-month pilot will be considered. STUDY CASE For the modeling of the pilots, the characteristics of the area and the base consumption of the users are considered, and the characteristics and assumptions of the pilot are also defined. The design could be made based on aggregate curves of available information and current price signals; however, participation in the DR programs requires individual action to achieve an efficient allocation of incentives, and it is therefore necessary to establish specific, hourly behaviors for each type of user.
Figure 4 shows the types of users considered, named after representative Colombian animals. The first is the residential user, whose behavior is called the lazy bear, since its consumption varies once or twice a day; the second type of user is called the turtle, whose consumption is associated with daytime work activities (this type of user can be commercial or industrial); the third type of user is the manatee, whose consumption does not vary significantly during the day; the frog-type user has consumption without a defined pattern, unrelated to the hours of the day; and finally, the opossum-type user's consumption is mainly nocturnal. The definition of a price signal or incentive should be aligned with consumption behavior, and for each type of user, the range in which a price signal or incentive is required to achieve a change in consumption behavior or an event should be established according to the type of application at which the DR program is aimed. According to the above, the following criteria are established to model the DR potential (displacement/reduction); a minimal numerical sketch of the last two criteria follows this list: • Peak slots are assigned based on peak consumption hours, taking into account the peak of the national consumption profile. • Sensitivities are required concerning the peak-slot-valley tariff ratio. • Groupings of day types are related to the associated consumption profiles; similar profile shapes allow the grouping of corresponding days. • The delta DR considers aspects of the environment such as climate, rural or urban location, and the customer's main activity; these factors influence consumption flexibility. • The ability to shift and reduce consumption varies in each time slot for each type of user. • The customer's response to the tariff variation is not immediate, and the gradualness of the change depends mainly on their main or productive activity. • The potential for consumption variation is directly related to the level of power consumed in each band and its flexibility in each band.
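As announced above, here is a minimal numerical sketch of the last two criteria: a band-limited shift that moves a flexibility-bounded fraction of peak-band energy into the off-peak band while conserving total daily energy. The band hours and the 10% flexibility factor are invented for the example and are not the pilot's parameters.

```python
import numpy as np

def shift_peak_to_offpeak(profile, peak_hours, offpeak_hours, flexibility=0.10):
    """Shift a fraction of peak-band consumption to the off-peak band.
    profile: 24 hourly consumptions (kWh); flexibility: share of each
    peak hour's load that the user is willing to displace."""
    shifted = np.asarray(profile, dtype=float).copy()
    moved = shifted[peak_hours] * flexibility    # displaceable energy per peak hour
    shifted[peak_hours] -= moved
    # Spread the displaced energy uniformly across the off-peak hours, so
    # total daily energy is conserved (pure displacement, no reduction).
    shifted[offpeak_hours] += moved.sum() / len(offpeak_hours)
    return shifted

profile = np.ones(24)            # flat 1 kWh/h base case
peak = [18, 19, 20, 21]          # assumed evening peak band
offpeak = list(range(0, 6))      # assumed night off-peak band
new_profile = shift_peak_to_offpeak(profile, peak, offpeak)
assert abs(new_profile.sum() - profile.sum()) < 1e-9   # energy conserved
```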
For the simulation of the pilot, reference curves are obtained, defined by the consumption level at the hourly level (blue line), the time slot (red/grey bars), and the demand response (green line, where a positive value (+) means a reduction in consumption and a negative value (-) means an increase in consumption with respect to the base case). Three types of days are considered, grouped by their similarities in consumption profiles: type 1 (T1: Monday to Friday), type 2 (T2: Saturdays), and type 3 (T3: Sundays, holidays, and special days). Figure 5 shows some examples of the results obtained. For the valuation of the pilot, three categories of benefits are proposed: technical (Tec), economic (E-F), and environmental (Env). Table 2 includes the complete list, including the quantification of each benefit in its respective units and the monetization in thousands of COP for each of the 3 zones (Z) previously chosen. Z1 and Z2 correspond to two different locations within the National Interconnected System, while Z3 is in the non-interconnected zone. It is important to emphasize that this comparison does not seek to show which pilot presents more benefits, but rather to illustrate to the reader the values that should be taken into account when performing this type of analysis. For the valuation of costs (Table 3), only those related to telecommunication equipment, the data plan and platform for access to information, the human resources for managing the DR program, and the quantification of the incentives designed for the pilot are considered; these costs will be assumed by the bidding company. For each cost, it is specified whether it is an Investment (I) or an Administration, Operation, and Maintenance (AOM) cost. The user of the DR program may incur optional costs, such as the adaptation and installation of technology within their property; these costs are beyond the scope of this work. The consolidated results of benefits and costs for the three DR pilot zones are shown in Table 4. The design of the pilot raises some questions: What will be the level of demand response (demand elasticity)? What will be the efficient number of slots and their duration? What is the efficient value of the slots? And is there certainty about the realization of benefits and costs? To answer these questions, it is necessary to evaluate the sensitivity of the pilot: an analysis was carried out to establish how benefits and costs vary when the value of the peak-hour tariff is modified versus the relationship between the off-peak tariff and the base energy tariff in the area. The sensitivity was also analyzed when the level of demand response (∆DR) and the number of participants in the program were modified.
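To illustrate the kind of grid behind this sensitivity analysis, the sketch below evaluates a stylized benefit/cost ratio over the two axes described in the text (off-peak/base tariff ratio from 50% to 100%, peak/off-peak ratio from 1.5 to 8) and flags the cells where B/C > 1. The benefit and cost formulas are placeholders, not the pilot's actual valuation model.

```python
import numpy as np

# Axes mirror the sensitivity described in the text: off-peak tariff as a share
# of the base tariff (50%-100%) and peak value as a multiple of off-peak (1.5-8).
offpeak_ratio = np.linspace(0.5, 1.0, 6)      # y-axis
peak_ratio = np.linspace(1.5, 8.0, 14)        # x-axis

def bc_ratio(op, pk, shifted_kwh=100.0, base_tariff=1.0, fixed_cost=150.0):
    """Stylized benefit/cost: benefit = energy shifted off peak times the
    peak/off-peak price spread; cost = a fixed program cost. Placeholder model."""
    saving = shifted_kwh * base_tariff * op * (pk - 1.0)
    return saving / fixed_cost

grid = np.array([[bc_ratio(op, pk) for pk in peak_ratio] for op in offpeak_ratio])
viable = grid > 1.0   # the 'green zone': participation pays off
print(viable.astype(int))
```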
For the case of zone 1, as shown in Figure 6, the sensitivity analysis of the pilot shows that the base case (red dot) is obtained at 70% for the ratio of the off-peak hourly cost to the base tariff (y-axis) and at 2.5 times for the peak value with respect to the off-peak value (x-axis). The green zone indicates that, from the pilot valuation point of view, the user would benefit from participating in the DR pilot within the established ranges and values (the white zone is the limit of indifference between participating in the DR program or not). In general, the sensitivities consider that the value of the off-peak band (to shift consumption to this band) ranges between 50% and 100% of a user's base tariff. On the other hand, for the peak value, reference values were considered for the cost of rationing and the different demand steps (1.5%, 5%, and 10%). Therefore, considering that step 3 represents an impact on demand of 10% (higher than the assumption considered for the DR contribution), the sensitivity of the peak/off-peak value ratio ranges from 1.5 to 8, where the latter point represents the approximate ratio between the rationing cost of step 3 published by UPME [20] and the base tariff of the zone. Similarly, the sensitivity analysis shows, when the level of energy demand response and the number of pilot participants are varied, the cases in which the benefit/cost ratio (B/C) is greater than 1 (green cells). The first column represents the percentage by which the demand response may vary, and the first row represents the percentage by which the number of users participating in the pilot may vary. Figure 7 shows the results for zone 1. RECOMMENDATIONS • During the execution of the pilot, it is important to evaluate aspects such as the willingness to change (∆DR per band) and the validation of the price-demand elasticity, changes in daily consumption, the impact of price variations and tariff components, and participation in the pilot considering the relation between the base tariff and the hourly tariff. • The main recommendations aim to deepen the mechanisms that could enhance and encourage the active participation of users in DR programs, taking into account identification and segmentation, the type and quality of the information transmitted, the knowledge of preferences related to DR programs, and the promotion strategies that should be applied for the development of DR pilots. Communication strategies should be designed in such a way as to allow users to learn about the different DR programs and the operating schemes of these mechanisms. • The promotion strategies implemented should allow continuous interaction between the users and the company offering the program, and DR program notifications should be delivered on time. • The selection of the strategy must be based on the characterization and segmentation of the users, so that the messages, channels, and other elements of the strategy are suitable for each type of user. • To characterize and monitor user behavior, the consumption baseline must be constructed taking into account the following criteria: quality, accuracy, completeness, simplicity, and alignment.
• The different stakeholders involved in the development of the pilot must be defined. These stakeholders can be governmental entities, companies of the sector, and the users of the energy service. The results of the simulation of the DR pilot show the potential for energy reduction and displacement across the different time bands for each zone, which makes it possible to assess the benefits from a technical, financial, and environmental point of view, together with the cost of each pilot in monetary terms; the purpose is not to compare the pilots with each other, but to illustrate the values that must be taken into account in this type of analysis.

• The sensitivities consider that the value of the off-peak band (to which consumption is shifted) ranges between 50% and 100% of a user's current tariff, and that the sensitivity of the peak/off-peak value ratio ranges from 1.5 to 8. Similarly, the sensitivity analysis identifies the combinations of demand response level and number of pilot participants for which the benefit/cost ratio is greater than 1.

Figure 1. Stages and components of a DR program.
Figure 2. Aspects for empowering the active user of DR programs.
Figure 3. Analysis of the pilot location of the DR program in NIS.
Figure 4. Main consumption profiles in Colombia.
Table 1. DR PILOT DESIGN STAGES
Table 2. DR PILOT BENEFIT ANALYSIS
Table 3. DR PILOT COST VALUATION
Table 4. CONSOLIDATED RESULTS: BENEFIT/COST OF DR PILOTS
2022-04-03T16:42:19.856Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "c6b77bd3a7fb2f23d86a32d4cd471b16a2e48bf9", "oa_license": "CCBY", "oa_url": "https://revistas.utb.edu.co/tesea/article/download/465/363", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b27cc7abb6c50d260076ebd170abb032cd30f229", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
264396732
pes2o/s2orc
v3-fos-license
Hepatitis C Treatment Uptake Following Dried Blood Spot Testing for Hepatitis C RNA in New South Wales, Australia: The NSW DBS Pilot Study Abstract Background Dried blood spot (DBS) testing for hepatitis C virus (HCV) RNA provides a sampling option that avoids venepuncture and can be carried out in a nonclinical setting. Large-scale evaluations are needed to understand how DBS testing can reduce HCV burden. This study estimated the prevalence of, and factors associated with, HCV RNA and treatment initiation among people receiving testing in the NSW DBS Pilot, a state-wide pilot in New South Wales, Australia. Methods People at risk of HIV/HCV could participate via (1) self-registration online with a DBS collection kit delivered and returned by conventional postal service; or (2) assisted DBS sample collection at a community site or prison. Logistic regression was used to identify factors associated with detectable HCV RNA and treatment initiation within 6 months of testing. Results Between September 2017 and December 2020, 5960 people were tested for HCV (76% men, 35% Aboriginal and/or Torres Strait Islander, 55% recently injected drugs): 21% online self-registration, 34% assisted registration in the community, 45% assisted registration in prison. Fifteen percent had detectable HCV RNA (878/5960). Overall, 44% (n = 386/878) of people with current HCV initiated treatment within 6 months (13% online self-registration, 27% assisted registration in the community, 61% assisted registration in prison). Testing in prison compared with the community (adjusted odds ratio [aOR], 4.28; 95% CI, 3.04–6.03) was associated with increased odds of treatment initiation. Being a woman compared with a man (aOR, 0.68; 95% CI, 0.47–0.97) was associated with reduced treatment initiation. Conclusions The NSW DBS Pilot demonstrates the feasibility of using DBS to promote HCV testing and treatment in community and prison settings. The World Health Organization's Global Health Sector Strategy on Viral Hepatitis 2016-2021 presented targets to reach the elimination of hepatitis C (HCV) as a major public health threat by 2030 [1]. In Australia, the National HCV Strategy 2018-2022 laid out priority areas for action, including implementing approaches to maximize the number of people with HCV who are diagnosed [2]. HCV and the behaviors associated with its transmission, such as injecting drug use, are stigmatized, which can be a barrier to testing and treatment initiation [3]. Despite direct-acting antiviral (DAA) therapy being available in Australia since 2016, inequities in treatment uptake persist [4,5]. Simplified testing and treatment pathways can mitigate structural stigma to advance progress toward HCV elimination. Standard of care HCV testing and treatment often involve multiple visits to different providers, which is burdensome on the patient and risks increasing gaps in the cascade of care [6]. Offering testing and treatment in settings that are regularly used by people at risk of HCV is convenient and can improve treatment outcomes [7]. Among people who inject drugs, venepuncture can be painful and arduous [8] and has been shown to be less acceptable than fingerprick sampling [9,10]. Using dried blood spot (DBS) samples can simplify HCV testing pathways and is associated with improved testing [11,12] and linkage to care [12]. Studies have demonstrated high sensitivity and specificity in using DBS testing to detect HCV RNA [13].
DBS sample collection can be performed by a lay person with minimal training, and samples can be easily transported without specialized storage, making it convenient to use in low-resource settings [14]. Although there have been some evaluations of DBS testing for HCV [15], the majority of studies have been limited to the United Kingdom, and there have been few data on the characteristics of people receiving testing, which are needed to inform population-level scale-up of DBS testing. In an environment where multiple testing modalities are available, it is important to understand the role that DBS testing can play in improving health equity. The New South Wales (NSW) DBS Pilot was launched in November 2016 with the primary objective of increasing access to HIV testing for priority populations and was expanded to offer HCV RNA testing from September 2017. This analysis evaluated factors associated with detectable HCV RNA and factors associated with HCV treatment initiation postenrollment among people receiving testing for HCV RNA in the NSW DBS Pilot, a study evaluating scale-up of DBS testing for HCV RNA in NSW, Australia. Study Population and Design The NSW DBS Pilot is a state-wide observational cohort study evaluating the scale-up of HCV and HIV testing from DBS sample collection. Participants were recruited via 3 pathways (online self-registration for at-home sample collection, assisted registration at community sites, or assisted registration in prison) for HIV and/or HCV testing. Participants were recruited across New South Wales, online, at 36 community sites (providing some or all of the following services: drug treatment, needle and syringe provision, sexual health), and at 21 prisons. The NSW DBS Pilot is ongoing, and this analysis reports data from people receiving testing between September 2017 and December 2020 and people initiating treatment until June 30, 2021. The NSW DBS Pilot recruited people who provided informed consent and were aged ≥16 years. The NSW DBS Pilot began in November 2016, offering HIV testing. From September 2017, HCV testing inclusion criteria were identifying as Aboriginal or Torres Strait Islander or having a history of injecting drug use. From June 2019 to December 2020 (end of analysis period), inclusion criteria for HCV testing were expanded to people born in Asia or Africa and people who had ever been incarcerated. Participants who met the inclusion criteria for HCV testing were eligible for HIV testing. People who met the inclusion criteria for HIV testing (gay and other men who have sex with men [MSM], people from Sub-Saharan Africa and Southeast Asia, and people with current/previous sexual partners from Sub-Saharan Africa and Southeast Asia) were not automatically eligible for HCV testing.
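As a hedged illustration only, the eligibility rules just described can be encoded as a small screening function. The field names and the exact handling of the June 2019 expansion date are assumptions made for the sketch; the study's actual screening logic is not published as code.

```python
# Hypothetical encoding of the NSW DBS Pilot eligibility rules summarized above.
# Dictionary keys and the cutoff-date handling are illustrative assumptions.
from datetime import date

HCV_EXPANSION_DATE = date(2019, 6, 1)  # criteria widened from June 2019

def hcv_eligible(p: dict, registered: date) -> bool:
    if p["age"] < 16:
        return False
    base = p.get("aboriginal_or_tsi") or p.get("ever_injected_drugs")
    expanded = p.get("born_asia_or_africa") or p.get("ever_incarcerated")
    return bool(base or (registered >= HCV_EXPANSION_DATE and expanded))

def hiv_eligible(p: dict, registered: date) -> bool:
    # Anyone eligible for HCV testing was also eligible for HIV testing,
    # but meeting the HIV-only criteria did not confer HCV eligibility.
    hiv_only = p.get("msm") or p.get("ssa_sea_origin_or_partner")
    return p["age"] >= 16 and (hcv_eligible(p, registered) or bool(hiv_only))

participant = {"age": 34, "ever_incarcerated": True}
print(hcv_eligible(participant, date(2018, 3, 1)))  # False: before the expansion
print(hcv_eligible(participant, date(2020, 3, 1)))  # True: expanded criteria apply
```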
DBS Testing DBS collection kits were distributed to sites and study participants. They contained a test card, a lancet, alcohol swabs, band-aids, cotton balls, a foil envelope, and a reply-paid envelope. Procedures for DBS sample collection and testing (elution, spot size, punching protocols, validation, testing algorithm) have been described elsewhere [16,17]. The tests used in the NSW DBS Pilot were Murex HIV-1.2.0 antibody enzyme-linked immunosorbent assay (Diasorin, Macquarie Park, Australia) and New Lav-Blot-1 (Bio-Rad, Gladesville, Australia) for HIV testing, and Aptima HCV Quant Dx assay (Hologic, Macquarie Park, Australia) for HCV RNA testing. Once a DBS sample was collected and dried, it was mailed back to the NSW State Reference Laboratory for HIV (St Vincent's Hospital, Sydney, Australia). Depending on the request, the card was tested for HIV and/or hepatitis C. The algorithm for testing was based on current evidence on DBS testing [13,16,18]. All reactive (HIV antibody) and detectable (HCV RNA) results required a confirmatory test via venepuncture for diagnosis. Procedures For online registration, participants completed an online survey, and if eligible, they received a testing kit via post for sample collection at home. Kits contained a link to an online instructional video (https://www.health.nsw.gov.au/dbstest), and from December 2017, kits included a visual aid to facilitate sample collection. Participants returned the sample to the DBS group laboratory situated at St Vincent's Centre for Applied Medical Research (St Vincent's Hospital, Sydney, Australia) for testing, and a result was returned to the person via SMS or phone. People with a reactive (HIV antibody) or detectable (HCV RNA) result received an SMS asking them to call the state-wide Sexual Health Infolink, where a nurse offered post-test counseling and supported linkage to confirmatory testing and care. People with a nonreactive (HIV antibody) or undetectable (HCV RNA) result received an SMS notifying them of the result. The Sexual Health Infolink submitted a standardized online case report form for each participant with detectable HCV RNA including data on HCV treatment initiation and loss to follow-up at 6 months. For assisted registration in the community or in prison, participants were assisted to complete the online survey, and, according to their eligibility, a DBS sample for HIV or HCV testing was collected. The sample was returned to the laboratory, and participants were informed of their results by SMS, phone, or in person at the participating site. Depending on the site, linkage to care post-diagnosis was the responsibility of the state-wide Sexual Health Infolink or clinical staff at the site level. The Sexual Health Infolink or clinic site coordinators submitted a standardized case report form for each participant with detectable HCV RNA including data on HCV treatment initiation and loss to follow-up at 6 months. For participants in prison, the case report form was completed by Justice Health, which only reported on treatment initiations in the prison setting. All participants completed an online survey of baseline data, including demographics, behavioral risk, and HIV and HCV testing history. There was no compensation or financial incentive offered as part of the study, but some sites implemented this as part of local initiatives.
Where a participant tested more than once during the study period, a unique identifier was used to extract the most recent detectable test result or most recent undetectable result for each participant, and only these episodes were included in the analysis. Demographic and behavioral characteristics were reported from the enrollment survey. HCV RNA test results were reported from the laboratory database. Treatment initiation was collected from the standardized case report forms completed by sites and the state-wide Sexual Health Infolink. All databases were linked using a medical record number unique to each testing episode. Study Outcomes The primary study outcome was detectable HCV RNA. The secondary study outcome, treatment uptake within 6 months of testing, was defined as HCV treatment prescribed within 6 months of the registration date. Observation time for treatment initiation commenced on the date of NSW DBS Pilot enrollment and ended on the date of HCV treatment initiation. Exposures Demographic and behavioral factors hypothesized to be associated with detectable HCV RNA and treatment initiation were determined from the literature and included (i) testing setting (online self-registration, assisted registration in the community, assisted registration in prison), (ii) gender (male, female, other [including nonbinary and transgender]), (iii) age at survey (5 categories: ≤25, 25-34, 35-44, 45-54, >55), (iv) Aboriginal and/or Torres Strait Islander, (v) major city postcode, (vi) born outside of Australia (no, yes [Asia/Africa], yes [other]), (vii) speaks English at home, (viii) recently injected drugs (no, yes, prefer not to say). Due to changes in the survey from the beginning of 2019, the definition of recent drug injection changed from in the last 12 months (pre-2019) to in the last month (2019 onwards). Statistical Analyses Logistic regression models were used to estimate crude and adjusted odds ratios (aORs) and corresponding 95% confidence intervals to evaluate factors associated with detectable HCV RNA and HCV treatment initiation. Variables with a P value <.10 in the unadjusted logistic regression models were retained in adjusted models if no collinearity was identified. For all analyses, statistically significant differences were assessed at a .05 level; P values were 2-sided. All analyses were performed using Stata, version 14.0 (StataCorp, College Station, TX, USA). Sample Characteristics Overall, 6600 HCV RNA tests were performed during the study period, and 5960 people were tested. Considering the most recent test, among the 5960 people tested (Figure 1), the most common registration pathway was assisted registration in prison (55%, n = 3275), then assisted registration in the community (40%, n = 2357), and lastly online self-registration (6%, n = 328). The proportion of tests performed in each pathway changed per quarter (Figure 2). The median age of people tested (range, interquartile range) was 38 (16-91, 29-46) years, a quarter were women (24%), 35% identified as Aboriginal or Torres Strait Islander, 85% were born in Australia, 93% spoke English at home, and 55% had recently injected drugs (Table 1).
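To make the deduplication and modeling steps in the Methods concrete, here is a minimal sketch of an analogous workflow. The column names (person_id, test_date, rna_detectable coded 0/1, and the covariates) and the CSV export are hypothetical; the study used Stata 14, so this Python/statsmodels version is an illustrative analogue, not the authors' code.

```python
# Sketch of the episode extraction and adjusted-odds-ratio analysis described
# in Methods. File layout and column names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

tests = pd.read_csv("dbs_tests.csv", parse_dates=["test_date"])  # hypothetical export

# One episode per person: the most recent detectable result if one exists,
# otherwise the most recent undetectable result (rna_detectable coded 0/1).
episodes = (
    tests.sort_values(["rna_detectable", "test_date"])
    .drop_duplicates("person_id", keep="last")
)

# Adjusted model; in the study, variables with P < .10 in unadjusted models
# were retained if no collinearity was identified.
model = smf.logit(
    "rna_detectable ~ C(setting) + C(gender) + C(age_group)"
    " + aboriginal_tsi + recent_injecting",
    data=episodes,
).fit()

# Exponentiate coefficients and confidence bounds to report aORs with 95% CIs.
odds_ratios = pd.DataFrame({
    "aOR": np.exp(model.params),
    "ci_low": np.exp(model.conf_int()[0]),
    "ci_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```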
The registration pathways tested different populations. Online self-registration tested a higher proportion of men who have sex with men (55%, vs 7% in the community and 3% in prison; P < .001) and people born outside of Australia (48%, vs 15% in the community and 11% in prison; P < .001). Assisted registration in the community tested a higher proportion of women (35%, vs 18% in online self-registration and 16% in prison; P < .001) and people who recently injected drugs (68%, vs 26% in online self-registration and 49% in prison; P < .001). Assisted registration in prisons tested a higher proportion of Aboriginal and/or Torres Strait Islander people (41%, vs 16% in online self-registration and 30% in the community; P < .001) (Supplementary Table 1). Detectable HCV RNA Of those tested for HCV (n = 5960), 15% (n = 878) had detectable HCV RNA. The proportion with detectable HCV RNA was lower in online self-registration (5%) than assisted registration in the community (17%) and assisted registration in prison (14%; P < .001). The proportion with detectable HCV RNA was highest in Aboriginal and/or Torres Strait Islander people (17% vs 14% in non-Aboriginal and Torres Strait Islander people; P = .003), people born in Australia (16% vs 5% in people born in Asia or Africa; P < .001), and people who recently injected drugs (20% vs 8% in people who did not recently inject drugs; P < .001). Treatment Initiation Among those with a detectable HCV RNA DBS result, 44% (386/878) initiated treatment within 6 months of testing. In the online self-registration pathway, 13% (2/15) initiated treatment within 6 months. Due to the low number of people diagnosed via online self-registration, this pathway was excluded from the treatment initiation analysis, leaving 863 people with detectable HCV RNA, of whom 44% (n = 384) initiated treatment within 6 months.
DISCUSSION This study evaluated HCV testing and treatment following the implementation of a large state-wide program to scale up DBS testing. Higher HCV treatment uptake following DBS testing was observed in prison compared with assisted collection in the community and online self-registration. These data demonstrate the utility of DBS to improve the reach of HCV testing outside of traditional health care settings, while emphasizing the need for improved pathways to care in community sites. The World Health Organization recommends DBS for sampling outside of health care settings, so the findings of this study are important to guide clinical practice and policy (eg, national plans) for implementing HCV testing programs and advance progress toward HCV elimination. Detectable HCV RNA was 15% overall, similar to national studies in needle syringe programs (16% in 2021) [19] and drug treatment clinics (17% in 2019-2021) in Australia [4]. Among people who had recently injected drugs, 20% had detectable HCV RNA, likely reflecting the data being collected across 2017-2020 and consistent with data from older studies of people who inject drugs [4,19]. The proportion of MSM with detectable HCV RNA was 8%, higher than a meta-analysis from 2000 to 2019, which estimated pooled HCV prevalence in men who have sex with men in Australia at 2.8% [20]. HIV prevalence among MSM in the NSW DBS Pilot was relatively low (0.6%), so the higher HCV prevalence is likely due to the high proportion of recent injecting (54%) among MSM who tested for HCV in the study. The criteria for HCV testing in the NSW DBS Pilot (history of injecting drug use, being Aboriginal or Torres Strait Islander, or ever being incarcerated) are important to target testing to people most at risk of having current HCV infection. Varied HCV treatment uptake in this study emphasizes the need to improve linkage to care and treatment initiation, particularly in the community. In the community, 26% initiated treatment at 6 months postenrollment, which was comparable to 27% in standard of care among people with recent drug dependence in New South Wales [21]. Studies that offered point-of-care HCV RNA testing but required confirmatory testing before treatment initiation produced similar treatment uptake (23%-49%) among people who inject drugs [22,23]. In Australia, the regulatory requirement to have a confirmatory test via venepuncture following a DBS result of detectable HCV RNA increases the potential for loss to follow-up. Given the limits of the current regulations, there are alternatives to DBS sampling. Microvette collection tubes (many of which already have regulatory approval for sample collection) can be used to collect capillary whole blood by fingerstick, which can then be transported to central laboratories for serological (HCV antibody) and molecular (HCV RNA) tests. One study in Myanmar has demonstrated the feasibility of this strategy using Xpert HCV Viral Load Fingerstick testing [24], and a similar strategy is being implemented in the United Kingdom [25].
The highest treatment uptake was observed in prison (61%). Treatment uptake was higher than comparable studies (21% in an English study using DBS in prison [26] and >26% in an Australian study using DBS in prison [27]). In the current study, DBS sample collection was often performed in high-intensity testing campaigns, collecting samples from large numbers of people in recreational areas instead of bringing people individually to a clinical space. This model of testing requires reduced staff time compared with standard of care (which required people to be taken individually to a clinical space for venepuncture), and treatment initiation was facilitated by streamlined care pathways. Gaining regulatory approval to use DBS for HCV diagnosis would remove the requirement for confirmatory testing and could further increase treatment initiation in high-intensity testing campaign models in prison by reducing the number of visits and the time between testing and diagnosis. Treatment uptake was lower among women in the NSW DBS Pilot, consistent with a population-based study of DAA treatment uptake among women in the same state in Australia [28]. In the current study, the higher proportion of men in the prison pathway and higher treatment uptake in the prison pathway resulted in higher treatment uptake among men overall. With women being more likely to test in the community, it is important that pathways to care are strengthened across all settings. Incorporating a gender lens into the design of HCV interventions can help avoid the entrenchment of gender inequities [29] and ensure that women are supported to access HCV treatment no matter where they are tested. This study has several limitations. Changes to the survey during the study period mean that at different time points, recent injecting was reported as in the last 12 months or in the last month. A recent study of people who inject drugs in Australia reported that 75% of people injecting in the last year had injected in the last month [4], which indicates that these groups may be similar. Processes for recruitment were not standardized across treatment pathways or across sites in the same pathway; for example, some community sites offered financial incentives to participate, but the timing and size of incentives were not reported to the study. Processes to facilitate linkage to care were not standardized across sites. This may have led to differences in testing and treatment uptake across different sites. Some sites did not have on-site venepuncture and had to refer patients offsite for confirmatory testing and treatment, possibly impacting HCV treatment uptake. Given the sensitivity and specificity of DBS testing, it is possible that some people with low HCV RNA levels that were not detectable with DBS testing would have been detectable with confirmatory testing. People who participated via the online self-registration pathway were contacted to self-report treatment initiation, which may have impacted reporting. For people diagnosed with HCV in prison who initiated treatment in the community, treatment uptake was not reported.
This study has important implications for the development of local, national, and international testing strategies. DBS testing is an important testing modality to be offered as part of a suite of testing options, especially in resource-limited settings such as prison and outreach. In prisons, DBS can be used for high-intensity testing campaigns where samples are collected from large numbers of people on the wings, without requiring individuals to attend a clinical space and reducing the burden on staff to accompany individuals to testing. Other studies have identified interventions that improve DAA treatment uptake, for example, low-threshold care (flexible appointment scheduling and a supportive harm reduction framework) being provided in needle syringe programs [30] and assignment of a care coordinator [31]. Qualitative research is needed to better understand patient and provider barriers and facilitators for DBS testing in the community to simplify pathways to care, develop effective implementation strategies to support providers, and improve treatment uptake. This study will support an ongoing parallel study to inform an economic analysis to estimate the HCV prevalence at which each alternative testing strategy (antibody testing, DBS RNA testing, RNA point-of-care testing) is most cost-effective. The aforementioned sample collection method in microvette tubes is an alternative to DBS that could be implemented with a similar model and could be integrated into clinical care in Australia's current regulatory environment. Although not feasible for all settings, on-site point-of-care HCV RNA testing is a promising intervention that is being scaled up in Australia [32]. Studies have found high treatment uptake in needle syringe programs [33] and prison [27] following on-site point-of-care HCV RNA testing compared with standard of care [34]. Further investigation is needed to understand when and where each strategy is appropriate. Overall, 15% of people tested in this study had detectable HCV RNA, and 44% initiated treatment within 6 months, with higher treatment uptake in prison (61%) than in the community (26%). Our findings demonstrate that DBS could be an important strategy to improve the reach of HCV testing and improve treatment uptake in resource-limited settings such as prison. This study informs the development of large-scale DBS programs globally, providing one strategy to advance progress to HCV elimination by 2030. Further work is needed to compare the cost-effectiveness of different HCV testing modalities to inform practice and policy. DBS is an important option as part of a range of testing modalities to strengthen local, national, and international HCV elimination strategies.

Figure 1. Flowchart of people tested for HCV RNA in the NSW DBS Study, September 2017-December 2020. Abbreviation: HCV, hepatitis C virus.
Figure 2. People tested for HCV RNA in each pathway per quarter in the NSW DBS Pilot, September 2017-December 2020 (n = 5960). Abbreviation: HCV, hepatitis C virus.
Table 1. Characteristics of People Tested for HCV RNA in the NSW DBS Study, September 2017-December 2020 (n = 5960). Abbreviation: HCV, hepatitis C virus. a Column percentage.
Table 2. Factors Associated With Detectable HCV RNA Among People Tested for HCV RNA in the NSW DBS Study, September 2017-December 2020 (n = 5960). a Proportion numerator in first column, denominator in Table 1.
Table 3. Factors Associated With Treatment Uptake Among People Tested for HCV RNA in Prison and Community in the NSW DBS Study, September 2017-December 2020 (n = 863). Abbreviation: HCV, hepatitis C virus.
2023-10-22T15:14:29.440Z
2023-10-20T00:00:00.000
{ "year": 2023, "sha1": "dd1764b089ba36523e28a8f51ccfa2158ed68c1c", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/ofid/advance-article-pdf/doi/10.1093/ofid/ofad517/52283621/ofad517.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71462d33c998f67e8880ec54ca9324e52ba12022", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
214245406
pes2o/s2orc
v3-fos-license
Geospatial Information as a Tool for Soil Resource Information, Management and Decision Support in Nigeria Understanding and addressing the complexity of soil resources management and the factors involved requires the collection and interpretation of relevant data that will serve as decision support tools. Geospatial information is a veritable tool for soil resource information and decision support for soil management, which is yet to be well embraced in Nigeria. This paper emphasized the importance of geospatial information as a decision support tool to make better and informed decisions in the management of soil resources. It also reviewed and discussed the status of soil information systems and the need to promote strategies for sustainable soil resource development in the country. DOI: https://dx.doi.org/10.4314/jasem.v23i12.5 Copyright: Copyright © 2019 Adedeji. This is an open access article distributed under the Creative Commons Attribution License (CCL), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Dates: Received: 11 October 2019; Revised: 10 December 2019; Accepted: 21 December 2019 The soils of the world are major contributors to ecosystem services such as food production, climate regulation and mitigation, carbon sequestration, issues of biodiversity, land degradation, water quality and quantity, and energy issues (Zinck, 2013; Hempel et al., 2014). The soil is an environmental medium that is often neglected because people sometimes do not realize the importance it has for the ecosystem and the economy (Nkwunonwo and Okeke, 2013). Soil resources all over the world are faced with many constraints, such as soil erosion, salinization, flooding, declining fertility, desert encroachment, mismanagement and general misuse, that hamper their productivity. The major drivers are the unprecedented growth in human population and wasteful consumption patterns in different parts of the world (Geymen and Baz, 2008). Despite these problems, however, there is inadequate information on the extent and control of environmental degradation issues, including soil erosion, soil nutrient depletion and soil degradation (Mulaku, 2002; Abdallah et al., 2019), especially in developing countries. In Nigeria, as in many other developing countries, information on the nature, distribution and value of soil resources, as well as on appropriate management techniques, is still poor. Even when information is available, it is frequently inappropriate or difficult to access and mobilize in order to support decision making in natural resource management. This issue has elevated the need for higher quality, more consistent and more relevant soil information across the world (Pásztor et al., 2012; Hempel et al., 2014). As one of the main concerns of Agenda 21 is the availability of reliable, geographically specific information on natural resources and the environment (Clarke et al., 1999), there has been growing demand for soil information to strengthen green economy initiatives and community adaptation strategies to climate change (Levin, 2013). Spatial information, while readily available in industrialized countries, is often incomplete or outdated and thus not compatible with modern management requirements in developing countries.
To meet the challenges of achieving an appropriate balance between the use of natural resources and the maintenance of an optimal natural environment, there is a need for adequate information (spatially referenced or geospatial information) on the earth's processes. Soil survey provides an accurate and scientific inventory of different soils, their kind and nature, and extent of distribution, so that one can make predictions about their characteristics and potential (FAO, 1996). This will assist in making informed decisions for the management of soil resources and for sustainable land management planning and decision-making. Soil information is needed for diverse requirements, such as scientific diagnosis of soil; establishing potentials and limitations; optimal land use planning; monitoring problem soils and degradation; providing a basis for the transfer of technology; and aiding decision-making (Elmira, 2012; Ogunbadewa, 2012). A soil resources inventory provides an insight into the potentialities and limitations of soil for its effective exploitation, and an accurate knowledge of land use and land cover features represents the foundation for land classification and management (Dai and Khoraran, 1998). Access by policy makers and administrators to spatial information on important resources such as the soil is important for the sustainable socio-economic development of a nation. This is reflected in the growing interest in the concept of spatial data infrastructure (SDI) at the national and global levels. Geographical information systems (GIS) integrated with remote sensing can be used to identify areas of land degradation and link them to physiographic settings (Van Lynden and Mantel, 2001; Gao, 2008a,b). The availability of earth observation (EO) data in hard copy as well as digital format has resulted in a plethora of applications, such as its use in agricultural landscape mapping, soil classification, soil analysis and soil moisture assessment (Hrishkish, 2012; Bui et al., 2016). Developing countries such as Nigeria are faced with a lack of capacity for the development of such infrastructures in order to optimize their limited resources. Although geospatial data on soil now exist in different forms in Nigeria, these are largely housed in different government departments and research institutions without any form of coordination (Adeniyi, 1992; Okeke and Nkwunonwo, 2007; Nonguierma, 2011). Moreover, relevant information for planning purposes remains scarce due to the absence of a national spatial data infrastructure, despite the variety of research on different aspects of soil resources. Efforts at thematic mapping have increased in recent years as a result of advances in geographic information and remote sensing techniques, but the mapping of soil types and characteristics has not fully shared in this advancement, especially in a developing country like Nigeria, because of the complexity of soil geography and the high cost of its direct observation (Ismail and Yacoub, 2012). This makes it difficult to have reliable information and make better decisions on soil resources. Nigeria, like most countries in sub-Saharan Africa, faces challenges in putting in place the policies, resources and structures needed to make geographic information available, especially as it relates to the management of land resources such as soil (Nonguierma, 2011).
Until recently, these countries relied solely on traditional methods of collecting soil information, such as drainage, ecology, mapping units, classification type, texture, pH values, unique locations, and landscape, using qualitative methods. Such methods are, however, considered slow, time consuming and expensive (Lark, 2007), and the resulting soil maps often suffer from dimensional instability, highly generalized legends and nonflexibility of scale (Okeke and Nkwunonwo, 2007; Bui et al., 2016). The maps tend not to be suitable for quantitative purposes (Zhu et al., 1997), hence the need for a reliable and quantifiable soil information system that offers the opportunity for quick, quantitative, up-to-date, high-resolution and more accurate soil data (Nkwunonwo and Okeke, 2013; de Sousa Lima et al., 2014). Soil survey is characterized by the mapping of soil units in such a way that useful statements can be made about their land use potential and response to changes in management. Digital soil mapping (DSM), therefore, refers to techniques of mapping soils with mostly digital techniques, incorporating field data and traditional (legacy) soil information. The idea is to make soil survey, classification, and land evaluation as objective as possible. Thus, this paper reviewed and discussed the status of soil information systems and the need to promote strategies for sustainable soil resource development in Nigeria. It emphasized the importance of geospatial information as a decision support tool to make better and informed decisions in the management of soil resources. Soil resource mapping in Nigeria: The soil is a fundamental resource for the development of any country; however, in Nigeria, the implementation of geospatial data infrastructure as a decision support system for various natural resources and environmental applications has not been well developed (Ogunbadewa, 2012). The country is faced with a myriad of problems, ranging from a lack of accurate, up-to-date and quantitative soil data, which has resulted in the continuous misuse and degradation of soil resources. In addition, the high degree of spatial and temporal variability in the tropics has long been recognized, and this has made it difficult to map most tropical soils and predict accurately their management and productive potentials (Ogunkunle, 2003; Akinbola et al., 2010; Egbuchua, 2014). Concerning soil mapping in Nigeria, existing information on soil resources is compiled on a national basis and was mostly based on coarse spatial resolution satellite remote sensing data plus small-scale maps covering very large areas but with very little detail. Until recently, no systematic soil survey, soil classification or land use capability assessment of the country had been carried out (Asekunowo, 2009). The country still maintains conventional soil maps, and serious attempts have not been made at converting to digital soil mapping. The first provisional soil map of Nigeria was published by the Survey Department, Lagos, in 1952. This was followed by the publication of the CCTA soil map of Africa (FAO, 1964) and the FAO Soil Map of the World (FAO-UNESCO, 1974). It was not until the late 1970s that the Federal Department of Agricultural Land Resources, with assistance from the USDA, initiated a national soil inventory project to produce "a systematic and correlated soil map of the country which would provide a good guide for agriculture and other land development" (FDALR, 1990).
Most of the available soil maps are in print and outdated; this conventional approach has been criticized for being time consuming and costly, and in response to these criticisms, new approaches have been proposed and developed to improve the mapping of soils and their attributes (McBratney et al., 2003; Iwashita et al., 2012; Lima, 2013). The most recent national soil map, at the scale of 1:3,000,000, was compiled by Sonneveld (1996) from the pre-1985 soil maps. Basically, the country lacks a spatial data infrastructure (SDI) with regard to soil resources, which affects the ability to make better decisions in the management and development of this vital natural resource. While the concept of an integrated geospatial infrastructure is well accepted, with numerous advantages, its execution in Nigeria is riddled with a number of challenges, such as: mobilization of financial resources; lack of adequate impetus to set up such infrastructure; difficulty arousing political interest among decision-makers and policymakers; and lack of standardization, as the level of standardization is presently very low. Presently, the challenge is how the hundreds of soil surveys at various scales, ranging from farm level to national scale, may be captured and transformed into databases that can be used for temporal trend analysis and for a digital soil mapping (DSM) programme. Digital soil mapping is a predictive approach to model soil spatial variation based on the relationship between soil and its environmental conditions (Zhu et al., 2001; McBratney et al., 2003). A digital soil map is a spatial database of soil properties that is based on a statistical sample of landscapes or regions, which permits functional interpretation, spatial prediction, and mapping of soil properties relevant to soil management and policy decisions (Lagacherie et al., 2006, 2007; Lagacherie and McBratney, 2007). However, there is inadequate or absent georeferencing of soil records, inconsistency in laboratory and mapping methods, taxonomy, legends and soil survey reports, and deterioration of paper hardcopy soil maps. Furthermore, most available geospatial data collecting agencies have few digital management systems for data capture and processing, their locations are scattered in different parts of the country, and available datasets are scanty and expensive. There is a need to capture and rejuvenate the relevant legacy soil data for Nigeria in digital form to disseminate soil information more easily and support quick and proper decision-making (Dent and Ahmed, 1995; Rossiter, 2008). Digital soil mapping (DSM) involves the creation of spatial soil information systems using field and laboratory methods coupled with spatial and non-spatial soil inference systems. An important aspect of the rejuvenation of historical soil data is data renewal or data resurrection (Dent and Ahmed, 1995), which entails the transformation of the rescued data into usable digital (GIS) formats (Rossiter, 2008). Data rescue involves scanning the historical records and storing them in appropriate, easily accessible media such as DVDs or Web publications.
Data rescue also includes the collection and input of metadata that describe the source and nature of the rescued data (report, map, database, legend) and the approximate extent and geographic location of the data (Rossiter, 2008). DSM is focused on soil attributes, assuming that these vary continuously in space, and quantitative models developed within a GIS environment are used to describe, classify and study the spatial distribution patterns of soil as it occurs in the field. It is a way to bridge the gap in scales between ground-based soil monitoring activities, which are normally limited to the point or small scales, and modelling activities that cover larger areas. A step further would be to combine remotely sensed soil-related information with proximally sensed and traditionally measured soil property data at larger spatial scales. Maps generated using DSM may be adequate for extrapolating soil distribution information to areas where no comprehensive soil map is available (Abdel-Kader, 2011). Nigeria, as the giant of Africa, launched the NigeriaSat-2 and NigeriaSat-X (Earth observation satellites) and NigComSat-1R (a communication satellite) into orbit in August and December 2011, respectively, which will serve as a boost to national security if well exploited. NigeriaSat-2 was launched in August 2011. It is a high-resolution satellite with spatial resolutions of 2.5 m panchromatic and 5 m multispectral, area coverage (swath width) of 600 by 600 km, and the ability to rapidly produce accurate mapping to update existing information and acquire new mapping information. It has Red, Green, Blue and Near-infrared bands. NigeriaSat-2 allows for infrastructure mapping, settlement classification, development of urban green spaces, service provision maps, access control mechanisms, regional planning, security, and environmental and disaster management. NigeriaSat-X was launched in August 2011 alongside NigeriaSat-2. It was designed and built by Nigerian engineers in the UK. It is a medium-resolution satellite with a resolution of 22 m multispectral. It has Red, Green and Near-infrared bands with a swath width of 700 km. NigComSat-1R is a replacement for the NigComSat-1 satellite. Geospatial Technology for Soil Resources Management: According to Buhren and Decker (2008), all planning and management activities require information. However, there are challenges, which include how to determine what data and information are needed, find out if they already exist, get hold of them if they exist, and collect them if they do not. In addition, there is the need to determine how to store this information in an easily accessible and referenced form; how to interpret the data and resolve questions of quality, contradictions and incompleteness; how to determine who needs the information, when and in what form(s); and how to disseminate it as required. Given that existing mapping tools in Nigeria and many other developing countries are failing to provide accurate enough information on natural resources such as the soil, there is a need to adopt existing advanced technologies to tackle these problems. In the past two decades, many changes and developments in technologies and trends have brought new challenges and opportunities in the GIS domain.
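As a concrete, purely illustrative example of the data rescue and renewal steps described above, a rescued legacy item could be catalogued with a record like the following; the field names are assumptions for the sketch, not a published schema.

```python
# Hypothetical metadata record for a rescued legacy soil map, following the
# elements named in the text (source, nature, approximate extent and location).
from dataclasses import dataclass

@dataclass
class RescuedSoilRecord:
    source: str          # e.g. survey report, archive, card catalogue
    nature: str          # "report" | "map" | "database" | "legend"
    scale: str           # map scale of the original, e.g. "1:50,000"
    extent_km2: float    # approximate area covered
    bbox: tuple          # (min_lon, min_lat, max_lon, max_lat), approximate
    storage: str         # where the scan lives: DVD, web publication, etc.

record = RescuedSoilRecord(
    source="FDALR national soil inventory sheet",  # hypothetical example item
    nature="map",
    scale="1:250,000",
    extent_km2=15000.0,
    bbox=(7.0, 9.0, 8.5, 10.5),
    storage="scanned GeoTIFF, institutional web archive",
)
print(record)
```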
GIS has been moving towards an era in which the power of such systems is continuously increasing across multiple facets: computing, visualizing, mining and reasoning over data (Pourabbas, 2014). For instance, the establishment of the Open GIS Consortium is aimed at developing publicly available geoprocessing specifications. Geospatial technologies can make a huge difference in the availability and cost of geospatial data, thus making them readily available to a variety of users. These technologies now collect multipurpose data simultaneously, which has different horizontal applications because they record 'everything in their way' (Killpack, 2011). These include LiDAR, digital video, multi-spectral and hyperspectral sensors, and real-time sensors on farm and construction implements. Multispectral and hyperspectral sensors collect all of the bands at once and, because the data are spatially and temporally co-registered, the user has access to a robust quantity of valuable information for the same spatial area at the exact time increment (Collado et al., 2002; Killpack, 2011). Geospatial technology has many important roles in the sustainable development and management of soil resources. Spatial tools such as the global positioning system (GPS) and geographic information systems (GIS) for storing and analyzing spatial data can help make better decisions in agriculture, land development, and environmental protection and restoration. Mapping of soil properties is very important because it plays a crucial role in the knowledge of soil properties and how they can be used sustainably (Hengl, 2007a). Creation of a standard soil geospatial database will ensure consistent access, availability and performance and promote data sharing and coordination. Individuals, agencies and organizations can readily access digital base data, digital aerial photography and high-resolution satellite imagery from other government departments and nongovernment organizations. A geospatial database is dynamic in nature and can be continually updated by data collection staff. The information derived is useful to individuals, farmers, environmental planners and government agencies for environmental farm planning, nutrient management planning, and decision-making processes. Digital soil mapping (DSM) can make use of the large legacy of soil data in the form of soil maps, soil survey reports, soil profile descriptions, or antiquated card catalogues collected by past generations of soil surveyors (Mayr et al., 2008). The old soil data are not only valuable for soil information preservation (Rossiter, 2008) but also serve as the only source of meaningful soil information for DSM in many data-scarce countries of the developing world (Odeh et al., 2012). An example is the FAO Digital Soil Map of the World (DSMW), covering regions including Southeast Asia and Oceania (FAO/AT2010, 1995). It was compiled from individual country soils data, which used a variety of local soils data, and is a generalization of those data. Country boundaries have been updated as of 1994 at 1:3,000,000 scale (FAO/UNESCO, 2003). The DSMW contains two types of files: map files and derived soil properties files. The map files are available in three GIS formats: one vector format (ARC/INFO Export) and two raster (scale 5 x 5 arc-minute) formats: ERDAS and IDRISI. The Derived Soil Properties files consist of interpretation programs and related data files. The programs are written in QuickBasic and can be read using the DOS or OS/2 operating system. Programs are included that interpret the maps in terms of agronomic and environmental parameters (e.g.
pH, organic carbon content, C/N ratio, clay mineralogy, soil depth, soil and terrain suitability for specific crop production, soil moisture storage capacity and soil drainage class). The programmes produce analyses of soil inventories, problem soils and fertility capability classification. Included are maps of soil units classified according to the World Soil Reference Base and topsoil distribution, which can be used in teaching soil science. The database includes information on soil moisture storage capacity, soil drainage class and effective soil depth, useful for environmental studies. Recently, the first version of the Africa Soil Profiles Database was released by ISRIC - World Soil Information as a contribution to the production of high-resolution soil property maps for Sub-Saharan Africa. Version 1.1 of this georeferenced and standardized legacy soil profile database for Sub-Saharan Africa contains over 12,000 records for 37 countries, compiled from over 300 data sources. Data were converted to a common standard, parsed through basic quality rules, and cleaned. This inventory of legacy soil maps of Sub-Saharan Africa draws from international holdings at ISRIC (Wageningen), FAO (Rome), IRD (Montpellier), WOSSAC (Cranfield), JRC (Ispra), Ghent University, and national holdings at Mlingano (Tanzania), Sotuba (Mali) and Zaria (Nigeria). Currently, some 5000 maps are included in the database, and these cover the continent at different scales, using different legends. Applications of Geospatial Information in Soil Resource Management and Decision Support: The determination of the long-term trend of land degradation requires spatial comparison of multiple land cover maps derived from remotely sensed data at different times (Geymen and Baz, 2008). Lo and Yeung (2002) note that a major facet of this current era is the development of the geographic information infrastructure. Such infrastructure could be used to support various decision-making processes in urban planning, engineering and environmental resource management. Spatial data management, including data storage and data exchange (between several project sections), is particularly important for interdisciplinary research projects that focus on environmental field studies and regional modelling (Mückschel and Nieschulze, 2004). Optimal soil management and environmental protection require agricultural and natural resource managers equipped to characterize and manage soil spatial variability. Werban et al. (2013) posited that, in the face of the new challenges in the context of climate and global change and the increasing need for multidisciplinarity, a more sophisticated use of soil information beyond 'classical' soil information is needed. Predictive soil mapping (Scull et al., 2003) or digital soil function mapping are promising ways to enhance the information content of soil maps (Ismail and Yacoub, 2012). Among the many predictive digital soil-mapping approaches, SoLIM (soil land inference model) is one of the few that can be used in the production of soil surveys (Zhu et al., 2001). SoLIM couples GIS/remote sensing techniques with artificial intelligence techniques to map spatial distributions of soil characteristics using fuzzy logic (a simplified sketch of this idea is given below). The integrating role of GIS in environmental analysis systems and the tradition of using metadata and geospatial data standards in GIS make tracking scaling steps much easier.
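To illustrate the fuzzy-logic idea behind SoLIM mentioned above: each soil class is described by typical values of environmental covariates, and a location's membership in a class is limited by its least-similar covariate. The class prototypes, tolerances and the Gaussian similarity function below are invented for the demonstration and are a simplification, not SoLIM itself.

```python
# Simplified fuzzy soil-class membership in the spirit of SoLIM: per-covariate
# Gaussian similarity combined with a limiting-factor (min) rule.
import numpy as np

# Typical (mean, tolerance) per covariate for two hypothetical soil classes.
CLASSES = {
    "Alfisol-like": {"elev_m": (300, 80), "slope_pct": (3, 2), "ndvi": (0.55, 0.1)},
    "Entisol-like": {"elev_m": (120, 60), "slope_pct": (9, 4), "ndvi": (0.30, 0.1)},
}

def membership(pixel: dict, proto: dict) -> float:
    """Gaussian similarity per covariate; the least-similar covariate limits
    the overall membership (min operator)."""
    sims = [np.exp(-0.5 * ((pixel[k] - mu) / sd) ** 2) for k, (mu, sd) in proto.items()]
    return float(min(sims))

pixel = {"elev_m": 280, "slope_pct": 4, "ndvi": 0.50}
for name, proto in CLASSES.items():
    print(f"{name}: membership {membership(pixel, proto):.2f}")
```

Applying such memberships over a covariate raster yields a continuous soil-class surface rather than crisp polygon boundaries, which is the key difference from conventional soil maps.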
With the rapid development of remote sensing and geoinformation science, natural resources survey teams are now increasingly creating their products (geoinformation) using ancillary data sources and computer programs, the so-called direct-to-digital approach (Hartemink et al., 2008; Hengl, 2007a,b; Pásztor et al., 2012). In digital soil mapping, soil variables such as pH, clay content or the concentration of a heavy metal are increasingly mapped using the regression-kriging framework: the deterministic part of variation is modeled with maps of soil-forming factors (climatic, relief-based and geological factors), and the residuals are handled with kriging (McBratney et al., 2003); a sketch of this workflow is given below. Geospatial reasoning creates the objective connection between a geospatial problem representation and geospatial evidence. According to Aderoju et al. (2013), one set of activities, information foraging, focuses on finding information, while another set of activities, sense-making, focuses on giving meaning to the information. This information, coupled with other measurements and field scouting information, allows for the effective application of site-specific management practices (Mark and Waddington, 1997). Retrieval of biophysical variables from satellite time series should result in a quantitative description of the land surface in all dimensions (Defourny and Bontemps, 2012). For instance, LiDAR digital elevation data are used to help farmers better manage their fields. Furthermore, soil moisture is an important component of the terrestrial environment that significantly regulates water circulation and surface energy exchanges between the land surface and the atmosphere (Vereecken et al., 2014; Umar et al., 2017). Estimation of soil moisture is crucial to improve the forecasting of precipitation, temperature, droughts, and floods (Albergel et al., 2013). Active remote sensors, such as longer-wavelength microwave sensors, have excelled at measuring soil moisture (Njoku et al., 2002). They can effectively be used to estimate soil moisture near the surface of the earth (Santos et al., 2014) because they can penetrate thick vegetation cover and appropriately sense soil moisture. Soil moisture estimation can be done across varying spatial and temporal scales (Singh et al., 2005; Umar et al., 2017). In the United States of America, a National Spatial Data Infrastructure was established in 1994 with a remit to enhance the technological, political, standards and human resource aspects of GIS through acquiring, processing, sharing, distributing and improving the utilization of geographic data. Remote sensing provides a significant source of real-time and accurate data related to land and soil. It enables homogeneous information over large regions and can therefore greatly contribute to regional erosion assessment (Siakeu and Oguchi, 2000). Mobile phone applications such as the SoilWeb app for smartphones provide an easy-to-use suite of web-based mapping tools that allow users to interact with soil survey data via Google Earth, Google Maps, and an original standalone interface (Beaudette and O'Geen, 2009, 2010). The technology has revolutionized the use of digital soil survey information, allowing users to access soils information in the field where it is often most needed and to apply decision support tools directly for the customer in real time (Hartemink et al., 2008; Levin, 2013).
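Returning to the regression-kriging framework mentioned above (McBratney et al., 2003), the following is a minimal sketch on synthetic data: a regression captures the deterministic trend from soil-forming covariates, and ordinary kriging interpolates the residuals. The covariates, coefficients, and the choice of scikit-learn and PyKrige are assumptions made for illustration; a production DSM workflow would use real covariate rasters and a fitted variogram.

```python
# Minimal regression-kriging sketch: trend from covariates + kriged residuals.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
n = 200
x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)   # sample coordinates
elev = 100 + 5 * x + rng.normal(0, 2, n)              # covariate: relief
ndvi = 0.4 + 0.02 * y + rng.normal(0, 0.05, n)        # covariate: vegetation
ph = 5.5 + 0.01 * elev - 1.2 * ndvi + rng.normal(0, 0.2, n)  # synthetic target

# 1) Deterministic part: regression on the soil-forming covariates.
X = np.column_stack([elev, ndvi])
reg = LinearRegression().fit(X, ph)
resid = ph - reg.predict(X)

# 2) Stochastic part: ordinary kriging of the regression residuals.
ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
gx = gy = np.linspace(0, 10, 25)
kriged_resid, _var = ok.execute("grid", gx, gy)

# 3) Regression-kriging prediction at one grid node; in a real workflow the
# covariates are known everywhere (DEM, satellite imagery), so the trend can
# be evaluated at every node and added to the kriged residual surface.
trend = reg.predict([[130.0, 0.5]])[0]
print(f"predicted pH at the node: {trend + kriged_resid[12, 12]:.2f}")
```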
In 2008, a group of scientists led by Dr. Sanchez developed a proposal to the Bill and Melinda Gates Foundation and the Alliance for a Green Revolution in Africa (AGRA) called AfSIS (African Soil Information Service), conceived as a major step towards a digital soil map of the world. Although focused on Sub-Saharan Africa, the Gates Foundation had the wisdom to allocate about 10% of the funding to develop the GlobalSoilMap.net Consortium. To account for seasonal variability, multi-temporal satellite images are useful for extracting the valuable information associated with seasonal land use dynamics for mapping land use/land cover (De Bie, 2005). Remote sensing, in combination with ancillary information and spatial models in a geographic information system environment, would yield reliable results concerning natural resource inventories and environmental monitoring (Singh et al., 2002; Lillesand et al., 2007; Neumann et al., 2012; Pretorius, 2013). Finke (2012) indicated the potential of integrating sensors in making maps and providing model input. Furthermore, Krüger et al. (2013) combined geophysical sensor data for the extraction of high-resolution soil depth information to enhance the accuracy of biomass modelling, while Ugbaje and Reuter (2013) presented an approach applying DSM procedures to predict the available water capacity of soils in Nigeria using pedotransfer functions (PTFs). Conclusion: Continuous environmental degradation and critical issues such as soil erosion in Nigeria call for the development of policy tools and mechanisms for appropriate land use management, involving the stakeholders in the decision-making process. There is therefore a need to improve data quality and efficiently handle large amounts of data by using modern information-gathering techniques to produce up-to-date information. This will facilitate the decision-making process in different applications, including soil resource management and sustainable development.
2020-02-06T09:11:43.416Z
2020-01-29T00:00:00.000
{ "year": 2020, "sha1": "c7e770b7c76fb4c2d02db2b705e8706844c7b78a", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/jasem/article/download/192511/181618", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17578f88631ef3276358259dbf5f5397d406543e", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Computer Science" ] }
253131434
pes2o/s2orc
v3-fos-license
Human–Wildlife Conflicts: Does Origin Matter? Simple Summary Conflicts between humans and wildlife can occur with different types of problematic animals: native pests, dangerous native carnivores, invasive native species and invasive alien species. For conservation biology, the latter are the most damaging and must be managed differently than the first three types. I compared the damage done by native and introduced species in the United States using databases published on the Internet by Wildlife Services (WS), which depend upon the Animal and Plant Health Inspection Service of the Department of Agriculture. They receive thousands of complaints per year from the public and institutions due to wildlife damages, which they try to resolve. I found that human–wildlife conflict events were much more frequent with native species than with introduced ones. This pattern can be explained by at least three factors: because this organization biases its effort toward native fauna for historical reasons, because people perceive the problems caused by native animals more, or because the impact of native species is greater than that of nonnative species. In any case, it seems reasonable to disregard the origin of species and try to resolve the most serious human–wildlife conflicts regardless of whether they are caused by native or introduced species. Abstract Conservation biologists have divided wildlife into two antagonistic categories, native and introduced populations, because they defend the hypothesis that the latter acquires or expresses harmful qualities that a population remaining in its original environment does not possess. Invasion biology has emerged as a branch of conservation biology dedicated exclusively to conflicts between introduced wildlife and human interests, including the protection of biodiversity. For invasion biology, the damage caused by native species is different and must be managed differently. However, the consensus around this native–introduced dichotomy is not universal, and the debate has intensified in recent years. The objective of this work was to compare the impacts of native and introduced species of terrestrial vertebrates of the United States using the dataset provided by Wildlife Services (WS), which depend upon the Animal and Plant Health Inspection Service of the Department of Agriculture. Annually, they receive thousands of reports and complaints of human–wildlife conflicts. I analyzed the WS databases and found, against expectations, that native species produce significantly more damage than nonnative ones, especially regarding damage to agriculture, property, and health and safety. In the category of impacts on biodiversity and natural ecosystems, the differences were minor. I discuss several potential explanations of these patterns in the results. I also discuss the ecological foundations of the native–introduced dichotomy hypothesis. Introduction Conservation biology divides populations of the same animal species into two broad categories: native and introduced. The first are those that live within the original distribution range, while the second are those that live outside that range because they have been relocated. Conservation biology considers that the former must be protected to maintain biodiversity and ecosystem functions, while the latter must be controlled and, where possible, eradicated. When they spread (whether they produce damage or not), introduced species are called 'invasive' [1].
Conservation biologists' beliefs are summarized in the following statements provided in a recent document of the International Union for Conservation of Nature: "it is expected that all introduced taxa will have an impact at some level, because by definition an alien taxon in a new environment has a nonzero impact" ([2], p. 9), and "lack of evidence of impacts must not be interpreted as lack of impact" ([2], p. 17). Native species can also produce negative impacts on the environment, the economy and human health. Harmful native species can be classified into three different categories. Firstly, plague, pest or overabundant species are normally the result of two types of human environmental change: agriculture and overhunting. For example, in mammals, agriculture has induced the increase in populations of a small number of small mammal species [3,4], while the overhunting of large carnivores has generated the overabundance of cervids and other ungulates [5]. Secondly, human-wildlife conflicts arise when large vertebrates, mainly carnivores, impact livestock husbandry and farming or the property and even the lives of the inhabitants of human settlements close to natural environments [6,7]. The third type is the so-called invasive-native species, i.e., native species that expand their geographical distribution, mainly due to climate change, and impact these new environments [8-12]. Many conservation biologists consider that the impacts produced by native populations and those produced by introduced populations are different and thus require different management approaches [13]. They defend the hypothesis of a native-introduced dichotomy (NID), according to which a population that originates from introduced individuals acquires or expresses harmful qualities that a population that remains in its original environment does not possess [14-17]. The prediction of this hypothesis is that, in each region or country, nonnative invasive species will produce significantly more impact than native pests, native conflict species or invasive-native species. The native-introduced dichotomy has generated a debate that continues. Davis et al. [18] published a seminal paper with a compelling title: 'Don't judge species on their origins'. The authors urged conservationists and land managers to focus much more on the functions of species rather than on whether they are native or introduced. In the following years, the debate has intensified [8,15,19-21]. Beyond the philosophical or ideological debate, various authors have focused on the analysis of the evidence that supports the hypothesis that nonnative species cause more damage than native ones. They tested the prediction of the NID hypothesis by conducting comparisons between the impacts of native and introduced species, mainly on biodiversity. The resulting publications show mixed results, with some of them supporting the hypothesis [15,21,22] and others not supporting it [23-25]. At least four different methods using different sources of information have been used: searches in the Web of Science [22,23], reviews of the information provided by the IUCN Red List of Endangered Species [15], field experiments [25] and mesocosms, which allow for the very precise analysis of harmful ecological effects [26]. The objectives of this work were to compare the impacts of native and introduced species of terrestrial vertebrates in the United States using a new type of information and to discuss the NID hypothesis.
I used the database provided by Wildlife Services (WS), which depend upon the Animal and Plant Health Inspection Service (APHIS) of the United States Department of Agriculture (USDA). Annually, they receive thousands of reports and complaints of human-wildlife conflicts involving both native and introduced species. In the discussion, I will not only analyze the results obtained with the WS database but will also briefly analyze the ecological foundations of the NID hypothesis.

Materials and Methods

The USDA APHIS Wildlife Services' mission is "to provide Federal leadership and expertise to resolve wildlife conflicts to allow people and wildlife to coexist" [27]. Biologists apply an integrated wildlife damage management approach to provide technical assistance and direct management operations in response to requests for assistance. Since 1996, they have published annual reports on their activities, including the number of animals that were controlled and the damage produced (https://www.aphis.usda.gov/aphis/ourfocus/wildlifedamage/sa_reports/sa_pdrs accessed on 5 July 2022). Since 2014, the reports have differentiated between invasive and noninvasive species. To promote the vision of coexistence of people and wildlife, WS employees strive to reduce damage caused by wildlife to the lowest possible levels while at the same time reducing negative impacts on wildlife. In practice, this service begins to be rendered when the WS offices receive a request for assistance due to a wildlife damage problem. Then, WS personnel assess solutions from available, practical, cost-efficient, and environmentally and socially sound options. Finally, they help based on a wildlife damage-control strategy. When dealing with the protection of natural resources, including rare species, native wildlife and ecosystems, WS partners with Federal and State agencies, municipalities, organizations and private landowners. On its webpage, the USDA APHIS provides what they call Program Data Reports C, which contain the complaints and reports of threats to resources by wildlife and occurrences of damage received by Wildlife Services. Each instance of assistance is defined as an 'event' and corresponds to those reported cases of damage or threats that include an actual reported value associated with the damage (WS handles other damage complaints in which a value for loss is not calculated or reported; these are not published on the webpage). WS classifies damages into four categories: agricultural, natural, property and human health and safety. They provide a detailed description of each category on their webpage: https://www.aphis.usda.gov/aphis/ourfocus/wildlifedamage/SA_Protected_Resources accessed on 5 July 2022. Each annual report worksheet contains four columns: resource category, resource damaged or threatened, wildlife species causing damage and number of events. Since 2014, they have also provided information on the origin (native or introduced) of species. Therefore, I used C Reports from 2014 to 2021. I had to correct some typos (e.g., the same species named in singular and plural or the use of two tildes instead of one) and origin errors. I included in the analysis only terrestrial, wild and feral domestic vertebrates. I excluded 11,604 data points that corresponded to arthropods (196), fish (194), captive mammals (324) and unidentified animals (10,887). Data did not have a normal distribution, so I used nonparametric statistics.
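As a rough illustration, the compilation and filtering steps described above might look as follows in Python with pandas; the file names and column labels ("species", "taxon_group", "origin", "events") are hypothetical placeholders, since the exact layout of the exported C Reports is not reproduced here.

```python
import pandas as pd

# Load the eight annual C Report worksheets (2014-2021); file names are
# hypothetical placeholders for exported copies of the WS reports.
frames = [pd.read_csv(f"ws_c_report_{year}.csv") for year in range(2014, 2022)]
data = pd.concat(frames, ignore_index=True)

# Crude normalisation of species names to catch typos such as
# singular/plural variants of the same species.
data["species"] = data["species"].str.strip().str.lower().str.rstrip("s")

# Keep only terrestrial wild and feral domestic vertebrates: drop arthropods,
# fish, captive mammals and unidentified animals, mirroring the exclusions above.
excluded = {"arthropod", "fish", "captive mammal", "unidentified"}
data = data[~data["taxon_group"].isin(excluded)]

# Total number of damage events by origin (native vs introduced).
print(data.groupby("origin")["events"].sum())
```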
I applied a chi-square test for observed versus expected frequency comparisons and contingency tables, and a Mann-Whitney test for the comparison between native and introduced species. The significance threshold was set at alpha = 0.05 in all tests.
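For concreteness, the two nonparametric tests can be sketched in Python with scipy; all counts below are placeholder values, not the actual WS figures, and the expected frequencies used in the paper are not reproduced here.

```python
from scipy.stats import chisquare, mannwhitneyu

# Chi-square of observed vs expected frequencies. The observed split and the
# 50:50 expectation are illustrative placeholders only.
observed = [700_000, 235_078]          # hypothetical native/nonnative events
expected = [sum(observed) / 2] * 2
chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")

# Mann-Whitney U comparing events per species between the two groups
# (again, purely illustrative numbers).
native = [12, 340, 5, 78, 1, 900]
nonnative = [7, 150, 3, 60]
u, p = mannwhitneyu(native, nonnative, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3g}")
```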
Results

Over the eight-year period, a total of 935,078 events of damage produced by terrestrial vertebrates, involving at least 630 species (nonnatives: n = 67; Supplementary Materials), were registered on the website of the USDA APHIS WS. The number of events per year remained relatively constant throughout the period (Figure 2). Native species caused significantly more events of damage than did nonnative ones (χ2 = 548.2, p < 0.0001), and the number of species that produced them was also significantly higher (χ2 = 390.5, p < 0.0001). Thirteen native and only one nonnative species appear among the species that received more than 10,000 complaints (Figure 4). In addition, 86% (12/14) of these species were mammals, and 64.3% (9/14) were carnivores. The species that received the highest number of complaints was the coyote Canis latrans (n = 161,491). The introduced wild boar Sus scrofa was the only nonnative species on the list and appears in position four. When the different types of impacts were compared, the differences were highly significant both between impacts and between native and nonnative species (Figure 5). Native species produced more damage in all four categories, although the difference in natural resources was the lowest.

Discussion

The results of this review showed that the USDA APHIS Wildlife Services analyzed and solved significantly more cases of conflicts with native than with introduced wildlife. Although there were differences depending on the type of impact analyzed, the trend of the results was always the same. There are at least five possible explanations for this pattern, which I briefly discuss in the following paragraphs. A first possible explanation for this notable difference in favor of managing native species is that this wildlife service was traditionally dedicated to this task, while the commitment to controlling introduced species is more recent. Predator eradication with direct federal involvement began in the early 1900s and received direct public funding in 1915 [26-28]. For decades, lethal control of native wildlife was performed mainly to benefit livestock producers and to enhance game populations [29]. In this century, WS expanded its areas of concern, including aviation safety, crop depredation, zoonotic disease and the safeguarding of rare, threatened and endangered wildlife species from the negative impacts of invasive ones [30]. At present, protecting natural resources (native species and ecosystems) is one of the five areas of concern of WS, which partners with Federal and State agencies, municipalities, organizations and private landowners (see the USDA webpage). Given the prominent status given in recent years to biological conservation and invasive species control, a bias toward more traditional areas of concern is to be expected. Another explanation could be found in the way these databases are compiled. I do not belong to this organization, and my only link has been through its website. Therefore, I do not know the detailed procedures followed to transfer the information on the actual activities performed by WS employees into the data programs. A third possible explanation is that the WS data published in the C Reports truly reflect what occurs in nature, i.e., that native species cause more damage than do introduced species. The use of complaints and reports is a method analogous to that applied in studies of morbidity impact on human populations. In order to estimate disease incidences, epidemiological studies frequently use indirect sources of data, such as insurance claims, accident and injury reports, clinic/hospital admissions, clinic/hospital discharge data and laboratory specimen analyses (review by [31]). Similarly, Klinkowski-Clark et al. [32] used animal control data to estimate the distribution of several species of mammals in Florida. They found that their results reflected those of other studies that used more conventional techniques. The fourth hypothesis would be rooted in the very nature of the data being analyzed. As mentioned in the Introduction, studies based on bibliographic research and expert opinion showed mixed results, with some of them supporting the hypothesis [15,21,22] and others not finding differences between native and introduced species' impacts [23-25]. However, none of these works found the pattern observed in the WS database, i.e., a clear bias toward a greater impact of native species.
This could be because each type of study reflects different actors in the problem of wildlife damage: while the first type expresses the opinion of biologists who investigate impacts, the WS database reflects the opinion of the public and stakeholders who are directly affected by wildlife activities. Therefore, the analysis presented in this article could be criticized for being biased toward the interests of those who complain about wildlife impacts, rather than being objective research on those impacts. The last explanation is probably the most parsimonious: there are many more native than nonnative species, and the same applies to total population numbers. For example, it is rare to introduce a large species, especially a large carnivore, and the US has a substantial number of large predators. This hypothesis appears to be supported by the result that there was no difference in the mean number of events per species between native and introduced species. This result suggests that, proportionally, complaints are equally dispersed and that the larger number of native species complaints is driven by their larger numbers and, therefore, greater likelihood of interaction. All these possible explanations are hypotheses that should be tested in future studies. In the Introduction, I enunciated the NID hypothesis: what is the theoretical and empirical support for distinguishing between impacts produced by introduced populations and impacts produced by native populations? Conservation biologists have proposed that native populations are safe from causing significant damage to natural resources by virtue of their roles and relationships within ecological and evolutionary systems and processes, as long as populations do not reach unnaturally high densities because of human activities [4,11]. In contrast, individuals outside their historical distribution area are no longer part of that natural diversity and lack those links with the rest of the ecosystem; thus, they are uncontrolled and represent a threat to invaded ecosystems [15,22]. Hui et al. [33] offered two concepts that organize the argumentative justifications of the hypothesis that introduced populations inevitably produce more damage than do native ones: invasibility and invasiveness. The first concept refers to a property of recipient ecosystems and involves the elucidation of features that determine their vulnerability to invasion, such as community diversity, composition and assembly. It proposes that introduced species find environments free of predators or competitors; thus, they can rapidly increase in distribution and abundance. For example, islands are considered more susceptible to biological invasions than the mainland because species have a greater chance of establishment when the recipient community lacks congeneric or ecologically similar species [34-36]. 'Invasiveness' is the other concept introduced by Hui et al. [33] and refers to the propensity of introduced species to invade. These are traits that favor those groups of individuals that are translocated to new environments and can survive, establish themselves and then show high population growth and geographical expansion. These traits are typically related to a wide niche breadth, high reproductive capacity and high dispersal abilities [37-40]. Studies that seek to identify these characteristics use comparative metrics between successful and unsuccessful introduced species [33].
Abundant evidence indicates that habitat invasibility or extrinsic factors are more important determinants of introduction success in vertebrates than species invasiveness or intrinsic factors [41-45]. While these concepts have been applied exclusively to 'invasive' species (hence their names), they can also be applied to native species in altered environments. In the Anthropocene, most ecosystems are degraded by human actions. This means that many native species inhabit environments in which some functional relationships are lost or new physical traits are found. Human impacts can increase the invasibility of native habitats, explaining why certain native species become a problem [6,7]. Native species can become pests due to habitat degradation that decreases predator or competition pressure (e.g., due to hunting pressure) or increases food availability (e.g., due to agriculture) without being transferred to exotic environments. Similarly, native species can become invasive-native species when climate change induces them to naturally colonize new environments [8-12]. If the cause of an introduced species becoming invasive and harmful is the 'invasibility' of new environments, then it would not be very different from the causes that determine that a native species becomes a pest. For example, rabbits introduced on an island without predators can dramatically increase their population size, impacting the vegetation of the island [46-48]. This is equivalent to the impact on native vegetation produced by a population of native rabbits when overhunting reduces or eliminates their predators [4]. In both cases, the invasibility of the environment determines species overabundance. On the other hand, the concept of species invasiveness can also be applied to native species. As with the concept of invasibility, invasiveness can also be applied to explain why some native species become superabundant and harm other native species or human activities while others do not. For example, native rodents are expected to become pests of agriculture because of their great reproductive rates [49]. Similarly, wild boar tend to turn into pests both in native and exotic habitats due to their wide diet niche [5]. Instead, there are two mechanisms that appear to be exclusive to introduced populations: one represents an advantage; the other, a drawback. The first is the eventual lack of adaptations in the inhabitants of the receiving environments due to the sudden appearance of a new species. Examples are native prey that would not have defenses against introduced predators, and parasites that would not have infection mechanisms effective in the new animals (first described by [34]). In contrast to these eventual advantages, introduced animals must face new ecological relationships for which they do not necessarily have adaptations, thus finding themselves in worse competitive conditions than native populations. For example, European rabbits introduced to Argentinean Patagonia found a new predator, the minor grison Galictis cuja, which can enter rabbit burrows. A few years after their introduction, rabbits became the almost exclusive prey of grisons [50].
Despite this, I cannot exclude the possibility that the introduced rabbits induced a disproportionate increase in the minor grison population, producing a hyperpredation effect on native species in a manner similar to that observed by Cerri and colleagues after the introduction of Eastern cottontails in Italy [51]. To the best of my knowledge, there are no large-scale surveys that assess the degree of incidence of these two mechanisms as determinants of the success or failure of introductions.

Conclusions

The allocation of resources to conservation programs is normally limited, especially in peripheral countries, so environmental managers must make decisions about which measures to prioritize and finance. In this context, the proposal by Davis et al. [18] of concentrating on the investigation of the most serious cases of animal impacts regardless of their origin seems more reasonable than generating a dubious dichotomy between native and nonnative species. This in no way means underestimating the importance of the damage that introduced species can cause, especially to biodiversity and natural ecosystems. Instead, it suggests avoiding the risk of investing in controlling species just because they are not native rather than because of the degree of damage they do. Nor does it for a moment suggest minimizing the dramatic situation that many natural environments are going through. Quite the contrary: I believe that the seriousness of the situation warrants that the authorities become aware of the need to invest more resources into the conservation of biodiversity.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Data can be found on the webpage of the USDA APHIS Wildlife Services: https://www.aphis.usda.gov/aphis/ourfocus/wildlifedamage/sa_reports accessed on 24 July 2022.
A blue depression in the optical spectra of M dwarfs

A blue depression is found in the spectra of M dwarfs from 4000 to 4500Å. This depression increases in strength toward lower temperatures and is particularly sensitive to gravity and metallicity. It is the single most sensitive feature in the optical spectra of M dwarfs. The depression appears centered on the neutral calcium resonance line at 4227Å and leads to nearby features being weaker by about two orders of magnitude than predicted. We consider a variety of possible causes for the depression, including temperature, gravity, metallicity, dust, damping constants, and atmospheric stratification. We also consider relevant molecular opacities which might be the cause, identifying AlH, SiH, and NaH in the spectral region. However, none of these solutions are satisfactory. In the absence of a more accurate determination of the broadening of the calcium line perturbed by molecular hydrogen, we find a promising empirical fit using a modified Lorentzian line profile for the calcium resonance line. Such fits provide a simplistic line-broadening description for this calcium resonance line and potentially other un-modelled resonance lines in cool high-pressure atmospheres. Thus we claim the most plausible cause of the blue depression in the optical spectra of M dwarfs is a lack of appropriate treatment of line broadening for atomic calcium. The broad wings of the calcium resonance line develop at temperatures below about 4000K and are analogous to the neutral sodium and potassium features which dominate the red optical spectra of L dwarfs.

INTRODUCTION

The ongoing large-scale efforts to determine the content of the stellar neighbourhood have made it clear that the dominant spectral type in the stellar neighbourhood is the M dwarf. For example, within the 10 parsec Gaia sample 61% are M dwarfs, and more than half of these are within the range M3V to M5V (Reylé et al. 2021). The spectra of M dwarfs present a relatively monotonic and well modelled sequence even for subdwarfs (e.g., Lodieu et al. (2019)). Most recent work has been focussed on observations at longer wavelengths, in particular the infrared region, where more flux is available and new spectral features might be identified. In addition, such investigations are beneficial since comparisons with newly discovered cooler objects might readily be made. This can be crucial for an overall understanding of dwarfs, where it is important to have a uniform spectral typing system across the range of spectral types where pressure-broadened atomic features and molecules dominate. Despite new infrared diagnostics, many of the key features for M dwarfs are found at optical wavelengths; for example, lithium (6708Å) can be a direct indicator of mass (e.g., Martín et al. (2022)). Although the blue optical region is a classical region for spectroscopic analysis of stars, it has been neglected for M dwarfs. There are observational and modelling difficulties. The observational one is the intrinsic lack of flux: as the flux peak of stars moves further toward the red, the flux drops dramatically. From a modelling perspective, the optical spectra of M dwarfs are impacted by a range of molecular opacities and, for the cooler examples, also by atomic resonance lines from Na and K.
The measurement and/or calculation of billions of transitions associated with the many degrees of freedom of molecular transitions is complex, as is the reliable calculation of line broadening at high pressures with a number of different species. For objects with terrestrial-like temperatures some of the work might be verified by comparison with terrestrial spectra. However, for the hotter temperatures associated with M dwarfs, measurements and calculations need to be complete for a range of different species and made to the molecular dissociation energy, with numerous transitions involving high-energy ro-vibrational states. Thus M dwarfs can provide stringent tests of line identifications and opacities (e.g., Pavlenko et al. 2022). Discrepancies between observations and models in the blue optical will lead to errors in accounting for the overall spectral energy distribution of M dwarfs and thus determinations of their overall properties.

Table 1 (caption): Values of spectral type (SpT), Gaia G band minus 2MASS K band (G − K), absolute K magnitude using the Gaia parallax (M_K) and approximate metallicity for the objects presented. Data sources and/or references for spectral type and metallicity are given in the final column: B07 (Bochanski et al. (2007)), K19 (Kesseli et al. (2019)), P03 (Pinfield et al. (2003)), G17 (Gagné et al. (2017)), M18 (Morel (2018)), S19 (Schweitzer et al. (2019)), S22 (Sprague et al. (2022)), L19 (Lodieu et al. (2019)). For the case of GJ551 we note that most recent assessments of its metallicity find solar or higher, e.g., Sprague et al. (2022) find [Fe/H] = 0.259, and so we adopt the Morel (2018) value of 0.23 based on coevality with alpha Cen A and B, which is allowed by dynamics (Feng & Jones 2018).

Over recent years there has been a substantial improvement in the molecular line lists relevant to M dwarfs. The EXOMOL group have produced many relevant line lists; in particular, the latest TiO line list (McKemmish et al. 2019) is important across much of the optical spectra of M dwarfs. However, TiO bands become weak or even disappear at 4500Å, so at shorter wavelengths we should have the chance to see deeper layers of the comparatively hot photosphere, where numerous atomic lines form. Blueward of the TiO bandhead at 4500Å, synthetic spectra do not provide a good match to observations, with observed lines being much weaker than expected (Pavlenko et al. 2017, 2022). In this paper, we consider the evidence and potential explanations for the depression in the blue optical spectra of M dwarfs in Section 2. Our various attempts to model and explain this depression are presented in Section 3 and discussed in Section 4.

OBSERVATIONS

We consider a range of observations of M dwarfs taken with different instruments and resolutions, taken by some of the authors and colleagues mentioned in the acknowledgements, from the literature, and also from telescope archives.

Figure 1 (caption): The upper plot shows a spectral sequence from M1 to M9 using SDSS averaged spectra from Bochanski et al. (2007). The core of the calcium line does not change much across different M spectral types, although there is a broad dip from around 4000 to 4500Å which increases in strength to later spectral types. The SDSS averaged M9 dwarf spectrum is of low signal-to-noise, particularly in the blue, and has been smoothed.
The lower plot approximately marks the blue depression using a blue dashed line and shows ISIS spectra overplotted so as to emphasise the spectral region of the blue depression feature for the nearby dwarfs GJ411 (M2), GJ699 (M4) and GJ406 (M6). The emission lines in GJ406 are hydrogen lines and are consistent with its relatively young age and the slow decline in activity for late-type M dwarfs.

The averaged spectra of Bochanski et al. (2007) and those of Kesseli et al. (2019) are particularly useful as they provide a wide range of spectral types with relatively uniform signal-to-noise and careful consideration of spectral type. However, we note that some of these spectra have rather low signal-to-noise in the spectral region of interest, and that in the Kesseli et al. (2019) spectra the telluric bands have not been removed; we found it difficult to remove these without leaving residuals. Thus, we use ESO archive processed data and supplement these datasets particularly with data of our own from the WiFeS instrument on the Australian National University (ANU) 2.3m telescope at Siding Spring (Dopita et al. 2010), and data from the ISIS instrument at the William Herschel Telescope on La Palma taken as part of the calibration dataset for Pinfield et al. (2003).

Figure 2 (caption): The upper plot shows the strong core of the calcium line in the M2 dwarf (GJ411). This is in contrast to the relatively weaker calcium core of the M2 ultra subdwarf (LHS1691) from Kesseli et al. (2019). In the lower plot, the core and the wings of the Ca feature in the M5 subdwarf (LHS2096), plotted in blue, are significantly deeper than in the young M5.5 dwarf (TWA33), plotted in black.

Since a number of the spectra used for our study have not been published before, we provide further details of their acquisition and reduction. The flux calibration of WiFeS spectra was made with the 1% Bohlin spectrophotometric standards (e.g., Bohlin (2007)). The importance of observing a smooth spectrum star is set out in section 6 of Bessell (1999), with a preference for EG131, L745-46a, LTT4364 and VMa2. While these all work well at red optical wavelengths, EG131 and L745-46a are best in the blue. EG131 and L745-46a have an extremely shallow and broad Hα line and a couple of very weak and shallow HeI lines (in particular 4471Å), in addition to the deep Ca H and K lines in L745-46a. For data recovered from the ESO UVES archive, where flux calibration is unlikely to have been part of the data reduction, the optimum calibration procedure has been to obtain a WiFeS or other well flux-calibrated intermediate or low resolution spectrum of the UVES object. The UVES or other echelle spectrum is then smoothed to the same resolution as the fluxed WiFeS spectrum; dividing the WiFeS spectrum by the smoothed echelle spectrum gives the flux corrections for the unsmoothed echelle spectrum.
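A minimal sketch of this flux-correction step, assuming both spectra have already been resampled onto a common wavelength grid; the function name and the Gaussian width (in pixels) standing in for the WiFeS resolution are our own placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def flux_correct_echelle(echelle_flux, wifes_flux, sigma_pix=25.0):
    # Smooth the echelle spectrum down to (approximately) the WiFeS resolution.
    smoothed = gaussian_filter1d(np.asarray(echelle_flux, dtype=float), sigma_pix)
    # The ratio of the fluxed WiFeS spectrum to the smoothed echelle spectrum
    # is a slowly varying flux-correction function.
    correction = np.asarray(wifes_flux, dtype=float) / smoothed
    # Applying it to the unsmoothed echelle spectrum gives a flux-calibrated
    # spectrum at the original echelle resolution.
    return echelle_flux * correction
```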
In particular, this procedure is followed in the case of the UVES spectra of GJ109 and GJ551, the HARPS spectrum of GJ551, and the Magellan/MIKE spectrum (from David Yong) of NLTT5022.

Figure 3 (caption): The plots show M subdwarfs NLTT31967 and NLTT5022 with similarly low metal abundance but slightly different temperature. The upper plot shows the onset of the blue depression and the lower plot a zoom out where the spectra can be seen to closely match at wavelengths outside the 4000-4500Å region. NLTT31967 shows the blue depression; NLTT5022 does not. The observational spectra for NLTT31967 were taken with the 300B and 600B settings and show that resolution is not the cause of the blue depression.

Here we present M1 to M9 spectra, though we can see evidence in literature spectra that a strong 4000-4500Å absorption feature probably does persist to later spectral types. Although many spectra of cooler objects do exist, their signal-to-noise tends to be poor blueward of 5000Å. An example of this is the combined SDSS spectrum of all observed M9 dwarfs labelled as m9.all.na.k (Bochanski et al. 2007): it is clear that the flux rapidly drops below 4500Å, but the signal-to-noise is too poor to consider further. As can be seen from the higher resolution ISIS/WHT spectra shown in the lower plot of Fig. 1, later type spectra are increasingly impacted by chromospheric emission lines. Based on our comparisons of similar spectral types, we do not find any evidence that chromospheric emission is related to the blue depression (e.g., fig. 7 in Bessell (2011)). Nonetheless, chromospheric emission from Hα in M dwarfs is a well quantified indicator of age (e.g., Kiman et al. (2021)), and young M dwarf stars may be pre-main sequence stars with higher luminosity and lower gravity. For the later spectral type M dwarfs many do have emission lines, and it is instructive if the spectra have sufficient resolution and signal-to-noise to resolve these lines.

A depression in blue optical spectra of M dwarfs

In Fig. 1 it can be seen how an absorption feature from 4000-4500Å increases as a function of spectral type. The upper plot shows a sequence of averaged spectra from the SDSS (Bochanski et al. 2007) and the lower plot the bright nearby M dwarfs GJ411 (M2), GJ699 (M4) and GJ406 (M6). In the lower plot it can be seen that, apart from a broad absorption feature, other spectral features in this region for GJ699 and GJ406 appear to get relatively weaker in comparison to GJ411. This is also the case for the SDSS spectra, though there is a significant decrease in signal-to-noise toward later spectral types. Given that we are investigating a number of different targets with different characteristics, we include Table 1 summarising some major observable parameters for the objects. In Fig. 2 we can get a sense of how the 4000-4500Å region is modified as a function of metallicity and gravity. In the upper plot, we consider the M2 old-disk main sequence star GJ411, with a gravity of about log g ∼ 4.9, and the ultra subdwarf LHS1691, log g ∼ 5.2. The Ca I 4227Å feature is very weak in the subdwarf, as might be expected. However, the absorption feature from 4000-4500Å appears to have broadened in width and shows a steep decline blueward of 4600Å. Indeed, when we examine our carefully calibrated WiFeS spectrum of LHS1691 we cannot see any evidence of features blueward of 5600Å. We can also see an apparent broadening in the behaviour of the blue depression in the lower plot of Fig. 2 with cooler dwarfs. In this case we consider the object TWA33, which is definitively in a young star forming region (∼10 Myr, e.g., Schneider et al. (2012)) and thus has a low gravity, log g ∼ 4.0. We compare TWA33 to the M5.5 extreme subdwarf LHS2096 (log g ∼ 5.5) to ensure comparison with an object which is definitively older and of higher gravity. Despite the much higher abundance of TWA33, its lower gravity (log g ∼ 4) appears to weaken the Ca feature, indicating that pressure is relatively more important than abundance.
Notably, the 4000-4500Å absorption feature in the M5 subdwarf is no more visible than at M2 in the upper plot of Fig. 2. Overall, then, the blue depression absorption feature is stronger at lower metallicity and higher gravity. The Ca feature appears to behave broadly as expected. Lowering the metallicity increases the gravity and the pressure, and lowering the temperature also increases the gravity and the pressure. So the common feature is that increased pressure causes the blue depression. It is instructive to investigate the first appearance of the blue depression in the hotter M dwarfs. Fig. 3 shows a comparison of two M subdwarfs, NLTT31967 and NLTT5022, with similarly low abundance and temperatures of around 3900K. This temperature was derived by a fit of the 4000-9500Å spectra using BT-Settl synthetic spectra. Most of the diagnostic ability of the models is found to reside in the continuum slope. NLTT31967 with M_K = 6.58 shows the blue depression whereas NLTT5022 with M_K = 6.19 does not. They both show a similar strength of the atomic Ca I 4227Å absorption feature. The spectra of the two objects appear to be well matched at wavelengths outside the 4000-4500Å region. We can also examine spectra by dividing them. This helps to show the relative importance of different spectral features. The three plots in Fig. 4 indicate how different spectral features react to temperature (upper), metallicity (middle) and gravity (lower) across the optical regime for relatively small spectral changes around spectral types from M4 to M6.

Figure 4 (caption): The plots indicate the sensitivity of the blue depression to temperature (top), metallicity (middle) and gravity (bottom). In the top plot the red line represents a change in spectral type of 1 (approximately 150 K) and is made from the division of an SDSS average M5 dwarf spectrum (m5.all.na.K) by an average M4 dwarf spectrum (m4.all.na.k), with the M5 spectrum shown in grey. The middle plot shows a blue line made from dividing the M4 dwarf (GJ402) by an M4 subdwarf (LHS2674) to represent a metallicity change of around 0.5 dex. The green line in the lower plot is intended to represent a gravity change of approximately 1 dex. It is made from the division of a low gravity M5.5 (TWA33) by an average of GJ551 (M5.5) and GJ299 (M4.5), so as to produce a spectrum approximately matched in colour. All plots identify the major spectral features in M dwarfs and show a double-headed arrow to draw attention to the blue depression feature.

The red line in the top plot shows the impact of a spectral class change of 1 from the spectral division of averaged M5 by M4 spectra from the SDSS archive. The red divided spectrum resembles the M5 spectrum plotted in grey with weaker features, as would be expected for the spectral division of two objects with similar spectral types. As anticipated in Fig. 1, the divided spectrum indicates the blue depression has significant temperature sensitivity. The middle plot shows a blue line to represent a change in metallicity of approximately 0.5 dex based on the division of an M4 dwarf (GJ402) by an M4 subdwarf (LHS2674) from Kesseli et al. (2019). In this case we chose GJ402 because it does not have particularly prominent spectral emission features associated with youth and a lower gravity. The M4 dwarf is also shown in grey as a comparison. The middle region of the plot from 6000-7000Å is strongly influenced by CaH and Hα. Redward of 7000Å, marked with a green dashed area, the divided blue spectrum is much flatter.
In this green dashed area the divided spectrum resembles the M4 spectrum with weaker features, as would be expected for the spectral division of two objects with similar spectral types but one with significantly lower abundance. From 4000-6000Å the blue divided spectrum slopes gently upward with a bump around 4000-4500Å, as anticipated by Figs 2 and 3. The upward slope might partly be caused by GJ402 being somewhat redder and brighter than LHS2674 (Δ(G − K) = 0.5, ΔM_K = 1.44). It is also plausible that the influence of the blue depression feature in the 4000-4500Å region might extend much further for lower metallicity M dwarfs. In particular, the very varied behaviour of spectral features with wavelength emphasises the difficulties of spectral typing for low metallicity M dwarfs. In the lower plot of Fig. 4 we again divide spectra of similar spectral types, but here we also seek to examine the impact of gravity. In this case we make comparisons between WiFeS/ANU spectra of the low gravity (log g ∼ 4.0) M5.5 dwarf TWA33, whose spectrum is shown in grey, with GJ299 (M4.5, log g ∼ 5.0) and GJ551 (M5.5, log g ∼ 5.0). In this case, the green spectrum is TWA33 (G − K = 4.25) divided by the averaged spectrum of GJ299 (G − K = 3.74) and GJ551 (G − K = 4.60). Relative to the upper plots, the green divided spectrum in the lower plot is rather flat without any particular slope. At wavelengths blueward of 4500Å, the green spectrum shows the distinctive 4000-4500Å feature. As with the other plots, well known spectral features are identified. Overall, Fig. 4 illustrates that the 4000-4500Å feature is the single most sensitive feature in these M dwarf spectra. It is found to increase in strength towards lower temperatures and higher gravities in a relatively similar manner. As also indicated in Fig. 2, a decrease in metallicity can lead to an increase in relative strength and a broader shape for the blue depression. Presumably, towards lower metallicities, the blue depression increasingly dominates over other opacities in the blue optical region. In the past this broad depression in the blue optical spectra of M dwarfs has been noted by a number of authors (e.g., Lindblad (1935a,b); Vyssotsky (1943); Vardya & Böhm (1965); Warner & McGraw (1974); Ake & Greenstein (1980); Bessell (2011); Pavlenko et al. (2017, 2022)) and modelled by an increase in the continuum opacity of this region (e.g., Vardya & Böhm (1965)) and, for Proxima Cen (GJ551), by a factor of 40-80 relative to that expected in order to match the observed line strengths of spectral features (Pavlenko et al. 2017, 2022). The upper plot of Fig. 5 illustrates how a fit is obtained for the Ca I 4227Å line through an increase of the continuum opacity by 80. The middle plot shows a representative region 200Å to the blue which also requires the opacity increase for a reasonable fit. However, such an arbitrary enhancement is not appropriate to fit features much further from the blue depression; e.g., the lower plot of Fig. 5 shows a redder wavelength where a poor fit is obtained for the enhanced continuum opacity.

Figure 5 (caption): Spectra of GJ551 in black compared with a 'standard' model of solar metallicity with a temperature of 2900K and gravity of log g = 5 in green (labelled as x=1), which would be expected to produce a good spectral fit. The model in blue has had the continuum opacity enhanced by 80 (labelled as x=80). The upper plot shows the immediate region around 4227Å. The middle plot shows a spectral region close to 4227Å with a variety of absorption features also better fit by the model with enhanced opacity. The lower plot shows a spectral region far away where the enhanced opacity provides a poor fit and would not be appropriate.
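The qualitative effect of such an enhancement can be seen with a toy two-opacity estimate: if a weak line of opacity l sits on a continuum of opacity kc, its residual depth scales roughly as l/(l + kc), so multiplying kc by x = 80 suppresses the line by nearly that factor. This is illustrative arithmetic only, not the radiative transfer actually performed for the models.

```python
# Toy estimate of how a continuum-opacity enhancement x weakens a line:
# depth ~ l / (l + x * kc). All numbers are purely illustrative.
def toy_line_depth(l, kc, x=1.0):
    return l / (l + x * kc)

for x in (1, 80):
    print(x, round(toy_line_depth(l=1.0, kc=0.1, x=x), 3))
# x = 1  -> 0.909 (strong line); x = 80 -> 0.111 (heavily suppressed)
```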
The necessity for additional opacity is appreciated elsewhere; see, for example, figs 7 and 25 of Herczeg & Hillenbrand (2014) and fig. 6 of Allende Prieto (2023), which shows the M3.5V star GJ555 and a 3200K synthetic spectrum indicating a much stronger Ca I 4227Å line than observed as well as a poor fit in the surrounding spectral continuum. The explanation for this 4000-4500Å spectral feature, reported by Lindblad (1935a,b) and independently as the 'Lindblad depression' by Vyssotsky (1943), has been attributed to the Ca2 quasi-molecule (e.g., Lindblad (1935a)), to CaH absorption (e.g., see the discussion in Weniger (1967)), to a missing opacity (e.g., Vardya & Böhm (1965)), and perhaps to a lack of 3D NLTE models (e.g., Pavlenko et al. (2017)). Ake & Greenstein (1980) found that the strengths of the Ca I 4227Å line and the 'Lindblad depression' increase with decreasing metallicity and, if interpreted correctly, can be used to identify metal-poor stars. However, it appears that appropriate modelling of this spectral region has not been adequately captured by synthetic spectra computed for the grids of modern model atmospheres. Although some modern studies have used this absorption feature as a spectral diagnostic, the relatively lower signal-to-noise obtained below 4500Å for M dwarfs means this spectral region has been relatively neglected.

MODEL ATMOSPHERES

To better understand the modelling of the 4000-4500Å region, we consider a number of grids of models producing synthetic spectra for M dwarfs, including MARCS (Gustafsson et al. 2008), ATMO2020 (Phillips et al. 2020), NextGen (Hauschildt et al. 1999) and BT-Settl (Allard et al. 2011). We note the extensive work on the variations in derived spectral parameters by Cristofari et al. (2022) arising from the use of different model atmospheres. For example, they find that the temperatures of MARCS models are on average about 30 K higher than from the PHOENIX models including BT-Settl, metallicities are offset by around 0.4 dex, and log g values are lower by about 0.30 dex. We also find some modest differences between the models and so examine their underlying model structures. In Fig. 6, we show model structures for 3000K for MARCS and BT-Settl and also consider the impact of using a different abundance pattern, labelled as MARCS GS (based on the abundances from Grevesse & Sauval (1998)). The differences between model structures, which might lead to line formation taking place in relatively different places within the atmosphere, are seen to be relatively minor. We also find a relative similarity between model structures at 2500 and 3500K and so do not envisage that the differences between model structures can be responsible for the large scale discrepancies between observed and synthetic spectra seen in Fig. 5. In general the spectra of M dwarfs are envisaged as a background of molecular absorption with some atomic absorption features still strong enough to be visible. In the optical regime, the opacity caused by the TiO band system is particularly prevalent (see Pavlenko (2014)). As experiments and ab initio calculations have been able to more accurately assess higher energy transitions, the modelling of TiO has improved.
In Fig. 7 it can be seen how different line lists have improved in terms of identifying the primarily short-wavelength band heads as well as more subtle spectral features, from Plez (1998) to Schwenke (1998) and latterly McKemmish et al. (2019). When calculating synthetic spectra, the "line by line" approximation is used; in particular we use (1) the list of atomic lines from the VALD database (Ryabchikova et al. 2015), (2) lists of CN and MgH lines (Kurucz 2018), (3) lists of CrH and FeH lines (Burrows et al. (2002) and Dulick et al. (2003)), and (4) lists of H2O absorption lines calculated by Barber et al. (2006). Absorption line profiles for atomic lines were taken from VALD3 or, where absent from VALD3, were determined in terms of the Voigt function with damping constants from the Unsold approximation (Unsold 1955). The microturbulent velocity was assigned to be constant over the depth of the atmosphere and equal to v_t = 2 km/s. Spectra were computed with the Wita6 program (Pavlenko 2014) with a step of 0.02Å and then convolved with a Gaussian FWHM and the rotation profile of Grim & Staubach (1996) to match the spectral comparison. Here we consider a number of modified modelling solutions to explain the 4000-4500Å blue depression feature.

Metallicity and dust

There has been a long standing concern with models for M dwarfs that metallicities are not well constrained and that there might not be adequate treatment of dust opacity (e.g., Tsuji et al. (1996); Jones & Tsuji (1997)). We examine spectral fits to two nearby M dwarfs: GJ299, known to be slightly metal poor (e.g., Jones et al. (1996)), and TWA33, known to be young (Schneider et al. 2012). In Figs 8 and 9 we show two modelling solutions, metallicity and dust, that can be used to improve the overall fit across the 4000-4500Å region. We find the best fits of the observed spectra of TWA33 and GJ299 to synthetic spectra using a least-squares minimisation procedure (Jones et al. 2002).
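A minimal sketch of such a grid-based least-squares comparison; the parameter tuples and dictionary layout are our own illustrative choices, not the actual implementation of Jones et al. (2002).

```python
import numpy as np

# `grid` maps hypothetical parameter tuples, e.g. (Teff, logg, [M/H]), to
# model fluxes assumed to be resampled onto the observed wavelength grid.
def best_fit(observed_flux, grid):
    best_params, best_chi2 = None, np.inf
    for params, model_flux in grid.items():
        # Scale each model to the data to remove the overall flux normalisation.
        scale = np.sum(observed_flux * model_flux) / np.sum(model_flux**2)
        chi2 = np.sum((observed_flux - scale * model_flux) ** 2)
        if chi2 < best_chi2:
            best_params, best_chi2 = params, chi2
    return best_params, best_chi2
```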
The upper plot of Fig. 8 indicates that lowering the metallicity to -1 provides a substantial improvement relative to a -0.5 model. However, the absorption in the 4300-4500Å region is not quite sufficient to reproduce the observations for GJ299, and the need to select an even lower metallicity for a slightly metal poor M dwarf in the solar neighbourhood seems inappropriate. In the lower plot of Fig. 8 the lower gravity of the baseline model preferred for TWA33 at first sight produces a rather good fit. This situation might be anticipated because in Figs 2 and 4 it could be seen that the blue absorption feature largely disappeared in the relatively low gravity TWA33. In this case the choice of such a low metallicity model seems particularly inappropriate when modelling a star from a nearby star forming region with no evidence for any peculiar metallicity. Scrutiny of the figure also suggests that the largely overlapping model lines in green and blue are stronger than the observed features. As with the upper plot, further iterations of the chosen synthetic model can be made.

Figure 9 (caption): Some M dwarf spectra can be fit rather well by the introduction of dust into the synthetic spectra. The comparisons between observed and synthetic spectra with (blue) and without (green) dust suggest some improvement in the fit can be made with the addition of dust.

In Fig. 9 we consider the same objects investigated in Fig. 8, using dust as a possible solution. Pavlenko et al. (2007) considered the lack of synthetic spectral fits as perhaps arising from the heating produced by atmospheric dust. We use a layer of dust following the prescription of Pavlenko et al. (2007). Fig. 9 indicates how dust opacity can be adjusted to provide a superficially improved fit. However, the addition of dust opacity has an impact in other regions beyond our 4000-4500Å region of interest. For example, the lower plot of Fig. 9 also shows how the addition of dust causes the observed TiO band at 4950Å to become too weak in the synthetic spectra. This is also the case for modelled features to the blue of 3850Å in TWA33. We note that Herczeg & Hillenbrand (2014) use the strength of the Ca feature as a function of spectral type to measure the optical veiling in young M dwarfs. However, in the absence of identified optical veiling in our targets, our dust solutions are somewhat artificial, particularly above 3000K where dust rapidly disappears, and so provide a relatively improbable explanation at higher temperatures. As with our experiments with metallicity in Fig. 8, dust provides a ready means to get reasonable fits at lower temperatures across this spectral region. However, in both cases such fine tuning does not provide a satisfactory solution. In particular, it leads to the situation where spectral regions that were hitherto well reproduced are simultaneously worse fit. Moreover, such opacities do not naturally produce the observed shape of the blue depression.

Other molecules

M dwarfs contain a rich plethora of molecules, some of which might have been relatively neglected in model atmosphere calculations. Of particular interest are molecules with low dissociation potentials such as SiH (3.34eV), AlH (2.99eV) and NaH (1.96eV), which might have transitions in the 4000-4500Å spectral range. SiH is considered by Yurchenko et al. (2018) and is seen in the Sun and in K giants. Here we identify SiH in M dwarfs, though we find that it is most strongly present in higher temperature M subdwarfs, e.g., the upper plot of Fig. 10. Although SiH transitions are relatively spread out, they do not extend across the 4000-4500Å region and the strongest lines are shortward of 4250Å. Similar to SiH, AlH transitions are relatively localised and mostly occur between 4065-4090 and 4240-4280Å (Bessell 2011; Pavlenko et al. 2022). In the lower plot of Fig. 10 two bands of absorption lines can be seen that become considerably stronger towards lower temperature and higher metallicity. Although AlH is an important spectral diagnostic in this spectral region, its transitions do not extend blueward of 4065Å, where the dissociation limit can be robustly identified, and it has too few transitions to provide a continuum opacity.

Figure 11 (caption): The plot shows NaH and atomic transitions with observed spectra for GJ551 for different regions adjacent to the 4227Å Ca I line. The upper plot shows a bluer region where NaH can be identified; NaH is plotted in red above the observed HARPS (black) and UVES (blue) spectra. However, in the lower plot on the red side of the Ca I line, we struggle to identify NaH and instead find AlH. The coloured lines in the plot are the result of the division of synthetic spectra with/without NaH for different oscillator strengths: in blue based on Rivlin et al. (2015), in red based on Pavlenko et al. (2020). The magenta line represents atomic lines in this spectral region from VALD (Ryabchikova et al. 2015). The red upper arrows in the plot denote the coincidence of synthetic and observed NaH lines based on oscillator strengths. The shorter magenta arrows denote atomic features and are labelled individually. Shorter black arrows in the plot identify AlH features based on Table A1 in Pavlenko et al. (2022).
NaH is an abundant species at cool temperatures and the A-X band mainly covers the region between 3500 and 4600Å. Unlike the relatively restricted distribution of AlH and SiH transitions, the NaH molecule presents many closely spaced lines spread out across a wider wavelength region. We can identify a number of NaH transitions in the spectral region, particularly between 3500 and 4600Å. Fig. 11 gives an example of the character of NaH absorption lines, plotted in red based on Rivlin et al. (2015). Using HARPS and UVES spectra for GJ551 in the lower plot, we can identify the strongest NaH features as appearing in the observed spectra, and thus NaH is a relevant opacity source. The theoretically strongest NaH transitions can be found in the spectra, though these are predominantly around 4000Å where we only identify the strongest ones; others are predicted to be several orders of magnitude weaker. In the lower plot of Fig. 11 we plot a somewhat redder region as an example where AlH can more easily be identified. Since we struggle to reliably identify NaH transitions beyond 4350Å, they do not provide a simple solution to explain the shape of the 4000-4500Å region. In Fig. 12 we investigate the gravity dependence of different molecular features across the 4000-4500Å region. The relatively modest changes in Fig. 12 suggest that the gravity sensitivity of CaH, MgH, NaH, TiO, SiH and AlH is small, with only AlH offering significant identifiable transitions increasing towards lower gravities. For the case of AlH, the lower plot of Fig. 10 and Pavlenko et al. (2022) indicate that it is well enough understood and is not a continuum opacity across the 4000-4500Å region. In principle, incorrect dissociation constants would affect the computed number densities of molecular lines. We ran a few experiments looking at the impacts on lines from varying D_0 for NaH using the substantially different available values in the literature: the experimental 1.876eV (Gurvits et al. 1982) and the ab initio 2.036eV (Tsuji 1973) and 1.958eV (Le Roy et al. 2013). No interesting differences in our results were obvious, and it should be noted that there is no particular evidence that such a spread in literature values is appropriate based on Barklem & Collet (2016), who find the experimental value should be adjusted to 1.886eV using updated data for the ground state and an improved partition function. The most significant change in the dissociation constants of interest indicated by Barklem & Collet (2016) is for CaH, changing from 1.7 to 2.28 eV. This serves to make CaH features stronger; otherwise the purple line labelling CaH in the upper plot of Fig. 13 would be further to the bottom right and CaH less important. Since we use the Barklem & Collet (2016) values this is already accounted for in our models, though even substantial further changes in the D_0 for CaH would not seem enough to alter its relative lack of importance (e.g., Fig. 12).

Ca resonance line parameters and atmospheric stratification

The Ca I line at 4227Å along with the Ca II lines at 3934 and 3968Å are well known, and the latter have been utilised as the primary spectral diagnostic of activity in the Sun and other stars since Preston (1959).
Ca resonance line parameters and atmospheric stratification

The Ca line at 4227Å, along with the Ca lines at 3934 and 3968Å, is well known, and the latter have been utilised as the primary spectral diagnostic of activity in the Sun and other stars since Preston (1959). The behaviour of the ionised Ca lines appears to be in line with expectations and is not examined further here. However, the neutral line does not appear to have been so widely used, particularly in such cool stars, and is susceptible to possible issues with its line strength and broadening not considered by standard model atmospheres.

[Figure 15. Fit of our synthetic spectra to the observed SED (normalised flux against wavelength in Å) of GJ299 across the spectral range 3800-5000Å, with models labelled log₁₀ γ₆ = -7.56; log₁₀ γ₆ = -5.00; and log₁₀ γ₆ = -5.00 with τ_s = 4.4e-3, α = 0.02. A good solution is found for τ_s = 3.4e-3, α = 0.02, log₁₀ γ₆ = -5.50.]

Given the interest to produce the broad absorption feature suggested by Fig. 4, we make changes to the damping constant for the Ca resonance line, with the observational constraint that the absorption feature only appears in the 4000-4500Å region at higher gravity, and is pronounced at lower metallicity. In Fig. 14 we consider solar metallicity models of 2800 K with log g = 3.5 and 5.5 and make theoretical plots analogous to the empirical red lines in the Fig. 4 plots. We investigate the effect of modifying the VALD (Ryabchikova et al. 2015) damping constant of log₁₀ γ₆ = -7.57 for the Ca line. We show variations of log₁₀ γ₆ = VALD, VALD/1.25 and VALD/1.5 in Fig. 14, where we adopt a shorthand syntax γ₆ = VALD/1.25 to represent log₁₀ γ₆ = (log₁₀ γ₆ from VALD)/1.25. It can be seen that these modifications from the standard VALD value do have a dramatic impact on the Ca line at 4227Å, but that these changes are most dramatic in the core region. So while such changes do produce the required significant line wing changes that are observed blueward of 4100Å or redward of 4400Å, the side effect is a very large change in the shape of the line core region (green and blue synthetic spectra) which is not observed. In general terms, the formation of the strong wings of the resonance lines of metals in cool dense atmospheres is well studied in the framework of the semi-stationary theory of line broadening (see Allard & Kielkopf 1982; Allard et al. 2003; Burrows & Volobuyev 2003; Pavlenko et al. 2007). In order to provide a theoretical investigation of the collisional effects in the blue wing of the Ca line perturbed by H, He and H₂, unified line profile calculations and molecular data are both required. The first step will be the determination of accurate potential energies and electronic transition dipole moments of Ca perturbed by H₂ (Thierry Leininger, private communication). Lacking a full treatment of broadening such as that done for potassium by Burrows & Volobuyev (2003), we investigate an alternative solution by modifying the position of calcium in the photosphere. This approach is justified in the sense that there is considerable model uncertainty in the temperature versus pressure profile in cool stellar atmospheres, and that the empirical constraints on the structure largely come from a good match between synthetic and observed spectra. Stratification of atmospheres is another approach that has been employed elsewhere, for example in the case of the Sun (Solanki & Hammer 2002). Here we have the situation where the Ca resonance line occurs in a region of relatively low opacity, and so we are interested in the extent to which its line shape depends on exactly where its line formation happens within the local radiation field. Namely, we consider that in the atmosphere of M dwarfs, above the depth τ_s, molecular densities n are reduced in comparison with the equilibrium values n_equil according to n = α · n_equil.
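The prescription just stated is simple enough to write down directly. A minimal sketch (our own illustration, not the authors' code; the depth grid and density profile are made-up placeholders):

```python
import numpy as np

def stratify(tau, n_equil, tau_s, alpha):
    """Apply the stratification prescription n = alpha * n_equil for
    layers above the depth tau_s (i.e. where tau < tau_s); deeper
    layers keep their chemical-equilibrium densities."""
    n = np.array(n_equil, dtype=float)  # copy, leave the input intact
    n[tau < tau_s] *= alpha
    return n

# Made-up optical depth grid and Ca-bearing species density profile
tau = np.logspace(-6, 2, 9)
n_equil = 1e10 * tau**0.5            # placeholder densities (cm^-3)

# Values quoted in the Fig. 15 caption
n_strat = stratify(tau, n_equil, tau_s=3.4e-3, alpha=0.02)
for t, ne, ns in zip(tau, n_equil, n_strat):
    print(f"tau = {t:9.2e}  n_equil = {ne:9.2e}  n_strat = {ns:9.2e}")
```

Only the optically thin outermost layers are depleted. In combination with an increased damping constant this moderates the response of the core, which forms in those outermost layers, while the enhanced wings, which form deeper where n = n_equil, survive; this is consistent with the behaviour described in the following paragraph.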
The upward pointing arrow in the upper plot of Fig. 13 locates our arbitrary modification to the density of calcium-containing species in an M dwarf model atmosphere. The impact of the stratification on the flux and temperature of the photosphere with wavelength across the Ca line can be seen in the lower plots of Fig. 13. Stratification can have the desired effect of producing an enhancement in the wings of Ca over a considerable range, but without unduly impacting the line core. From the perspective of the model atmospheres, we are left with the parameters τ_s and α, which can be determined from comparison with observations. In Fig. 15 we compare different values of γ₆ and use τ_s and α to modify the line broadening of the wings of the Ca line to fit the M4.5 dwarf GJ299. The synthetic spectral lines with log₁₀ γ₆ = -5.00 and -7.56 show the problem of simply modifying γ₆: it is difficult to find a value which leaves the Ca line core intact and simultaneously provides appropriately broadened wings. It can also be seen how stratification modifies the extreme line wing broadening which is introduced by modification of the Van der Waals damping constant. Stratification can suggest a fit to the overall shape of the blue depression, as well as modifying the modelled strength of atomic features within this region, while at the same time not impacting the Ca lines at 3934 and 3968Å and the developing system of TiO bands towards redder wavelengths. Further fine tuning of this stratification could improve the match between empirical and synthetic spectra and enable stratification to provide a perfectly reasonable, albeit contrived, solution.

A 'modified' Lorentzian solution

The interactions between pairs of neutral atoms and the corresponding perturbations of atomic levels, known as resonance broadening, are classically a concern for hydrogen and helium atoms in hotter stars. In cooler stars, reliable synthetic line profile calculations sometimes need to consider line broadening from neutral hydrogen collisions. This has been borne out in spectacular fashion particularly in the L and T dwarfs with the sodium and potassium resonance lines at 5900Å and 7700Å, which increase in strength from the M through the L dwarfs into the T dwarf regime. The work of Blouin et al. (2019) indicates that line broadening can provide a suitable solution for the line shape of Ca in 4000-5000K DZ white dwarfs, and the resonance profile line shape resembles the sought-after line shape needed to explain the appearance of the 4000-4500Å region. The upper plot of Fig. 16 extends the calculations of Blouin et al. (2019) to lower temperatures and illustrates the importance of temperature for the line shape of a Ca line in a pressure-broadened environment with He. The Ca-He profile is not symmetric, and this lack of symmetry is something which is found in observations of white dwarfs and is well matched when using high-quality Ca-He potentials (Blouin et al. 2019). Similar calculations are not yet available for Ca-H and Ca-H₂ broadening, which would be appropriate for M dwarfs. However, the situation for Rb/Cs-He/H₂ broadening may be seen as promising (Allard & Spiegelman 2006) and might be analogous to the study of Na and K in Allard et al. (2019). Initial calculations by Kielkopf & Allard (2008) suggest that the unified profiles for Na-He and Na-H₂ are not significantly different, and that the quasi-molecular line satellites become closer to the main line when using more accurate potential data.
As an interim alternative to an appropriately broadened profile, we investigate a modified Van der Waals broadening as a plausible solution. In a crude attempt to adequately model the calcium line broadening, we empirically modify the Lorentzian used to represent the calcium line in order to fit the measured continuum. The profile of our modified Lorentzian is determined in terms of a_damping = (γ_R + γ_vdW + γ_S)/Δν_D, where γ_R, γ_vdW and γ_S are the damping parameters due to the natural, van der Waals and Stark broadening, and v = Δν/Δν_Doppler. In the lower plot of Fig. 16 we invert the plot and zoom in on the line core to illustrate schematically how different the alternative line profiles are from the standard Lorentzian plotted in red.

[Figure 17. The middle plot shows the 'modified' Lorentzian 'fit' as a thick yellow line to an X-shooter spectrum of GJ551 in black (binned: thin yellow). The red line in the middle plot shows that the standard Lorentzian leaves too little opacity to explain the strength of atomic features and the overall spectral energy distribution across the 4000-4600Å region. The lower plot is made using the 'modified' Lorentzian 'fit' and corresponds to the same bluer spectral region as the lower plot of Fig. 5.]

Starting from the conventional approximation for the Voigt profile, with parameters (a, b, c) = (1, 1, 2) (e.g., Gibson 1973; Allard et al. 1999), we adjusted the parameters of our 'modified' Lorentzian by eye in order to fit the observed spectral energy distribution. Although in principle a Lorentzian allows for control of the line wings and line core, in practice this required some iterations to obtain the reasonable 'fit' shown in the middle plot of Fig. 17. For this we use (a, b, c) = (0.0001, 0.001, 0.5) to give an approximate fit to the smoothed spectrum taken with the ESO X-shooter instrument (programme 092.D-0300, employed by Pavlenko et al. 2017). This choice also provides a similarly improved fit to other regions, e.g., in the lower plot of Fig. 17. The choice of this spectrum is driven by the desire to ensure that the flux calibration of the empirical spectrum is robust: X-shooter is a well-established instrument for determining the spectral energy distributions of objects (e.g., Verro et al. 2022). Although our 'modified' Lorentzian provides only an approximate solution, it gives an illustration of how a strong line may be modelled with the Lorentzian-dominated wings of a standard Voigt profile. Our modified synthetic spectrum, plotted in yellow in Fig. 17, enables a much more realistic fit. It is notable that the red line in the middle plot of Fig. 17 represents the situation without modification to the models. Without the very significant increase in the opacity in the 4000-4500Å region, models predict that the M dwarf flux does not increase significantly across this region and remains relatively flat even by the start of the TiO bands. A similar result is apparent for GJ551 in fig. 2 of Bessell (2011) for synthetic spectra from MARCS. In reality, all M dwarfs do exhibit a rise in the flux across this region to the TiO bands.
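The explicit functional form of the 'modified' Lorentzian does not survive in the text above, so the following is a sketch of the general idea rather than the authors' actual profile: take a Voigt-like profile whose Lorentzian wing term carries three tunable parameters (here called A, B and C, with (1, 1, 2) standing in for the conventional values and (0.0001, 0.001, 0.5) for the adjusted ones), so that the core can be left intact while the very far wings gain opacity.

```python
import numpy as np

def modified_lorentzian(v, a_damp, A=1.0, B=1.0, C=2.0):
    """Schematic Voigt-like profile in reduced frequency v = dnu / dnu_Doppler.

    The Gaussian term dominates the core; the second term is a
    Lorentzian-like wing whose amplitude (A), core regularisation (B)
    and wing decay exponent (C) can be tuned independently.
    (A, B, C) = (1, 1, 2) roughly recovers the conventional wing
    approximation H(a, v) ~ exp(-v^2) + a / (sqrt(pi) v^2)."""
    return np.exp(-v**2) + A * a_damp / (np.sqrt(np.pi) * (B + np.abs(v)**C))

v = np.array([0.0, 1.0, 10.0, 100.0, 1000.0, 5000.0])
a_damp = 1e-3  # (gamma_R + gamma_vdW + gamma_S) / dnu_Doppler, illustrative

standard = modified_lorentzian(v, a_damp)                   # (A,B,C) = (1, 1, 2)
modified = modified_lorentzian(v, a_damp, 1e-4, 1e-3, 0.5)  # adjusted values

for vi, s, m in zip(v, standard, modified):
    print(f"v = {vi:7.1f}: standard = {s:.3e}  modified = {m:.3e}")
```

With the adjusted parameters the profile near the core is essentially unchanged (in this toy form the near wings are even slightly weakened), while hundreds to thousands of Doppler widths from line centre, where the 4000-4500Å depression forms, the wing term decays as |v|^-0.5 rather than v^-2 and so carries far more opacity than the standard form.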
DISCUSSION

While this paper is concerned with extensive spectral synthesis, we do not update existing literature values for effective temperature, metallicity and gravity. Gravity remains a difficult parameter to fit precisely from the spectra. For the moment, we consider that it is as reliable to use the isochrone gravities for M stars and iterate with the temperature and metallicity, though the problem addressed by this paper is precisely that we do not yet have a good spectral model.

A key purpose of this paper is to highlight the importance of line broadening calculations specifically for calcium. Such calculations have the potential to reduce the number of free parameters needed to provide reliable parameters from spectral synthesis. We have presented a range of hypotheses to understand the potential reasons for the blue depression in the energy distribution of M dwarfs across the 4000-4500Å region, though we note that not all plausible modelling issues have been exhausted. For example, we do not consider departures from local thermodynamic equilibrium, which can alter the overall ionisation balance in late-type stars, particularly towards those with lower metallicities (e.g., Mallinson et al. 2022). Though, as with most of the presented alternative explanations, non-LTE effects are likely to provide only a partial solution to the blue depression, and at least for solar-type metallicities would primarily impact the line core (e.g., Mashonkina et al. 2017). We find that the simplest solution is a proper treatment of the pressure broadening of Ca 4227Å.

In recent years there have been a wide variety of systematic efforts to determine opacities appropriate for cool objects. These opacities have provided a ready explanation for most of the unknown features in cool stars. Here we focussed on a depression in M dwarfs in the blue optical. We concur with Lindblad (1935a,b) that the best explanation for the blue depression is the broadening of the calcium resonance line. This feature appears as the strongest and most sensitive feature in the optical spectra of M dwarfs. While careful line treatment has been implemented for other strong resonance lines, our work emphasises the need for a proper treatment of strong atomic line parameters, particularly for calcium, as well as the identification of other relevant molecular species. It is already clear from studies such as Lodieu et al. (2019) that a number of well-studied atomic lines do not behave quite as expected. Such identifications can easily be attributed to non-solar abundance patterns or more exotic explanations, but as the quality and quantity of empirical spectra improve it is vital to resolve issues associated with line parameters.

Multi-object spectrographs provide the tools to take large numbers of M dwarf spectra and to derive their detailed characteristics (e.g., Ding et al. 2022). These can be important for a range of different reasons. For example, the M dwarfs may be intrinsically interesting themselves, due to their companions, or as a statistical chronometer to map out the history and evolution of our Galaxy. It is likely that existing analyses will have biases introduced by using grids of synthetic spectra that lack a systematic consideration of strong line broadening. This is particularly important where astrophysical parameters are increasingly determined by algorithms blind to underlying opacity issues. The identified optical blue depression shows considerable sensitivity within the observed spectra of similar spectral types, and so has potential as a useful diagnostic spectral region for M dwarfs. Since calcium broadening is the single most sensitive feature in M dwarfs, it might be used to help resolve the degeneracies found in the analysis of subdwarfs. Jao et al. (2008) find there is a complex phase space in the appearance of very explicit metallicity indicators. They note that CaH is impacted in complicated ways by combinations of temperature, metallicity and gravity.
A solution to this is to introduce a further step in the analysis of subdwarfs focussed on gravity determination (Zhang et al. 2023); the sensitivity of calcium broadening to gravity and metallicity would seem to have promise for this step. More simply, a filter centred on the 4227Å line, of similar width to a Strömgren filter, would be sensitive to metallicity and gravity. For example, low-gravity objects would present as relatively brighter than field objects of the same colour, due to their reduced opacity in the 4100-4400Å region, and low-metallicity objects would present as fainter. Although the general causes of line broadening are relatively well understood, the details of the calculations are complex and require considerable effort. Calculations need to consider the broadening of different alkali lines by a range of different molecules across a large range of temperatures, pressures and abundances. Such calculations are essential for synthetic spectra to match the spectra of M dwarfs. In the meantime, it may sometimes be practical to use a 'modified Lorentzian' whose shape is empirically determined. This can be thought of as analogous to the use of astrophysical oscillator strengths. While it will always be desirable to use the proper line parameters for strong lines, it is notable that even after two decades the proper treatment of collisional broadening for the well-studied potassium resonance doublet is not complete (e.g., Phillips et al. 2020).

ACKNOWLEDGEMENTS

[...] some of the spectra for this study. HJ was supported by STFC grant ST/R000905/1. YP and YL were funded as part of the routine financing programme for institutes of the National Academy of Sciences of Ukraine. YP acknowledges financial support from the Jesús Serra Foundation through its "Visiting Researchers Programme" and from the visitor programme of the Centre of Excellence "Severo Ochoa" award to the Instituto de Astrofísica de Canarias (CEX2019-000920-S). This research has made use of the VALD database, operated at Uppsala University, and NASA's Astrophysics Data System Bibliographic Services (ADS).
Quasi-Elliptic Cohomology and its Power Operations

Quasi-elliptic cohomology is a variant of Tate K-theory. It is the orbifold K-theory of a space of constant loops. For global quotient orbifolds, it can be expressed in terms of equivariant K-theories. In this paper we show how this theory is equipped with power operations. We also prove that the Tate K-theory of symmetric groups modulo a certain transfer ideal classifies the finite subgroups of the Tate curve.

2010 Mathematics Subject Classification: Primary 55. The author was partially supported by NSF grant DMS-1406121.

Introduction

An elliptic cohomology theory is an even periodic multiplicative generalized cohomology theory whose associated formal group is the formal completion of an elliptic curve. It is an old idea of Witten, as shown in [31], that the elliptic cohomology of a space X is related to the T-equivariant K-theory of the free loop space LX = C^∞(S^1, X), with the circle T acting on LX by rotating loops. It is surprisingly difficult to make this precise, especially if one wishes to consider equivariant generalizations of this construction. In this case the loop space LX with the natural rotation action is a rich orbifold. In this paper we offer a new connection between the loop space and Tate K-theory via a new theory which we call quasi-elliptic cohomology.

Tate K-theory is the generalized elliptic cohomology associated to the Tate curve. The Tate curve Tate(q) is an elliptic curve over Spec Z((q)), which can be described as the completion at infinity of the algebraic stack of certain nice generalized elliptic curves. A good reference for Tate(q) is Section 2.6 of [1]. We give a sketch of it in Section 6.1. The relation between Tate K-theory and string theory is better understood than for most known elliptic cohomology theories. The definition of G-equivariant Tate K-theory for finite groups G is modelled on the loop space of a global quotient orbifold, which is formulated explicitly in Section 2, [20]. Its relation with string theory and loop spaces makes Tate K-theory itself a distinctive subject to study.

The idea of quasi-elliptic cohomology is motivated by Ganter's construction of Tate K-theory. It is not itself an elliptic cohomology theory, but from it we can recover Tate K-theory. This new theory can be interpreted in a neat form via equivariant K-theories, which makes many constructions on it easier and more natural than those on Tate K-theory. Some formulations can be generalized to other equivariant cohomology theories. In addition, quasi-elliptic cohomology provides a method that reduces questions such as the classification of geometric structures on the Tate curve to questions in representation theory.

1.1. Loop Space. Quasi-elliptic cohomology is modelled on a version of equivariant loop space. For background on orbifolds and Lie groupoids, we refer the readers to Sections 2 and 3 of [32] and to [39]. For any compact Lie group G and a manifold X with a smooth G-action, there is a Lie groupoid X//G, which is explained in detail in Chapter 11, [12]. Smooth unbased loops in the orbifold X//G carry a lot of structure: they include loops represented by smooth maps γ : R → X such that γ(t + 1) = γ(t)g for some g ∈ G; and in addition to the action of the loop group LG := C^∞(S^1, G), the loop space also carries a circle action by rotation.
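In symbols, and fixing a sign convention for the rotation (our choice; the equivalent convention with t - s would do equally well), the loops in question and the circle action are:

```latex
\[
  \bigl\{\, \gamma : \mathbb{R} \longrightarrow X \ \big|\ \gamma(t+1) = \gamma(t)\,g
  \ \text{for some } g \in G \,\bigr\},
  \qquad
  (s \cdot \gamma)(t) := \gamma(t+s)
  \quad \text{for } s \in \mathbb{T} = \mathbb{R}/\mathbb{Z}.
\]
```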
Lerman discussed thoroughly in Section 3, [32] that the strict 2-category of Lie groupoids can be embedded into a weak 2-category whose objects are Lie groupoids, whose 1-morphisms are bibundles, and whose 2-morphisms are equivariant diffeomorphisms between bibundles. Thus, the free loop space of an orbifold M is the category of bibundles from the trivial groupoid S^1// * to the Lie groupoid M. We will write Loop_1(X//G) := Bibun(S^1// *, X//G), which is discussed in Definition 2.2. In Definition 2.3, we extend Loop_1(X//G) to a groupoid Loop^ext_1(X//G) by adding rotations as morphisms. In particular, we are interested in the ghost loop groupoid GhLoop(X//G), which is defined to be the full subgroupoid of Loop^ext_1(X//G) consisting of objects (π, f) with the image of f contained in a single G-orbit. Ghost loops were introduced by Rezk in his unpublished manuscript [42]; another reference is Section 2. Ghost loops can be computed locally, so the groupoid satisfies a kind of Mayer-Vietoris property. In addition, if H is a closed subgroup of G and X is the quotient space G/H, then GhLoop(X//G) is equivalent to GhLoop(pt//H). In other words, it has the change-of-group property. When G is finite, GhLoop(X//G) is isomorphic to the full subgroupoid Λ(X//G) of Loop^ext_1(X//G) consisting of constant loops. This groupoid Λ(X//G) can be regarded as an extended version of the inertia groupoid I(X//G). Please see Definition 3.7 for the inertia groupoid.

1.2. Quasi-elliptic cohomology. For any compact orbifold groupoid G, the orbifold K-theory K_orb(G) is defined to be the Grothendieck ring of isomorphism classes of G-vector bundles on G. In particular, K_orb(X//G) is K_G(X). A reference for orbifold K-theory is Chapter 3, [3], and a reference for equivariant K-theory is [46]. Quasi-elliptic cohomology QEll^*(X//G) is defined to be the orbifold K-theory of a subgroupoid Λ(X//G) of GhLoop(X//G) consisting of constant loops. When G is a finite group, QEll^*_G(X) can be expressed in terms of the equivariant K-theory of X and its subspaces as

(1.1) QEll^*_G(X) ≅ ∏_{g ∈ G_conj} K^*_{Λ_G(g)}(X^g),

where G_conj is a set of representatives of the conjugacy classes in G, X^g is the g-fixed subspace, and Λ_G(g) = C_G(g) × R / ⟨(g, -1)⟩ is the group studied in Section 3.1. Tate K-theory is recovered from quasi-elliptic cohomology via the identification

(1.2) (K_Tate)^*_G(X) ≅ QEll^*_G(X) ⊗_{Z[q^±]} Z((q)).

1.4. Classification of the finite subgroups of the Tate curve. Though the general formulas for the power operations in QEll_G are complicated, it is useful to consider special cases in order to understand them. It is already illuminating to consider the case where X is a point and G is the trivial group: the power operation has a neat form, as shown in Example 4.13. It has a natural interpretation in terms of the Tate elliptic curve. In Section 6.3, applying the power operation, we prove that the Tate K-theory of symmetric groups modulo the transfer ideal classifies the finite subgroups of the Tate curve, which is analogous to the principal result in Strickland [49] that the Morava E-theory of the symmetric group Σ_n modulo a certain transfer ideal classifies the power subgroups of rank n of the formal group G_E. The finite subgroups of order N of the Tate curve are classified by the ring ∏_{N=de} Z((q))[q']/⟨q^d - q'^e⟩, with the product over ordered pairs of positive integers (d, e) such that N = de. First we prove the parallel conclusion for quasi-elliptic cohomology.

Theorem 1.1. QEll_{Σ_N}(pt)/I^QEll_tr ≅ ∏_{N=de} Z[q^±][q']/⟨q^d - q'^e⟩, where I^QEll_tr is the transfer ideal defined in (6.4) and q' is the image of q under the power operation P_N.

Then, applying the relationship between QEll^* and Tate K-theory, we obtain the main theorem: (K_Tate)_{Σ_N}(pt)/I^Tate_tr ≅ ∏_{N=de} Z((q))[q']/⟨q^d - q'^e⟩, where q' is the image of q under the power operation P_Tate constructed in Definition 5.10, [20]. Moreover, via the isomorphism in Theorem 1.1, we can define a ring homomorphism QEll_G(X) → QEll_G(X) ⊗_{Z[q^±]} QEll_{Σ_N}(pt)/I^QEll_tr, as shown in Proposition 6.5.
Under the identification (1.2), it extends uniquely to the ring homomorphism P string N : (K T ate ) G (X) −→ (K T ate ) G (X) ⊗ Z((q)) (K T ate ) ΣN (pt)/I T ate tr constructed in Section 5.4, [20]. In [28] we construct the universal finite subgroup of the Tate curve via the operation P N . 1.5. Acknowledgement. I would like to thank Charles Rezk who is always a wonderful advisor and very inspiring teacher. Most of this work was guided by him and it is a great experience to work with him. I would also like to thank Matthew Ando. We had many mathematical discussions, which are also important for my work. At last, I would like to thank the editors and the referee for spending time on reading this work and for their constructive and deep suggestion. Models for orbifold loops and ghost loops To understand QEll * G (X), it is essential to understand the orbifold loop space. In this section, we will describe several models for the loop space of X/ /G. In Definition 2.2 we discuss Loop 1 (X/ /G) and introduce another model The groupoid structure of Loop 1 (X/ /G) generalizes M ap(S 1 , X)/ /G, which is a subgroupoid of it. Other than the G−action, we also consider the rotation by the circle group T on the objects and form the groupoids Loop ext 1 (X/ /G) and Loop ext 2 (X/ /G). The groupoid Loop ext 2 (X/ /G) has a skeleton L(X/ /G) := where each 1 L g X = Map Z/lZ (R/lZ, X) with l the order of g is equipped with an evident C G (g)−action. L(X/ /G) has the same space of objects as the groupoid L(X/ /G) discussed in Definition 2.3, [35], from which equivariant Tate K-theory is defined. It has richer morphisms. The circle group T acts on R/lZ by rotation, and so in principle on the orbifold 1 L g X. The key groupoid Λ(X/ /G) in the construction of quasi-elliptic cohomology is the full subgroupoid of L(X/ /G) consisting of the constant loops. In order to unravel the relevant notations in the construction of QEll * G (X), we study the orbifold loop space in Section 2.1.2 and Section 2.1.3. In Section 2.1.1 we define Loop 1 (X/ /G). In Section 2.1.2 we interpret the enlarged groupoid Loop ext 1 (X/ /G) and introduce a skeleton L(X/ /G) of it. In Section 2.1.3 we show the construction of quasi-elliptic cohomology by ghost loops. In Section 3.1 we show the representation ring of Λ G (g). In Section 3.2 we introduce the construction of quasi-elliptic cohomology first in terms of orbifold K-theory and then equivariant K-theory. We show the properties of the theory in Section 3.3. Loop space. 2.1.1. Bibundles. A standard reference for groupoids and bibundles is Section 2 and 3, [32]. For each pair of Lie groupoids H and G, the bibundles from H to G are defined in Definition 3.25, [32]. The category Bibun(H, G) has bibundles from H to G as the objects and bundle maps as the morphisms. Example 2.1 (Bibun(S 1 / / * , * / /G)). According to the definition, a bibundle from S 1 / / * to * / /G with G a Lie group is a smooth manifold P together with two maps π : P −→ S 1 a smooth principal G−bundle and the constant map r : P −→ * . So a bibundle in this case is equivalent to a smooth principal G−bundle over S 1 . The morphisms in Bibun(S 1 / / * , * / /G) are bundle isomorphisms. Definition 2.2 (Loop 1 (X/ /G)). Let G be a Lie group acting smoothly on a manifold X. We use Loop 1 (X/ /G) to denote the category Bibun(S 1 / / * , X/ /G), which generalizes Example 2.1. Each object consists of a smooth manifold P and two structure maps P π −→ S 1 a smooth principal G−bundle and f : P −→ X a G−equivariant map. 
We use the same symbol P to denote both the object and the smooth manifold when there is no confusion. A morphism is a G−bundle map α : P −→ P ′ making the diagram below commute. Thus, the morphisms in Loop 1 (X/ /G) from P to P ′ are bundle isomorphisms. Only the G−action on X is considered in Loop 1 (X/ /G). We add the rotations by adding more morphisms into the groupoid. Definition 2.3 (Loop ext 1 (X/ /G)). Let Loop ext 1 (X/ /G) denote the groupoid with the same objects as Loop 1 (X/ /G). Each morphism consists of the pair (t, α) where t ∈ T is a rotation and α is a G−bundle map. They make the diagram below commute. Moreover, we can extend the groupoid Loop 2 (X/ /G) by adding the rotations. Therefore, the groupoid Loop ext 2 (X/ /G) contains all the information of Loop ext 1 (X/ /G). Next we will show a skeleton of this larger groupoid when G is finite. Before that, we introduce some symbols. Let k ≥ 0 be an integer and g an element in the compact Lie group G. Let L k g G denote the twisted loop group The multiplication of it is defined by , for any δ, δ ′ ∈ L k g G, and t ∈ R. The identity element e is the constant map sending all the real numbers to the identity element of G. We extend this group by adding the rotations. Let L k g G ⋊ T denote the group with elements (γ, t), γ ∈ L k g G and t ∈ T. The multiplication is defined by The set of constant maps R −→ G in L k g G is a subgroup of it, i.e. the centralizer C G (g). When G is finite, L k g G = C G (g). When G is finite, the objects of Loop 2 (X/ /G) can be identified with the space , and l is the order of the element g. The cyclic group Z/lZ is isomorphic to the subgroup kZ/klZ of R/klZ. The isomorphism Z/lZ −→ kZ/klZ sends the generator [1] corresponding to 1 to the generator [k] of kZ/klZ corresponding to k. kZ/klZ acts on R/klZ by group multiplication. Thus, via the isomorphism, Z/lZ acts on R/klZ. Z/lZ is also isomorphic to the cyclic group g by identifying the generater [1] with g. So it acts on X via the G−action on it. 1 L g X/ /L 1 g G is a full subgroupoid of Loop 2 (X/ /G). Moreover, 1 L g X/ /L 1 g G ⋊ T is a full subgroupoid of Loop ext 2 (X/ /G) where L k g G ⋊ T acts on k L g X by (2.5) δ · (γ, t) := (s → δ(s + t) · γ(s + t)), for any (γ, t) ∈ L k g G ⋊ T, and δ ∈ k L g X. The action by g on k L g X coincides with that by k ∈ R. So we have the isomorphism where g represents the constant loop T −→ {g} ⊆ G. In fact we have already proved Proposition 2.7. Proposition 2.7. Let G be a finite group. The groupoid is a skeleton of Loop ext 2 (X/ /G), where the coproduct goes over conjugacy classes in π 0 G. Next we show the physical meaning of L 1 σ G. Recall that the gauge group of a principal bundle is defined to be the group of its vertical automorphisms. The readers may refer [38] for more details. For a G−bundle P −→ S 1 , let L P G denote its gauge group. We have the well-known facts below. Lemma 2.8. The principal G−bundles over S 1 are classified up to isomorphism by homotopy classes Up to isomorphism every principal G−bundle over S 1 is isomorphic to one of the forms P σ −→ S 1 with σ ∈ G and P σ := R × G/(s + 1, g) ∼ (s, σg). A complete collection of isomorphism classes is given by a choice of representatives for each conjugacy class of π 0 G. For the gauge group L Pσ G we have the conclusion below. Proposition 2.9. For the bundle P σ −→ S 1 , L Pσ G is isomorphic to the twisted loop group L 1 σ G. Proof. Each automorphism f of the bundle P σ −→ S 1 has the form for some γ f : R −→ G. 
The morphism is well-defined if and only if γ f (s + 1) = σ −1 γ f (s)σ. So we get a well-defined map It is a bijection. Moreover, by the property of group action, F sends the identity map to the constant map R −→ G, s → e, which is the trivial element in L 1 σ G, and for two automorphisms f 1 and f 2 at the object, F (f 1 • f 2 ) = γ f1 · γ f2 . So L Pσ G is isomorphic to L 1 σ G. Ghost Loops. Let G be a compact Lie group and X a G−space. In this section we introduce a subgroupoid GhLoop(X/ /G) of Loop ext 1 (X/ /G), which can be computed locally. Definition 2.10 (Ghost Loops). The groupoid of ghost loops is defined to be the full subgroupoid GhLoop(X/ /G) of Loop ext 1 (X/ /G) consisting of objects S 1 ← P δ → X such that δ(P ) ⊆ X is contained in a single G−orbit. For a given σ ∈ G, define the space We have a corollary of Proposition 2.7 below. where the coproduct goes over conjugacy classes in π 0 G. Example 2.12. If G is a finite group, it has the discrete topology. In this case, LG consists of constant loops and, thus, is isomorphic to G. The space of objects of GhLoop(X/ /G) can be identified with X. For σ ∈ G and any integer k, L k σ G can be identified with Unlike true loops, ghost loops have the property that they can be computed locally, as shown in the lemma below. The proof is left to the readers. It sends a morphism In addition, we can define a functor F ′ : We can prove the equivalence between GhLoop((G/H)/ /G) and GhLoop(pt/ /H) in the same way. Remark 2.15. In general, if H * is an equivariant cohomology theory, Proposition 2.14 implies the functor gives a new equivariant cohomology theory. When H * has the change of group isomorphism, so does H * (GhLoop(−)). 3. Quasi-elliptic cohomology QEll * G Unless otherwise indicated, we assume G is a finite group and X is a G−space in the rest part of the paper. The main references for Section 3 are Rezk's unpublished work [41] and the author's PhD thesis [26]. The construction of the theory QEll * G for any compact Lie group G will be shown in the paper [27]. In Section 3.2 we define QEll * G and prove some of its main properties. Before that we discuss in Section 3.1 the complex representation ring of , which is a factor of QEll * G (pt). We assume familiarity with [46] and [7]. 3.1. Preliminary: representation ring of Λ G (g). Let q : We have an exact sequence where the first map is g → [g, 0] and the second map is The map π * : RT −→ RΛ G (g) equips the representation ring RΛ G (g) the structure as an RT−module. There is a relation between the complex representation ring of C G (g) and that of Λ G (g), which is shown as Lemma 1. Lemma 3.1. The RT−module RΛ G (g) with the action defined by π * : RT −→ RΛ G (g) is a free module. In particular, there is an RT−basis of RΛ G (g) given by irreducible representations {V λ }, such that restriction V λ → V λ | CG(g) to C G (g) defines a bijection between {V λ } and the set {λ} of irreducible representations of C G (g). Proof. Let l be the order of g. Note that Λ G (g) is isomorphic to Thus, it is the quotient of the product of two compact Lie groups. Let λ : C G (g) −→ GL(n, C) be an n−dimensional C G (g)−representation with representation space V and η : R −→ GL(n, C) be a representation of R such that λ(g) acts on V via scalar multiplication by η(1). Define a n−dimensional Any irreducible n−dimensional representation of the quotient group Λ G (g) = C G (g) × R/ (g, −1) is an irreducible n−dimensional representation of the product C G (g) × R. 
And any finite dimensional irreducible complex representation of the product of two compact Lie groups is the tensor product of an irreducible representation of each factor. So any irreducible representation of the quotient group Λ G (g) is the tensor product of an irreducible representation λ of C G (g) with representation space V and an irreducible representation η of R. Any m ∈ Z gives a choice of η in this case. And η is a representation of R/lZ ∼ = T. Therefore, we have a bijective correspondence between (1) isomorphism classes of irreducible Λ G (g)−representation ρ, and (2) isomorphism classes of pairs (λ, η) where λ is an irreducible C G (g)−representation and η : R −→ C * is a character such that λ(g) = η(1)I. λ = ρ| CG(g) . where X is the sign representation on Σ 3 and Y is the standard representation. Thus, the computation of both Ind can be reduced to the computation of representations of finite groups. The proof is straightforward and left to the readers. Let k be any integer. Next we describe the relation between , which gives the relation between their representation rings. There is an exact sequence In other words, There is a group isomorphism α k : Observe that there is a pullback square of groups So we have the commutative square of a pushout square in the category of λ−rings. It gives a canonical isomorphism of λ−rings RΛ G (g) −→ RΛ k G (g) sending q to q 1 k . A good reference for λ−rings is Chapter 1 and 2, [50]. Quasi-elliptic cohomology. In this section we introduce the definition of quasi-elliptic cohomology QEll * G in terms of orbifold K-theory, and then express it via equivariant K-theory. We assume familiarity with [46]. The reader may read Chapter 3 in [3] and [39] for a reference of orbifold K-theory. When G is finite, quasi-elliptic cohomology is defined from the ghost loops in Definition 2.10. By proposition 2.11 and Example 2.12, we can see the groupoid GhLoop(X/ /G) is equivalent to the disjoint union of some translation groupoids. Before describing this equivalent groupoid Λ(X/ /G) in detail, we recall what inertia groupoid is. A reference for that is Section 4, [34]. An object a is an arrow in G such that its source and target are equal. A morphism v joining two objects a and b is an Definition 3.9. The groupoid Λ(X/ /G) has the same objects as I(X/ /G) but richer morphisms For an object x ∈ X g and a morphism ( The composition of the morphisms is defined by . We can unravel the definition and express it via equivariant K-theory. Then we have Proposition 3.11. where G conj is a set of representatives of G−conjugacy classes in G. Thus, for each g ∈ Λ G (g), we can define the projection We have the ring homomorphism 3.3. Properties. In this section we discuss some properties of QEll * G , including the restriction map, the Künneth map on it, its tensor product and the change-of-group isomorphism. Since each homomorphism φ : Moreover, we can define Künneth map of quasi-elliptic cohomology induced from that on equivariant K-theory. Let G and H be two finite groups. Consider the composition below where the first map is the Künneth map of equivariant K-theory, the second is the restriction map and the third is the isomorphism induced by the group isomorphism For any g ∈ G, let 1 denote the trivial line bundle over X g and let q denote the line bundle 1 ⊙ C q over X g . The map T above sends both 1 ⊗ q and q ⊗ 1 to q. So we get the well-defined map τ ) ). Definition 3.13. 
The tensor produce of quasi-elliptic cohomology is defined by The direct product of the maps defined in (3.15) gives a ring homomorphism , which is the Künneth map of quasi-elliptic cohomology. By Lemma 3.1 we have . More generally, we have the proposition below. Proposition 3.14. Let X be a G × H−space with trivial H−action and let pt be the single point space with trivial H−action. Then we have Especially, if G acts trivially on X, we have Proposition 3.15. If G acts freely on X, Since T acts trivially on X, we have K * T (X/G) = QEll * e (X/G) by definition. It is isomorphic to K * (X/G) ⊗ RT. We also have the change-of-group isomorphism as in equivariant K-theory. Let H be a subgroup of G and X a H-space. The first map is Λ G (τ )−equivariant and the second is equivariant with respect to the homomorphism c gτ : Taking a coproduct over all the elements τ ∈ H conj that are conjugate to σ ∈ G conj in G, we get an isomorphism It is straightforward to check the change-of-group map coincide with the composite Power Operation In Section 4.2 we define power operations for equivariant quasi-elliptic cohomology QEll * G (−). We show in Theorem 4.12 that they satisfy the axioms that Ganter established in Definition 4.3, [19] for equivariant power operations. The power operation of quasi-elliptic cohomology is of the form : where P n maps a bundle over the groupoid to a bundle over Λ(X ×n / /(G ≀ Σ n )), and each P (g,σ) maps a bundle over We construct each P (g,σ) as the composition below. where k ∈ Z and (i 1 , · · · i k ) goes over all the k−cycles of σ. We explain the first three functors in detail in Section 4.2. In Section 4.1 we construct the isomorphism f (g,σ) between the groupoid Λ(X ×n / /(G ≀ Σ n )) and the groupoid d((X/ /G) ≀ Σ n ) constructed in Definition 4.5. With it, it is convenient to construct the explicit formula of the power operation. . For an introduction of actions of wreath product G ≀ Σ n on X ×n and symmetric power G ≀ Σ n of a groupoid G, we refer the readers to Section 4.1, [20]. The symmetric power (X/ /G) ≀ Σ n is isomorphic to X ×n / /(G ≀ Σ n ). Before introducing the groupoid d((X/ /G) ≀ Σ n ), we need to introduce several ingredients. Definition 4.1 (Λ k (X/ /G)). The groupoid Λ k (X/ /G) has the same objects as Λ(X/ /G) but different morphisms For an object x ∈ X g and a morphism ( The composition of the morphisms is defined by Definition 4.2 (Fibred wreath product). The groupoid Λ k (X/ /G) ≀ T Σ N is defined to be the subgroupoid of the symmetric power Λ k (X/ /G) ≀ Σ N with the same objects but only those morphisms with all the t j s having the same image under the quotient map R/kZ −→ R/Z. The isotropy group of each object in Let Y be an H−space. Definition 4.3 (Fibred product and fibred coproduct). The groupoid with the same objects but only those morphisms with all the t i,ji s having the same image under the quotient map R/k i Z −→ R/Z, for i = 1, 2 and j i = 1, · · · N i . The isotropy group of each object in We can define the fibred coproduct Let σ ∈ Σ n correspond to the partition n = k kN k , i.e. it has N k k−cycles. For (g, σ) ∈ G ≀ Σ n , we consider the orbits of the bundle G × n −→ n under the action by (g, σ). The orbits of n under the action by σ corresponds to the cycles in the cycle decomposition of σ. The bundle G × n −→ n is the disjoint union of the G−bundles where (i 1 , · · · i k ) goes over all the cycles of σ. Each bundle G × {i 1 , · · · i k } −→ {i 1 , · · · i k } is an orbit of G × n −→ n under the action by (g, σ). 
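A small worked example of this orbit decomposition may help (our own illustration, in the paper's notation):

```latex
% Take n = 5 and \sigma = (1\,2\,3)(4\,5) \in \Sigma_5, and let
% (g,\sigma) \in G \wr \Sigma_5 with g = (g_1,\dots,g_5).
% The \sigma-orbits of \underline{5} = \{1,\dots,5\} are the cycles
% \{1,2,3\} and \{4,5\}, so the bundle
% G \times \underline{5} \to \underline{5} splits into the two
% (g,\sigma)-orbits
\[
   G \times \{1,2,3\} \longrightarrow \{1,2,3\}
   \qquad\text{and}\qquad
   G \times \{4,5\} \longrightarrow \{4,5\},
\]
% to which are attached the cycle products
\[
   g_3\,g_2\,g_1 \in G
   \qquad\text{and}\qquad
   g_5\,g_4 \in G,
\]
% i.e. the elements g_{i_k}\cdots g_{i_1} associated to the k-cycles
% (i_1,\dots,i_k) of \sigma in the surrounding text.
```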
There is a bijection between W σ i and the set {j = (j 1 , · · · j k ) | (j 1 , · · · j k ) is a k-cycle of σ and C G (g i k · · · g i1 , g j k · · · g j1 ) is nonempty}. denote all the elements of the set W σ i . Obviously, i = (i 1 , · · · i k ) is in W σ i . So we can assume it is α i 1 . For any k−cycle i and m−cycle j of σ, if k = m and C G (g i k · · · g i1 , g j k · · · g j1 ) is nonempty, W σ i and W σ j are the same set. Otherwise, they are disjoint. The set of all the k−cycles of σ can be divided into the disjoint union of several W σ i s. We can pick a set of representatives θ k of k−cycles of σ such that the set of k−cycles of σ equals the disjoint union Definition 4.4 (d (g,σ) (X)). The groupoid d (g,σ) (X) is defined to be a full subgroupoid of where the second product goes over all the k−cycles of σ. To study K orb (d (g,σ) (X)), we start by studying the representation ring of the wreath product Theorem 4.7 gives all the irreducible representations of a wreath product. It is Theorem Theorem 4.3.34 in [29]. The proof of Theorem 4.8 is analogous to that of Theorem 4.7 in [29], applying Clifford's theory in [13] and [14]. Note that goes over all the irreducible representations of the fibred product 1 be a basis of the Z[q ± ]−module RΛ G (σ) and let V k be the corresponding representation space for ρ k . Let (n) be a partition of n. The representation ring which is induced by X −→ pt. Theorem 4.9. The two groupoids (X ×n ) (g,σ) / /Λ G≀Σn (g, σ) and d (g,σ) (X) are isomorphic. Thus, this isomorphism induces a Λ G≀Σn (g, σ)−action on the space Proof. We construct the inverse functor be a morphism in d (g,σ) (X). Let t be a representative of the image of m ′i 1 in R/Z. Then, each m i k := m ′i k − t is an integer. When we know how τ ∈ C Σn (σ) permutes the cycles of σ, whose information is contained in those ̺ k i ∈ Σ M σ i , and the numbers m i 1 , · · · m i M σ i , we can get a unique τ . Explicitly, for any number j r = 1, 2 · · · n, if j r is in a k−cycle (j 1 , · · · j k ) of σ and it is in the set W σ i , then τ maps j r to ̺ k i (j) r+m i j , i.e. the r + m i j -th element in the cycle ̺ k i (j) of σ. For any a ∈ W σ i , ∀k and i, we want u i a = β h,τ τ (a),a for some h. Thus, By (4.7) we can get all the other h τ (a)j . It can be checked straightforward that J (g,σ) is a well-defined functor. It does not depend on the choice of the representative t. J (g,σ) • f (g,σ) = Id; f (g,σ) • J (g,σ) = Id. So the conclusion is proved. Then by Proposition 4.6, we get the main conclusion in Section 4.1. (iii)f (g,σ) preserves cartesian product of loops. The following diagram of groupoids commutes. Proof. (i) is indicated in the proof of Theorem 4.9. (iii) The proof is left to the readers. Total Power Operation of QEll * G . In this section we construct the total power operations for quasi-elliptic cohomology and give its explicit formula in (4.17). We show in Theorem 4.12 that they satisfy the axioms that Ganter concluded in Definition 4.3, [19] for equivariant power operation. The Functor U For each (g, σ) ∈ G ≀ Σ n , r ∈ Z, let Λ r (g,σ) (X) denote the groupoid with objects where (i 1 , · · · i k ) goes over all the k−cycles of σ, and with morphisms where (i 1 , · · · i k ) and (j 1 , · · · j k ) go over all the k−cycles of σ respectively. It may not be a subgroupoid of Λ r (X/ /G) because there may be cycles (i 1 , · · · i k ) and (j 1 , · · · j m ) such that g i k · · · g i1 = g jm · · · g j1 . 
Let (4.10) U : Λ 1 (g,σ) (X) −→ Λ(X/ /G) denote the functor sending x in the component X gi k ···gi 1 to the x in the component X gi k ···gi 1 of Λ(X/ /G), and send each morphism In the case that g i k · · · g i1 and g j k · · · g j1 are equal, ([h, t], x) is an arrow inside a single connected component. The functor ( ) k gives a well-defined map K orb (Λ(X/ /G)) −→ K orb (Λ k (X/ /G)) by pullback of bundles. We still use the symbol ( ) k to denote it when there is no confusion. For any Λ(X/ /G)−vector bundle V, S 1 acts on (V) k via If V has the decomposition V = j∈Z V j q j , then The Functor ( ) Λ k Let Λ var (g,σ) (X) be the groupoid with the same objects as Λ 1 (g,σ) (X) and morphisms where (i 1 , · · · i k ) and (j 1 , · · · j k ) go over all the k−cycles of σ respectively. We can define a similar functor that is identity on objects and sends each [g, t] ∈ Λ k G (g i k · · · g i1 , g j k · · · g j1 ) to [g, t k ] ∈ Λ 1 G (g i k · · · g i1 , g j k · · · g j1 ). We use the same symbol ( ) Λ k to denote the pull back (4.13) K orb (Λ 1 (g,σ) (X)) −→ K orb (Λ var (g,σ) (X)). The external product ⊠ Let Y an H-space, (g, σ) ∈ G ≀ Σ n and (h, τ ) ∈ G ≀ Σ m . Each K * orb (d (g,σ) (X)) is a Z[q ± ]−algebra, as shown in Section 4.1.1. The external product in the theory K * orb (d (g,σ) (−)) is defined to be the tensor product of Z[q ± ]−algebras. The fibred product d (g,σ) (X) × T d (h,τ ) (X) has the same objects as d (g,h,στ ) (X) and is a subgroupoid of it. So we have the Künneth map ) It is compatible with the Künneth map (3.15) of the quasi-elliptic cohomology in the sense that the diagram below commutes. Theorem 4.12. The family of maps The external product of two power operations The composition of two power operations is where (h, τ ) ∈ (G ≀ Σ m ) ×n , and σ ∈ Σ n . (τ , σ) is in Σ m ≀ Σ n , thus, can be viewed as an element in Σ mn . is exactly (iii) Recall that for an element (τ , σ) = (τ 1 , · · · τ n , σ) ∈ Σ mn , it acts on the set with mn elements in this way: That also shows how to view it as an element in Σ mn . where (i 1 , · · · i k ) goes over all the k-cycles of σ ∈ Σ m and (j 1 , · · · j r ) goes over all the r-cycles of τ i k · · · τ i1 ∈ Σ n . The last step is by Proposition 4.11 in [20]. Remark 4.14. We have the relation between equivariant Tate K-theory and quasi-elliptic cohomology It extends uniquely to a power operation for Tate K-theory which is the stringy power operation P string n constructed in Definition 5.10, [20]. It is elliptic in the sense of [2]. Orbifold quasi-elliptic cohomology and its power operation The elliptic cohomology of orbifolds involves a rich interaction between the orbifold structure and the elliptic curve. Ganter explores this interaction in the case of the Tate curve in [21], describing K T ate for an orbifold X in terms of the equivariant K-theory and the groupoid structure of X. In Section 5.1 we give a description of orbifold quasi-elliptic cohomology. In Section 5.2 we discuss the inertia groupoid of symmetric power and the groupoids needed for the construction of the power operation in Section 5.3. Definition. We have two ways to define orbifold quasi-elliptic cohomology. The first one is motivated by Ganter's definition of orbifold Tate K-theory in Section 2, [21]. The other one is a generalization of the definition of quasi-elliptic cohomology in Section 3.2. We consider the category of groupoids Gpd as a 2-category with small topological groupoids as the objects and with 1Hom(X, Y ) = F un(X, Y ). 
This 2-category is different from that in Section 3 [32]. Let Gpd cen denote the 2-category of centers of groupoids defined in Section 2, [21]. Ganter constructed in Example 2.3 [21] a 2-functor for any k ∈ Z Gpd −→ Gpd cen where ξ k is the center element of the inertia groupoid I(X) sending (x, g) to (x, g k ). Definition 5.1. For any topological groupoid X, the quasi-elliptic cohomology QEll * (X) is the orbifold K-theory (5.1) K * orb (pt/ /R × 1∼ξ I(X)). In other words, for a topological groupoid X, QEll(X) is defined to be a subring of K orb (X) q ± 1 |ξ| that is the Grothendieck group of finite sums a∈Q V a q a satisfying: for each a ∈ Q, the coefficient V a is an e 2πia − eigenbundle of ξ. In the global quotient case, In addition, for any topological groupoid X, we can also consider the category and formulate Loop ext 1 (X) by adding the rotation action by circle, as the construction in Section 2.1.2. Afterwards we can construct the subgroupoid Λ(X) of Loop ext 1 (X) consisting of the constant loops, which is isomorphic to pt/ /R× 1∼ξ I(X). So in this way we give an equivalent definition of orbifold quasi-elliptic cohomology. Symmetric powers of orbifolds and its inertia groupoid. In this section we introduce the groupoids necessary for the construction of the power operation. In Lemma 5.3, 5.4 and 5.5 we show the relation between them. For groupoids like pt/ /R × k∼ξ X, instead of the total symmetric power (Definition 3.1, [21]) S(pt/ /R × k∼ξ X), we consider a subgroupoid Definition 5.2 (The groupoid S R (pt/ /R × k∼ξ X)). Let be the functor sending all the objects to the single point, and an arrow to t mod Z. Let × R (pt/ /R × k∼ξ X) denote the limit of the diagram of groupoids Let × n R (pt/ /R × k∼ξ X) denote the limit of n morphisms ρ k s. It inherits a Σ n −action on it by permutation from that on the product (pt/ /R × k∼ξ X) ×n . Let S R n (pt/ /R × k∼ξ X) denote the groupoid with the same objects as × n R (pt/ /R × k∼ξ X) and morphisms of the form ([g 1 , t 1 ], · · · [g n , t n ]; σ) with ([g 1 , t 1 ], · · · [g n , t n ]) a morphism in × n R (pt/ /R × k∼ξ X) and σ ∈ Σ n . This new groupoid S R n (pt/ /R × k∼ξ X) is a subgroupoid of (pt/ /R × k∼ξ X) ≀ Σ n . Define (5.2) The triple is a symmetric monoid where * is the concatenation and the unit ( ) is the unique object in X ≀ Σ 0 . S R (pt/ /R × k∼ξ X) is the symmetric product that we will use to formulate the power operation. Lemma 5.3. Let Φ k (X) denote the groupoid in Definition 3.3, [21], and φ k ∈ Center(Φ k ) denote the restriction of S k (ξ) to Φ k . For each integer k ≥ 1, there is an equivalence between pt/ /R × 1∼φ k Φ k (X) and the groupoid pt/ /R × 1∼ξ k is an added element such that the composition of k ξ 1 k s is ξ. Lemma 5.5. We have an equivalence of groupoids which is natural in X and satisfies Proof. Let I be the inclusion Let ǫ be the counit of the adjunction (S, * , ( )) ⊣ forget. Let Q denote the composition Let Q R be the restriction of Q to the subgroupoid S R (pt/ /R × 1∼φ Φ(X)), i.e. the composition The essential image of I consists exactly of the indecomposable objects of pt/ /R × 1∼S(ξ) I(S(X)), thus, both Q and Q R are essentially surjective. Q is not fully faithful but Q R is. This is why we need the product S R instead of S. Power Operation for orbifold quasi-elliptic cohomology. 
In this section we construct the total power operation for the orbifold quasi-elliptic cohomology P Ell : QEll(X) −→ QEll(SX) in (5.6), which satisfy the axioms that Ganter formulated in Definition 3.9, [21] for power operations for orbifold theories. The power operation we constructed in Section 4.2 is a special case of it for G−spaces. Example 5.6. We can construct Atiyah's power operation for orbifold quasielliptic cohomology. Let V be an orbifold vector bundle over the orbifold pt/ /R × 1∼ξ I(X), thus, V represents an element in QEll(X). Then is an orbifold vector bundle over So P n (V ) is in QEll * (S(X)). P = (P n ) n≥0 satisfies the axioms of a total power operation. Before the construction of the power operation of QEll, we introduce several maps necessary for the construction of the power operation. Let X be an orbifold groupoid and k ≥ 1 an integer. We define the map The functor ( ) k : Λ (g,σ) (X) −→ Λ 1 (g,σ) (X) defined in (4.11) is a special local case of s k when X is a G−space and (g, σ) is fixed. Let θ : QEll(X) −→ K orb (pt/ /R × 1∼φ Φ(X)) be the additive operation whose Now we are ready to define the total power operation P Ell of QEll * as the composition below: Theorem 5.7. P Ell satisfies the axioms of a total power operation in Definition 3.9 [21]. Proof. From the definition of P Ell , we can see it is a well-defined natural transformation QEll ⇒ QEll • S and is a comodule over the comonad (−) • S. In addition, the functor θ has the property of additivity The power operation P defined in Example 5.6 has the exponential property. Therefore, P Ell has the exponential property. So P Ell is a total power operation. Remark 5.8. Let X/ /G be a quotient orbifold. The power operation we construct in Section 4.1 for quotient orbifolds is in fact the one below. where J is constructed from the functors J (g,σ) in the proof of Theorem 4.9. For global quotient orbifolds, P Ell and P are the same up to isomorphism. The diagram commutes. The vertical maps k A * k and S R ( k A * k ) are both equivalences of groupoids. The horizontal maps are the power operation defined in Example 5.6. Finite subgroups of the Tate curve Strickland showed in [49] that the quotient of the Morava E-theory of the symmetric group by a certain transfer ideal can be identified with the product of rings k≥0 R k where each R k classifies subgroup-schemes of degree p k in the formal group associated to E 0 CP ∞ . In this section we prove similar conclusions for Tate Ktheory and quasi-elliptic cohomology. The main conclusion for Section 6 is Theorem 6.4. 6.1. Background. In this section we introduce the Tate curve and its finite subgroups. The main references are Section 2.6, [1] and Section 8.7, 8.8, [30]. An elliptic curve over the complex numbers C is a connected Riemann surface, i.e. a connected compact 1-dimensional complex manifold, of genus 1. By the uniformization theorem every elliptic curve over C is analytically isomorphic to a 1-dimensional complex torus, and can be expressed as C * /q Z with q ∈ C and 0 < |q| < 1, where C * is the multiplicative group C\{0}. The Tate curve T ate(q) is the elliptic curve E q : y 2 + xy = x 3 + a 4 x + a 6 whose coefficients are given by the formal power series in Z((q)) Before we talk about the torsion part of T ate(q), we recall a smooth onedimensional commutative group scheme T over Z[q ± ]. 
It sits in a short exact sequence of group-schemes over It fits into a short exact sequence The canonical extension structure on T (N ) is compatible with an alternating paring of Z[q ± ]−group schemes e N : T (N ) × T (N ) −→ µ N in the sense that e N (a N (x), y) = x bN (y) , for any Z[q ± ] − algebra R and any x ∈ µ N (R). We have the conclusion below, which is Theorem 8.7.5, [30]. Theorem 6.1. There exists a faithfully flat Z[q ± ]−algebra R, an elliptic curve E/R, and an isomorphism of ind-group-schemes over R Thus, we have the unique isomorphism of ind-group-schemes on Z((q)) The isomorphism is compatible with the canonical extension structure: for each Spec(Z((q))[x]/(x N − q k )). In addition, we have the question how to classify all the finite subgroups of T ate(q). As shown in Proposition 6.5.1, [30], the ring O Subn that classifies subgroups of T ate(q) of order n exists. To give a description of it, first we describe the isogenies for the analytic Tate curve over C. Let (d, e) be a pair of positive integers such that N = de and q ′ a nonzero complex number such that q d = q ′e . The map We have Moreover, we have the conclusion below. Proposition 6.3. The finite subgroups of the Tate curve are the kernels of isogenies. 6.2. Formulas for Induction. Before the main conclusion, we introduce the induction formula for quasi-elliptic cohomology. The induction formula for Tate K-theory is constructed in Section 2.3.3, [21]. Let H ⊆ G be an inclusion of finite groups and X be a G−space. Then we have the inclusion of the groupoids σ goes over all the conjugacy classes in H. The finite covering map is defined by sending an object (σ, [g, x]) to (σ, gx) and a morphism ([g ′ , t], (σ, [g, x])) to ([g ′ , t], (gx, σ)). The transfer of quasi-elliptic cohomology where the first map is the change-of-group isomorphism and the second is the finite covering. Thus where r goes over a set of representatives of (G/H) g , in other words, r −1 gr goes over a set of representatives of conjugacy classes in H conjugate to g in G. if there is no element conjugate to g in H. There is another way to describe the transfer, which is shown in Rezk's unpublished work [41] for quasi-elliptic cohomology. The transfer of Tate K-theory can be described similarly. 6.3. The main theorem. Theorem 6.4 gives a classification of finite subgroups of the Tate curve and a similar conclusion for the quasi-elliptic cohomology. We prove it in this section by representation theory. We assume the readers are familiar with the transfer ideal I tr of equivariant K-theory. References for that include Chapter II, [33] where q ′ is the image of q under the power operation P T ate constructed in Definition 3.15, [21]. The product goes over all the ordered pairs of positive integers (d, e) such that N = de. We have the analogous conclusion for quasi-elliptic cohomology. where q ′ is the image of q under the power operation P N constructed in Section 4.2. The product goes over all the ordered pairs of positive integers (d, e) such that N = de. We show the proof of (6.6). The proof of (6.5) is similar. Proof of (6.6). We divide the elements in Σ N into two cases. Case I: The decomposition of σ has cycles of different length. For example, the element For those σ that belong to Case I, Λ ΣN (σ) = Λ Σr×ΣN−r (σ), so Ind is the identity map, so K ΛΣ N (σ) (pt) is equal to Ind . Thus, the summand corresponding to σ in QEll(pt/ /Σ N ) is completely cancelled. Case II: σ consists of cycles of the same length. 
In other words, it consists of d e−cycles with N = de. K ΛΣ N (σ) (pt) is the representation ring RΛ ΣN (σ). According to Theorem 4.8, as a Z[q ± ]−module, it has a basis in which, for each a ∈ Z, q a e : Λ Ce ((12 · · · e)) −→ U (1) is the map (6.7) q a e ([(12 · · · e) j , t]) = e 2πia(j+t)/e . Namely, it is the map x a 1 in the sense of Example 3.3. For each partition (d) of d, if it has more than one cycle, Σ (d) is a subgroup of some Σ d1 × Σ d−d1 for some positive integer 0 < d 1 < d. So each such base element is induced from a representation of a proper subgroup, by the property of induced representations. Thus, each base element with r ≥ 2 is contained in the transfer ideal. When r = 1, consider (q a 1 e ) ⊗ Z[q ± ] d ⊗ D τ with τ ∈ RΣ d . As indicated in Proposition 1.1 and Corollary 1.5 in [5], each τ , except the trivial representation of Σ d , can be induced from a representation τ ′ in some R(Σ i × Σ d−i ) with d > i > 0. So each f j (q) in F is the zero polynomial. The kernel of Ψ is the ideal generated by q ′e − q d . From the power operation of quasi-elliptic cohomology, we can construct a new operation for quasi-elliptic cohomology: the composition diag * ◦ res ◦ P N , followed by the quotient map onto Π N =de Z[q ± ][q ′ ]/(q d − q ′e ), defines a ring homomorphism, where res is the restriction map by the inclusion G × Σ N ֒→ G ≀ Σ N , (g, σ) → (g, · · · g; σ), diag is the diagonal map X −→ X ×N , x → (x, · · · x), and the last map is the isomorphism (6.6). Proof. Let V = ⊕ g∈Gconj V g ∈ QEll G (X). Applying the explicit formula of the power operation in (4.17), the composition diag * ◦ res ◦ P N sends V to ⊕ g∈Gconj ⊕ σ∈ΣN conj ⊗ k ⊗ (i1,···i k ) V g k q 1/k , where (i 1 , · · · i k ) goes over all the k−cycles of σ, and the tensor products are those of the Z[q ± ]−algebras. Then, as shown in the proof of (6.6), after taking the quotient by the transfer ideal I QEll tr , all the factors in diag * ◦ res ◦ P N (V ) are cancelled except those corresponding to the elements in Σ N conj with cycles of the same length. For the factor corresponding to the element σ ∈ Σ N conj with d e−cycles and de = N , the nontrivial part is V g e ,d ⊗ Z[q ± ] q ′ d,e , where V g e ,d is the fixed point space of V g e ⊗ Z[q,q −1 ] d under the permutations Σ d and q ′ d,e = P σ (q) = (q 1/e ) ⊗ Z[q,q −1 ] d . Thus, we obtain (6.12). Let V, W be two elements in QEll G (X). We have (V ⊕ W ) g e ,d = V g e ,d ⊕ W g e ,d and (V ⊗ W ) g e ,d = V g e ,d ⊗ W g e ,d .
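As a concrete illustration of the indexing in (6.6), the short Python sketch below (hypothetical helper names, not from the paper) enumerates, for a given N, the ordered divisor pairs (d, e) with N = de and the corresponding conjugacy classes of Σ_N whose cycles all have equal length; these are exactly the classes that survive the quotient by the transfer ideal.

```python
def divisor_pairs(n):
    # ordered pairs (d, e) with n = d * e, indexing the factors
    # Z[q^+-][q']/(q^d - q'^e) in the product of (6.6)
    return [(d, n // d) for d in range(1, n + 1) if n % d == 0]

def equal_cycle_classes(n):
    # cycle types of Sigma_n consisting of d cycles of equal length e
    # (Case II in the proof of (6.6))
    return [tuple([e] * d) for (d, e) in divisor_pairs(n)]

if __name__ == "__main__":
    n = 6
    print(divisor_pairs(n))        # [(1, 6), (2, 3), (3, 2), (6, 1)]
    print(equal_cycle_classes(n))  # [(6,), (3, 3), (2, 2, 2), (1, 1, 1, 1, 1, 1)]
```

In particular, the number of surviving summands equals the number of divisors of N, matching the number of factors in the product.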
The Post-Merger Magnetized Evolution of White Dwarf Binaries: The Double-Degenerate Channel of Sub-Chandrasekhar Type Ia Supernovae and the Formation of Magnetized White Dwarfs Type Ia supernovae (SNe Ia) play a crucial role as standardizable cosmological candles, though the nature of their progenitors is a subject of active investigation. Recent observational and theoretical work has pointed to merging white dwarf binaries, referred to as the double-degenerate channel, as the possible progenitor systems for some SNe Ia. Additionally, recent theoretical work suggests that mergers which fail to detonate may produce magnetized, rapidly-rotating white dwarfs. In this paper, we present the first multidimensional simulations of the post-merger evolution of white dwarf binaries to include the effect of the magnetic field. In these systems, the two white dwarfs complete a final merger on a dynamical timescale, and are tidally disrupted, producing a rapidly-rotating white dwarf merger surrounded by a hot corona and a thick, differentially-rotating disk. The disk is strongly susceptible to the magnetorotational instability (MRI), and we demonstrate that this leads to the rapid growth of an initially dynamically weak magnetic field in the disk, the spin-down of the white dwarf merger, and to the subsequent central ignition of the white dwarf merger. Additionally, these magnetized models exhibit new features not present in prior hydrodynamic studies of white dwarf mergers, including the development of MRI turbulence in the hot disk, magnetized outflows carrying a significant fraction of the disk mass, and the magnetization of the white dwarf merger to field strengths $\sim 2 \times 10^8$ G. We discuss the impact of our findings on the origins, circumstellar media, and observed properties of SNe Ia and magnetized white dwarfs. INTRODUCTION Type Ia supernovae (SNe Ia) are among the most energetic explosions in the known universe, releasing 10 51 erg of kinetic energy and synthesizing 0.7 M of radioactive 56 Ni in the ejecta of a typical brightness explosion. The discovery of the Phillips relation (Phillips 1993) enabled the use of Type Ia supernova (SN Ia) as standardizable cosmological candles, and has ushered in a new era of astronomy leading to the discovery of the acceleration of the universe (Riess et al. 1998;Perlmutter et al. 1999). The nature of the Type Ia progenitors, as well as their precise explosion mechanism, remains a subject of active investigation, both observationally as well as theoretically. Observational progress to determine the nature of the SNe Ia progenitors, as well as their underlying explosion mechanism, has accelerated in recent years, with a series of projects, including the Palomar Transient Factory (PTF), Large Synoptic Survey Telescope, Pan-STARRS, and the Dark Energy Survey, all coming online. This progress culminated in the discovery of 2011fe in M101 by PTF on August 24, 2011 (Nugent et al. 2011). At a distance of 6.4 Mpc, 2011fe is the nearest SNe Ia detected in the last 25 years, and has proven to be the kind of SNe Ia exemplar system that SN 1987A has been for SNe II. PTF captured 2011fe within 11 hours of the explosion, making it the earliest SN Ia ever detected, and opening the gates to prompt multi-waveband followup observations in the radio, optical, UV, and X-ray bands. 
The first weeks of multi-wavelength follow-up observations have directly confirmed for the first time that the primary object is a carbon-oxygen white dwarf, and have placed tight constraints on the progenitor system to 2011fe, ruling out red giant as well as Roche-lobe overflowing main sequence companions (Nugent et al. 2011;Brown et al. 2012;Horesh et al. 2012;Li et al. 2011;Bloom et al. 2012). Additionally, determinations of the SNe Ia delay time distribution (DTD) generally follow a t −1 power-law, consistent with expectations for double-degenerate (DD) systems (Gal-Yam & Maoz 2004;Totani et al. 2008;Maoz & Badenes 2010;. A related, but independent model of the supernovae rate based upon a two-component model accounting for both a prompt and delayed component also supports the existence of a delayed, DD channel (Scannapieco & Bildsten 2005;Raskin et al. 2009a). Moreover, the search for both point sources in pre-explosion archival data and "ex-companions" in SNe Ia remnants have so far yielded no definitive candidates, with the possible contested example of Tycho's remnant (Maoz & Mannucci 2008;González Hernández et al. 2012;Edwards et al. 2012;Kerzendorf et al. 2012). Lastly, the search for hydrogen in the nebular spectra of remnants places fairly tight constraints on the amount of hydrogen ( 10 −2 M ) than can be stripped from a companion (Leonard 2007). It may be possible to understand both the absence of ex-companions and nebular hydrogen in the context of the single-degenerate channel as due to the time delay of the spin-down of the white dwarf after accretion from its companion ceases (Di Stefano & Kilic 2012). However, the weight of the observational evidence strongly suggests the viability of the DD channel model as the progenitors for some, if not the majority, of SNe Ia events. The key conceptual challenge faced by the DD channel for SNe Ia is to explain how these models yield a thermonuclear runaway, as opposed to an accretion-induced collapse (AIC) to a neutron star. Early sphericallysymmetric models, based on Eddington accretion rates onto the white dwarf merger, suggested that double degenerate mergers will ignite a carbon-burning deflagration wave which propagates inward to the core of a 1.0 M 50/50 carbon-oxygen white dwarf in ∼ 2×10 4 yr, comparable to the thermal timescale of the white dwarf merger, resulting in a AIC (Saio & Nomoto 1985). An AIC may be avoided, however, if the actual evolution within the rotating merger and fully multidimensional, magnetized accretion disk differs significantly from the one-dimensional models, and can ignite a detonation on a timescale much shorter than the inward carbon deflagration timescale (Nomoto & Iben 1985). An additional crucial difference highlighted by more recent numerical simulations deals with the thermal structure of the white dwarf merger itself (Mochkovitch & Livio 1989, 1990; Rasio & Shapiro 1995;Segretain et al. 1997;Guerrero et al. 2004;D'Souza et al. 2006;Yoon et al. 2007;Motl et al. 2007;Pakmor et al. 2010;Dan et al. 2011;Zhu et al. 2011). These simulations demonstrate that the compressional work produced during the final tidal disruption of the white dwarf binary results in a hot (∼ 10 8 K) rotating white dwarf merger, quite unlike the cold (10 7 K) isothermal white dwarfs typically taken as a starting point in earlier one-dimensional studies. 
Furthermore, recent work suggests that the outcome of white dwarf mergers may not always be either an SN Ia or an AIC, but could also result in a high-field magnetic white dwarf (HFMWD). HFMWDs have magnetic fields in excess of 10 6 G and up to 10 9 G. Very few of these white dwarfs belong to a non-interacting binary system, and moreover they are more massive than average - see, for instance, Kawka et al. (2007). All these characteristics point towards a binary origin of these white dwarfs. Although long-suspected (Wickramasinghe & Ferrario 2000), it has only recently been shown that if the white dwarf merger fails either to detonate into an SN Ia or collapse down into an AIC, it will result in a magnetized, rapidly-rotating white dwarf (García-Berro et al. 2012). Whether the magnetic white dwarf rotates rapidly or not depends on the relative orientation of the magnetic and rotational axes, as well as on the efficiency of the various braking mechanisms. The evolution of the white dwarf merger subsequent to the coalescence of the initial binary system remains a subject of active investigation. Numerical models have begun to relax assumptions of earlier work by modeling the accretion of the hot thick accretion disk onto the white dwarf, either by including a prescription for the accretion process and the spin-down of the merger (Yoon et al. 2007), or by employing a Shakura-Sunyaev turbulent viscosity (Shen et al. 2012; Schwab et al. 2012). Other researchers have investigated the violent mergers of super-Chandrasekhar mass white dwarf systems (Pakmor et al. 2010, 2011) and collisions (Raskin et al. 2009b). While violent mergers and collisions of white dwarfs are found by some groups to lead to detonations, these detonations may be sensitive to the initial conditions of the white dwarf merger (Motl et al. 2007; Dan et al. 2011). Because the detonations are not fully-resolved in large-scale multidimensional simulations, detonations must be initiated by the choice of a suitable criterion, which is an additional issue in determining whether these systems robustly detonate - e.g. Seitenzahl et al. (2009). In contrast, sub-Chandrasekhar mergers are more prevalent than super-Chandrasekhar mergers in nature. Recently, van Kerkwijk and colleagues have reinvigorated the examination of whether the accretion of the disk may give rise to a detonation, even for sub-Chandrasekhar mergers (van Kerkwijk et al. 2010; Zhu et al. 2011; van Kerkwijk 2012). Specifically, beginning with a near-equal mass binary with two 0.6 M carbon-oxygen white dwarfs, which both subsequently tidally disrupt and merge, van Kerkwijk et al. suggest that the accretion of the thick, turbulent disk surrounding the white dwarf merger results in the compressional heating of the degenerate material in the white dwarf over a viscous timescale, which in turn leads to a detonation. The model explains how a realistic multidimensional DD merger might produce an SN Ia instead of an AIC. Equally interesting is the possibility that sub-Chandrasekhar DD mergers may help to bring observed and predicted SNe rates in closer agreement (Ruiter et al. 2009; Badenes & Maoz 2012). Furthermore, detonations of sub-Chandrasekhar progenitors naturally produce nucleosynthetic yields and luminosities closely in line with observation, and thereby sidestep the longstanding problem of pre-expansion required during the deflagration phase of near-Chandrasekhar mass white dwarf progenitors encountered in the single-degenerate channel (Sim et al. 2010).
Despite these advances, all previous numerical simulations to date on double-degenerates have treated the influence of the magnetic field only approximately through the use of a Shakura-Sunyaev α prescription, or neglected its contribution entirely. Furthermore, apart from recent work by Romanova et al. (2011, 2012) and Siegel et al. (2013), which focused upon the inner accretion disk surrounding a non-burning star, and a hypermassive neutron star, respectively, all work done on the magnetorotational instability (MRI) to date has generally lacked a central stellar object. Consequently, fundamental questions involving the influence of the magnetic field upon the outcome of the WD merger and its connection to SNe Ia remain unresolved. These questions include: What is the structure of the magnetic field in the white dwarf merger, disk, and corona - ordered or disordered? If ordered magnetic fields are present, are they capable of collimating outflows? Are significant amounts of mass outfluxed from the binary system? How does the magnetic field influence the spin of the merger? What is the influence of the field upon nuclear burning? Under what conditions may we generally expect mergers to produce stable HFMWDs, SNe Ia, or accretion-induced collapses? In this paper, we present the first set of numerical simulations of the post-merger evolution of double-degenerates to include the effect of the magnetic field, and in so doing, to begin to address the role of the magnetic field upon each of these fundamental questions.
Simulation Setup
The astrophysical fluid framework code FLASH has previously been used to simulate single-degenerate models of SNe Ia in both 2D cylindrical and 3D Cartesian geometry in a wide range of studies (Townsley et al. 2007; Jordan et al. 2008; Meakin et al. 2009; Falta et al. 2011; Jordan et al. 2012a,b). Here we build upon and extend this body of work to simulate the double-degenerate channel. Our double-degenerate models take as an initial condition the endpoint of a 0.6 M + 0.6 M 40/60 carbon-oxygen white dwarf merger from the 3D SPH simulations of Lorén-Aguilar et al. (2009). The stars are modeled using 4 × 10 5 particles. The initial condition corresponds to non-synchronously rotating white dwarfs on a circular orbit with an orbital separation of 0.03 R , which immediately leads to an unstable mass transfer episode lasting for ∼ 500 s. The final configuration of the merger corresponds to a solidly-rotating central compact object (Ω ≈ 0.3 s −1 ) surrounded by a thick accretion disk extending up to a distance of 0.07 R . No noticeable nuclear processing was found during the merger process. Unlike unequal-mass and synchronously-rotating merger events, the remnant of the irrotational 0.6 + 0.6 M case presents a temperature peak (∼ 6 × 10 8 K) at the center of the central compact object, with a rapid drop towards the outer parts of the disk. Whether or not the initial white dwarf binary is synchronously-rotating depends upon the efficiency of tidal dissipation, which remains an open question. Segretain et al. (1997) argue that the timescale for gravitational wave emission is much longer than the orbital timescale for dynamically-unstable systems, leading to non-synchronous rotation of the white dwarfs. Later work by some groups finds resonant tidal dissipation in the final merger process to be highly-efficient, thereby producing synchronous binaries (Burkart et al.
2012), while other groups find that a degree of asynchronicity persists even to short periods (Fuller & Lai 2013). Asynchronously-rotating systems generally lead to more violent mergers and hotter initial conditions - e.g., Pakmor et al. (2010) and Zhu et al. (2011) - while synchronously-rotating systems result in less violent mergers with lower temperatures - e.g. Dan et al. (2011). Because the initial temperature profile plays a crucial role in determining the nuclear burning within the white dwarf merger, the possible influence of initial synchronous rotation and tidal heating of the white dwarf binary are important assumptions of our models which must be borne in mind. In particular, we expect the majority of our conclusions regarding the development of the magnetic field to be robust, though these tidal effects will influence the temperature profile of the white dwarf merger, and may impact our conclusions with regard to nuclear ignition. We utilize the SPH smoothing kernel to interpolate the Lagrangian SPH data onto a 2D axisymmetric cylindrical coordinate (r, z) Eulerian mesh, averaging over azimuthal angle φ, while retaining all three velocity components (v r , v φ , v z ) - see the appendix for details. Both the interpolation and the azimuthal angle-averaging are not guaranteed to conserve energy; however, we have confirmed that the initial total energy on the mesh is preserved to within 1% or better of the SPH value for all models presented here. We then advance this initial Eulerian condition in time using the adaptive mesh refinement (AMR) FLASH application framework (Dubey et al. 2009; Fryxell et al. 2000), using an initially-weak poloidal magnetic field to seed the growth of the MRI. We solve the fundamental governing equations of self-gravitating, inviscid ideal magnetohydrodynamics, which can be expressed as:
∂ρ/∂t + ∇ · (ρv) = 0 (1)
∂(ρv)/∂t + ∇ · (ρvv − BB/4π) + ∇p * = ρg (2)
∂(ρE)/∂t + ∇ · [(ρE + p * )v − B(v · B)/4π] = ρv · g (3)
∂B/∂t + ∇ · (vB − Bv) = 0 (4)
Here, ρ is mass density, v is the fluid velocity, B is the magnetic field, and g is the gravitational acceleration. p * = p + B 2 /(8π) is the total pressure, including both gas pressure p and magnetic pressure B 2 /(8π). ρE is the total energy density, ρE = ρv 2 /2 + ρε + B 2 /(8π), where ε is the specific internal energy. The inclusion of Poisson's equation for self-gravity,
∇ 2 φ = 4πGρ, (5)
where g = −∇φ, as well as an equation of state, closes the system. Equation (4), the induction equation, preserves ∇ · B = 0 if this is imposed as an initial condition. The poloidal magnetic field is efficiently initialized in a divergence-free form by defining the toroidal component of the vector potential A to be proportional to B 0 f (r) max(ρ − ρ 0 , 0). This form is motivated by previous MRI studies (Hawley 2000), with the inclusion of a filter function f (r) = ∆ tanh [(r − r 0 ) /∆] α , chosen to localize the initial poloidal field to the disk, and to avoid initially strongly magnetizing the merger, the axial region near the z-axis, as well as low-density regions outside the disk. Here we have chosen r 0 = 10 km, ∆ = 1.5 × 10 4 km, ρ 0 = 2 g/cm 3 , α = 9, and B 0 is an overall field strength factor chosen to ensure the magnetic pressure is everywhere weak compared to the gas pressure initially. The magnetic field is then straightforwardly defined at cell edges by finite-differencing the vector potential using B = ∇ × A, and advanced using the unsplit ideal MHD solver in angular-momentum-conserving form (Lee & Deane 2009; Tzeferacos et al. 2012; Lee 2012). The Roe Riemann solver is employed, with piecewise parabolic (PPM) spatial reconstruction and a minmod slope limiter.
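To make the initialization concrete, here is a minimal sketch (assumed array shapes and names; not the FLASH implementation) of obtaining a divergence-free poloidal field from a corner-centered toroidal vector potential on a uniform axisymmetric (r, z) mesh, using B_r = -dA_phi/dz and B_z = (1/r) d(r A_phi)/dr. Because the same differences reappear in the discrete divergence stencil, div(B) vanishes to machine precision, the key property of constrained-transport initialization.

```python
import numpy as np

def poloidal_field_from_Aphi(A, r, dr, dz):
    """A: A_phi at cell corners, shape (nr+1, nz+1); r: corner radii."""
    r_c = 0.5 * (r[1:] + r[:-1])                 # cell-center radii
    # face-centered fields, as in constrained-transport schemes
    B_r = -(A[:, 1:] - A[:, :-1]) / dz           # lives on r-faces
    rA = r[:, None] * A
    B_z = (rA[1:, :] - rA[:-1, :]) / (dr * r_c[:, None])  # on z-faces
    return B_r, B_z

# toy example: a weak potential localized to a ring in the (r, z) plane
nr, nz = 64, 64
dr = dz = 1.0
r = np.arange(nr + 1) * dr + 0.5    # offset avoids division by r = 0
z = np.arange(nz + 1) * dz
R, Z = np.meshgrid(r, z, indexing="ij")
A = 1e-3 * np.exp(-((R - 32.0) ** 2 + (Z - 32.0) ** 2) / 50.0)
B_r, B_z = poloidal_field_from_Aphi(A, r, dr, dz)
```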
The divergence-free prolongation of the magnetic field is done using an adapted implementation of the method of Li & Li (2004). Our simulations employ an equation of state that includes contributions from blackbody radiation, ions, and electrons of an arbitrary degree of degeneracy (Timmes & Swesty 2000), along with an axisymmetric multipole treatment of gravity, with the series truncated after ten moments (ℓ = 10). Nuclear burning is incorporated through the use of a simplified 13-species alpha-chain network, which includes the effect of neutrino cooling (Timmes 1999). We performed a series of simulations from the previously-described initial condition, varying both our choice of the initially-weak magnetic field as well as the spatial resolution. The full set of completed runs is shown in table 1. All runs are performed on a domain r < 1.31 × 10 10 cm in the radial direction, and −6.55 × 10 9 cm < z < 6.55 × 10 9 cm in the vertical, with diode boundary conditions, which strictly guarantee that no inflow occurs at the outer boundary. The white dwarf merger has a radius ≈ 1.5 × 10 4 km, which is resolved with 60 cells in our standard model. The isothermal disk scale height H = √2 c s /Ω ≈ 4.8 × 10 8 cm at a temperature of 5 × 10 7 K near the midpoint of the disk, where Ω ≈ 0.05 s −1 . At an inner disk radius of 2 × 10 9 cm, the Keplerian period around a 1.0 M white dwarf merger is 51 s. The viscous accretion time, based upon a simple constant Shakura-Sunyaev α = 0.01, is roughly 7 × 10 3 s. Our standard model (designated "S-bh") has a resolution of 256 km and an initial value of the global ratio of gas pressure to magnetic pressure, defined to be the dimensionless ratio β = 8π⟨P⟩/⟨B 2 ⟩ = 2000, where angle-bracketed quantities represent averages over space over the entire domain. While the local ratio of gas to magnetic pressure varies throughout the spatial domain, the magnetic pressure is initially significantly less than the initial total gas pressure in all models; its initial minimum value is 16.5 in model S-bh. We note that while our seed magnetic field is initially dynamically weak everywhere, and its initial magnitude in the white dwarf merger (2.8 × 10 5 G) is typical of field white dwarfs (≤ 10 6 G), its value in the disk is astrophysically large by this same comparison. Our choice is motivated by the requirement to accurately capture the dynamics of the MRI by well-resolving the fastest-growing MRI mode λ c ≈ 6.49 v A /Ω, where v A is the local Alfvén speed, and Ω the local rotational velocity (Hawley et al. 1995), on a computationally-tractable mesh size. MRI simulations which do not initially resolve λ c also become unstable and reach magnetic field saturation, but take much longer to do so, since only a narrow band of all unstable modes, not including the fastest-growing mode, are captured on the mesh (Suzuki & Inutsuka 2009). In reality, because the fastest-growing mode of the MRI always grows on a dynamical timescale ∼ Ω −1 , we expect that even an astrophysically-realistic magnetic field strength will reach saturation on a relatively short timescale in comparison to the viscous timescale. Our additional models vary both the spatial resolution and the initial magnetic field strength, in order to test spatial convergence, as well as sensitivity to the resolution of λ c . Our standard model S-bh maximally resolves λ c with approximately 76 cells per wavelength, and the disk scale height H (evaluated near the midpoint of the disk) with roughly 19 cells.
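A back-of-the-envelope check of this resolution requirement can be scripted directly from the definition of λ_c; the numbers below are illustrative only, not taken from table 1.

```python
import numpy as np

def mri_quality_factor(B, rho, Omega, dx):
    """Cells per fastest-growing MRI wavelength, lambda_c ~ 6.49 v_A/Omega
    (Hawley et al. 1995). Gaussian units throughout."""
    v_A = B / np.sqrt(4.0 * np.pi * rho)   # local Alfven speed
    lambda_c = 6.49 * v_A / Omega
    return lambda_c / dx

# illustrative disk values: field in G, density in g/cc, Omega in 1/s
B, rho, Omega = 5.0e7, 1.0, 0.05
dx = 2.56e7                                # 256 km, the standard resolution
print(f"cells per lambda_c: {mri_quality_factor(B, rho, Omega, dx):.0f}")
# -> roughly 70 cells per wavelength for these illustrative values
```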
The vertical resolution of our standard model is therefore somewhat less than 3D convergence studies of stratified shearing-box MRI simulations, which demonstrate that between 32 and 64 cells per scale height are required for convergence (Davis et al. 2010). Moreover, because both the disk scale height in a global disk geometry and the initial seed magnetic field vary, we do not necessarily resolve either the scale height or the fastest-growing mode uniformly throughout the domain. To address this issue, we examine our convergence of the peak magnetic field and magnetic stresses by varying our resolution explicitly, increasing and decreasing our standard model resolutions in runs H-bh and L-bh, respectively. Additionally, we also vary our initial magnetic field strength, increasing it in both models S-bm and S-bl, which have initial β values of 1000 and 500, respectively. Because λ c depends on the initial field strength, these model variations also vary the effective resolution (see table 1, column f). Our standard model S-bh has been advanced to 2 × 10 4 s, while other models run for varying durations. Our standard model duration is equivalent to 390 inner rotational periods, and several viscous accretion timescales. We output the state of the system in 10 s intervals. This extensive time series is sufficiently long to permit accurate time-averages over turbulent quantities. We experimented with both relaxed and "cold-start" initial conditions. In general, due to small differences in numerical schemes, the initial conditions mapped from SPH lead to a mapped initial condition on the Eulerian mesh with small spurious radial oscillations in the disk and merger (Zingale et al. 2002). In the relaxed cases, a damping term in the momentum equation drove the system to a hydrodynamic equilibrium state, eliminating the radial oscillations over a few dynamical times, prior to the introduction of the seed magnetic field. In contrast, cold starts simply allowed the system to evolve from t = 0 with the seed magnetic field. Due to the rapid growth of the MRI, there were relatively small changes in the outcome between these initialization procedures. The results presented here are all cold starts.
Magnetic Structure of White Dwarf Merger and Disk
A snapshot depicting the logarithm of the density taken from the midpoint of our standard model, at t = 10 4 s, is shown in figure 1(a), whereas the distribution of temperatures is shown in figure 1(b).
Fig. 1. - Four frames in the r-z plane consisting of a) log ρ, b) log T, c) magnetic field, with lines of poloidal magnetic field in the r-z plane superposed against a color raster plot of the toroidal field B φ, and d) the ratio of gas pressure to magnetic pressure β. All four frames are taken at the midpoint of the model S-bh simulation at t = 10 4 s.
The rotating, hot white dwarf merger is surrounded by a differentially-rotating, thick accretion disk. On the bottom panels, figures 1(c) and 1(d), we reveal the magnetic structure of the merger and the disk. This is done by plotting the poloidal magnetic field lines in the r-z plane superimposed on the background toroidal magnetic field B φ (left-hand panel), and the ratio of the gas to magnetic pressure β (right-hand panel). Regions of high β are dominated by gas pressure, while those with low β are supported by magnetic pressure.
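The three-way merger/disk/corona decomposition described above amounts to a simple per-cell classification; a sketch with assumed array names:

```python
import numpy as np

def classify_regions(P_gas, B2, rho, v_phi):
    """Return integer labels: 0 = corona, 1 = merger, 2 = disk.
    Corona: magnetic pressure dominates (beta < 1). Remaining gas is
    'merger' where thermal pressure exceeds rotational support,
    otherwise 'disk'."""
    beta = 8.0 * np.pi * P_gas / B2        # local plasma beta
    rot = 0.5 * rho * v_phi**2             # rotational support
    labels = np.full(P_gas.shape, 2)       # default: rotationally-supported disk
    labels[P_gas > rot] = 1                # pressure-supported merger
    labels[beta < 1.0] = 0                 # magnetized corona takes precedence
    return labels
```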
The merger and accretion disk themselves remain relatively weakly-magnetized (β ≫ 1), whereas the disk corona and biconical jets are strongly-magnetized (β ≲ 1), as previous MRI studies have found (Miller & Stone 2000). The magnetic field structure in the disk is highly turbulent and disordered. Loops of low-density, heated magnetic flux rise buoyantly above the accretion disk into the corona (Machida et al. 2000), where some reconnect through numerical resistivity, thereby heating the coronal region. Some poloidal loops of flux - which actually are toroidal in shape in an axisymmetric geometry - are long-lived in our simulation, persisting for many local dynamical times. While it is known that these poloidal flux tori are subject to a wide variety of instabilities in 3D, including the kink and interchange instabilities, both the toroidal field and the differential shear in the disk (Spruit et al. 1995) may help stabilize these even in full 3D. In contrast, biconical axial outflows carry open field lines away from the merger. The biconical region is strongly magnetized and heated to T ∼ 10 8 K, as is clearly seen in figures 1(b) and 1(d). A strong outflow is driven at the interface of this region with the magnetized corona, similar to previous MRI studies of black hole accretion disks (De Villiers et al. 2005). However, we find that this interface region, which is Kelvin-Helmholtz unstable, varies significantly in location and shape over the duration of the simulation. Moreover, although there is a net outflux of both mass and magnetic flux from the simulation domain, there are also thin returning flows, driven by reconnection, which infall onto the disk and merger, similar to that seen in previous work (Igumenshchev et al. 2003). Throughout this paper, we will have the need to separate the domain into the white dwarf merger, rotationally-supported disk, and magnetized corona in order to determine the properties of each of these regions individually - see figure 2. We characterize each of these regions based upon a physical criterion: specifically, we define our magnetized coronal region to be dominated by magnetic pressure (β < 1), with the remainder of the weakly-magnetized gas (β ≥ 1) divided into both the white dwarf merger and the disk. The white dwarf merger is defined to consist of the region primarily supported by pressure, and not rotation (P > (1/2)ρv φ 2 ), while the disk is rotationally-supported ((1/2)ρv φ 2 ≥ P ). Furthermore, our definition does not separate out the biconical jets and outflows into distinct additional components; the coronal region includes both the biconical jets and outflows.
Growth of the Magnetic Field and Magnetic Stress
We expect the disk to be strongly unstable to the magnetorotational instability, and indeed, we confirm this to be the case. In figure 3(a), we show the development of the magnetic energy for each run. For higher resolutions, we achieve a higher peak magnetic energy, though the runs exhibit a trend towards convergence in the peak magnetic energy with increased resolution. Specifically, the difference in the peak magnetic energy of run H-bh (E peak mag = 1.44 × 10 48 erg) and S-bh (E peak mag = 1.38 × 10 48 erg) is 0.46 times the difference between the next two lowest-resolution models, S-bh and L-bh (E peak mag = 1.25 × 10 48 erg). Next, we explore the role of stresses within our model. The r-φ component of the stress tensor, T Rφ = ρ δv R δv φ − B R B φ /(4π), governs the transfer of angular momentum in the disk. The first term of the stress tensor is the Reynolds stress, and the second, the Maxwell stress. Here δv R and δv φ are the fluctuations of the radial and azimuthal velocity components. Analytically, during the linear growth phase of the MRI in a near-equilibrium disk, these fluctuations are the departures of the local fluid velocity from a circular orbit - specifically, δv R = v R and δv φ = v φ − RΩ(R), where Ω(R) is the disk angular velocity at radius R. However, it is well-known that due to the large turbulent fluctuations in fully-developed MRI turbulence, defining the mean disk velocity is problematic, even in a time-averaged sense, and consequently there is no unique prescription for specifying the Reynolds stress in a global disk simulation - see, for instance, Hawley & Krolik (2001). Here, we focus upon the Maxwell stress as a proxy for the total stress, since 3D simulations of the MRI typically find the Maxwell stress dominates the Reynolds stress by factors of 3-6 - see, for instance, Davis et al. (2010). In figure 3(b), we plot the ratio of the spatial average of the Maxwell stress to the gas pressure, which defines an effective magnetic Shakura-Sunyaev parameter α m = −(1/4π)⟨B R B φ ⟩/⟨P gas ⟩, as a function of time, for each run. Here, brackets indicate spatially-averaged quantities over the disk and coronal regions, excluding the white dwarf merger itself. We find overall good agreement in the Maxwell stresses in all models computed, with all models apart from S-bm converging towards a value of α m ∼ 0.01. Moreover, our standard model, which has been evolved for the longest runtime, has a relatively steady α m ∼ 0.01, indicative of sustained accretion and angular momentum transport. The magnetized stress reflects the turbulent dynamics of the MRI in the disk, and like many properties
The first term of the stress tensor is the Reynolds stress, and the second, the Maxwell stress. Here δv R and δv φ are the fluctuations of the radial and azimuthal velocity components. Analytically, during the linear growth phase of the MRI in a near-equilibrium disk, these fluctuations are the departures of the local fluid velocity from a circular orbit -specifically, where Ω(R) is the disk angular velocity at radius R. However, it is well-known that due to the large turbulent fluctuations in fully-developed MRI turbulence, defining the mean disk velocity is problematic, even in a timeaveraged sense, and consequently there is no unique prescription for specifying the Reynolds stress in a global disk simulation -see for instance, Hawley & Krolik (2001). Here, we focus upon the Maxwell stress as a proxy for the total stress, since 3D simulations of the MRI typically find the Maxwell stress dominates the Reynolds stress by factors of 3-6 -see, for instance, Davis et al. (2010). In figure 3(b), we plot the ratio of the spatial average of the Maxwell stress to the gas pressure, which defines an effective magnetic Shakura-Sunyaev parameter α m = − 1 4π B R B φ / P gas , as a function of time, for each run. Here, brackets indicate spatially-averaged quantities over the disk and coronal regions, excluding the white dwarf merger itself. We find overall good agreement in the Maxwell stresses in all models computed, with all models apart from Sbm converging towards a value of α m ∼ 0.01. Moreover, our standard model, which has been evolved for the longest runtime, has a relatively steady α m ∼ 0.01, indicative of sustained accretion and angular momentum transport. The magnetized stress reflects the turbulent dynamics of the MRI in the disk, and like many properties . The left panel inset shows the evolution of the magnetic energy for our standard model over the entire duration of the simulation. b) The effective Shakura-Sunyaev magnetic alpha coefficient αm (see text for definition) for all runs, with the same color notation as the panel a. The right panel inset shows the evolution of αm over the entire simulation for our standard model. of turbulent systems, is best understood in terms of a stochastic behavior with large departures from mean values. To better quantify this behavior, we have computed the time-and spatially-averaged α m , which we define as Here double angle-brackets indicate both spatial and time-averaging over an interval of T , beginning with t = t 0 . We time-average the late-time evolution of figure 3(b), taking t 0 = 1000 s and T = 5000 s. We find a trend towards convergence in α m for our high β model with increased resolution -with values ranging from 0.033 (L-bh) through 0.020 (S-bh) to .023 (H-bh), but with some sensitivity at fixed resolution to the initial choice of β. In particular, we find α m = 0.058 and 0.027 for β = 1000 and β = 500, respectively. The white dwarf merger, which is initially weakly magnetized with a mean, purely poloidal magnetic field of 2.8 × 10 5 G, becomes rapidly magnetized over several inner rotational periods, as the MRI develops in the disk and magnetic flux is advected into the merger -see figure 4(a). The field strength is time-variable, particularly at early times, which is expected given the turbulent nonsteady nature of the MRI in the disk -just as the mass accretion is highly variable, so too must be the advection of magnetic flux into the merger. 
At late times, the total mean magnetic field strength within the merger is ∼ 2 × 10 8 G, typical of HFMWDs. The ratio of the mean toroidal field strength to the mean poloidal field strength is shown in figure 4(b). The final field is predominantly toroidal, with a mean value of B t /B p ∼ 1.5. Both the field strength and the geometry, as traced by the toroidal to poloidal field strength ratio, are highly time-variable, indicative of a disordered interior magnetic field. Given our limited resolution within the white dwarf merger of roughly 60 cells in our standard model S-bh, our final field strengths are very likely limited by numerical resistivity. Our results do, however, demonstrate that HFMWD field strengths may be achieved through the double-degenerate channel. The global magnetic energy E mag exhibits a slow decay at late times. About 7 × 10 47 erg, or roughly half of the total drop in magnetic energy, is actually due to the outflow of magnetic flux from the problem domain in our model S-bh. At late times, more magnetic flux is outfluxed than the net magnetic energy generated within the domain by the MRI, as the MRI saturates and slowly winds down, and turbulent magnetic energy is dissipated through reconnection. The net result is a net decrease in magnetic field energy on the problem domain. We note that the simulations presented here correspond to a relatively special case of an equal-mass white dwarf merger, which results in a central temperature peak. In the more general case of an unequal mass merger, the peak temperature will occur in the nearly spherically-symmetric, hot, convective region surrounding the primary white dwarf, in which the MRI is also expected to grow rapidly (García-Berro et al. 2012). However, the key point is that the MRI is expected to be the generic outcome of a merger, and the calculations presented here demonstrate this concretely for this specific model. Moreover, we can estimate the lifetime of the magnetic field, assuming that there is no further dynamo action present, and that the magnetized, accreted disk material is spread into a spherical shell surrounding the white dwarf (Nordhaus et al. 2011). In this case, the timescale against Ohmic decay via Spitzer resistivity is ∼ 200 Myr for a 10 8 K, 0.2 M disk. Thus we expect the surface fields to be strong enough to be visible as an HFMWD for some time beyond merger. Consequently, mergers may account for the newly-discovered class of hot DQ white dwarfs, of which roughly 70% are strongly-magnetized, a level far exceeding that of the field (Dunlap et al. 2010; Dufour et al. 2011; Lawrie et al. 2013; Williams et al. 2013).
Mass Accretion and Mass Outflow
The stresses developed by the MRI drive mass accretion through the disk. Additionally, previous studies have demonstrated that the turbulence produced within the disk can develop coronal mass outflows (Machida et al. 2000; Miller & Stone 2000; Suzuki & Inutsuka 2009; Flock et al. 2011). The implications of significant mass outflow are particularly profound for early-time optical, X-ray, and radio observations of SNe Ia, which are sensitive probes of the circumstellar environments surrounding the progenitor systems. A key topic of observational interest is the recent discovery of narrow (∼ 10 km/s) Na I doublet absorption lines in the light curves of some normal SNe Ia, originating from the circumstellar medium (CSM) surrounding the SNe Ia progenitor (Patat et al. 2007; Foley et al. 2012).
Originally, these lines were interpreted as likely supporting a symbiotic progenitor channel for the SNe Ia, with the implicit assumption that other progenitor channels were likely to have a very sparse or nonexistent CSM. However, recent work by several groups has questioned this assumption by demonstrating that a CSM with properties similar to the observed narrow NaID lines can be formed during the late snowplow phase of radiative, fast outflows from a variety of other models. The surface escape speed of a white dwarf is typically ∼ several × 10 3 km/s, which is much greater than CSM velocities implied by the narrow NaID lines. However, provided that the SNe event is significantly delayed after the emergence of the outflows - by hundreds to tens of thousands of years depending on the initial outflow speed and the assumed density of the interstellar medium surrounding the progenitor - a number of various progenitor channels may also give rise to a CSM with properties quite similar to the observed NaID lines, through the basic physics of the momentum-conserving snowplow phase of the outflow. For instance, Shen et al. (2013) demonstrate that multiple shells can be formed during the mass transfer between a He+C/O double white dwarf binary preceding a sub-Chandrasekhar double-detonation event, and Soker et al. (2013) demonstrate that multiple shells may also arise in a core-degenerate scenario of the merger of a white dwarf with the core of an asymptotic giant branch phase star. Additionally, subsequent to the initial submission of this paper, Raskin & Kasen (2013) have found that a tidal tail of ∼ several × 10 −3 M of mass is unbound during the final white dwarf binary merger itself, very similar to the amount in the original SPH simulations upon which our results here are based (Lorén-Aguilar et al. 2009). We characterize the initial mass of the white dwarf merger, disk, and magnetized corona by analyzing each of these at an early point in the simulation (t < 500 s), where the MRI is fully-developed, but prior to significant post-merger mass changes. At these early times, the white dwarf merger mass is 0.96 M by our specified criteria; the disk and the corona account for 0.20 M and 0.04 M , respectively. In addition to the mass accretion from the disk onto the white dwarf merger, a significant fraction of the total mass of the disk is lost, primarily through accretion as well as through an outflow driven near the interface of the corona with the biconical jet, with a small fraction exceeding the escape speed. We determine that the disk has lost nearly 90% of its initial mass over the duration of our standard model, through a combination of accretion and outflow. Of this amount, just over 82%, or 0.16 M , is accreted onto the white dwarf merger, with the remainder either transferred into the corona or outfluxed and lost from the domain. In total, nearly 0.06 M is outfluxed from the domain and subsequently lost to the simulation. However, the vast majority of this mass remains gravitationally-bound in our simulation (see table 1), and is likely to be re-accreted. Only about 10 −3 M of the outflow is gravitationally-unbound over a 2 × 10 4 s duration, and will be completely lost from the system. The mean ejection velocity is 2600 km/s, which is much in excess of the escape speed at the top and bottom edges of the domain of ∼ 1600 km/s.
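The bound/unbound bookkeeping above reduces to a per-cell energy criterion; a minimal sketch with assumed array names (note the conservative choice of counting only kinetic plus gravitational energy):

```python
import numpy as np

def unbound_mass(rho, v2, phi_grav, dV):
    """Total unbound mass: cells where specific kinetic plus gravitational
    energy is positive. phi_grav is the (negative) potential; thermal and
    magnetic contributions are neglected, a conservative criterion."""
    unbound = 0.5 * v2 + phi_grav > 0.0
    return np.sum(rho[unbound] * dV[unbound])

def escape_speed(phi_grav):
    """v_esc = sqrt(-2 phi); e.g. ~1600 km/s at the vertical boundaries."""
    return np.sqrt(-2.0 * phi_grav)
```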
In order to quantify the angular distribution of the mass outflow, we have computed the angle of the outflow of each unbound cell, relative to the vertical axis. While matter is ejected in both the radial and vertical directions, the magnetically-driven outflow is preferentially driven along a 50° angle relative to the vertical direction, which is the mean of the mass density outflux angular distribution - see figure 7. We note that the total mass outfluxed, including both bound and unbound mass, from the domain over the duration of the simulation varies by a factor of roughly 2, from 0.027 M for model S-bl to 0.058 M for model S-bm, as shown in table 1. The total momentum of ejected material in the tidal tail is quite similar to the momentum ejected in our wide, magnetically-driven outflow, though the tidal tail material is preferentially ejected in the plane of the merger with a slightly lower characteristic velocity. For a significant delay of ∼ 10 4 yr between the merger and a possible SNe event, Raskin & Kasen (2013) demonstrate that these tidally-driven outflows may produce distinct observational signatures in narrow NaID lines. For such a delay subsequent to the initial merger, we expect that magnetically-driven winds will act in concert with tidal tails, with both the tidal tails and the magnetically-driven outflows sweeping up the interstellar medium to produce multiple shocked, asymmetric shells in narrow NaID lines. These predicted shells are perhaps similar to existing observations of asymmetric NaID shells (Förster et al. 2012), or of multiple NaID shells in the PTF11kx system (Dilday et al. 2012), depending on the viewing angle between the SNe event and the observer.
Spin-Down of White Dwarf Merger
A further key question is the possible influence of the magnetic field upon the spin of the white dwarf merger. Previous studies of core-collapse supernovae have suggested that the outward transport of angular momentum results in a spin-down of the protoneutron star and its surroundings - see, for instance, Thompson et al. (2005). In the context of the double-degenerate model of SNe Ia, a loss of rotational support of the central white dwarf merger may yield additional or perhaps even dominant compression beyond that provided by accretion from the disk. We find the central white dwarf merger is spun down significantly on a very rapid timescale, due primarily to the development of Maxwell stresses at the boundary of the white dwarf merger. In particular, we find that the merger, whose initial angular momentum is 2.6 × 10 50 g cm 2 s −1 , spins down significantly to 8.0 × 10 49 g cm 2 s −1 by the end of the standard model simulation. This amounts to a braking timescale J/|J̇| of several × 10 4 s. In figure 6(a), we plot the vertically-averaged angular velocity Ω as a function of cylindrical radius r. The white dwarf merger is seen to be in solid-body rotation at r < 10 9 cm, with the outer portion of the domain in Keplerian rotation. We note the tapering of the angular velocity plot at radii r > 6 × 10 9 cm at t = 0 s is an artifact of the initial SPH particle distribution, which lacked any particles in this region. Additionally we note that the shear in the innermost portion of the disk, near r = 2 × 10 9 cm, flattens out as the free energy available in the shear is tapped to drive the MRI. As time evolves, the merger is seen to spin down significantly, from Ω ∼ 0.3 s −1 initially to almost half of that value, Ω ∼ 0.17 s −1 , by the end of the run.
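The angular distribution just described is straightforward to extract in post-processing; a sketch with assumed array names:

```python
import numpy as np

def mean_outflow_angle(v_r, v_z, rho, unbound):
    """Mass-density-weighted mean angle [deg] of unbound outflow relative
    to the vertical (z) axis; the text quotes a mean of ~50 degrees."""
    theta = np.degrees(np.arctan2(np.abs(v_r[unbound]),
                                  np.abs(v_z[unbound])))
    return np.average(theta, weights=rho[unbound])
```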
In order to verify that the simulated spin-down of the merger is indeed physically accurate, and not the consequence of numerical errors, we have computed the spin-down expected from the conservation of angular momentum and compared this to the measured spin-down. In particular, in axisymmetry, the inviscid angular momentum evolution equation can be written as ∂(ρrv φ )/∂t + ∇ · [ρrv φ v p − (rB φ /4π)B p ] = 0. Here, B p is the poloidal magnetic field and v p the poloidal velocity. The second term represents the divergence of the angular momentum flux, F φ = ρrv φ v p − (rB φ /4π)B p . In post-processing, we integrate the divergence of the angular momentum flux F φ inside the merger, and compare it with the actual angular momentum evolution of the merger as obtained by the full simulation. For the purposes of this computation we define the merger as the fixed region with a distance less than 1.5 × 10 4 km from the domain center, which is consistent with our previous definition based upon pressure support. The result is shown in figure 6(b). The cumulative effects of the magnetic and hydrodynamic stresses are integrated over volume and time to produce a net change in angular momentum, shown in the dash-dotted curve. The Maxwell stresses are roughly three times more significant than the Reynolds stress, similar to previous MRI studies, e.g., Flock et al. (2011) and Davis et al. (2010). The sum of these predicted net changes in angular momenta is plotted, added to the initial angular momentum, as the dashed curve, which compares with the actual simulated angular momentum, shown in the solid curve. The results agree to within 10%, which establishes that the spin-down is physically driven by MRI-generated Maxwell stresses, and is not the result of numerical or artificial viscosity. Furthermore, by subsampling the output interval, we have determined the dominant error in the calculation to be the finite output cadence of 10 s. Thus, while our calculation places the upper bound of the numerical errors at ∼ 10%, the true numerical error is very likely to be much smaller.
Nuclear Burning
The question of whether the merger will initiate a detonation, and possibly result in an SN Ia, hinges crucially upon the peak nuclear burning rate. Because the equal-mass merger case produces a central peak in temperature, the burning rate achieves its highest value at the merger center. In figure 5, we plot the central conditions for the white dwarf merger in the log T − log ρ plane, as a thick solid line. Also shown are equicontours of both the carbon-burning (dot-dashed) and neutrino timescales (dashed), at values of 10 6 yr, 10 2 yr, and 10 −2 yr. The critical condition for thermal runaway, at which the timescale for carbon burning is equal to that of neutrino cooling, is shown as a thin solid line. The degeneracy boundary in the temperature-density plane is demarcated by the dotted line, where the temperature equals the Fermi temperature: T = T F . The central conditions of the merger remain partially, though not fully, degenerate throughout the evolution. At the beginning of the simulation, at t = 0 s immediately after the initial merger, the neutrino cooling time is shorter than the carbon-burning time at the center of the merger. As the white dwarf spins down and accretes mass from the disk, the central region compresses further, and the central conditions become thermally unstable, with the timescale for carbon burning shorter than neutrino cooling at around t CC = 10 4 yr. At this point, the central region of the white dwarf has ignited.
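The spin-down consistency check just described can be sketched in a few lines of post-processing (assumed names; this version evaluates only the flux through a bounding cylinder and neglects the top and bottom faces for brevity):

```python
import numpy as np

def torque_on_merger(i_m, r, dz, rho, v_r, v_phi, B_r, B_phi):
    """Net torque through the cylindrical surface r = r[i_m], from the
    angular momentum flux F_phi = rho*r*v_phi*v_p - (r*B_phi/4 pi)*B_p.
    Arrays are axisymmetric (r, z) slices; Gaussian units."""
    R = r[i_m]
    flux = (rho[i_m, :] * R * v_phi[i_m, :] * v_r[i_m, :]
            - R * B_phi[i_m, :] * B_r[i_m, :] / (4.0 * np.pi))
    dA = 2.0 * np.pi * R * dz            # area element on the cylinder
    return -np.sum(flux * dA)            # outgoing flux spins the merger down

# Integrating this torque over the output times and adding it to J(0)
# is the prediction compared against the simulated J(t) in figure 6(b).
```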
After ignition, the central temperature of the merger continues to rise, reaching a peak temperature T ∼ 10 9 K. The burning timescale at this temperature is roughly 1 yr, which is much longer than what we can feasibly follow in this set of simulations. Additionally, the maximum temperature is not converged, and shows a systematic increase of roughly 20% both as the resolution increases and as β decreases. This behavior is consistent with the maximum temperature for a sharply-peaked temperature profile, which becomes coarsened at lower resolution. Further, since decreasing the initial β parameter is equivalent to an increase in the effective resolution of the MRI critical wavelength λ c , both trends may be understood on the basis of the effective resolution of the simulation. We return to this issue below in the conclusions. While there is a large body of work on the final stages of the merger of white dwarf binaries (Mochkovitch & Livio 1989, 1990; Rasio & Shapiro 1995; Segretain et al. 1997; Guerrero et al. 2004; D'Souza et al. 2006; Yoon et al. 2007; Motl et al. 2007; Pakmor et al. 2010; Dan et al. 2011; Zhu et al. 2011), there are relatively few in-depth studies of the post-merger phase of evolution. Several classic spherical studies of Saio & Nomoto (1985), Saio & Nomoto (1998), and Saio & Nomoto (2004) follow the evolution of a spherical remnant accreting near the Eddington limit, and concluded that the off-centered ignition would lead to an accretion-induced collapse. Later, Yoon et al. (2007) followed six white dwarf mergers using SPH, varying both the mass ratio and the number of particles. Their most-advanced model consisted of a super-Chandrasekhar SPH simulation of a 0.6 M + 0.9 M white dwarf binary, which was followed through merger and a short time (300 s) after merger using inviscid hydrodynamics. Additionally, Shen et al. (2012) and Schwab et al. (2012) follow the post-merger evolution of a range of super-Chandrasekhar mass white dwarf binaries in 1D spherical and 2D spherical geometry, respectively, using a Shakura-Sunyaev viscosity prescription for the transport of angular momentum. The comparison of our work to these previous models is complicated by our focus upon an equal-mass sub-Chandrasekhar case, whereas most prior models in these studies focus upon unequal mass, super-Chandrasekhar models. We are, however, able to compare the broadest features and underlying physics of all models. The morphology of the mergers studied by both Yoon et al. (2007) and Schwab et al. (2012), which consist of a rotating white dwarf core, a hot envelope, and an accretion disk, is in broad agreement with the structure of our initial SPH conditions, with the crucial difference that their unequal mass models have temperatures which peak off-center, as other studies have also shown - e.g. Lorén-Aguilar et al. (2009) and Zhu et al. (2011). We do note that our initial central peak temperatures are somewhat higher than those found by Zhu et al. (2011) for similar sub-Chandrasekhar mergers, and that this is likely due to differences in the SPH modeling. Additionally, Zhu et al. (2011) find that peak central temperatures are achieved only for non-synchronously rotating white dwarfs, as ours are. Yoon et al. (2007) parameterize the end state of their SPH models by mass, and advance a range of models in a 1D hydrodynamic code incorporating a description of angular momentum transport via hydrodynamic instabilities with an imposed timescale of 10 4 − 10 5 yr.
Our models, which include self-consistent angular momentum transport through the magnetorotational instability, as well as the models of Schwab et al. (2012) using an α model, find that the timescale for accretion of the disk is consistent with the turbulent viscous accretion timescale, and many orders of magnitude shorter than the model assumptions of Yoon et al. (2007). Our findings are, however, consistent with the timescale of turbulent disk accretion suggested by van Kerkwijk et al. (2010). Our findings are also consistent with recent experiments which find that hydrodynamic turbulence is inefficient at transporting angular momentum (Ji et al. 2006). They are generally inconsistent with accretion at the Eddington rate (Saio & Nomoto 1985). Furthermore, Shen et al. (2012) and Schwab et al. (2012) find that the influence of a spatially-dependent turbulent viscosity (α ∼ 0.03) drives their mergers to spin down completely over a timescale of ∼ 3 × 10 4 s, quite similar to our finding with the full MRI that the white dwarf merger is magnetically braked on a comparable timescale. Moreover, they further find that as turbulent viscosity dissipates rotational energy into heat, a thermally-supported outer region develops, also broadly similar to our heated corona. Additionally, Schwab et al. (2012) find evidence for sustained carbon burning, as we do in our models. However, there is a subtle but important distinction in the physics of the angular momentum transport and heating mechanisms at work here. The Shakura-Sunyaev prescription posits that shear energy is locally dissipated as heat energy over a viscous timescale. In contrast, in a magnetized system, the conversion of shear energy in the disk does not directly lead to heating. Instead, under the action of the MRI, shear energy is converted into magnetic energy, which is subsequently buoyantly displaced into the corona. The corona is ultimately heated through the non-local dissipation of magnetic energy into heat energy. In our models, this is accomplished through numerical resistivity, though in reality the magnetic dissipation will occur through both physical resistivity and rapid reconnection. Similarly, the inner white dwarf merger is heated under compressional work from both accretion and spin-down. The end result in our case is a highly intermittent and non-uniform thermal structure - see figure 1(b). Indeed, about 7 × 10 47 erg, or roughly half of our peak magnetic energy, is outfluxed entirely from our problem domain. In contrast, Schwab et al. (2012) find their outer envelope is smoothly heated through the action of turbulent viscosity, which is the direct result of the conversion of local shear energy into heat. Further magnetized models will be able to determine the precise extent to which the thermal structure of the hydrodynamic and magnetic models differ, and identify possible implications for nuclear burning and detonation. The most significant difference, however, between Schwab et al. (2012) and the models presented here relates to mass outfluxes. Schwab et al. (2012) find that mass outflows are limited to less than 10 −5 M in their standard 0.6 M + 0.9 M model. In contrast, our sub-Chandrasekhar model drives a vigorous outflow of unbound mass two orders of magnitude greater. Our results are consistent with global MRI studies, which often find significant outflows - see, for instance, De Villiers et al. (2005) and Flock et al. (2011).
The outflows generated in magnetized disks are fundamentally driven by the buoyancy of the magnetized corona and its interaction with the strongly-magnetized biconical jet. The Shakura-Sunyaev prescription utilized by Schwab et al. (2012) captures the basic aspects of mass and angular momentum transport by specifying the r-φ and θ-φ components of the stress tensor, but lacks both magnetic pressure and tension and the Lorentz force, which play a crucial role in driving magnetized outflows. Moreover, while our models suggest that magnetorotationally-driven outfluxes may influence the immediate circumstellar environment surrounding the white dwarf merger, the precise fate of this outfluxed matter is underdetermined in our simulations, which do not capture the sonic surface (R S = 2GM/c s 2 ≈ 10 12 cm for T ∼ 5 × 10 7 K gas) of the outflowing material within the simulation domain. There is some evidence based upon visualizations of the density and magnetic field that our result may be more complex than a pure infall or thermal wind outflow, and may consist of a combination of a thermal wind and reconnection-driven infall (Igumenshchev et al. 2003). The total net mass outflow and infall is, of course, central to the determination of the nuclear energetic yield and corresponding light curves of any possible SNe Ia that originates from a double-degenerate merger - not only of sub-Chandrasekhar mass as studied here, but potentially also of super-Chandrasekhar mass if the merger does not result in a prompt detonation. Furthermore, the topology of the seed field plays a crucial role in determining whether a large-scale ordered field gives rise to jets (Beckwith et al. 2008). Consequently, our findings with regard to magnetic braking, as well as the jet and outflow, are to some extent dependent upon our initial seed magnetic field, which consists of a single torus of magnetic flux. Further work will need to be done to determine the degree to which these conclusions are robust in the presence of realistic seed magnetic fields generated from the white dwarf merger process itself.
CONCLUSIONS
The nuclear burning time at the endpoint of our standard simulation, t CC ∼ 1 yr, is much longer than our evolutionary time of 6 hours. Thus, while the central temperature continues to rise even throughout the simulation, the outcome of the nuclear burning is not yet clear. The conditions are, however, favorable to supersonic detonations, as opposed to subsonic deflagrations. At a density ∼ 10 7 g cm −3 , the specific energy release in carbon burning exceeds the specific internal energy by about an order of magnitude, which gives rise to an overpressure also about an order of magnitude greater than the background pressure - see, for instance, Nomoto (1982). Thus our results suggest that under appropriate conditions of near-equal mass non-synchronous mergers, a sub-Chandrasekhar merger may give rise to a central detonation powering an SN Ia. However, while this scenario is promising, a detonation is not assured, as the detonation initiation condition depends on the temperature profile. Consequently, further computational studies will need to be carried out to explore the outcome in greater detail. Additionally, while the 2D axisymmetric models presented here begin to shed light on the role of the magnetic field in binary white dwarf mergers, both the accretion itself and the development of the Maxwell stresses leading to magnetic braking of the rotating white dwarf are necessarily limited in 2D axisymmetry.
CONCLUSIONS

The nuclear burning time at the endpoint of our standard simulation, t_CC ∼ 1 yr, is much longer than our evolutionary time of 6 hours. Thus, while the central temperature continues to rise throughout the simulation, the outcome of the nuclear burning is not yet clear. The conditions are, however, favorable to supersonic detonations, as opposed to subsonic deflagrations. At a density ∼ 10⁷ g cm⁻³, the specific energy release in carbon burning exceeds the specific internal energy by about an order of magnitude, which gives rise to an overpressure also about an order of magnitude greater than the background pressure - see, for instance, Nomoto (1982). Thus our results suggest that under appropriate conditions of near-equal mass non-synchronous mergers, a sub-Chandrasekhar merger may give rise to a central detonation powering an SN Ia. However, while this scenario is promising, a detonation is not assured, as the detonation initiation condition depends on the temperature profile. Consequently, further computational studies will need to be carried out to explore the outcome in greater detail. Additionally, while the 2D axisymmetric models presented here begin to shed light on the role of the magnetic field in binary white dwarf mergers, both the accretion itself and the development of the Maxwell stresses leading to magnetic braking of the rotating white dwarf are necessarily limited in 2D axisymmetry. Additionally, some of the long-lived magnetic flux tori which we see in 2D may or may not prove to be stable in 3D, which may significantly alter the evolution of the corona. A fuller picture of the evolution of the rotating white dwarf merger will require 3D simulations, and while these will be demanding, we expect that they will prove more favorable to thermonuclear runaway. An off-centered ignition, possibly leading to an accretion-induced collapse, may still be possible for significantly unequal mass mergers, but we expect that the influence of the magnetic field will alter some conclusions of previous models quantitatively, if not qualitatively. Furthermore, we expect that for unequal mass or significantly lower-mass mergers, the peak temperatures reached even during the post-merger phase may never reach ignition conditions. In these cases, the outcome will be neither an SN Ia nor an accretion-induced collapse, but rather a stable object. Our findings are consistent with theoretical models which have predicted the growth of the white dwarf magnetic field through binary systems (Tout et al. 2008; Nordhaus et al. 2011; García-Berro et al. 2012). Our simulations demonstrate that such an object will be strongly-magnetized, and may account for the existence of HFMWDs in general and hot DQ white dwarfs specifically. Moreover, if both the double-degenerate channel of white dwarf mergers, as conjectured by García-Berro et al. (2012), and a separate channel of white dwarf-low mass companions, as conjectured by Nordhaus et al. (2011), are simultaneously active, we expect the HFMWD distribution to be bimodal in field strength. HFMWDs and SNe Ia may therefore represent disparate outcomes of the same fundamental astrophysical process of merging white dwarfs. Consequently, the HFMWD magnetic field distribution may help inform our understanding of which double-degenerate systems are failed SNe Ia, yielding stable white dwarfs as opposed to thermonuclear detonations. Further magnetized models of a wide range of white dwarf masses, ranging from ONe white dwarfs to He white dwarfs through low-mass stellar and planetary companions, will help establish the HFMWD birth field distribution, which can then be modeled to compare directly against the observed values in the field (Vanlandingham et al. 2005). Such a study will also help more fully elucidate the conditions for thermonuclear runaway to SNe Ia in the double-degenerate channel. As we have demonstrated, the magnetorotationally-driven disk turbulence produces an outflux of ∼ 10⁻³ M☉ of unbound mass from the disk. We expect such magnetically-driven outfluxes to be a general outcome of the MRI during the post-merger stage of evolution of the double-degenerate system. Therefore, our models suggest that the immediate CSM environment surrounding double-degenerate white dwarf mergers may not be as clean as previously believed. This finding has significant ramifications for observational studies of the CSM surrounding SNe Ia using Na I D absorption lines in the optical, as well as in the radio and X-ray. In particular, if a delay of ∼ 10⁴ yr follows the initial white dwarf merger, the cooled magnetized outflows may ultimately give rise to Na I D lines through the snowplow effect as the outflow interacts with the interstellar medium. However, given the dynamics of the snowplow and the low densities of the ISM, it may be challenging to account for the existence of CSM near ∼ 10¹⁶ cm of some SNe Ia (Patat et al.
2007) in the context of the double-degenerate channel. Our simulations demonstrate these magnetized outflows to be strongly asymmetric, with an opening angle of ∼ 50°. Furthermore, to date, only upper limits have been established for the X-ray luminosity of 53 SNe Ia using Swift (Russell & Immler 2012), as well as in the X-ray and radio for SN 2011fe (Horesh et al. 2012; Chomiuk et al. 2012). Further magnetized models on extended domain sizes, using a range of seed fields, and capturing both the explosion and the sonic point of the outflow material may help elucidate a precise, predictive observational signature of magnetized outflows from the double-degenerate channel of SNe Ia. Such simulations will address the final fate of the outflows - a wind, infall, or a combination thereof. They will also address what effect, if any, the outfluxes may have on the light curve and spectra at early times, as the supernova blast wave encounters outfluxed circumstellar matter surrounding the double-degenerate system, or as infall back onto the white dwarf merger powers the light curve through accretion energy. The results presented here are a first step towards a fuller understanding of the post-merger evolution of the coalescence of binary white dwarfs. Clearly, more work in this direction is needed, including full 3D MHD simulations. We expect that, just as with many other important astrophysical systems, including both core collapse and single-degenerate models of SNe Ia, 2D simulations will help shape our understanding of the key physical processes involved in double-degenerate mergers. Yet, the limitations of 2D ultimately require us to make the leap to full 3D studies, and work in this direction is already in progress. Dimensionality is a crucial issue, since the development of the magnetic field and turbulence, and possibly the observational properties of the merger remnant, may be qualitatively different when simulated in 3D. The current work paves the road towards a fuller understanding of magnetized merger remnants.

APPENDIX

In this appendix, we detail the azimuthal-averaging procedure used to average the 3D SPH particles onto an axisymmetric Eulerian r-z mesh. A simple method involves averaging the 3D SPH particles onto a 3D cylindrical Eulerian mesh, then angle-averaging this mesh. However, a more elegant and efficient method, which we adopt, is to separate the angle-dependent quantities, which involve only the smoothing kernel, from the particle data. This reduces the angle-averaging procedure to simple quadrature, with the weights written as integrals to be pre-tabulated once. The averaged quantities are then rapidly and efficiently calculated by weighting the particle data at each point on the r-z mesh over lookups of the pre-tabulated weights. The kernel function is the central mathematical object within the context of SPH which allows continuous fluid quantities to be calculated from discrete particle data. We adopt the cubic spline form (Monaghan & Lattanzio 1985), which, written consistently with the normalization used below, is

$$ W(\nu, h) = \frac{3}{4\pi h^3} \begin{cases} \tfrac{4}{3} - 2\nu^2 + \nu^3, & 0 \le \nu \le 1, \\ \tfrac{1}{3}\,(2 - \nu)^3, & 1 < \nu \le 2, \\ 0, & \nu > 2. \end{cases} $$

Here ν is the dimensionless distance scaled to the smoothing length, ν = |r|/h. We further define a rescaled dimensionless kernel function W̃ as

$$ \tilde{W}(\nu) = \frac{4\pi h^3}{3}\, W(\nu, h), $$

so that W̃ depends solely on ν. In the "scatter" interpretation of SPH, each quantity A at a specific position r is given by

$$ A(\mathbf{r}) = \sum_{i=1}^{N} \frac{m_i A_i}{\rho_i}\, W(|\mathbf{r} - \mathbf{r}_i|, h_i), $$

with the number of particles N and subscripts denoting quantities for a specific particle.
To angle-average this quantity A, we integrate it over the angle φ and divide by 2π:

$$ \bar{A}(r, z) = \frac{1}{2\pi} \int_0^{2\pi} d\varphi\; A(r, \varphi, z). $$

Because the quantities m_i, ρ_i, and A_i do not depend upon angle, we may interchange the order of integration and summation, and write Ā(r, z) as

$$ \bar{A}(r, z) = \sum_{i=1}^{N} \frac{m_i A_i}{\rho_i}\, \frac{1}{2\pi} \int_0^{2\pi} d\varphi\; W(|\mathbf{r} - \mathbf{r}_i|, h_i). $$

Written in this way, we can transparently see that in the scatter interpretation, angle-averaging involves only the smoothing kernel itself. We can further simplify this expression by defining the dimensionless 2D cylindrical distance ν_2D = √((r − r_i)² + (z − z_i)²)/h_i and the dimensionless ratio x = 2 r r_i / h_i², which allow the full dimensionless 3D distance to be expressed as

$$ \nu^2 = \nu_{2D}^2 + x\,(1 - \cos\varphi). $$

Next, we define W₀(ν_2D, x) to be the angle-averaged dimensionless kernel, expressed as a function of ν_2D and x:

$$ W_0(\nu_{2D}, x) = \frac{3}{8\pi^2} \int_0^{2\pi} d\varphi\; \tilde{W}\!\left(\sqrt{\nu_{2D}^2 + x\,(1 - \cos\varphi)}\right). $$

This allows us to efficiently compute the azimuthally-averaged temperature T and the z-velocity v_z at each point as

$$ \bar{A}(r, z) = \sum_{i=1}^{N} \frac{m_i A_i}{\rho_i h_i^3}\, W_0(\nu_{2D,i}, x_i), \qquad A \in \{T, v_z\}. $$
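As a minimal numerical sketch of the procedure just derived (assuming the cubic spline kernel written above; function names, grid handling and the brute-force quadrature are our own illustrative choices), one can pre-tabulate W₀ and then accumulate the weighted particle sums on the r-z mesh:

```python
import numpy as np

def W_tilde(nu):
    """Rescaled dimensionless cubic spline kernel (Monaghan & Lattanzio 1985),
    normalized here so that W = 3/(4 pi h^3) * W_tilde."""
    nu = np.asarray(nu, dtype=float)
    return np.where(nu <= 1.0, 4.0/3.0 - 2.0*nu**2 + nu**3,
           np.where(nu <= 2.0, (2.0 - nu)**3 / 3.0, 0.0))

def tabulate_W0(nu2d_grid, x_grid, n_phi=256):
    """Pre-tabulate W0(nu_2D, x) = 3/(8 pi^2) Int_0^{2pi} W_tilde(nu) dphi
    by direct quadrature; computed once, then reused for every mesh point."""
    phi = np.linspace(0.0, 2.0*np.pi, n_phi, endpoint=False)
    dphi = 2.0*np.pi / n_phi
    table = np.empty((nu2d_grid.size, x_grid.size))
    for i, nu2d in enumerate(nu2d_grid):
        for j, x in enumerate(x_grid):
            nu = np.sqrt(nu2d**2 + x*(1.0 - np.cos(phi)))
            table[i, j] = 3.0/(8.0*np.pi**2) * W_tilde(nu).sum() * dphi
    return table

def azimuthal_average(r_mesh, z_mesh, particles, W0_lookup):
    """Accumulate A_bar(r,z) = sum_i m_i A_i / (rho_i h_i^3) W0(nu_2D,i, x_i).
    particles: dict of 1D arrays 'm', 'rho', 'h', 'A', 'r', 'z';
    W0_lookup: callable interpolating the pre-tabulated W0 table."""
    A_bar = np.zeros_like(r_mesh)
    for m, rho, h, A, r_i, z_i in zip(particles['m'], particles['rho'],
                                      particles['h'], particles['A'],
                                      particles['r'], particles['z']):
        nu2d = np.sqrt((r_mesh - r_i)**2 + (z_mesh - z_i)**2) / h
        x = 2.0 * r_mesh * r_i / h**2
        # Kernel support is nu <= 2, so points with nu_2D > 2 contribute nothing.
        A_bar += m * A / (rho * h**3) * W0_lookup(nu2d, x)
    return A_bar
```

In practice the tabulated W₀ would be queried through a fast bilinear interpolator (for example, scipy.interpolate.RegularGridInterpolator) rather than being recomputed per particle.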
2013-09-03T17:44:55.000Z
2013-02-22T00:00:00.000
{ "year": 2013, "sha1": "70d890bbd4a4e8a1933f3685853493034ae21646", "oa_license": null, "oa_url": "https://upcommons.upc.edu/bitstream/2117/24217/1/21.0004-637X_773_2_136.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "70d890bbd4a4e8a1933f3685853493034ae21646", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237875147
pes2o/s2orc
v3-fos-license
A matrix for the evaluation of COVID-19 contact risk in healthcare workers: Technical note Detecting risky contacts and early isolation of healthcare workers who have been in contact with COVID-19 cases will increase the likelihood of limiting the spread of the infection. The aim of this technical note is to propose a reliable, fast-adaptable and easy-to-use matrix that accurately classifies risk for contact tracing of healthcare workers with COVID-19 patients. The researchers have created a matrix with the support of the literature and their experience within the university hospital surveillance team. This matrix enables a detailed High / Medium / Low Risk classification of contacts of healthcare workers with COVID-19 cases, covering many different contact situations encountered in a university hospital. The distinction between different contact risk categories implies different preventive measures: high-risk contacts are isolated at home for 7 days and return to work if they test PCR-negative on day 7, while medium- and low-risk contacts continue to work using masks, with medium-risk contacts tested on day 7; it is therefore important to standardize the classification among different interviewers. Three main headings have come to the fore in healthcare worker contact risk classification: 1. Differences caused by the ventilation of the environment: indoors, well ventilated indoors, outdoors. 2. Direct contact or material sharing. 3. Aerosol generating procedure (AGP). The matrix has been used effectively in 1169 risky contact interviews over a two-month period (24 August - 23 October 2020). It has been evaluated by two groups: the surveillance team using it routinely for contact tracing, and experts. The matrix is quickly adapted by new surveillance team members and is easy to use.

Introduction

It is accepted that the transmission of coronavirus disease 2019 (COVID-19) from person to person occurs through direct contact or droplets [1]. This type of spread puts healthcare workers (HCWs), who are crucial for the functioning of health systems, at great risk [2]. Although a systematic report on COVID-19 infections among HCWs around the world has not been submitted, the World Health Organization (WHO) reported that one in 10 HCWs were infected in some countries [3]. To control the spread of COVID-19, it is necessary to ensure that the number of new cases arising from each confirmed case remains below 1 [4]. Management strategies for identifying risky contacts of healthcare workers who have been in contact with a COVID-19 case, and for their early isolation, will increase the likelihood of controlling the spread of the infection [5]. WHO recommends questioning for contact detection of COVID-19 from 1-3 days before the occurrence of the symptoms of the case to 14 days later [1]. Histories of contact with a patient with COVID-19 disease, such as being within one meter of the case for more than 15 minutes, direct contact, or use of common materials without appropriate personal protective equipment (PPE), are defined as different contact risk groups, taking also into account the characteristics of the environment [6]. It is also important for HCWs to use appropriate PPE in aerosol generating procedures [7]. The first case of COVID-19 was seen in Turkey on March 11, 2020, and on the same day the WHO declared the pandemic [8]. Then, the Ministry of Health published the COVID-19 Guideline, and the definitions of contact and close contact were used in the guide.
Those who had unprotected direct contact with the COVID-19 patient or their secretions, and those who stayed in the same indoor environment for more than 15 minutes at a distance of less than 1 meter from the COVID-19 patient, were considered close contacts [9]. On April 5, the guide "Contact Tracing in Health Care Workers" was published, which categorized healthcare workers according to their actions and the precautions they had taken. The contact risk of healthcare workers who did not use a medical mask or N95 during the listed aerosol-forming procedures was deemed high in this guide. Moderate- and low-risk contacts were defined by matching the choice of mask use with the use of PPE by the healthcare provider [10]. However, these options focusing on aerosol-generating procedures remained limited compared to the many different instances of contact encountered in the hospital setting, and professionals who performed contact tracing faced many contact histories that did not fit these options, and thus had to make subjective evaluations, taking also into account the "close contact" (requiring isolation) and "contact" (no isolation) definitions for the general public. In the advanced stages of the pandemic, the risky contacts of healthcare workers began to be concentrated not only in the COVID-19 areas, but also in recreational areas, social areas or home environments with their colleagues. This situation started to cause problems in terms of continuity of health service delivery [3]. Hospital surveillance teams are often composed of HCWs and may face the contact risks faced by HCWs. In cases where the continuity of the surveillance service is at risk, or when it is necessary to increase the workforce that performs contact assessment due to the increase in the number of contacts, it is necessary to add new employees to the team. It is also vital that new members of the team quickly adapt to how they will carry out the contact assessment. The aim of this technical note is to propose a reliable, fast-adaptable and easy-to-use matrix that accurately classifies the risk of contacts of healthcare workers with COVID-19 patients.

Risk Assessment in Health Care Workers: EUMF Hospital Experience

The Employee Health and Safety Unit (EHSU) conducts COVID-19 contact tracing among the healthcare workers of Ege University Medical Faculty (EUMF) Hospital. The first PCR-positive case at the hospital was diagnosed on March 18, 2020, and from that day on, contact detection was made in accordance with the guidelines of the Ministry of Health and classification was carried out according to the algorithm [11]. From March 11 to September 10, 2020, 1742 patients were found to have positive COVID-19 PCR tests [11]. In the same period, 1759 risky contacts were identified by the surveillance team; of these risky contacts, 616 (35.0%) were classified as high, 570 (32.4%) as medium, and 573 (32.6%) as low risk [11].
In the surveillance working group, in line with the Ministry of Health's (MoH) guidelines then in effect between March 18 and April 5, 2020, the contacts of health staff were classified as "Close contact / Contact" and close contacts were isolated. Simultaneously with the updates of the MoH guidelines, contacts of HCWs were classified as "High / Medium / Low Risk" between April 6 and August 23, 2020. During this process, it was realized that the coverage of the guidelines remained limited. After the new normalization process as of June 1, 2020, as the contact stories began to diversify, the surveillance group had difficulty in making evaluations according to the updated guidelines. As the pandemic progressed, increasing experience showed that risky contacts are not only of patient origin. Of the 94 employees diagnosed with COVID-19 with a known contact history, 32 had become infected in contact with patients, 34 in contact with family or a partner, and 25 in contact with a colleague [11]. Given the variety of environments in which risky contacts occur in the workplace (open / well ventilated closed / closed / environments where aerosol-generating procedures are performed), the use of a mask by the case and by the healthcare worker, the type of mask, and the exposure time and distance, a need for a guide arose. While evaluating the diversity of the use of personal protective equipment by healthcare workers in the environments where aerosol-generating procedures take place, the possibility of the presence of another healthcare worker in the same environment should not be ignored. In addition, it was necessary to evaluate the use of PPE and the contact distance of the health worker together. When questioning direct contact with the cases or the use of common materials like computer keyboards, hand hygiene after contact also needed to be questioned. During the pandemic period, rapid adaptation of surveillance members to contact assessment was an important need. Surveillance members may also become isolated as SARS-CoV-2 patients or case contacts, or may be assigned to other work environments due to the pandemic. In this situation, rapid adaptation is critical. In addition, standardization has gained importance in the application of isolation to ensure consistency between practitioners. For this reason, a matrix based on the guidelines of the Ministry of Health was developed by the researchers with the support of the literature and came into use as of August 17 (Figures 1-3). The High / Medium / Low Risk classification was updated accordingly [6,7,12-14]. The first draft of the matrix, which was submitted for the approval of seven nurses, two occupational safety specialists and two medical doctors involved in contact determination studies, was updated in line with their recommendations and implementation began on August 24. At the end of the first month, the matrix was evaluated by the surveillance team and 7 experts (2 public health, 2 microbiology, 2 infectious diseases experts, 1 epidemiologist), and its application was continued following its approval.

Evaluation of the Matrix

The matrix was submitted to the surveillance team and expert evaluation at the end of the first month. With an online survey method, the question "How many of the ten contacts do you think each matrix classifies correctly?" was asked.
The matrix received an average score of 8.6/10 (min: 8.2, max: 9.0) and was used in 1169 risky contact interviews over two months (24 August - 23 October 2020). Three main headings came to the fore in healthcare worker contact risk classification.

Differences caused by the ventilation of the environment

Indoors are riskier areas, where it is difficult to keep a wide distance between people [15]. Another situation that increases the risk is a longer stay together indoors [16]. Good ventilation, essential for a healthy indoor climate, helps limit the spread of the SARS-CoV-2 virus [17]. According to available data, the contamination potential is much lower outdoors than in indoor environments, due to the turbulence levels found outdoors [16]. In evaluating the contact of HCWs with COVID-19 cases, it was necessary to categorize the contact environment as closed / well ventilated indoor / outdoor. However, in situations where the same environment is shared, the risk is associated with many factors, including ventilation of the environment, use of masks, distance and exposure time [13]. SARS-CoV-2 spreads between people who are in close contact with each other. A distance of at least 1 meter from COVID-19 patients is recommended to reduce the risk of infection when talking or coughing [18]. However, there are also sources that suggest staying at least 2 meters away from other people even in open environments [15]. In contact risk assessment, it is important to take into account that a physical distance of at least 1 meter reduces the risk of SARS-CoV-2 transmission, but 2 meters may be more effective, and the greater the distance, the greater the protection [19]. The risk of SARS-CoV-2 spread is determined by how close the interaction with the COVID-19 case is and how long it lasts. For healthcare workers, high-risk exposures are directly related to face-to-face contact lasting 15 minutes or longer [6,13]. Using the evidence-based 15-minute contact time limit provides practicality in the classification of contact risk [14]. It should also be taken into account that the cumulative exposure time over repeated contacts affects the risk of transmission [20]. The mask worn by a person acts as a simple barrier that helps prevent respiratory droplets from getting into the air and reaching other people. The use of masks is particularly important in environments where people are close to each other or where social distance is difficult to maintain [19]. Mask use details of both the HCW and the patient are important in determining the risk of COVID-19 exposure [21].

Direct contact or material sharing

A high-risk contact occurs when healthcare workers care for COVID-19 patients without PPE or with inappropriate PPE. If hand hygiene has not been performed after direct contact with the patient, with the patient's body fluids, or with the patient's contaminated environment, this also falls within the scope of high-risk contact [6]. This feature becomes more important when the case in contact is a colleague, since many different items like pens and keyboards could have been shared with the case in the two days before the symptoms or diagnosis. A simplified sketch of the decision logic described under these headings is given below.
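This sketch is an illustration only: the authoritative classification is the matrix in Figures 1-3 (not reproduced here), the AGP branch is omitted, and the thresholds and rule ordering below are our own assumptions rather than the validated matrix:

```python
def classify_contact(setting, distance_m, cumulative_minutes,
                     hcw_masked, case_masked, direct_contact_no_hand_hygiene):
    """Simplified COVID-19 contact risk classifier (illustrative only).

    setting: 'indoor', 'well_ventilated_indoor', or 'outdoor'.
    Returns 'high', 'medium', or 'low'.
    """
    # Direct contact with the case, their body fluids, or shared items
    # without subsequent hand hygiene is treated as high risk.
    if direct_contact_no_hand_hygiene:
        return 'high'

    close = distance_m < 1.0                 # WHO 1-metre threshold
    prolonged = cumulative_minutes >= 15     # cumulative over repeated contacts

    if setting == 'outdoor':
        # Outdoor transmission potential is much lower.
        return 'high' if (close and prolonged
                          and not hcw_masked and not case_masked) else 'low'

    # Indoor settings: mask use on either side mitigates risk.
    if close and prolonged:
        if not hcw_masked and not case_masked:
            return 'high'
        return 'medium' if setting == 'indoor' else 'low'
    return 'medium' if (close or prolonged) and setting == 'indoor' else 'low'

# Example: unmasked 20-minute indoor contact at <1 m -> 'high'
print(classify_contact('indoor', 0.5, 20, False, False, False))
```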
Aerosol generating procedures (AGP)

Situations such as exhaling, singing, coughing, and sneezing create high-momentum gas clouds containing respiratory droplets. These clouds move droplets faster than background ventilation streams, and the droplets can reach distances of more than 2 meters in a short time [13]. Some procedures performed on patients are more likely to generate higher concentrations of infectious respiratory aerosols than coughing, sneezing, talking or breathing [22]. Performing aerosol generating procedures in healthcare settings, or potentially elsewhere in closed, crowded, poorly ventilated environments, increases the risk of infection [1]. A high-risk contact can be considered to have occurred when a healthcare worker performs the procedure, or is present in the environment without PPE or with inappropriate PPE, during an AGP [6]. The Ministry of Health's Assessment of the Contact Status of the Health Care Worker with the COVID-19 patient recommends the use of N95 masks together with face shields or eyeglasses in aerosol-generating procedures, considering the use of a medical mask instead of an N95, or not using a face shield/eyeglasses, as medium risk [7]. However, there are difficulties in determining whether the reported transmissions during AGPs are due to aerosols or other exposures [22]. Another issue is that there is currently insufficient evidence to support the effectiveness of face shields for source control. Therefore, face shields are not currently recommended to replace masks [22].

Limitations

Despite all this evidence, there are still situations where the use of the matrix may be limited. It should be noted that as the contact time in a poorly ventilated indoor environment increases, the importance of distance and the protection of the mask decrease. Even if the contact time is short, how long the patient has been in the environment might be important, as well as whether the patient was wearing a mask. The total time of repeated exposure, and whether it exceeds 15 minutes, should be evaluated in detail by an experienced surveillance team. In addition, if there is more than one contact type, all of them should be evaluated separately and the HCWs followed up according to the one with the higher risk. It should not be forgotten that the evaluations rely on the self-reports of the person coming into contact with a COVID-19 case.

Conclusion

The matrices applied as of August 24, 2020 in the surveillance studies of the EUMF Hospital, which classify the risky contacts of healthcare workers with COVID-19 cases and which were found to be quickly adapted by surveillance teams and easy to use, are recommended. In widespread use, these matrices will provide more protection in high-risk situations, while enabling the sustainability of healthcare delivery by detecting lower-risk situations.

Figure 1. Contact risk assessment matrix for indoor, well ventilated indoor, and outdoor environments in the absence of an aerosol generating procedure.
Figure 2. Contact risk assessment matrix for direct contact or material sharing.
Figure 3. Contact risk assessment matrix for aerosol generating procedure (AGP) environments.
2020-12-27T11:05:40.192Z
2020-11-24T00:00:00.000
{ "year": 2021, "sha1": "d093e614f8fded7a42daf684aeb74673c7e34057", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-112427/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "227a294ce9b7033bf122876dccd31a957b08d041", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
22608583
pes2o/s2orc
v3-fos-license
Significance of CD47 expression in gastric cancer Integrin-associated protein (CD47) is ubiquitously expressed on the surface of cells and functions as an identifier of self. In blood cancer, tumor cells expressing CD47 evade phagocytosis by macrophages, leading to a poor patient prognosis. However, the status of CD47 expression in solid tumors, particularly in gastric cancer, is not well understood. The purpose of the present study was to examine the level of CD47 in the primary tumor, peripheral blood (PB) and bone marrow (BM) of patients with gastric cancer, and to determine its effect. Reverse transcription-quantitative polymerase chain reaction analysis was performed to determine the level of CD47 mRNA expression in primary tumor, PB and BM samples collected from 168 patients with gastric cancer. Cell sorting was performed to investigate CD47 protein expression in PB and BM fractions, and to identify the source of CD47 expression. In primary tumors, the expression of CD47 was not associated with any clinicopathological factors or prognosis. By contrast, in PB, the low CD47 expression group demonstrated a significantly increased tumor size and frequency of lymphatic invasion and lymph node metastasis compared with the high CD47 expression group. In addition, the clinical tumor stage of the low CD47 expression group was significantly increased compared with that of the high CD47 expression group. Conversely, in BM, the high CD47 expression group had a significantly higher frequency of lymphatic invasion and lymph node metastasis compared with the low CD47 expression group. The lymphocyte fraction exhibited the highest CD47 expression compared with the other fractions in PB and BM samples. Low expression of CD47 was associated with the advancement of gastric cancer, in contrast to other cancers, and it may be associated with a decrease in lymphocytes during later stages. These results indicate that CD47 expression in the PB and BM may serve as a marker to analyze the immunological function of patients with gastric cancer; however, the significance of CD47 in gastric cancer requires further study.

Introduction

In the United States, the incidence of gastric cancer is not particularly high compared with that observed in other countries; however, ~21,600 patients were diagnosed with gastric cancer in 2013, and the 5-year survival rate for all stages of the disease was only 27% (1). The disease is also a leading cause of cancer-associated mortality in males and females in Japan (2). The prognosis for patients with gastric cancer has gradually improved over the last 10 years; however, the 5-year survival rate of metastasized gastric cancer remains at ~40% (3). Therefore, the identification and development of therapeutics is required to improve the prognosis of patients with advanced gastric cancer. Integrin-associated protein (CD47) is a glycoprotein that is ubiquitously expressed on the plasma membrane of all hematopoietic cells and the majority of other cell types (4,5). Oldenborg et al (6) revealed that CD47 is a marker of self on murine red blood cells (RBCs), and that CD47-negative RBCs are rapidly eliminated from the circulation by macrophage phagocytosis. Furthermore, tumor cells expressing CD47 evade elimination by macrophages (7). The clinical significance of CD47 expression in blood cancer has been extensively studied (6)(7)(8). In solid tumors, including bladder cancer, cells expressing CD47 were identified to be of a tumor-initiating cell population (9).
In addition, the prognosis of patients with breast cancer with high levels of CD47 expression was significantly poorer compared with patients with low expression of CD47 (10). In 2015, it was revealed that in addition to being associated with macrophage phagocytosis, a CD47 blockade drives the T-cell mediated elimination of immunogenic tumors (11). Therefore, CD47 is emerging as a focus of the field of immunotherapy. Targeting CD47 via anti-CD47 antibody therapy was recently proposed (12). A variety of cancer types may be targeted with this therapy, and therefore the status of CD47 expression requires investigation in various malignant tumors. However, little is known about the status of CD47 expression in patients with gastric cancer. Thus, the present study aimed to determine the level of CD47 expression in the primary tumor, peripheral blood (PB) and bone marrow (BM) of patients with gastric cancer, and to explore the potential of anti-CD47 antibody therapy in gastric cancer.

Materials and methods

Patients. A total of 168 patients (108 male, 60 female; mean age, 69±11.8 years) who were diagnosed with gastric cancer and treated with gastrectomy, including lymphadenectomy, at Kyushu University Beppu Hospital (Oita, Japan) between April 2000 and April 2005 were enrolled in the present study. Written informed consent was obtained from all patients according to the guidelines provided by the Institutional Research Board of Kyushu University Beppu Hospital, which approved the present study. None of the patients received pre-surgical chemotherapy or radiotherapy. Pathological diagnosis and disease staging were performed according to the criteria of the Japanese Classification of Gastric Carcinoma (13). All patients were monitored closely for the recurrence of cancer following surgery, every 3 months for 3-5 years. Sample collection. PB samples were obtained from 6 healthy volunteers and the 168 patients with gastric cancer prior to surgery. PB samples (10 ml) were taken and mixed with 4 ml ISOGEN-LS reagent (Nippon Gene Co., Ltd., Toyama, Japan) per 1 ml whole blood. The sample was stored for 5 min at room temperature, then snap-frozen immediately in liquid nitrogen and stored at -80˚C until required for RNA extraction. BM samples were obtained from 160 patients with gastric cancer under general anesthesia prior to surgery. BM samples were taken from the sternum using a 15-gauge needle. Since there was a possibility of aspirating skin with the BM samples, the first 1 ml of the sample was discarded, and another syringe was used to aspirate 20-30 ml of BM. ISOGEN-LS reagent was then added as before, and the BM samples were subsequently processed in the same manner as the PB samples. A total of 10 tumor and corresponding normal 4-µm thick formalin-fixed paraffin-embedded tissue section samples from the patients who underwent surgery were also obtained and used for immunohistochemical analysis. A total of 6 autopsy bone marrow samples were collected, 3 from patients who succumbed to advanced gastric cancer and 3 from patients who succumbed to pneumonia. Formalin-fixed paraffin-embedded 4-µm tissue sections were made from these bone marrow samples. RNA preparation and reverse transcription (RT). Total RNA was extracted from PB and BM samples using the ISOGEN-directed chloroform extraction and isopropanol precipitation protocol, as described previously (14). cDNA was synthesized using RT as described previously (15).
The NUGC-4 (JCRB0834) cell line was provided by the Japanese Collection of Research Bioresources Cell Bank (National Institutes of Biomedical Innovation, Health and Nutrition, Osaka, Japan). KATO-III cells were maintained in McCoy's 5A medium supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin antibiotics (all from Thermo Fisher Scientific, Inc., Waltham, MA, USA). The other cell lines were maintained in RPMI-1640 medium supplemented with 10% fetal calf serum and 1% penicillin and streptomycin (all from Thermo Fisher Scientific, Inc.). All cell lines were cultured in 10 cm culture dishes with 5% CO₂ at 37˚C. Cultured cells from each cell line were dissolved in 350 µl of Buffer RLT containing 1% β-mercaptoethanol, and total RNA was extracted and purified using the RNeasy Mini kit (Qiagen GmbH, Hilden, Germany) in accordance with the manufacturer's protocol. Quantitative polymerase chain reaction (qPCR) evaluation of CD47 expression in clinical samples. The following primers were used to amplify CD47 from the RNA samples from the cells, PB and BM: Sense, 5'-GGC AAT GAC GAA GGA GGT TA-3' and antisense, 5'-ATC CGG TGG TAT GGA TGA GA-3'. GAPDH, used as an internal control, was amplified using the following primers: Sense, 5'-TGA ACG GGA AGC TCA CTG G-3' and antisense, 5'-TCC ACC ACC CTG TTG CTG TA-3'. qPCR was performed using the LightCycler™ system and SYBR-Green I dye (both from Roche Diagnostics, Indianapolis, IN, USA) according to the manufacturer's protocol. Briefly, each reaction contained 80 ng cDNA, 2 µl DNA Master SYBR-Green I mix, 50 ng primers and 2.4 µl 25 mM MgCl₂. The final volume was adjusted to 20 µl with water. qPCR was performed with the following thermocycling conditions: Initial denaturation at 95˚C for 10 min, followed by 40 cycles of denaturation at 95˚C for 10 sec, annealing at 60˚C for 10 sec and extension at 72˚C for 10 sec. The product was confirmed by gel electrophoresis. Following amplification, the products were subjected to a temperature gradient from 68-95˚C at a rate of 0.2˚C/sec, under continuous fluorescence monitoring, in order to produce a melt curve for each product. All concentrations were calculated relative to the concentration of the positive control, cDNA produced from Human Universal Reference Total RNA (Clontech Laboratories, Inc., Mountain View, CA, USA), using a previously described method (16). The negative control was water. Normalized expression values were obtained by dividing the mRNA concentration of CD47 by that of the internal control. Immunohistochemistry. Formalin-fixed and paraffin-embedded tissue sections corresponding to the samples used for mRNA expression analysis were analyzed. Tissue sections were de-paraffinized, soaked in 0.01 M sodium citrate buffer and boiled in a microwave oven for 15 min at 500 W to retrieve cell antigens. In the case of bone marrow samples, decalcification was performed according to the manufacturer's protocol (Super Decalcifier I; Polyscience Inc., Warminster, PA, USA). The tissue sections were stained immunohistochemically using the streptavidin-biotin peroxidase method (Universal Dako Cytomation LSAB® kit; Dako; Agilent Technologies, Inc., Santa Clara, CA, USA) according to the manufacturer's protocol, with a primary antibody against CD47 (mouse monoclonal antibody; dilution, 1:200; cat. no. sc-12730; Santa Cruz Biotechnology, Inc., Dallas, TX, USA). Briefly, the tissue sections were blocked with 3% H₂O₂ for 5 min and incubated overnight with the primary antibody at 4˚C.
The samples were then washed with TBS buffer and subsequently incubated with the secondary antibody from the LSAB® kit for 30 min at room temperature. Cell sorting. Immunomagnetic cell sorting was performed on 30 ml of PB collected from 6 healthy donors and 30 ml of BM collected from 6 patients with gastric cancer. Fractionation was performed using the autoMACS® Pro Separator (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany) according to the manufacturer's protocol. Cells were prepared as follows: The PB or BM sample was loaded into 10 ml of Ficoll density medium (GE Healthcare Life Sciences, Chalfont, UK) prepared in a 50 ml polypropylene tube. Gradient centrifugation was subsequently performed at room temperature for 30 min at 450 x g. Sample interfaces were collected subsequent to discarding the supernatants. The interface containing the mononuclear fractionated cells was washed twice with 20 ml PBS and centrifuged at 150 x g for 5 min at 4˚C. Human CD326 [epithelial cell adhesion molecule (EpCAM)]-phycoerythrin MicroBeads (Miltenyi Biotec GmbH) were used for positive selection of viable epithelial tumor cells from the samples, CD45-MicroBeads (Miltenyi Biotec GmbH) were used to detect leukocytes and CD14-MicroBeads (Miltenyi Biotec GmbH) were applied for the detection of monocytes and macrophages. The MicroBeads were used according to the manufacturer's protocol. In brief, the CD14+ fraction was sorted first, then the CD14-/CD45+ fraction and finally the EpCAM-positive fraction (Fig. 1A). The expression of CD47 in each fraction was measured using RT-qPCR as described above. Statistical analysis. The association between CD47 mRNA expression and clinicopathological factors was evaluated using the χ² test. For the evaluation of primary tumor tissue samples, the median CD47 mRNA expression value was used as the cut-off to divide patients into high CD47 expression and low CD47 expression groups. For PB and BM samples, the median value was calculated using all samples. Overall survival was calculated by the Kaplan-Meier method and the differences in survival between the groups were compared using the log-rank test. All tests were performed using JMP software version 12.2.0 (SAS Institute, Inc., Cary, NC, USA) and GraphPad Prism software version 6.0h (GraphPad Software, Inc., La Jolla, CA, USA). P<0.05 was considered to indicate a statistically significant difference.
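A sketch of the statistical workflow just described (an approximation in Python of the JMP/GraphPad analyses actually used; the DataFrame column names are our own assumptions), using scipy for the χ² test and lifelines for Kaplan-Meier estimation and the log-rank comparison:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def analyze(df):
    """df columns (assumed): 'cd47' (mRNA expression), 'os_months'
    (follow-up time), 'death' (1 = event observed), and categorical
    clinicopathological factors such as 'lymph_node_metastasis'."""
    # Median split into high/low CD47 expression groups.
    df = df.assign(group=np.where(df['cd47'] >= df['cd47'].median(),
                                  'high', 'low'))

    # Chi-squared test of association with a clinicopathological factor.
    table = pd.crosstab(df['group'], df['lymph_node_metastasis'])
    _, p_chi2, _, _ = chi2_contingency(table)

    # Kaplan-Meier overall survival curves and log-rank comparison.
    hi, lo = df[df['group'] == 'high'], df[df['group'] == 'low']
    kmf_hi = KaplanMeierFitter().fit(hi['os_months'], hi['death'],
                                     label='CD47 high')
    kmf_lo = KaplanMeierFitter().fit(lo['os_months'], lo['death'],
                                     label='CD47 low')
    lr = logrank_test(hi['os_months'], lo['os_months'],
                      event_observed_A=hi['death'],
                      event_observed_B=lo['death'])
    return p_chi2, lr.p_value
```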
Results

Expression of CD47 in primary gastric tumors. Immunohistochemical analysis was performed to investigate the protein expression level of CD47 in the tissues of gastric cancer and adjacent normal mucosa. In normal gastric mucosa, intracellular expression of CD47 protein was observed in the bottom layer of the gastric gland, and almost disappeared in the upper layer (Fig. 2A). In gastric tumor tissues, a varying pattern of CD47 protein expression was observed. In one case, CD47 was highly expressed throughout the tissue, and this case was diagnosed as well-differentiated adenocarcinoma (Fig. 2B, case 1). In another case, the expression of CD47 was relatively high, and the expression pattern was not consistent throughout the tissue; this case was diagnosed as moderately-differentiated adenocarcinoma (Fig. 2B, case 2). The expression of CD47 nearly disappeared in another case, which was diagnosed as poorly-differentiated adenocarcinoma (Fig. 2B, case 3). As demonstrated by immunohistochemical staining, CD47 protein levels appeared to be associated with the histological grade, and therefore the CD47 mRNA expression level was compared with the histological grade; however, no significant difference was observed between CD47 expression and the different histological grades (data not shown). The expression of CD47 was compared between gastric tumor tissue and corresponding normal tissue using RT-qPCR. However, as demonstrated in Fig. 2C, no significant difference was observed in CD47 mRNA expression between primary tumor tissue and normal tissue. To determine the association between CD47 mRNA expression and the clinicopathological characteristics of patients with gastric cancer, patients were divided into two groups, high CD47 expression and low CD47 expression; however, no significant association was observed between CD47 expression and any of the clinicopathological factors tested (Table I). In addition, the overall survival rate was compared between the high and low CD47 expression groups, and no significant difference was observed between the two groups (Fig. 2D). Expression of CD47 in the PB. CD47 expression in PB was analyzed, and its association with the clinicopathological characteristics of patients with gastric cancer was investigated (Table II). Tumor size and depth were significantly increased in patients with low compared with high CD47 expression. The frequency of lymphatic invasion was also significantly higher in the low compared with the high CD47 expression group. The percentage of tumor types 2-5 was significantly higher, at 73%, in the CD47 low expression group compared with 44% in the CD47 high expression group. The frequency of lymph node metastasis and lymphatic invasion in the low CD47 expression group was significantly increased compared with that in the high CD47 expression group. As a result, the clinical tumor stage of the low CD47 expression group was significantly increased compared with that of the high CD47 expression group. The expression level of CD47 mRNA was compared with the clinical tumor stage in the PB samples (Fig. 3A). CD47 expression was demonstrated to be highest in tumor stage I in PB samples (Fig. 3A). Expression of CD47 in the BM. CD47 protein expression in BM samples was analyzed using immunohistochemical staining. As demonstrated in Fig. 4, CD47 expression was markedly increased in the BM of patients with stage IV gastric cancer. To assess the importance of CD47 expression in the BM, the clinicopathological characteristics of patients with gastric cancer who exhibited high or low CD47 mRNA expression were compared. As indicated in Table III, the percentages of lymphatic invasion and lymph node metastasis were significantly higher in the high CD47 expression group compared with the low CD47 expression group. By contrast, the frequency of distant metastasis was significantly higher in the low CD47 expression group compared with the high CD47 expression group. In addition, the clinical tumor stage was significantly increased in the high compared with the low CD47 expression group. The BM expression of CD47 mRNA was significantly higher in stage III/IV patients than in stage I/II patients (Fig. 3B). Origin of CD47 in the PB and BM. The results of the analysis of the association between CD47 mRNA expression and the clinicopathological characteristics of patients with gastric cancer differed between primary tumors, PB and BM. To detect the original source of CD47 expression in PB and BM, cell sorting was performed. PB and BM samples were sorted into the following four fractions (Fig. 1A): Fraction 1, whole blood cells; fraction 2 [myeloid cell-specific leucine-rich glycoprotein (CD14)+ cells], macrophages or monocytes; fraction 3 (CD14-/CD45+ cells), lymphocytes; and fraction 4 (CD14-/CD45-/EpCAM+ cells), circulating tumor cells.
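The sequential gating just enumerated can be restated schematically as a marker-to-fraction mapping (an illustrative restatement, not instrument software):

```python
def assign_fraction(cd14_pos, cd45_pos, epcam_pos):
    """Map immunomagnetic sorting markers to the fractions of Fig. 1A.
    Sorting is sequential: CD14+ first, then CD14-/CD45+, then EpCAM+."""
    if cd14_pos:
        return 'fraction 2: monocytes/macrophages (CD14+)'
    if cd45_pos:
        return 'fraction 3: lymphocytes (CD14-/CD45+)'
    if epcam_pos:
        return 'fraction 4: circulating tumor cells (CD14-/CD45-/EpCAM+)'
    return 'unassigned (fraction 1 is the unsorted whole-cell aliquot)'

print(assign_fraction(False, True, False))  # -> fraction 3: lymphocytes ...
```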
Prior to the analysis, CD47 expression was confirmed in 4 gastric cancer cell lines by RT-qPCR (data not shown). The expression of CD47 mRNA in each fraction is demonstrated in Fig. 1B and C. CD47 mRNA expression was expected to be highest in the circulating tumor cell fraction; however, the fraction expressing the most CD47 mRNA in the PB (Fig. 1B) and BM (Fig. 1C) samples was the lymphocyte fraction. Table I. Association between CD47 mRNA expression in tumor tissue and the clinicopathological characteristics of patients with gastric cancer.

Discussion

In the present study, the localization of CD47 protein was confirmed in normal tissue and primary gastric tumor tissue using immunohistochemical staining. Despite initial expectations, no statistically significant association was observed between primary tumor CD47 expression and clinicopathological factors, an association which has previously been detected in hematological malignancies (7,17,18) and breast cancer (10). The Kaplan-Meier overall survival curve revealed that the survival rate of the high CD47 expression group was decreased compared with the low CD47 expression group from 3 years following surgery. This indicates that high CD47 expression is associated with late-phase metastasis or recurrence of gastric cancer; however, the difference in survival between the two groups did not reach a statistically significant level. By contrast, low expression of CD47 in PB samples was significantly associated with increased tumor depth, lymphatic invasion, lymph node metastasis and clinical tumor stage. Conversely, high expression of CD47 in BM samples was associated with increased tumor depth, lymphatic invasion, lymph node metastasis and clinical stage. It was also revealed that CD47 mRNA expression was highest in the lymphocyte fraction in PB and BM samples. The contradiction of the CD47 expression pattern between the primary tumor, PB and BM must be explained. Table II. Association between CD47 mRNA expression in peripheral blood and the clinicopathological characteristics of patients with gastric cancer. In normal gastric tissue, CD47 expression was limited to the lower layer, termed the fundic glands, in the current study.
In addition, it was difficult to objectively evaluate which staining patterns were CD47-dominant or -negative; thus, it may not be appropriate to evaluate CD47 expression using immunohistochemical staining analysis in cancer. To make objective evaluations, the mRNA expression level of CD47 was calculated; however, bulky samples were used for mRNA collection in the present study, such that the heterogeneity of the tumor as a whole may have been erased. CD47 expression group ----------------------------------------------------------------------------- In PB, upregulation of CD47 mRNA expression was associated with improved clinicopathological factors and early stage disease. By contrast, the association between CD47 expression and clinicopathological factors was the opposite in the BM. CD47 mRNA was expressed in the lymphocyte fraction in PB and BM samples, and this expression was low in the circulating tumor cell fraction, indicating that CD47 mRNA expression reflects lymphocytes but not cancer cells. Downregulation of CD47 is presumed to be caused by a reduction in lymphocytes. Several studies have indicated that the neutrophil:lymphocyte ratio (NLR) is a prognostic marker for gastric cancer (22)(23)(24)(25)(26)(27). In early gastric cancer, cancer-associated mortality has been reported to be significantly more frequent among patients with an elevated NLR (>2) (28). In advanced gastric cancer, upregulation of the NLR to >2.5 is associated with a poor prognosis (29). The cause of the elevated NLR in this study was due to a surge in neutrophils and decline in lymphocytes, which is consistent with the decline in CD47 expression observed in the present study. Lymphocyte reduction may be explained partially by the effect of an elevated release of several biological factors and cytokines into the PB from cancer tissues. Saito et al (30) demonstrated that a decline in natural killer (NK) cells, a type of lymphocyte, was caused by elevated tumor necrosis factor ligand superfamily member 6 expression, which shortened the lifespan of circulating NK cells by inducing their apoptosis. By contrast, the elevation of lymphocytes in BM is potentially the result of reactive proliferation in response to decreased lymphocytes in the PB (Fig. 5). The reason for the discrepancy between the findings for CD47 expression in gastric cancer compared with blood and breast cancer requires further analysis. In the present study, the level of CD47 expression was investigated using PB samples, so that the expression level of CD47 would be representative of all cells in the blood, including lymphocyte and RBCs expressing CD47. Future studies should also collect the circulating tumor cells for analysis of CD47 expression levels. Recently, Yoshida et al (31) reported that the survival rate of patients with CD47-expressing gastric cancer was significantly worse compared with that of patients with CD47-negative gastric cancer, using immunohistochemical staining analysis. In the present study, increased CD47 mRNA expression in the primary gastric tumor was not an adverse prognostic factor. This suggests that there may be post-transcriptional differences that cause CD47 protein expression to be higher, such as disruption of the protein degradation or transportation. To the best of our knowledge, this is the first study to analyze CD47 expression in primary tumors, PB and BM simultaneously in patients with gastric cancer. 
The results indicate that CD47 expression in the PB and BM may serve as a marker to analyze the immunological function of patients with gastric cancer; however, elucidating the significance of CD47 in gastric cancer is a complicated task. Therefore, if anti-CD47 antibody treatment is considered for gastric cancer, it may not be appropriate to base the treatment decision on the measurement of CD47 alone for this disease.

Figure 5. Hypothetical schema explaining the results of the present study. CD47, integrin-associated protein; IL10, interleukin-10; FasL, tumor necrosis factor ligand superfamily member 6; TGFβ, transforming growth factor β; TNFα, tumor necrosis factor α.
2018-04-03T06:12:41.493Z
2017-05-26T00:00:00.000
{ "year": 2017, "sha1": "f4e75180d4fc827cd406e1060dbe0d6eecb3216a", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/ol.2017.6257/download", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f4e75180d4fc827cd406e1060dbe0d6eecb3216a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
8386207
pes2o/s2orc
v3-fos-license
Rett syndrome induced pluripotent stem cell-derived neurons reveal novel neurophysiological alterations Rett syndrome (RTT) is a neurodevelopmental autism spectrum disorder caused by mutations in the methyl-CpG-binding protein 2 (MECP2) gene. Here, we describe the first characterization and neuronal differentiation of induced pluripotent stem (iPS) cells derived from Mecp2-deficient mice. Fully reprogrammed wild-type (WT) and heterozygous female iPS cells express endogenous pluripotency markers, reactivate the X-chromosome and differentiate into the three germ layers. We directed iPS cells to produce glutamatergic neurons, which generated action potentials and formed functional excitatory synapses. iPS cell-derived neurons from heterozygous Mecp2 308 mice showed defects in the generation of evoked action potentials and glutamatergic synaptic transmission, as previously reported in brain slices. Further, we examined electrophysiology features not yet studied with the RTT iPS cell system and discovered that MeCP2-deficient neurons fired fewer action potentials, and displayed decreased action potential amplitude, diminished peak inward currents and higher input resistance relative to WT iPS-derived neurons. Deficiencies in action potential firing and inward currents suggest that disturbed Na+ channel function may contribute to the dysfunctional RTT neuronal network. These phenotypes were additionally confirmed in neurons derived from independent WT and hemizygous mutant iPS cell lines, indicating that these reproducible deficits are attributable to MeCP2 deficiency. Taken together, these results demonstrate that neuronally differentiated MeCP2-deficient iPS cells recapitulate deficits observed previously in primary neurons, and these identified phenotypes further illustrate the requirement of MeCP2 in neuronal development and/or in the maintenance of normal function. By validating the use of iPS cells to delineate mechanisms underlying RTT pathogenesis, we identify deficiencies that can be targeted for in vitro translational screens.

Introduction

Rett syndrome (RTT) is a severe neurodevelopmental autism spectrum disorder caused in the majority of cases by loss-of-function mutations in the methyl-CpG-binding protein 2 (MECP2) gene on the X-chromosome. 1 Girls with RTT exhibit what appears to be normal development for the first 6-18 months, which then apparently stalls and is followed by neurocognitive regression and the onset of autistic-like behaviour. 2 Although widely expressed, MeCP2 is most abundant in mature neurons, and its expression pattern correlates with neuronal differentiation and maturation. 3 The lack of proper MeCP2 function results in a predominantly neurological phenotype in both humans and mice, 1 and MeCP2 dysfunction in mature post-mitotic neurons alone is sufficient to cause phenotypic impairments in mice. 4 The effects of MeCP2 dysfunction on neuronal morphology, intrinsic membrane properties and synaptic connectivity vary across different brain regions and synaptic types. 5 Nevertheless, the prevailing view is that subtle deficits in synaptogenesis and/or synaptic function underlie the profound alterations in brain network activity in RTT. Interestingly, mutations in MECP2 have been implicated in a number of neuropsychiatric disorders, including autism, bipolar disorder and schizophrenia. [6][7][8][9][10][11] As a consequence, studies delineating phenotypes associated with Mecp2 deficiency may shed light on the pathogenesis of multiple neurological syndromes.
While neurophysiological assessments in MeCP2-deficient tissue have given insights into Rett pathogenesis, these investigations are hampered by the poor breeding fecundity, and thus limited availability, of MeCP2-deficient mice. 12 An attractive alternative to breeding MeCP2-deficient mice is the use of neuronally differentiated induced pluripotent stem (iPS) cells 13,14 as a model system. Recent studies have now shown that pluripotent stem cells can be generated directly from RTT patient fibroblasts, [15][16][17][18][19][20][21] and that these cells can be differentiated into neurons in vitro. While constituting a major advancement to allow patient-based in vitro assessments, similar attempts to generate iPS cells from mouse models of RTT have not been conducted to date. Here, using the Mecp2 308 mouse as a model system, 22 we discover dysfunctional phenotypes relevant to RTT through a detailed characterization of more than a dozen electrophysiological properties assessed in large numbers of neurons generated in vitro from iPS cells.

Materials and methods

For more detailed information, please refer to Supplementary Methods.

Embryoid body (EB)-mediated differentiation

Mouse iPS cell colonies were dissociated by treatment with 0.25% trypsin-ethylenediaminetetraacetic acid (EDTA) and cultured in suspension in non-treated petri dishes for 8 days. Cells were cultured in EB media containing Dulbecco's modified Eagle's medium with 10% FBS, 4-mM L-glutamine, 4-mM penicillin/streptomycin/glutamine, 0.1-mM MEM non-essential amino acids and 0.55-mM 2-mercaptoethanol (all Invitrogen, Carlsbad, CA, USA) without leukemia inhibitory factor. EBs were then plated onto gelatin-coated tissue culture grade dishes for an additional 8 days for further differentiation before immunocytochemistry for markers representing the three germ layers. Media were changed every other day throughout the 16-day differentiation.

Teratoma formation assays

Teratoma experiments with NOD/SCID immunodeficient mice were performed as previously described. 15,16 All procedures using animals have been approved by the SickKids Animal Care Committee under the auspices of The Canadian Council on Animal Care.

Neuronal differentiation

Neuronal differentiation of iPS cell lines was performed using methods adapted with modifications from the retinoic acid-mediated differentiation protocol published by Bibel et al. 23,24 for generating glutamatergic neurons. The timing of subsequent media changes was as specified by Bibel et al. 23,24 for long-term culture of differentiated neurons.

Electrophysiology

Whole-cell patch-clamp recordings were made at room temperature 13-20 days after dissociated neuronal precursors were plated onto poly-L-ornithine/laminin dishes. Electrical signals were digitized with a DigiData 1200 (Molecular Devices, Sunnyvale, CA, USA) and filtered at 2 kHz. Data were recorded using an Axopatch 1-D amplifier (Molecular Devices) and analyzed offline using Clampfit software (Molecular Devices).

Results

Establishment and characterization of wild-type and Mecp2 308 mouse iPS cells

We first established iPS cell lines from female Mecp2 wild-type and Mecp2 308 heterozygous fibroblasts (referred to as WT and HET, respectively). Skin samples were isolated from a litter of embryonic mice, and fibroblasts were expanded and genotyped by PCR to confirm the presence or absence of the truncated Mecp2 308 allele.
Mouse embryonic fibroblasts were infected with retroviruses expressing Oct4, Sox2, and Klf4 (excluding c-Myc) and an EOS reporter lentivirus to mark pluripotency, as previously described. 15,16 EOS-EGFP-positive colonies with mouse embryonic stem (ES) cell-like morphology were expanded under puromycin selection, and the pluripotency of four WT and four HET iPS cell lines was extensively characterized, with representative data for WT #3 and HET #4 shown in Figures 1 and 2, and data for HET #1 previously published. 15,16 Immunocytochemistry verified that the lines stain positive for alkaline phosphatase and express the pluripotency markers Nanog and SSEA-1 (Figure 1a and Supplementary Figure 1a). Quantitative reverse transcription PCR (qRT-PCR) revealed that the lines reactivate endogenous pluripotency loci, and primers specific to the retroviral transgenes demonstrated that the lines silence the exogenous transgenes, indicating full reprogramming (Figure 1b). Female mouse iPS cells have been shown to reactivate the silent X-chromosome of somatic cells during reprogramming. 25 Immunocytochemistry for the H3K27me3 silencing mark revealed that WT and HET lines reactivate the inactive X (Figure 1c and Supplementary Figure 1b). Immunofluorescence using an antibody to the C-terminus of MeCP2 that is unable to detect the truncated MeCP2-308 protein revealed that the heterozygous iPS cell lines express WT MeCP2 in all cells, indicating active expression from both X-chromosomes following reprogramming (Supplementary Figure 1c and Supplementary Tables 1 and 2). The genotypes of the cell lines were confirmed by PCR (Supplementary Figure 2a). We also confirmed that the WT lines were female, to provide the most appropriate control for the female HET lines (Supplementary Figure 2b).

Both WT and HET mouse iPS cell lines differentiate appropriately into the three embryonic germ layers in vitro and in vivo

To assess in vitro differentiation capacity, iPS cells were cultured in suspension to form cellular aggregates (EBs). These were transferred onto an adhesive substrate and subjected to immunocytochemical analyses demonstrating that the lines spontaneously differentiated into cell types corresponding to ectoderm, mesoderm and endoderm in vitro (Figure 2a and Supplementary Figure 3a). Further, upon injection into the testes of immunodeficient mice, these lines also formed teratomas that contained mature tissue types corresponding to these three embryonic germ layers in vivo (Figure 2b and Supplementary Figure 3b). At this point, we performed Southern blots to confirm that the lines contain all three transgenes (Supplementary Figure 4). This analysis revealed that the iPS cells are subclones with the same integration sites derived from the same reprogramming event, a finding that presented the opportunity to evaluate whether phenotypic differences accumulate in the different sublines during the time they have been cultured independently. Examination of the qRT-PCR data (Figure 1b) suggests subtle gene expression differences. To assess whether phenotypic variation accumulates between these sublines, we directed the iPS cells to differentiate into neurons for functional studies.

WT and HET iPS cells can be directed to differentiate into glutamatergic neurons in vitro

Progenitor cells isolated from MeCP2-deficient mouse brain retain the ability to properly differentiate into neurons in vitro. 26
To test whether WT and HET mouse iPS cells also retain this function, we directed the iPS cells to differentiate into glutamatergic neurons using a protocol previously shown to be effective in mouse ES cells. 23,24 Retinoic acid treatment of EBs derived from both WT and HET iPS cell lines generated cells possessing neuronal-like morphologies that expressed the neuronal markers microtubule-associated protein 2 (MAP2) and vesicular glutamate transporter 1 (VGLUT1) (Figures 3a and b). Consistent with random X-chromosome inactivation during differentiation, neurons generated from HET cells contained a mosaic of cells expressing either the mutant or WT Mecp2 allele (as detected with a C-terminal antibody unable to detect the truncated protein) (Figure 3c, Supplementary Figure 5). MeCP2 staining was detected in 52% of cells (n = 1388) following neuronal differentiation of the heterozygous lines, indicating that recapitulation of X-chromosome inactivation yielded equal proportions of mutant and WT neurons in the HET cultures. This protocol is highly efficient for generating glutamatergic neurons, and yields fewer than 5% GABAergic neurons. 23,24 In accordance, virtually all cells express VGLUT1, and only low levels of the GABAergic marker glutamate decarboxylase 65/67 (GAD65/67), the γ-aminobutyric acid synthetic enzymes, are detected (Supplementary Figures 6a and b).

Neurons derived from both WT and HET mouse iPS cells are electrophysiologically active

An unbiased electrophysiology approach was taken to characterize the passive and active properties of the putative neurons derived from the WT and HET iPS cells. Whole-cell recordings were made from neuronally differentiated WT (n = 95 cells) or HET (n = 82 cells) iPS cells. Under current-clamp recording conditions, both WT (n = 7) and HET cells (n = 11) spontaneously generated fast action potentials, sensitive to the voltage-gated Na+ channel blocker tetrodotoxin (TTX) (Figures 3a and b). The frequency of the spontaneous action potentials was not different between genotypes (Supplementary Figure 7). In addition, action potentials could be evoked by injecting depolarizing current (n = 58 and 67 for WT and HET, respectively). These action potentials were sensitive to TTX in 9 of 10 WT cells tested and 7 of 7 HET cells (Figures 3a and b). These findings, together with the immunocytochemical results, indicate that functional neurons are derived from both WT and HET iPS cells. iPS-derived cells showing spontaneous or depolarization-induced action potentials were thus operationally defined as neurons for the remainder of this study. iPS-derived cells that did not generate action potentials were considered to be non-neuronal cells. Such cells may include astrocytes, as staining differentiated iPS cell cultures with the astrocyte marker glial fibrillary acidic protein indicated the presence of this cell type in the cultures (Supplementary Figure 8).

Neurons derived from HET iPS cells display alterations in action potential characteristics

For the iPS-derived neurons, we next assessed whether the presence of the Mecp2-308 allele affected the functional properties of action potentials. No significant difference was observed in the threshold of action potential generation between WT (n = 48) and HET iPS cell-derived neurons (n = 42) (Supplementary Figure 9a). However, the action potential amplitude in the HET neurons was significantly less than that in WT neurons (Figure 4a).
Furthermore, the rise time (Figure 4b), decay time (Figure 4c) and duration (Supplementary Figure 9b) of the action potentials were significantly longer in HET iPS-derived neurons. In addition, the number of action potentials generated by injecting current steps was significantly decreased in HET relative to WT (Figure 4d). Taken together, these findings indicate that HET iPS cell-derived neurons exhibit dysfunction in the generation of action potentials, and that compared with WT, the evoked action potentials have smaller amplitude and a longer time course.

HET iPS cell-derived neurons have altered passive and active membrane properties

The alterations in action potential properties suggest that disturbances in the functional expression of voltage-gated Na+ and/or K+ channels may be present in HET iPS cell-derived neurons, as these types of ion channels make the primary contribution to action potential generation. We therefore directly investigated Na+ and K+ currents evoked in WT and HET iPS cell-derived neurons by a series of depolarizing voltage steps from −60 mV to +60 mV (in 10-mV increments). Both inward and outward currents evoked by these voltage steps were greatly decreased in HET iPS cell-derived neurons compared with WT (Figure 4e, Supplementary Figure 10). These results indicate that MeCP2 deficiency in neurons generated from HET iPS cells leads to reductions in Na+ and K+ currents. In addition to examining active membrane responses evoked by injecting current, the passive membrane properties of neurons derived from the different WT and HET iPS cells were also assessed. On average, resting membrane potentials of HET iPS cell-derived neurons (n = 67) were depolarized with respect to those of neurons from WT (n = 58) (Figure 4f). The only outliers in this and other electrophysiology criteria are the WT #1 (n = 3) and HET #1 cells (n = 13), in which the fewest cells were analyzed. This is in contrast to the consistent results obtained from the other three sublines of each genotype, where recordings were made from larger cell numbers. In addition, input resistance was significantly higher in HET compared with WT neurons (Figure 4g). Collectively, these data indicate that many passive and active membrane properties are altered in HET as compared with WT iPS cell-derived neurons. In contrast to the iPS-derived neurons, neither average resting membrane potential nor input resistance in non-neuronal HET cells was different from those in WT cells (Supplementary Figures 11a and b). Thus, the differences in electrophysiological characteristics between HET and WT cells are specific to neurons and not a common feature of all iPS-derived cells.

Spontaneous mEPSC event frequency is decreased in HET iPS cell-derived neurons

Another cardinal property of neurons, in addition to generating Na+-dependent action potentials, is forming functional synaptic connections. To assess whether Mecp2-308 iPS cell-derived neurons form normal functional synaptic connections, we took advantage of the fact that synapses show action potential-independent release of quanta of neurotransmitter. 27 As the differentiation protocol was biased towards producing glutamatergic neurons, we looked for and analyzed spontaneously occurring miniature excitatory postsynaptic currents (mEPSCs) with voltage-clamp recordings done in the presence of TTX and blockers of GABA-A and glycine receptors.
When the membrane potential was held at −60 mV, spontaneously occurring inward currents were observed that exhibited two components (Supplementary Figures 12a and b): a fast component sensitive to the AMPA receptor blocker 6-cyano-7-nitroquinoxaline-2,3-dione, and a slow component sensitive to the NMDA receptor blocker DL-2-amino-5-phosphonovaleric acid. Thus, both WT and HET neurons show glutamatergic synaptic currents carried by both AMPA and NMDA receptors. In comparing the properties of mEPSCs in WT and HET neurons, we found that the frequency of mEPSC events was significantly higher in WT iPS cell-derived neurons than in HET (Supplementary Figure 12d). In contrast, there were no differences detected in mEPSC amplitude (Supplementary Figure 12c). Together, the decrease in mEPSC frequency in the absence of a change in mEPSC amplitude suggests that there is a synaptic dysfunction (decreased synapse number or release probability) at glutamatergic synapses in HET iPS cell-derived neurons.

Neurophysiology phenotypes are reproducible among multiple mutant iPS cell lines and due to Mecp2 deficiency

To determine whether the cellular phenotypes observed in HET iPS cell-derived neurons were cell line-specific or due to particular transgene integration sites, we generated and analyzed an additional seven hemizygous male mutant and five WT male iPS cell lines (Supplementary Figure 13). In this way, we assessed not only differences across lines, but also the effect of reprogramming neurons in the absence of a mosaic culture environment. Following reprogramming, Mecp2-308/y and WT Mecp2+/y lines (308 # and WT #, respectively, in the figures) were karyotypically normal and maintained the genotype of their parental somatic cells (Supplementary Figures 14a and b). These functionally pluripotent lines (Supplementary Figures 15, 16 and 17) were directed to differentiate as described above into MAP2-positive neurons. Immunocytochemical staining confirmed the presence of full-length MeCP2 in Mecp2+/y cells and its absence in Mecp2-308/y cells (Figure 5a). In electrophysiological recordings, hemizygous male neurons had significant changes compared with WT male neurons in: input resistance, action potential amplitude, rise time, voltage-activated Na+ currents and the number of action potentials generated by injecting depolarizing current steps (Figure 5 and Supplementary Figure 18). While other parameters were not significantly different in Mecp2+/y versus Mecp2-308/y neurons (Supplementary Figure 19), the majority of the electrophysiological aberrations observed in HET iPS cell-derived neurons were also seen in Mecp2-308/y cells. Taken together, our results from multiple cell lines are consistent with the idea that Mecp2 deficiency results in less excitable neurons.

Discussion

In this study, we assessed whether iPS cell technology could be extended into mouse models focusing on delineating the pathophysiology of RTT. To do this, we turned to the well-characterized monogenic Mecp2-308 mouse model, where underlying neuronal phenotypes have been described and many behavioural impairments reminiscent of the human disorder are recapitulated. After extensive characterization of their pluripotency, the HET lines were differentiated into functional glutamatergic neurons that displayed several electrophysiological differences compared with neuronally differentiated WT iPS cells.
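Of the intrinsic properties compared in the Results, input resistance is the most mechanical to extract: it is the slope of the steady-state voltage deflection versus injected current. A minimal sketch with made-up numbers, not data from this study:

```python
import numpy as np

def input_resistance_MOhm(i_pA, v_mV):
    """Fit V = R*I + V_rest; a slope in mV/pA equals GOhm,
    so multiply by 1e3 to report MOhm."""
    slope, _ = np.polyfit(i_pA, v_mV, 1)
    return slope * 1e3

i = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])      # current steps (pA)
v = np.array([-72.0, -68.5, -65.0, -61.4, -58.1])  # steady-state V (mV)
print(f"R_in = {input_resistance_MOhm(i, v):.0f} MOhm")  # ~348 MOhm
```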
The accumulation of electrophysiology from cumulatively larger neuron numbers allowed a more thorough investigation of neuronal dysfunction compared with previous studies of disease phenotyping using differentiated iPS cells. 17-21,28-35 Recording from almost 400 cells in total, we found common phenotypes in the HET-derived and Mecp2-308/y-derived neurons: higher input resistance, decreased action potential amplitude, prolonged rise time, diminished voltage-activated Na+ currents and a reduction in the number of action potentials generated by injecting depolarizing current steps. Together, these differences give a picture of neurons lacking MeCP2 as deficient in intrinsic excitability and, from the lowered mEPSC frequency in the HET neurons, deficient in excitatory synaptic transmission. Deficiencies in intrinsic excitability and excitatory synaptic transmission have been reported in MeCP2-lacking neurons in vivo and in vitro, although there is variability in the abnormalities in different brain regions. 5,36 As all our findings in neurons derived from iPS cells recapitulate changes observed in some neurons in or directly from the brain, the iPS cell system is a representative model of the neuronal cellular and synaptic defects in RTT. Besides being a system that can be used to study neuron function, iPS cells allow the possibility of studying disease processes in a setting more amenable to genetic rescue or pharmacological intervention compared with intact mice or primary neurons. Further, as RTT is postulated to be a consequence of a failure of neurons to mature properly, the iPS cell system is ideal for future in-depth analyses of the developmental time course of mutant neural progenitor differentiation and neuron maturation, experiments that are more technically challenging in whole mice. The common responsiveness of the HET cultures is somewhat surprising, as these cells contain a mixture of neurons expressing either WT or truncated MeCP2 due to random X-chromosome inactivation. Despite this, we recorded electrophysiological characteristics across differentiated iPS cell lines that were seemingly homogeneous, suggesting strongly that the lack of MeCP2 exerts both cell autonomous and non-cell autonomous influences in these mixed cultures. This is not unprecedented, as previous studies using human iPS cell-derived MeCP2-deficient neurons 17-19,21 found a similar outcome, and recent reports have also now demonstrated that Mecp2-deficient glia provide non-cell autonomous influences on neuron properties. 37-40 It is possible that Mecp2-mutant cells in the culture may contribute a non-cell autonomous effect on WT cells and on overall neuronal network function. Ballas et al. 37 also demonstrated that conditioned media from WT astrocytes rescued some aspects of aberrant neurophysiology in mutant neurons, and in accordance, we observed modest improvements in electrophysiological phenotypes when WT-conditioned media was transferred onto 308 cultures (Supplementary Figure 20). Alternatively, the deficiencies in excitatory synaptic function may also reflect a deficit in overall connectivity between mosaic HET neurons. Collectively, these influences may explain why the HET neurons do not appear to behave in a bimodal manner based on whether they express WT or truncated MeCP2.
To avoid non-cell autonomous effects or connectivity differences from HET neurons, hemizygous male iPS cell lines were generated, neuronally differentiated and assayed using the same electrophysiological paradigms. The results show strong consistency with the HET sublines, and thus strongly support our conclusion that the phenotypes we observe reflect RTT pathogenesis, and not spurious cell line-specific outcomes. Previous reports illustrate that Mecp2 deficiency in the brain results in subtle alterations in synaptic responsiveness, at least some of which we now show can be recapitulated in neurons generated from mouse iPS cells. 5,36 However, our work further shows that MeCP2 is required for iPS cell-derived neurons to develop proper intrinsic excitability, as seen in the decreased numbers of action potentials generated in response to directly stimulating the neurons by injecting current. This decrease in excitability indicates that MeCP2-deficient neurons are impaired in encoding information into action potentials and in propagating that information to other neurons. The reduced intrinsic excitability may be explained by our observation of decreased Na+ currents in the mutant neurons. The decrease in Na+ currents may be due to a reduced number or function of Na+ channels at the cell surface. Interestingly, neurons differentiated from human RTT-iPS cells have lower expression of messenger RNA for the Na+ channels SCN1A and SCN1B. 19 Although the mechanism through which the changes in Na+ current manifest remains to be established, our results show clearly that the presence of functional MeCP2 is required for the normal ontogeny of neuron development and signalling. Further, we identify a current deficit that can be targeted in future drug screens using automated cell-based electrophysiology systems for studies of ion channel function. These observed neurophysiological alterations may contribute to a better understanding of the pathogenesis of a variety of additional MECP2-associated neuropsychiatric disorders. In summary, we provide the first account of the neuronal differentiation of iPS cells generated from a mouse model of RTT. To our knowledge, this is the first instance in which such neurons have been generated from any mouse model of a neurological disorder, and thus our results provide a strong basis for the use of neuronally differentiated iPS cells derived from defined mouse models in future studies that aim to elucidate pathophysiological mechanisms.
Detection of Chlamydial Antibodies in Animal Sera by Double Diffusion in Gel

Postinoculation sera collected from pigeons, turkeys, guinea pigs, sheep, a calf, a rabbit, and a horse experimentally infected with various strains of Chlamydia psittaci yielded a high incidence of positive reactions when tested by double diffusion in gel. Antigen was a deoxycholate extract of the SA-2 strain of C. trachomatis. Good correlation was obtained with results of complement fixation tests, whereas double diffusion in gel was less sensitive. Immunoelectrophoresis of the antigen revealed the presence of two antigens in the extract.

In comparison to other serological methods such as complement fixation (CF) and immunofluorescence, double diffusion in gel (DDG) has not been widely used for the detection of chlamydial antibodies in sera. Recently, deoxycholate extracts of yolk sac-propagated chlamydiae (SA-2 strain of Chlamydia trachomatis) containing group-specific antigen have been shown to produce reaction lines when diffused in agar gel against sera from persons with various chlamydial infections (3). Furthermore, concentrated hemagglutinin prepared from the 6BC strain of C. psittaci has also been shown to form reaction lines when diffused against human antichlamydial sera (6). Since chlamydiae infect numerous species of birds and mammals, it was of interest to explore the application of the same immunodiffusion method to the detection of chlamydial antibodies in the sera of a variety of animals. Conveniently available from previous experimentation (5; L. A. Page, unpublished data) were sera and serum fractions, prepared by sucrose gradient centrifugation, obtained from seven species of experimentally infected birds and mammals. The reactions observed by diffusing these samples in gel against extracts of the SA-2 strain are the subject of this report.

MATERIALS AND METHODS

Organisms. The SA-2 strain of C. trachomatis, TRIC/2/HAR-2/OT, obtained from S. D. Bell, Jr., Department of Microbiology, Harvard School of Public Health, Boston, was used to prepare antigen. The organisms were propagated in the BGM line of African green monkey kidney cells (2).

Antigen preparation. Heavily infected BGM cells in Blake bottles were scraped into the cell culture medium (200 ml) and centrifuged at 27,000 x g for 30 min in a Sorvall RC2-B centrifuge. The sediment was suspended to 1/10 of the original volume in phosphate-buffered saline (PBS), pH 7.2, and disrupted for 25 sec with a Branson model W-185C sonifier at a setting of 75 watts. The material was subjected to one −20 C freeze-thaw cycle. Fifteen milliliters of 0.5% trypsin (Nutritional Biochemicals Corp., Cleveland, Ohio, 1-300) was added to the suspension, which was incubated at 37 C, chilled, and centrifuged at 27,000 x g for 30 min. The pellet was resuspended in PBS and washed three times by centrifugation and resuspension. The final pellet was suspended in PBS to 1/100 of the original volume and extracted with sodium deoxycholate as previously described (3).
Control antigen from uninfected cell cultures was prepared in a similar manner.

Antisera. Sera from clotted bloods of seven species of birds and mammals bled before and after experimental infection with various strains of C. psittaci were used (Table 1). These sera had been prepared by one of us (Page) and were previously utilized for studies of the biophysical characteristics of chlamydial antibodies (5; L. A. Page, unpublished data). Antisera to the SA-2 strain of C. trachomatis (rabbit 258), to the 6BC (rabbit 226) and meningopneumonitis, Francis (rabbit 269), strains of C. psittaci, and to uninfected progenitor cultures of BGM (rabbit 2033) were produced in the laboratory of the senior author. Normal animal sera were obtained from collections of laboratories of the Department of Microbiology, State University of New York at Buffalo.

Method for DDG. A method described previously (3) for diffusing serum and antigen in gel was used. Briefly, 0.8% Ionagar no. 2 (Consolidated Laboratories, Inc., Glenwood, Ill.) dissolved in 0.15 M saline was pipetted onto microscope slides. Serum wells of 4.0-mm diameter and antigen wells of 5.0-mm diameter were cut into the hardened agar. The distance between the edges of antigen and antibody wells was 2.0 mm. Sera were prediffused for 30 min before addition of antigen to wells, and slides were incubated at 4 C in a humidified chamber. Lines of reaction were usually visible after 16 to 24 hr and were recorded photographically after 3 to 4 days. Lines were graded visually according to their intensity and recorded as strong (S), moderate (M), or weak (W).

Serum inactivation. Unless otherwise indicated, all sera were heated to 56 C for 30 min prior to diffusion tests, because previous tests of unheated sera from apparently normal chickens and sheep had occasionally produced nonspecific precipitin lines that did not appear if the sera were inactivated. Other preliminary tests employing heated and unheated chlamydial antisera demonstrated that 56 C inactivation did not affect formation of lines caused by specific antigen-antibody reactions.

Antigen and antibody titrations. Antigen and antibodies were serially diluted in PBS and diffused against each other in a "box-type" titration. The end point in each case was the highest dilution giving a line of reaction.

Immunoelectrophoresis. Immunoelectrophoresis of sera was performed using an LKB model 6800A instrument according to the micromethod of Scheidegger (7), with the modification that wells were 5.0 mm in diameter.

CF method. Both the direct and indirect procedures employed for this study have been published in detail elsewhere (4). Unless otherwise indicated, the results presented were obtained by the direct method.

RESULTS

DDG reactions with whole sera. Typical reaction lines appearing between wells containing the chlamydial antigen and adjacent wells containing sera from animals infected with various strains of C. psittaci are illustrated in Fig. 1. The lines fused among themselves and also with the lines formed by rabbit antisera to C. psittaci and C. trachomatis strains. No lines appeared when the antigen was diffused in gel against antiserum to uninfected cell cultures. Also, no lines appeared when extract of BGM cells was diffused against chlamydial antisera. Results of DDG tests with pre- and postinoculation sera of Chlamydia-infected pigeons, turkeys, guinea pigs, sheep, a calf, a rabbit, and a horse are shown in Table 1. Of 10 preinoculation sera, none formed lines of reaction.
Lines of varying intensity were observed for 18 of 22 postinoculation sera tested. Three of the nonreacting postinoculation sera were from turkeys inoculated with strains of bovine origin that are known not to multiply in turkeys. Negative results were also obtained by DDG with a serum from a turkey injected with a large dose. Positive DDG tests were obtained with 5 of the 21 sera representing consecutive bleedings of a calf (5309) inoculated intravenously over a period of 3 months. The five positive sera represented bleedings between the 45th and 72nd day of the experiment (Table 2).

Antigen and antibody titrations. Reactions of serial twofold dilutions of antigen against varying dilutions of each of four antisera are summarized in Table 3. Precipitin lines were not observed when the antigen was diluted more than 1:2 and reacted against calf, sheep, and turkey sera, but a 1:8 dilution of antigen still produced a line against undiluted hyperimmune rabbit serum. Serum from sheep 205, which had a CF titer of 192, reacted weakly in dilutions up to 1:16 with antigen diluted 1:2.

Comparison of DDG reactions with CF titers. With few exceptions, positive DDG reactions were obtained with postinfection sera that had chlamydial CF titers ranging from 8 to >256 (Table 1). Most of the postinoculation sera tested had CF titers of >32. There appeared to be no correlation between the intensity of the DDG reaction and the CF titer. When 21 sera representing consecutive bleedings of a calf injected with small doses of live epizootic bovine abortion chlamydiae were tested, only sera having a CF titer of >32 were DDG positive (Table 2). Other discrepancies between CF and DDG positivity appeared in tests of fractions of chlamydial antisera separated by sucrose gradient centrifugation (see below).

DDG reactions of serum fractions. Results of DDG tests of chlamydial antisera produced in two turkeys, a pigeon, calf, rabbit, and sheep, fractionated by sucrose gradient centrifugation during previous experimentation (5), are compared with CF results in Table 4. Fractions three through five represented immunoglobulin G antibodies, and fractions six through 10 represented immunoglobulin M antibodies. Positive DDG reactions corresponded with CF titers of >8, with the exception of fractions three through five of turkey 756 serum. These fractions were CF negative, but positive DDG reactions were obtained. A similar discrepancy occurred in fraction three of sheep 205 serum. Multiple lines were formed by several turkey antisera, as described above, and sucrose gradient fractions of turkey 756 antiserum formed multiple precipitin lines. Multiple lines were also obtained with fractions four and five from rabbit 2 but not with whole serum.

Characterization of deoxycholate extract. Immunoelectrophoresis of the deoxycholate extract of strain SA-2 indicated that two antigens were present (Fig. 2). These antigens had similar electrophoretic mobilities. Boiling the extract for 30 min as described previously (1) did not affect the formation of the two lines observed by immunoelectrophoresis. However, treatment of the extract with periodate using the procedure of Barron and Collins (1) destroyed the activity of both antigens.
DISCUSSION

DDG, as used in this study, appears to detect chlamydial antibodies satisfactorily in the sera of a variety of animal species. There was good agreement between the development of gross lesions in infected animals and the results by DDG. As expected, an antibody response was not detected in animals inoculated with a chlamydial agent which was not infectious for that species. All of the animals tested were infected experimentally in the laboratory, and the results by DDG encourage extension of the use of this procedure into field studies in cases of natural infections. The method did not seem to be as sensitive as the CF test, and when DDG was positive the CF titer was at least 8. The method of DDG inherently does not have remarkable sensitivity for antibody detection, but the procedure is rather simple, and this moderate insensitivity may actually be of some advantage in field work to eliminate minor reactors. The deoxycholate extract antigen detected antibodies to the chlamydial group antigen(s), because it was prepared from the SA-2 strain of C. trachomatis and the sera tested were collected from animals infected with C. psittaci strains. The results obtained by immunoelectrophoresis suggested that the SA-2 extract contained more than one antigen. At the present time, the number of group antigens in chlamydiae has not been fully established. Recently, Kuo et al. (3a) reported three group-specific antigens, two of which were demonstrable in deoxycholate extracts and one in cell wall preparations. The majority of sera produced one line of reaction when tested by DDG. However, multiple lines were observed with some turkey sera and sucrose fractions collected from turkey sera and a rabbit serum. The fact that the antigen was prepared as a deoxycholate extract might account for difficulties in resolution of lines. A possible explanation for positive DDG results in the absence of CF activity in sucrose fractions of turkey serum containing slow-sedimenting antibodies is that the CF antigen detected antibodies to only one of the group antigens present in chlamydiae.
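The endpoint convention used in the titrations above (the titer is the highest dilution still giving a line of reaction) reduces to a one-line rule. A trivial sketch with made-up gel readings, included only to make the convention explicit:

```python
def endpoint_titer(dilutions, line_seen):
    """Highest reciprocal dilution with a visible precipitin line,
    or None if no line forms at any dilution."""
    positives = [d for d, seen in zip(dilutions, line_seen) if seen]
    return max(positives) if positives else None

dilutions = [2, 4, 8, 16, 32]                  # twofold series
line_seen = [True, True, True, True, False]    # hypothetical reading
print("endpoint titer: 1:%s" % endpoint_titer(dilutions, line_seen))
```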
Quantitative Analysis of a Transient Dynamics of a Gene Regulatory Network

In a stochastic process, noise often modifies the picture offered by the mean field dynamics. In particular, when there is an absorbing state, the noise erases a stable fixed point of the mean field equation from the stationary distribution, and turns it into a transient peak. We make a quantitative analysis of this effect for a simple genetic regulatory network with positive feedback, where the proteins become extinct in the presence of stochastic noise, contrary to the prediction of the deterministic rate equation that the protein number converges to a non-zero value. We show that the transient peak appears near the stable fixed point of the rate equation, and the extinction time diverges exponentially as the stochastic noise approaches zero. We also show how the baseline production from the inactive gene ameliorates the effect of the stochastic noise, and interpret the opposite effects of the noise and the baseline production in terms of the position shift of the unstable fixed point. The order of magnitude estimates using biological parameters suggest that for a real gene regulatory network, the stochastic noise is sufficiently small so that not only is the extinction time much larger than biologically relevant time-scales, but also the effect of the baseline production dominates over that of the stochastic noise, leading to the protection from the catastrophic rare event of protein extinction.

I. INTRODUCTION

The probability of a rare event in stochastic reaction processes has been a subject of much interest and extensive studies. Such an event can drastically modify the picture provided by the mean field dynamics, if that event brings the process to an absorbing state. A representative example is the extinction of a population or a disease. Under the assumption of an isolated population, the state of the vanishing population or disease is an absorbing state. In this case, even when the mean-field dynamics predicts that the whole or infected population reaches a non-vanishing stationary value, called the stable fixed point, a finite probability flux into the absorbing state leads to an eventual extinction of the population or disease. The stable fixed point is converted into a transient state by the stochastic noise in this case, implying that the mean-field description is valid during a finite time, and eventually breaks down. Obviously, the extinction event can be prevented by removing the absorbing state. This can be done by introducing an influx of population or disease, so that the state of the vanishing population or disease is not an absorbing state any more. The rate of such an influx determines the relative dominance of the mean-field stable fixed point versus the state of vanishing population or disease in the stationary distribution [24]. A gene regulatory network with positive feedback [28] shares the same qualitative features as the models of population dynamics or epidemics discussed above, in that the protein activates its own production by binding to the DNA: when there is no protein production from the inactive gene, the state of the vanishing number of proteins becomes an absorbing state. A small amount of protein production from the inactive gene, called the baseline production, plays the role of population (disease) influx in the case of population dynamics (epidemics), in that it removes the absorbing state.
Therefore, it is clear that a similar quantitative analysis can be conducted on the gene regulatory network as in the case of population dynamics or epidemics. Although the role of stochastic noise in gene regulation has been a focus of much interest recently, it is difficult to find a quantitative analysis of how the stochastic noise turns a stable fixed point into a transient state, and how the baseline production rescues the proteins from being extinct, in a gene regulatory network. In this work, we will consider the simplest form of genetic regulatory network with positive feedback and obtain the time-dependent distribution as a numerical solution of the chemical master equation. We also obtain an analytic solution under the assumption of appropriate time-scale separations. We indeed see that the stable fixed point of the deterministic mean-field dynamics turns into a transient peak of the probability distribution, and gets erased from the stationary distribution in the absence of the baseline production. We then compute the time-scale for the leakage of the probability to the absorbing state, and find that the leakage time increases as the stochastic noise decreases, making the deterministic equation valid for a longer time duration. We then analyze how the baseline production from the inactive gene ameliorates the effect of the stochastic noise by removing the absorbing state. We show that the opposite effects of the stochastic noise and the baseline production can be explained in terms of the position shift of the unstable fixed point. The order of magnitude estimates using biological parameters suggest that for a real gene regulatory network, the stochastic noise is sufficiently small so that not only is the leakage time much larger than biologically relevant time-scales, but also the effect of the baseline production dominates over that of the stochastic noise, leading to the protection from the catastrophic rare event of protein extinction.

II. THE MODEL

The model we consider is the simplest genetic regulatory network with a positive feedback loop. We consider a protein X that binds to the DNA to activate its own production:

$$D + X \;\xrightarrow{k_0}\; D^*, \qquad D^* \;\xrightarrow{k_1}\; D + X, \qquad D^* \;\xrightarrow{a}\; D^* + X, \qquad D \;\xrightarrow{\epsilon a}\; D + X, \qquad X \;\xrightarrow{b}\; \varnothing, \qquad D^* \;\xrightarrow{\rho b}\; D, \quad (1)$$

where D* and D denote the DNA with protein bound and unbound, respectively. Although X is produced from D* or D via transcription and translation, we assume that these can be approximated as a one-step process. We assume that the degradation rate of the bound protein is not greater than that of the free protein, so that 0 ≤ ρ ≤ 1. Although most of the results presented are for ρ = 0, the value of ρ does not affect the qualitative features of the results. The nonnegative number ε parametrizes the rate of transcription from the inactive gene, called the baseline production [33,42,45,69-71]. Because we are considering the case of a positive feedback, the value of ε is restricted to be 0 ≤ ε < 1.

III. THE DETERMINISTIC RATE EQUATION

We assume that the time-scale of equilibration between D and D* is much shorter than other relevant time-scales, so that they can be assumed to be equilibrated instantly. We also assume that the number of X molecules is large enough so that its fluctuation can be neglected. Then the probability that the DNA is bound to a protein molecule is given by

$$p(D^*) = \frac{x}{x + \tilde{r}} \quad (2)$$

at any instant of time, where r̃ ≡ k₁/k₀ in concentration units, with k₀ and k₁ being the binding and unbinding rates between the protein X and the DNA, and x is the concentration of the protein X.
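Equation (2) is just the stationary occupancy of the fast two-state binding kinetics. Spelling out the step, a reconstruction from the rates defined above:

```latex
% Fast equilibrium between D and D*: binding at rate k_0 x,
% unbinding at rate k_1, with p(D) + p(D*) = 1.
k_0\,x\,p(D) = k_1\,p(D^*)
\;\;\Longrightarrow\;\;
p(D^*) = \frac{k_0\,x}{k_0\,x + k_1} = \frac{x}{x + \tilde{r}},
\qquad \tilde{r} \equiv \frac{k_1}{k_0}.
```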
The rates for the production and the degradation of X are proportional to p(D*) + εp(D) and x, respectively, leading to the deterministic rate equation

$$\dot{x} = \tilde{a}\,\frac{x + \epsilon \tilde{r}}{x + \tilde{r}} - b\,x, \quad (3)$$

describing the mean-field dynamics of x, where its fluctuation is neglected. The effect of the degradation of the bound protein is negligible in this limit, as shown in Appendix E, and therefore ρ does not appear in Eq. (3). Although physically x ≥ 0, we first obtain the fixed points of Eq. (3) and examine their stability without such restriction, for the convenience of analysis. Fixed points of Eq. (3) are obtained by setting ẋ to zero, which is equivalent to solving the equation

$$b\,x^2 + (b\tilde{r} - \tilde{a})\,x - \epsilon \tilde{a}\tilde{r} = 0. \quad (4)$$

The roots of Eq. (4) are

$$x_\pm = \frac{1}{2}\left[\left(\frac{\tilde{a}}{b} - \tilde{r}\right) \pm \sqrt{\left(\frac{\tilde{a}}{b} - \tilde{r}\right)^2 + \frac{4\epsilon\tilde{a}\tilde{r}}{b}}\,\right], \quad (5)$$

whose stability can be analyzed by expanding Eq. (3) up to linear order in δx ≡ x − x±:

$$\dot{\delta x} = \left[\frac{\tilde{a}\tilde{r}\,(1-\epsilon)}{(x_\pm + \tilde{r})^2} - b\right]\delta x. \quad (6)$$

Because

$$(x_+ + \tilde{r})(x_- + \tilde{r}) = \frac{\tilde{a}\tilde{r}\,(1-\epsilon)}{b}, \qquad x_+ + \tilde{r} \;\ge\; x_- + \tilde{r} \;\ge\; 0, \quad (7)$$

from which we get

$$\tilde{a}\tilde{r}\,(1-\epsilon) \;\le\; b\,(x_+ + \tilde{r})^2, \qquad \tilde{a}\tilde{r}\,(1-\epsilon) \;\ge\; b\,(x_- + \tilde{r})^2, \quad (8)$$

where the relations are satisfied as equalities only when ε = 0 and ã/b = r̃, so that x₊ = x₋. Therefore, the coefficient of δx in Eq. (6) is non-positive at x₊ and non-negative at x₋. Consequently, we see that x₊ and x₋ are stable and unstable fixed points, respectively, for the former case. For the latter case, the stability of x± = 0 is analyzed by expanding Eq. (3) up to second order in δx, where we find that

$$\dot{\delta x} = -\frac{\tilde{a}}{\tilde{r}^2}\,\delta x^2.$$

Therefore, x = 0 is a half-stable fixed point because δ̇x/δx < 0 for δx > 0 and δ̇x/δx > 0 for δx < 0. Now we restrict ourselves to the physical region of x ≥ 0. When ε > 0, x₊ > 0 and x₋ < 0, and therefore only x₊ lies in the physical region. Therefore x₊ > 0 is not only the unique stable fixed point, but it is also the unique fixed point. For the case of ε = 0, we have x₊ > 0 and x₋ = 0 if ã/b > r̃, and x₊ = 0 and x₋ < 0 if ã/b < r̃. Therefore, x₊ > 0 and x₋ = 0 are the stable and unstable fixed points if ã/b > r̃, whereas x₋ = 0 is the unique stable fixed point that is also the unique fixed point if ã/b < r̃. Finally, the unique fixed point at x = 0 for ε = 0 and ã/b = r̃ is also a stable fixed point, because now we allow only δx with positive sign. The results are summarized as follows:

i) ε > 0: x₊ > 0 is the unique fixed point, and it is stable.

ii) ε = 0 and ã/b > r̃: There are two fixed points. x₊ = ã/b − r̃ > 0 is the stable fixed point and x₋ = 0 is the unstable fixed point.

iii) ε = 0 and ã/b ≤ r̃: x₊ = 0 is the unique fixed point, and it is stable.

Note that the case for ε > 0 can be understood in terms of a position shift of a fixed point, starting from the ε = 0 cases. Starting from the case (ii), turning on a non-zero ε shifts the position of the unstable fixed point x₋ = 0 to an unphysical value of x₋ < 0, leaving only the non-zero stable fixed point x₊ in the physical region, leading to the case (i). If we start from the case (iii), the position of the unique stable fixed point x₊ = 0 is shifted to x₊ > 0 by turning on a non-zero ε, again leading to the case (i). The position shift of the fixed point at x = 0 by the baseline production and the stochastic noise will be discussed again later (Section VII), where we will show that their effects are opposite to each other. The case of ε = 0 and ã/b > r̃ is of much interest, because the features of the stationary distribution obtained from the stochastic equation are quite the opposite of the picture offered by the deterministic rate equation. Because x₊ is the unique stable fixed point of the deterministic rate equation, x → x₊ in the limit of t → ∞, even if the initial value of x was close to x = 0. This seems to suggest that in the context of the stochastic dynamics, the stationary distribution should have a peak near x₊.
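The classification above is easy to verify numerically. A small sketch follows; the parameter values (a = 10, b = 1, r̃ = 1) are illustrative choices, and the finite-difference stability test is a generic check rather than anything from the paper:

```python
import numpy as np

def xdot(x, a=10.0, b=1.0, r=1.0, eps=0.0):
    """Right-hand side of Eq. (3): dx/dt = a*(x+eps*r)/(x+r) - b*x."""
    return a * (x + eps * r) / (x + r) - b * x

def fixed_points(a=10.0, b=1.0, r=1.0, eps=0.0):
    """Roots of b*x^2 + (b*r - a)*x - eps*a*r = 0, i.e. Eq. (4)."""
    return np.sort(np.roots([b, b * r - a, -eps * a * r]).real)

for eps in (0.0, 0.01):
    for xs in fixed_points(eps=eps):
        h = 1e-6
        slope = (xdot(xs + h, eps=eps) - xdot(xs - h, eps=eps)) / (2 * h)
        kind = "stable" if slope < 0 else "unstable"
        phys = "physical" if xs >= 0 else "unphysical"
        print(f"eps={eps:5.2f}  x*={xs:+8.4f}  {kind:8s} ({phys})")
```

Running this reproduces the position-shift picture: for ε = 0 the unstable fixed point sits at x = 0, while for ε = 0.01 it moves to a small negative (unphysical) value, leaving only the stable fixed point in the physical region.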
However, as will be shown next, the stationary distribution is concentrated at x = 0, which was predicted to be an unstable fixed point, and has vanishing probability at the stable fixed point. We will see that introducing a nonzero value of ε ameliorates this effect, but as long as its value is sufficiently small, the probability distribution is still dominated by x = 0.

IV. THE CHEMICAL MASTER EQUATION

The stochastic dynamics of the model is described by the chemical master equation, Eq. (9), for P(m, n, t), the joint probability that at time t there are m free protein molecules and the DNA is unbound (n = 0) or bound (n = 1), where it is to be understood that P(m, n, t) ≡ 0 whenever m < 0. Let us call the states with n = 0 and n = 1 the free and the bound mode, the sets of states with free and protein-bound DNA, respectively. The Markov chain corresponding to Eq. (9) is shown in Figure 1, where we immediately see that (m, n) = (0, 0) is an absorbing state for ε = 0, because once the system enters this state due to a stochastic fluctuation, there is no way that it can escape to another state, since no protein molecule can be produced from a free DNA. In other words,

$$P^{\rm st}(m, n) = \delta_{m,0}\,\delta_{n,0} \quad (10)$$

is not only a stationary solution of Eq. (9) with ε = 0, but P(m, n, t) also converges to P^st(m, n) regardless of the initial condition [72]. This is in stark contrast to the picture given by the deterministic rate equation Eq. (3), where the system is predicted to move away from m = 0 and converge to a state with a non-vanishing number of protein molecules. The situation is different from that of a mutual repressor model, where the number of peaks of the stationary distribution differs from that of the stable fixed points only when the stochastic noise is sufficiently large [66]. In the current model, the results of the deterministic and the stochastic descriptions differ qualitatively no matter how small the stochastic noise is.

Because it is difficult to obtain a general time-dependent solution of Eq. (9) in an analytic form, the equation was solved numerically using the finite-buffer discrete chemical master equation method [73,74], where the state space is truncated to a finite subspace. The state space was truncated so that m ≤ 30, which is a reasonable approximation because P(30, n, t) < 10^−6 at all times, for the initial conditions and the parameters used in the computation. The master equation was integrated using the EXPOKIT package [75]. For ease of comparison with the results from the next sections, the marginal probability distribution p_m(t) ≡ P(m, 0, t) + P(m − 1, 1, t) was obtained, which is the probability that the total number of proteins, both bound and unbound, is m at time t. The marginal distributions p_m(t) at t = 0.5b^−1, 1.0b^−1, 2.0b^−1, and 10.0b^−1 are shown in Figure 2.

The stationary distribution of the master equation Eq. (9) can be obtained analytically, under the assumption that the binding and unbinding of the protein molecule to the DNA are instantaneous. We first replace the parameters k_i by K ≡ k₀ and r ≡ k₁/k₀. Then, in the limit of K → ∞, we derive the master equation for the marginal probability p_m(t) (Appendix B):

$$\dot{p}_m(t) = \frac{a\,(m-1+r\epsilon)}{m-1+r}\,p_{m-1}(t) + \frac{b\,(m+1)(m+r+\rho)}{m+1+r}\,p_{m+1}(t) - \left[\frac{a\,(m+r\epsilon)}{m+r} + \frac{b\,m\,(m+r-1+\rho)}{m+r}\right]p_m(t), \quad (11)$$

where p₋₁(t) ≡ 0. The corresponding Markov chain is shown in Figure 3. First, we compute the stationary distribution. In general, obtaining an analytic form of the stationary solution is difficult, and one often resorts to additional approximations such as the WKB formalism [27]. However, the stationary solution of Eq. (11) can be computed exactly, by noting that a stationary distribution of a Markov chain without a cycle must obey a stronger condition called detailed balance [76-78], which for the Markov chain described by Eq. (11) is

$$\frac{a\,(m+r\epsilon)}{m+r}\,p^{\rm st}_m = \frac{b\,(m+1)(m+r+\rho)}{m+1+r}\,p^{\rm st}_{m+1}. \quad (12)$$

Eq. (12) can be solved to obtain the solution

$$p^{\rm st}_m = C\left(\frac{a}{b}\right)^m \frac{(m+r)\,\Gamma(m+r\epsilon)}{\Gamma(m+1)\,\Gamma(m+r+\rho)}, \quad (13)$$

where C is the normalization constant.
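The numerical strategy used above (finite-buffer truncation plus a matrix exponential) is simple to reproduce. A minimal sketch follows for the reduced one-variable chain of Eq. (11) rather than the full two-variable chain of Eq. (9), which works the same way with a larger state space; scipy's expm stands in for EXPOKIT, and the parameters and initial condition are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def rate_matrix(M, a=10.0, b=1.0, r=1.0, rho=0.0, eps=0.0):
    """Generator Q of the chain in Eq. (11), truncated at m <= M,
    in the convention dp/dt = Q @ p (columns sum to zero)."""
    Q = np.zeros((M + 1, M + 1))
    for m in range(M + 1):
        up = a * (m + r * eps) / (m + r)      # k_{m -> m+1}
        if m < M:
            Q[m + 1, m] += up
            Q[m, m] -= up
        if m > 0:
            down = b * m * (m + r - 1 + rho) / (m + r)  # k_{m -> m-1}
            Q[m - 1, m] += down
            Q[m, m] -= down
    return Q

M = 30                                  # finite buffer, as in the text
Q = rate_matrix(M)
p = np.zeros(M + 1); p[5] = 1.0         # arbitrary initial condition
for t in (0.5, 1.0, 2.0, 10.0):         # times in units of 1/b
    pt = expm(Q * t) @ p
    print(f"bt = {t:4.1f}  p_0 = {pt[0]:.4f}  peak at m = {1 + np.argmax(pt[1:])}")
```

Note that for ε = 0 the production rate vanishes at m = 0, so the first column of Q is zero: m = 0 is absorbing, exactly as in the discussion above.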
When ε = 0, the Γ(m + rε) term in the numerator diverges for m = 0, and therefore C = 0 and consequently p^st_m(ε = 0) = 0 for m > 0, recovering the obvious result

$$p^{\rm st}_m(\epsilon = 0) = \delta_{m,0}. \quad (14)$$

C. Analytic form of a time-dependent solution for ε = 0

Now consider a time-dependent solution of Eq. (11) for ε = 0. Denoting the transition rate from the state with protein number m to that with n as k_{m→n}, we see that k_{m→m+1} = am/(m + r) and k_{m→m−1} = bm(r + m − 1 + ρ)/(m + r). Therefore, k_{m→m−1}/k_{m→m+1} = b(m + r + ρ − 1)/a, and although there is a non-zero probability for transitions in both directions for m > 0, the most probable direction of transition is the positive direction for 0 < m < a/b + 1 − r − ρ, and the negative direction for m > a/b + 1 − r − ρ, consistent with the picture provided by the deterministic rate equation: the particle number converges to a non-zero stable fixed point. While there is a non-zero probability that the system makes a series of transitions in the negative direction to m = 0 and gets trapped there, the probability for such a rare event can be neglected at early times. Therefore, we assume an additional time-scale separation, namely that p₀(t) is essentially constant during the time-scale over which the states with m > 0 equilibrate among themselves. We have already seen that this assumption is reasonable, by numerically solving the original master equation Eq. (9), but it can also be checked from the analytical solution itself a posteriori, as will be discussed below. During the time-scale where the leakage to the m = 0 state is negligible, the dynamics of the states with m > 0 is described by the approximate equation

$$\dot{p}_m(t) = \frac{a\,(m-1)}{m-1+r}\,p_{m-1}(t) + \frac{b\,(m+1)(m+r+\rho)}{m+1+r}\,p_{m+1}(t) - \left[\frac{a\,m}{m+r} + \frac{b\,m\,(m+r-1+\rho)}{m+r}\,(1-\delta_{m,1})\right]p_m(t) \qquad (m \ge 1), \quad (15)$$

which is obtained from Eq. (11) with ε = 0 by blocking the transition from the state m = 1 to m = 0. Then the quasi-steady distribution for m > 0 can be defined as the stationary solution of the modified master equation (15). The detailed balance condition for Eq. (15) is again given by Eq. (12) with ε = 0, except that now the value of m is restricted to positive values, leading to the quasi-steady distribution

$$p^{\rm qs}_m(t) = \tilde{C}(t)\left(\frac{a}{b}\right)^m \frac{(m+r)\,\Gamma(m)}{\Gamma(m+1)\,\Gamma(m+r+\rho)} \qquad (m \ge 1). \quad (16)$$

The quasi-steady distribution p^qs_m has exactly the same form as the stationary distribution p^st_m in Eq. (13) for ε = 0, but because m is restricted to positive values, C̃ is not zero any more. Once we take the leakage to the m = 0 state into account, the overall normalization constant C̃ becomes a slowly decreasing function of time, fixed at each instant by the normalization condition

$$\sum_{m=1}^{\infty} p^{\rm qs}_m(t) = 1 - p_0(t) \quad (17)$$

for m > 0. The local maximum m* of the quasi-steady distribution is obtained from the condition

$$\frac{p^{\rm qs}_{m^*+1}}{p^{\rm qs}_{m^*}} = \frac{a\,m^*\,(m^*+1+r)}{b\,(m^*+1)(m^*+r)(m^*+r+\rho)} = 1, \quad (18)$$

where it is to be understood that the actual value of m* should be taken as the integer value close to the real value of m satisfying Eq. (18). In the regime where m* ≫ 1, Eq. (18) reduces to

$$a \simeq b\,(m^* + r + \rho), \quad (19)$$

from which we get

$$m^* \simeq \frac{a}{b} - r, \quad (20)$$

where ρ, which is at most of order unity, has been dropped. The deterministic rate equation is written in terms of the concentration x ≡ m/m̄, where m̄ is a large number of size O(m*), and by defining ã ≡ a/m̄ and r̃ ≡ r/m̄, Eq. (20) is rewritten as

$$x^* \simeq \frac{\tilde{a}}{b} - \tilde{r} = x_+. \quad (21)$$

Because m* ≃ a/b − r is the most probable number of protein molecules at early times, it is of the order of the average number of molecules. In this case, the magnitude of the fluctuation is expected to be of order O(√m*), and consequently the relative error is of order O((m*)^−1/2) [79,80]. Therefore, (m*)^−1/2 ≃ (a/b − r)^−1/2 can be considered as the parameter characterizing the size of the stochastic noise due to the protein number fluctuation, and Eq. (21) tells us that the peak of the quasi-steady distribution is concentrated at the stable fixed point of the deterministic rate equation if the stochastic noise is small.
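Since Eq. (16) follows from detailed balance, the quasi-steady distribution can be generated numerically from the transition-rate ratios alone, without special functions. A sketch with the same illustrative parameters as in the earlier blocks:

```python
import numpy as np

def quasi_steady(M, a=10.0, b=1.0, r=1.0, rho=0.0):
    """Quasi-steady weights for m = 1..M from the detailed-balance
    ratio p_{m+1}/p_m = k_{m->m+1} / k_{m+1->m}, with eps = 0."""
    w = np.zeros(M + 1)
    w[1] = 1.0
    for m in range(1, M):
        up = a * m / (m + r)
        down = b * (m + 1) * (m + r + rho) / (m + 1 + r)
        w[m + 1] = w[m] * up / down
    return w / w.sum()

q = quasi_steady(60)
print("m* =", int(np.argmax(q)), " vs  a/b - r =", 10.0 - 1.0)
```

The printed peak position lands within one unit of a/b − r, in line with Eq. (20).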
We note that for m* = a/b − r ≫ 1, typical values of the transition rates k_{m→n} between m > 0 and n > 0 are much larger than k_{1→0}. For m > 0 of order m*, we have k_{m→m+1} = am/(m + r) = O(a) and k_{m→m−1} = O(bm*), whereas k_{1→0} = b(r + ρ)/(1 + r) = O(b), so the internal equilibration of the m > 0 states is fast compared with the leakage.

We can also obtain the analytic form of p₀(t) that determines the overall normalization 1 − p₀(t) of the quasi-steady distribution for m > 0. From Eq. (11), we have

$$\dot{p}_0(t) = \frac{b\,(r+\rho)}{1+r}\,p_1(t) \quad (22)$$

for ε = 0. Since we are using the quasi-steady state approximation for p_m(t) with m > 0, we may substitute p^qs₁(t), given by Eqs. (16) and (17), into p₁(t) of Eq. (22) to get

$$\dot{p}_0(t) = \tau_q^{-1}\,[1 - p_0(t)], \quad (23)$$

the solution of which is

$$p_0(t) = 1 - [1 - p_0(0)]\,e^{-t/\tau_q}, \quad (24)$$

where

$$\tau_q \equiv \frac{1+r}{b\,(r+\rho)}\,\frac{1 - p_0(t)}{p^{\rm qs}_1(t)} = \frac{1}{a}\sum_{m=1}^{\infty}\left(\frac{a}{b}\right)^m \frac{(m+r)\,\Gamma(m)\,\Gamma(r+\rho)}{\Gamma(m+1)\,\Gamma(m+r+\rho)}, \quad (25)$$

which is independent of time. Eqs. (17), (24) and (25) completely specify the analytic form of the time-dependent distribution.

V. THE TIME-SCALE SEPARATION AND THE RATE OF LEAKAGE

In general, when we construct the rate matrix K of a Markov process from the transition rates k_{i→j}, with diagonal elements chosen so that the total probability is conserved, zero is an eigenvalue of K, and all the remaining non-zero eigenvalues are negative [72]. Let us denote the negative eigenvalues as 0 > λ₁ > λ₂ > ···, and call λ₁ the lowest eigenvalue. These eigenvalues parametrize the multi-exponential convergence of the probability distribution p_m(t) to the stationary one p^st_m:

$$p_m(t) = p^{\rm st}_m + \sum_{k \ge 1} c^{(k)}_m\,e^{\lambda_k t}, \quad (26)$$

where c^{(k)}_m is the m-th component of the eigenvector corresponding to the eigenvalue λ_k, whose normalization is determined by the initial condition. In general, the multi-exponential behavior in Eq. (26) can be approximated as a single exponential,

$$1 - p_0(t) \simeq C'\,e^{\lambda_1 t}, \quad (27)$$

for times beyond the decay of the faster modes. The graphs of 1 − p₀(t) are shown in Figure 4 with dashed lines for several initial conditions, where the vertical axis is in log scale. We indeed see that they form parallel straight lines for bt ≳ 10, where 1 − p₀(t) ≲ 0.8, confirming the single exponential form in Eq. (27). The graphs of 1 − p₀(t) for a/b = 10, r = 1, ρ = 0, and ε = 0 are also plotted in Figure 5 for several values of K/b, where the single-exponential form is found for 1 − p₀(t) ≲ 0.9 when K/b ≥ 1, again indicating the time-scale separation. These results do not depend on the initial probability distribution unless it is concentrated near m = 0, which is again due to the time-scale separation (Appendix C). We also see that the increase of K/b slows down the leakage to m = 0, which is also confirmed in the graph of the dimensionless mean leakage time bτ as a function of K/b in Figure 6, shown for both ρ = 0 and ρ = 0.2, with other parameters being the same as those in Figure 5. The faster leakage for a smaller value of K/b is due to the free mode, which flows straight down to the m = 0 state without wasting time by making frequent transitions to the bound mode, where the mean direction of flow is in the positive m direction (Figure 1). This is even more evident from the separate snapshots of the time-dependent probability distributions for the bound and free modes in Figure 7, where the behaviors for K/b = 1 and K/b = 100 are compared. We note in Figure 6 that an increase of ρ leads to a decrease of τ, as it should, but the fact that τ is an increasing function of K/b remains unchanged. This feature was observed up to ρ = 1 (data not shown). Note that the fluctuation between the bound and the unbound mode is also a stochastic noise. From the results above, we see that the effect of this fluctuation is qualitatively similar to that of the free protein number fluctuation, in that it enhances the leakage to the absorbing state.
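The decomposition in Eqs. (26) and (27) suggests a direct numerical check: diagonalize the truncated rate matrix, read off λ₁, and compare |λ₁|⁻¹ with the quasi-steady estimate τ_q. A sketch reusing the rate_matrix() and quasi_steady() helpers from the earlier blocks, again with illustrative parameters; the quasi-steady value should come out somewhat smaller, since the instantaneous-equilibration assumption underestimates the leakage time:

```python
import numpy as np

Q = rate_matrix(60)                      # eps = 0, a/b = 10, r = 1
lam = np.sort(np.linalg.eigvals(Q).real)[::-1]
tau = 1.0 / abs(lam[1])                  # lam[0] ~ 0; lam[1] = lambda_1

q = quasi_steady(60)
b, r, rho = 1.0, 1.0, 0.0
k_1_to_0 = b * (r + rho) / (1 + r)       # k_{1->0} from Eq. (11)
tau_q = 1.0 / (k_1_to_0 * q[1])          # cf. Eqs. (23)-(25)
print(f"b*tau   = {tau:.4g}  (from lambda_1)")
print(f"b*tau_q = {tau_q:.4g}  (quasi-steady estimate)")
```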
The behavior of p₀(t) becomes highly dependent on the initial conditions for K/b ≪ 1, as in the case of a small value of a/b − r, due to the decoupling of the free and the bound mode: if the initial distribution is concentrated in the free mode, it is most probable that the proteins quickly become extinct before there is a chance for a protein to bind to the DNA, whereas if the initial distribution is concentrated in the bound mode, it is most probable that the protein number stays non-zero for some time, the proteins becoming extinct much later. The graphs of 1 − p₀(t) are shown in Figure 8 for ρ = 0, ε = 0, a/b = 100, r = 0.4, and K/b = 0.005, for the initial conditions P(m, n, 0) = δ_{m,50}δ_{n,1} (gray line) and P(m, n, 0) = δ_{m,50}δ_{n,0} (black line), respectively. (For this computation the states were truncated at m = 1000; the probability at m = 1000 remained below 2 × 10^−64 at all times.) We see that not only does p₀(t) depend on the initial condition, but 1 − p₀(t) is not even a single exponential for the second initial condition, indicating that the time-scale separation does not hold any more. These features can be most easily understood by considering the extreme case of K/b = 0, where an analytic solution is available (Appendix D). It is obvious that a nonzero value of K/b acts only as a perturbation if it is sufficiently small, and therefore the qualitative features of the K/b = 0 case are maintained.

Note that the form of p₀(t) obtained under the quasi-steady approximation, Eq. (24), already has a single exponential form. This is because it is the solution of Eq. (23), which is an approximation obtained under the assumption that the m > 0 modes are equilibrated instantly. The assumption of instantaneous equilibration underestimates the leakage time, because it actually takes a finite time for a state with m > 1 to reach m = 1. However, it can be shown that the relative error of the approximation shrinks as a/b − r grows. Therefore, the approximation is better for a larger value of a/b − r. The graph of 1 − p₀(t) obtained from the quasi-steady approximation, Eq. (24), is also shown in Figures 4 and 5, where we indeed see that the leakage is faster than that of the exact solution at K/b = ∞, but captures the exact behavior much better at a/b − r = 9, where the time-scales are better separated.

In summary, for a/b − r = 4 and K/b = ∞, the time-scales are sufficiently well separated for p₀(t) to follow a single exponential form for p₀(t) ≳ 0.2, but not separated enough for |λ₁| to be approximated by τ_q^−1. For a/b − r = 9 and K/b = ∞, the time-scales are much better separated, so that not only does p₀(t) follow a single exponential form for p₀(t) ≳ 0.1, but also |λ₁| ≃ τ_q^−1. For a/b − r = 100 and K/b = 0.005, p₀(t) does not follow a single exponential form in general, because the time-scales are not well separated. Note that even for sufficiently large K/b, the probability distribution is dominated by the stable fixed point at x = x₊ only for t ≪ τ ≡ |λ₁|^−1. That is, the average behavior of the system follows the deterministic rate equation only at early times. For t ≫ τ, the probability distribution approaches the stationary one, and we have p_m(t) ≃ δ_{m,0}. However, also note that the larger the value of m* = a/b − r, the smaller the stochastic noise, and hence the better the deterministic approximation. In fact, the analytic form of τ_q for K/b = ∞ in Eq. (25) shows that it is an exponentially increasing function of a/b − r, as shown in Figure 9 for several values of r, and τ → ∞ in the limit of a/b − r → ∞.
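The origin of the single-exponential form of Eq. (24) can be restated compactly; the following is a reconstruction of the argument using the rates quoted above. Once the m > 0 sector is frozen in its normalized quasi-steady shape q_m ≡ p^qs_m/(1 − p₀), the only drain is the link 1 → 0:

```latex
\dot p_0(t) \;=\; k_{1\to 0}\,p_1(t)
\;\simeq\; k_{1\to 0}\,q_1\,\bigl[1 - p_0(t)\bigr]
\;\equiv\; \tau_q^{-1}\,\bigl[1 - p_0(t)\bigr],
\qquad
k_{1\to 0} \;=\; \frac{b\,(r+\rho)}{1+r},
```

so that 1 − p₀(t) decays as a pure exponential with rate τ_q⁻¹, independently of how the weight at m > 1 is arranged.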
VI. EFFECT OF THE BASELINE PRODUCTION

When ε > 0, the numerator in Eq. (13) does not diverge for m = 0, and this expression is well defined for m ≥ 0 with a nonzero value of C. Therefore, p_m(t) for m > 0 does not vanish in the limit of t → ∞, in contrast to the case of ε = 0. This is because m = 0 is not an absorbing state any more. A similar situation is encountered in population dynamics, where the influx of immigration plays the role of the baseline production in the current model [24]. However, the discussion for ε = 0 is still relevant when ε is sufficiently small, because the initial behavior of the time-dependent probability distribution is similar to that for ε = 0: the stable fixed point of the deterministic equation is the dominant state only at early times, and the occupation probability of the stable fixed point of the deterministic rate equation will be much smaller than p_0(t) in the limit of t → ∞, with most of the probability concentrated at m = 0. By comparing Eqs. (13) and (16), we find that the functional form of p^qs_m(t) for m ≥ 1 is approximately equal to that of p^st_m(ε), up to the overall normalization constant, as long as ε is sufficiently small. Therefore, the stationary distribution for ε > 0 is approximately the same as the analytic form of the time-dependent distribution in Eq. (17) for ε = 0 at some time-point t. That is, we can find a pair of t and ε satisfying p^qs_m(t) ≃ p^st_m(ε). For example, for a/b = 10 and r = 1, we find that p(t = 2000 b^{−1}) for K/b = 100 and ε = 0 shows a reasonably good agreement with p^st(ε = 0.000035) for both K/b = 100 and K/b = ∞ (Figure 2(c)). The graph of p^st_0(ε) is shown in Figure 10 for several values of K, where we see that it is a monotonically decreasing function of ε, as is to be expected. The effect of K/b on p^st_0(ε) is similar to its effect on p_0(t): a large value of K/b hinders the flow of the probability to m = 0.

In summary, the baseline production ameliorates the effect of the stochastic noise in that p^st_m > 0 for m > 0, but for sufficiently small ε, the qualitative behavior of the probability distribution is similar to that for ε = 0: at early times, the probability distribution converges to a quasi-steady distribution dominated by the stable fixed point, but the stable fixed point is almost erased in the limit of t → ∞, although not completely destroyed. When ε is large enough so that the peak of p^st_m(ε) around the stable fixed point is comparable to p^st_0(ε), we have a bistability driven by the stochastic noise in the limit of t → ∞ [33]. For both of these cases, the deterministic rate equation describes the average behavior of the system only at early times. When ε is too large, the effect of the baseline production dominates that of the stochastic noise, in that p^st_0(ε) is now smaller than the peak of p^st_m(ε) at the stable fixed point. Then the deterministic rate equation description is valid throughout all the time scales, as far as the average behavior is concerned. Therefore, the effect of the baseline production is to oppose that of the stochastic noise.
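A quick way to explore p^st_m(ε) numerically is the detailed-balance product formula, which is exact for any one-dimensional birth-death chain. How ε enters the rates below (k_{m→m+1} = a(m + rε)/(m + r), so that k_{0→1} = aε > 0 removes the absorbing state) is our own guess for illustration, not the paper's Eq. (13).

```python
import numpy as np

# Stationary distribution for eps > 0 via detailed balance, in the log domain
# for numerical robustness.
def p_st(a=10.0, b=1.0, r=1.0, rho=0.0, eps=1e-4, M=300):
    logp = np.zeros(M + 1)
    for m in range(M):
        up   = a * (m + r * eps) / (m + r)
        down = b * (m + 1) * (m + r + rho) / (m + 1 + r)
        logp[m + 1] = logp[m] + np.log(up / down)
    p = np.exp(logp - logp.max())
    return p / p.sum()

for eps in (1e-5, 1e-4, 1e-3):
    p = p_st(eps=eps)
    print(f"eps = {eps:.0e}:  p_st(0) = {p[0]:.3f}   non-zero peak at m = {p[1:].argmax() + 1}")
```

In this toy version, p^st_0(ε) decreases monotonically with ε, mirroring the behavior in Figure 10, and for intermediate ε the distribution is bimodal.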
The stochastic noise and the baseline production have also been shown to exhibit opposite effects on the response to the change of the production and/or the decay rates, the former and the latter favoring the binary and the graded responses, respectively [42]. The threshold value ε_θ, defined as the value of ε where the areas under the two peaks are equal, is plotted in Figure 11 as a function of a/b − r, for several values of r, for ρ = 0 and K/b = ∞. We see that ε_θ is an exponentially decreasing function of a/b − r. The fact that ε_θ is a decreasing function of a/b − r is to be expected: because the effects of the stochastic noise and the baseline production oppose each other, a larger (smaller) amount of baseline production is required to overcome the effect of the stochastic noise for larger (smaller) stochastic noise, corresponding to a smaller (larger) value of a/b − r. Also, for a larger value of r, the leakage effect is enhanced, and therefore more baseline production is required to resist such a leakage. As we will discuss in the next section, we can interpret the opposite effects of the stochastic noise and the baseline production in terms of the shift of the position of the unstable fixed point.

VII. SHIFT OF THE FIXED POINTS BY STOCHASTIC NOISE

We have seen in the context of the deterministic setting that the position of a fixed point gets shifted by the baseline production, and such a shift can remove the fixed point from the physical region. In the stochastic formalism, a fixed point turns into an extremum of the stationary distribution, and its position gets shifted not only by the baseline production, but also by the stochastic noise [82,83]. To study this effect, and to see when the picture offered by the fixed points breaks down, we first go to the continuum limit, where the chemical master equation for the stationary distribution turns into the stationary Fokker-Planck equation

0 = −∂_x [A(x) π^st(x)] + (1/2) ∂_x² [B(x) π^st(x)],

where A(x) and B(x) are the drift and the diffusion coefficients derived in Appendix E. Now let us consider the extremum of π^st(x) when B(x) is very small. Taking the derivative of π^st(x) with respect to x and setting it to zero, we get the equation for the extremum x_m:

A(x_m) = (1/2) B′(x_m).

That is, we see that to the zeroth order of m̄^{−1}, x_m coincides with a fixed point x* of the deterministic rate equation, and the small stochastic noise acts as a perturbation that shifts the position of x_m with respect to x*. To see whether x_m is a local maximum or minimum of π^st(x), we compute the second derivative of π^st(x) at x_m (Eqs. (31)-(33)). By multiplying Eq. (34) by x_m + r, we obtain Eq. (35). Now, to find out the shift of x_m to the leading order in m̄^{−1} and ε, we make the expansion x_m = x* + δx, where x* is a fixed point of the deterministic rate equation with ε = 0. Eq. (35) is then expanded to the first order in m̄^{−1} and ε, to obtain

(−2b x* − br + ã) δx = −ã r ε + (1/2) m̄^{−1} (· · ·),

from which we obtain δx. When x* = 0, we get the explicit expression for δx: if ã > br, so that x* = 0 is a local minimum (unstable fixed point), then δx > 0 if m̄^{−1}(ã + br) > 2ã r ε, and δx < 0 otherwise. The stable fixed point also tends to get shifted in the opposite directions by the stochastic noise and the baseline production (Appendix F). In summary, the opposite effects of the stochastic noise and the baseline production can be interpreted in terms of the opposite shifts of the position of the unstable fixed point.
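For reference, the two relations used in this section can be written compactly; the following is the standard zero-probability-current form of the stationary Fokker-Planck solution and its extremum condition, assuming the drift A(x) and diffusion B(x) of Appendix E.

```latex
% Zero-current stationary solution of the 1D Fokker-Planck equation and its
% extremum condition; since B(x) = O(1/\bar{m}), the extremum x_m reduces to a
% deterministic fixed point, A(x^*) = 0, in the limit \bar{m} \to \infty.
\pi_{\mathrm{st}}(x) \propto \frac{1}{B(x)}
  \exp\!\left( 2 \int^{x} \frac{A(u)}{B(u)}\, \mathrm{d}u \right),
\qquad
\pi_{\mathrm{st}}'(x_m) = 0
\;\Longleftrightarrow\;
A(x_m) = \frac{1}{2}\, B'(x_m).
```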
VIII. ESTIMATES WITH BIOLOGICAL PARAMETERS

For simplicity, we assume ρ = 0 throughout the estimates. The numerical computation for the parameters given above cannot be conducted long enough to compute τ, due to the accumulation of numerical errors. However, the probability distribution remains peaked around m ∼ 100 and shows no sign of leakage to m = 0, up to 10^6 min (Fig. 13, red line), regardless of the initial condition. (We thank an anonymous referee for providing the parameters.) This indicates that τ ≫ 10^6 min. Considering the fact that the cell generation time of E. coli is T_cycle ∼ 100 min [97] (BNID 105065, where BNID denotes the ID number in the BioNumbers Database [84]), we see that τ ≫ 10^4 T_cycle. These results suggest that the leakage to the zero-protein state will be unobservable in a real biological system even if there is no baseline production.

In reality, ε > 0 except for artificially engineered systems [98-100]. For the lac promoter, we have ε = 10^{−3} [101] (BNID 102075). With the other parameters given as above, this amount of baseline production is sufficient to dominate over the effect of the stochastic noise, as shown in the stationary distribution that is peaked around m ∼ 100, with no trace of a peak near m = 0 (Fig. 13). The effect of the baseline production is more important than that of the large value of τ, because it helps the system to start the positive feedback loop even if there is no protein in the beginning. Even if we use the initial condition P(m, n, 0) = δ_{m,0} δ_{n,0}, the amount of the baseline production above is sufficient to let the system quickly escape from the state (m, n) = (0, 0): only 30% of the probability remains at m = 0 at t = 50 min, as shown in Figure 13. Therefore, from these results, we expect that for wild-type genetic regulatory networks, the baseline production will restore the non-zero stable fixed point as the dominant peak of the stationary distribution.
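As a rough consistency check of the escape figure quoted above (70% of the probability leaving m = 0 within 50 min), one can treat the escape from (m, n) = (0, 0) as a single Poisson step. The assumption that the escape rate is of order aε is ours, and since a is not quoted here the estimate merely back-solves for the combination aε.

```latex
% Single-Poisson-step estimate of the escape from the empty state (assumption: rate ~ a*eps):
P_{(0,0)}(t) \approx e^{-a\epsilon t}, \qquad
e^{-a\epsilon \cdot 50\,\mathrm{min}} = 0.3
\;\Rightarrow\;
a\epsilon \approx \frac{\ln(1/0.3)}{50\ \mathrm{min}} \approx 2.4 \times 10^{-2}\ \mathrm{min}^{-1}.
```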
IX. DISCUSSION

It is a well-known fact that stochastic noise modifies the picture provided by the deterministic rate equation. A representative example is the conversion of the stable fixed point into a transient peak of the probability distribution and its complete removal from the stationary distribution. Although this phenomenon has been extensively studied in the context of population dynamics and epidemics, it has been seldom discussed for models of gene-regulatory networks. In this work, we performed a quantitative analysis of the transient dynamics of the simplest autoregulatory genetic circuit with positive feedback, both numerically and analytically. We found that as long as the magnitude of the baseline production is sufficiently small compared to that of the stochastic noise, the unique stable fixed point turns into a dominant peak of the transient, quasi-steady distribution, instead of the true stationary state. In the extreme case of vanishing baseline production, the trace of the stable fixed point is completely erased from the stationary distribution due to an absorbing state. However, for very small stochastic noise, the probability distribution is dominated by the stable fixed point for a very long time duration. In fact, we find that the leakage time is an exponentially increasing function of the inverse square of the relative fluctuation, a/b − r (Fig. 9). This clarifies the true meaning of the stable fixed point in the presence of an absorbing state: it describes a long-lived transient rather than the true stationary state.

In reality, there is always a small amount of baseline production from an inactive gene, except for artificially engineered systems [98-100]. Because the baseline production removes the absorbing state, its effect is opposite to that of the stochastic noise. The magnitude of the baseline production relative to that of the stochastic noise determines the relative dominance of the non-zero stable fixed point relative to the peak at the zero-protein state in the stationary distribution. In fact, we find that the baseline production rate required for overcoming the stochastic effect is an exponentially decreasing function of the inverse square of the relative fluctuation, a/b − r (Fig. 11). We also showed that the opposite effects of the stochastic noise and the baseline production can be interpreted in terms of the position shift of the unstable fixed point. The order-of-magnitude estimates using biological parameters suggest that for a real gene regulatory network, the stochastic noise is sufficiently small, so that not only is the leakage time much larger than biologically relevant time-scales, but also the effect of the baseline production completely dominates over that of the stochastic noise. Therefore, the wild-type gene-regulatory networks seem to be protected from the catastrophic rare event of protein extinction by both of these effects. (The main difference from the current result is that for the λ-phage decision circuit, the transient …)

Appendix A

For the continuous model, the normalizable stationary solution has no weight at x > 0, and consequently p(x) = δ_+(x) for ε = 0, where the distribution δ_+(x) is defined by the property that δ_+(x) = 0 for x > 0 and ∫_0^∞ dx δ_+(x) = 1.

Next we consider a discrete model where the proteins form a dimer in the bulk and then bind to either DNA or RNA for positive regulation [38]. For the transcriptional regulation with fast mRNA dynamics, we have the stationary solution of the form given in Eq. (A3). The zero baseline production corresponds to the limit of r → 0 and ρ → ∞ with a finite value of rρ, which is proportional to the transcription rate from the active DNA. Because rf(i)/i in the parenthesis of Eq. (A3) remains finite in this limit, p^st_n for n > 0 all vanish due to the extra factor of r in front of the right-hand side of Eq. (A3), leading to p^st_n = δ_{n,0} due to the normalization.

In the continuum limit, Eq. (A3) is approximated by Eq. (A6). Because the zero baseline production corresponds to r → 0 with rρ finite, and since x_2(x) → x/2 as x → 0, we have rf(x) ∝ x for small x in the case of zero baseline production, and consequently r∫_c^x du f(u)/u is non-zero and finite. Therefore, from Eq. (A6) we see that p^st(x) ∝ x^{−1} as x → 0, and again we see that the integral of p^st(x) diverges unless A_c = 0. Therefore, we again see that p(x) = δ_+(x). When there is a non-zero baseline production, rf(x) → r as x → 0, and r∫_c^x du f(u)/u = r ln(x/c). Therefore p^st(x) ∝ x^{r−1} as x → 0, so the integral of p^st(x) remains finite. The discrete and continuous solutions for the transcriptional regulation under the fast protein dynamics, as well as those under translational regulation, have similar forms as the ones presented above, so they can be shown to reduce to Dirac delta and Kronecker delta functions, respectively, in the absence of the baseline production, following the same logic as above.

Appendix B

From Eq. (B4), we get Eq. (B5), and we see that ξ is of order O(1/K). Therefore, Eq. (B3) becomes Eq. (11) in the limit of K → ∞. More rigorously, the coupled equations Eqs. (B3) and (B4) reduce to one equation in the limit of K → ∞, due to Tikhonov's theorem on dynamical systems [103,104], where ξ is obtained from Eq. (B5) after setting K^{−1} to zero, and then substituted into Eq. (B3) to obtain Eq. (11).
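The reduction invoked in Appendix B can be illustrated with a deterministic toy version: a fast variable ξ relaxing at rate K is slaved to its quasi-equilibrium value, and the full two-variable dynamics approaches the reduced one-variable equation as K grows. The right-hand sides below are illustrative stand-ins, not the paper's Eqs. (B3)-(B5).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Tikhonov-style fast-slow reduction: for large K, xi tracks x/(x+r).
a, b, r, K = 10.0, 1.0, 1.0, 100.0

def full(t, y):
    x, xi = y
    return [a * xi - b * x,               # slow variable
            K * (x / (x + r) - xi)]       # fast variable, relaxes at rate K

def reduced(t, y):
    x = y[0]
    return [a * x / (x + r) - b * x]      # xi replaced by its quasi-equilibrium

sol_f = solve_ivp(full, (0, 10), [2.0, 0.0], dense_output=True)
sol_r = solve_ivp(reduced, (0, 10), [2.0], dense_output=True)
for t in (1.0, 5.0, 10.0):
    print(f"t = {t:4.1f}   full x = {sol_f.sol(t)[0]:7.3f}   reduced x = {sol_r.sol(t)[0]:7.3f}")
```

Both trajectories approach the stable fixed point x* = a/b − r = 9, and the discrepancy shrinks as K is increased, which is the content of the K → ∞ limit above.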
Appendix C: The leading order contributions to |λ_1| and v^(k≥2)_0 when λ_{k≥2}/λ_1 ≫ 1

Let us consider the transition rate matrix K for Eq. (9) or Eq. (11) with ε = 0, whose (i,j)-th element is k_{i→j}, so that the probability distribution is represented as a row vector, and the time derivative is obtained by multiplication of the transition matrix from the right. For the current model, the transition matrix takes the form

K = ( 0      0 )
    ( α e_1  A ),    (C1)

where the first state is taken to be the absorbing state, whose index is taken to be zero, α e_1 is the column vector whose only non-zero element is α at the position of the m = 1 state, and A is the submatrix formed by the transition rates between the other states, whose indices are m = 1, 2, · · ·. For Eq. (11), α ≡ k_{1→0} = b(r + ρ)/(1 + r). We see that ⟨v^(0)| = (1, 0, 0, · · ·) is the left eigenvector of K with the eigenvalue 0, the stationary state. Because Σ_j k_{i→j} = 0, we have K|I⟩ = 0, where |I⟩ ≡ (1, 1, · · ·, 1)^T. This also tells us that for any left eigenvector ⟨v| = (v_0, v_1, · · ·) for a non-zero eigenvalue λ, we have λ⟨v|I⟩ = ⟨v|K|I⟩ = 0, leading to Σ_i v_i = 0. Also, because of the special form of K given in Eq. (C1), expressing the left eigenvector as ⟨v| = (v_0, ⟨ṽ|), we get

λ v_0 = α ṽ_1,  ⟨ṽ|A = λ⟨ṽ|,    (C3)

which shows that λ is also an eigenvalue of A with the corresponding left eigenvector ⟨ṽ|. When we set α to zero, the corresponding modified transfer matrix K_0 describes the Markov model in Eq. (15), where A is replaced by A_0, defined as

A_0 ≡ A + αP,

where P is a projection matrix with the definition P_ij ≡ δ_{i1} δ_{j1}. The submatrix A_0 possesses the left eigenvector ⟨p^qs| with the zero eigenvalue, satisfying ⟨p^qs|A_0 = 0, which we called the quasi-steady distribution in the main text. The eigenvalues and eigenvectors of K can be obtained from those of K_0 by perturbations of size O(α/A), where A is the typical size of A_{0,ij} that determines the sizes of the non-zero eigenvalues of A_0. In particular, the left eigenvector ⟨ṽ^(1)| of A for the eigenvalue λ_1 is obtained from ⟨p^qs| as ⟨ṽ^(1)| = ⟨p^qs| + O(α/A) (Eq. (C8)). From Eqs. (C5) and (C8), we have

|λ_1| = α p^qs_1 + O(α²/A),    (C9)

where we see that the first term in the final expression is nothing but τ_q^{−1} given in Eq. (25). The corresponding eigenvector in the full state space is

⟨v^(1)| = (α ṽ^(1)_1/λ_1, ⟨ṽ^(1)|) ≃ (−1, ⟨p^qs|),

where Eq. (C3) was invoked. The eigenvectors for λ_{k≥2} are obtained from those for the negative eigenvalues of A_0. Because A_0 is a transition rate matrix in the subspace of m ≥ 1 states, a left eigenvector ⟨ṽ| = (ṽ_1, · · ·) of A_0 for an eigenvalue λ < 0 satisfies the equation Σ_{i≥1} ṽ_i = 0. Therefore, we see that for an eigenvector ⟨ṽ^(k)| = (ṽ^(k)_1, · · ·) for the eigenvalue λ_k with k ≥ 2, we have Σ_{i≥1} ṽ^(k)_i = O(α/A). Consequently, v^(k)_0 = α ṽ^(k)_1/λ_k = O(α/A) for k ≥ 2 (Eq. (C13)). From Eq. (C13), we see that v^(k)_0 is negligible for k ≥ 2, so that the approach to the absorbing state is governed by the λ_1 mode alone.

Appendix D

For K/b = 0, the free and the bound modes decouple. From the form of the matrix for the free mode, it is easy to see that the eigenvalues are 0, −b, −2b, −3b, · · ·. Similarly, from the matrix for the bound mode, where b in the expression above is replaced by ρb, we see that the corresponding eigenvalues are 0, −ρb, −2ρb, −3ρb, · · ·. For both the free and the bound modes, we see that the sizes of the eigenvalues are not well separated. In other words, the time-scale separation does not hold, and 1 − p_0(t) exhibits initial-condition-dependent multi-exponential behavior,

1 − p_0(t) = Σ_{k≥1} A_k e^{−kbt} + Σ_{k≥1} B_k e^{−kρbt},

where the constants A_k and B_k are determined by the initial condition. Even if we assume p_0(0) = 0, these constants are not fully determined. For the special case of ρ = 0, we have the form

1 − p_0(t) = B + Σ_{k≥1} A_k e^{−kbt},

where B is the probability weight that remains trapped in the bound mode.
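The Appendix D statement about the free-mode spectrum is easy to verify numerically: the rate matrix of a pure death process with per-molecule rate b is triangular, so its eigenvalues are exactly 0, −b, −2b, . . ., with no separation of scales.

```python
import numpy as np

# Pure death process with rate b per molecule (the decoupled free mode).
b, M = 1.0, 12
K = np.zeros((M + 1, M + 1))
for m in range(1, M + 1):
    K[m, m - 1] = b * m
np.fill_diagonal(K, -K.sum(axis=1))

# The matrix is triangular, so the eigenvalues are the diagonal entries -b*m.
print(np.sort(np.linalg.eigvals(K).real)[::-1])   # [0, -1, -2, ..., -12]
```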
Appendix E

The chemical master equation for a single species can be written as [68,79,80]

ṗ_n(t) = Σ_j [F_j(n − S_j) p_{n−S_j}(t) − F_j(n) p_n(t)],    (E1)

where j = 1, · · ·, R labels the reactions in the system. In Eq. (E1), F_j(n) and S_j are the transition rate and the increase of the particle number, respectively, for the j-th reaction, and m̄ is the size parameter, a large number whose size is comparable to the average protein number. For the reduced master equation Eq. (11), we have two reactions, the creation and the degradation, with S_1 = 1 and S_2 = −1, and the rates F_1 and F_2 given in Eq. (E2). When the average number of protein molecules is large, one can approximate the discrete variable x ≡ m/m̄ as a continuous variable. Considering F_j as functions of x, F_j = m̄ f_j(x), and defining π(x, t) ≡ m̄ p_m(t), we get the Kramers-Moyal expansion [79]

∂_t π(x, t) = m̄ Σ_{k≥1} [(−1)^k/k!] m̄^{−k} ∂_x^k [a_k(x) π(x, t)],

where a_k(x) ≡ Σ_j S_j^k f_j(x). Assuming that m̄ is large enough so that the expansion can be kept only up to the second order, we obtain the Fokker-Planck equation [31,79]

∂_t π(x, t) = −∂_x (A(x) π(x, t)) + (1/2) ∂_x² (B(x) π(x, t)),    (E4)

where A(x) ≡ a_1(x) and B(x) ≡ m̄^{−1} a_2(x). For Eq. (E2), we get A(x) and B(x) in terms of the rescaled rates r̃ = r/m̄ and ã = a/m̄ (Eqs. (E5) and (E6)). Note that, in the limit of m̄ → ∞, Eq. (E4) reduces to the drift-only equation (E7). Because there is no diffusion term in Eq. (E7), uncertainty originates purely from the initial condition. (The unbinding rate in the chemical master equation is taken to be much larger than the binding rate k_0 when m ≫ 1; therefore, the inactivation of the DNA due to the degradation of the bound protein is negligible compared to that due to the unbinding.)

Appendix F

It has been noted that when the production rate is a function of the protein number that is concave downward, the production rate slows down due to stochastic noise, which in turn decreases the average value of the protein number of the steady state relative to the value obtained by the deterministic rate equation [80]. We sketch the derivation for the shift of the steady-state average number of the proteins below. Although the multi-species master equation was considered in ref. [80], we restrict ourselves to the single-species case for notational simplicity. The shift of the average value was obtained by expanding Eq. (E1) around the solution x̄ that satisfies the deterministic rate equation, and considering the probability density Π(ǫ, t) = m̄^{1/2} P(n, t) [79,80]. (The factor of m̄^{1/2} is absent in Eq. (9) of Ref. [80], but it is required for relating the probability mass function of a discrete variable to the probability density of a continuous variable. This factor cancels out in the left- and the right-hand sides of the master equation, so the master equation remains unchanged.) After substituting into Eq. (E1), the expansion to order m̄^{−1/2} results in the equation of motion for Π given in ref. [80] (Eq. (F3)). By multiplying both sides of Eq. (F3) by ǫ and integrating over ǫ, one obtains the evolution equation for ⟨ǫ⟩ (Eq. (F4)), where the brackets denote the average value, and the term of O(m̄^{1/2}) was removed by using Eq. (F1). Therefore, for the stationary state where d⟨ǫ⟩/dt = 0, we get the leading-order expression for the shift (Eq. (F5)). When x̄ is the stable fixed point, then f′(x̄) < 0, and the shift is negative if f″(x̄) < 0, as in the case of the Michaelis-Menten type production rate and the linear degradation rate [80], which is also the case for our model. That is, the average value of the particle number of the stationary distribution is less than the stable fixed point, to the leading order in the stochastic noise.

We note one subtle point. In ref. [80], the m̄ dependence of f_j(n) was not considered. In our model, f_2(n) in fact contains a term of O(m̄^{−1}) unless ρ = 1. Therefore, Eq. (F1) is not exactly the same as the deterministic rate equation we considered previously, where the m̄^{−1}-dependent term was dropped.
Therefore, we now have to consider the stable fixed point x̄ of Eq. (F1), which can be written as Eq. (F6) for our model, and examine the additional position shift due to the stochastic noise. Let us also consider ε ≠ 0 and examine the effect of the shift due to both the stochastic noise and the baseline production. For ε ≠ 0, Eq. (F6) is then written as Eq. (F7). Therefore, again, we see that the effects of the stochastic noise and the baseline production are opposite: the former tends to shift the maximum in the negative direction, whereas the baseline production tends to shift it in the positive direction. When the stochastic noise is small and the non-zero peak is dominant, its position is approximately the average particle number. When the peak at zero gives a sizeable contribution to the probability distribution, then the average particle number is less than the position of the non-zero peak. Therefore, the negative shift of the peak relative to the stable fixed point of Eq. (F6) is consistent with the result of ref. [80], which states that the average number of particles is less than the stable fixed point of Eq. (F6).

In this example, the probability flows to the state m = 5 on average, but there is also a leakage to m = 0, whose effect becomes important at late times.
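The statement of ref. [80] discussed in this appendix, that the stationary mean lies below the deterministic fixed point when the production rate is concave downward, can be illustrated numerically with the same stand-in birth-death rates used in the earlier sketches, together with a small baseline ε so that a stationary distribution exists at all.

```python
import numpy as np

# Michaelis-Menten type production: stationary peak and mean sit below m* = a/b - r.
a, b, r, eps, M = 100.0, 1.0, 1.0, 1e-2, 400
logp = np.zeros(M + 1)
for m in range(M):
    up   = a * (m + r * eps) / (m + r)
    down = b * (m + 1) * (m + r) / (m + 1 + r)
    logp[m + 1] = logp[m] + np.log(up / down)
p = np.exp(logp - logp.max()); p /= p.sum()

mean = (np.arange(M + 1) * p).sum()
print(f"m* = {a/b - r:.0f}   stationary peak = {p[1:].argmax() + 1}   mean = {mean:.1f}")
```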
Budgetary impact analysis of a primary care-based hepatitis C treatment program: Effects of 340B Drug Pricing Program

Purpose
Safety-net health systems, which serve a disproportionate share of patients at high risk for hepatitis C virus (HCV) infection, may use revenue generated by the federal drug discount pricing program, known as 340B, to support multidisciplinary care. Budgetary impacts of repealing the drug-pricing program are unknown. Our objective was to conduct a budgetary impact analysis of a multidisciplinary primary care-based HCV treatment program, with and without 340B support.

Methods
We conducted a budgetary impact analysis from the perspective of a large safety-net medical center in Boston, Massachusetts. Participants included 302 HCV-infected patients (mean age 45, 75% male, 53% white, 77% Medicaid) referred to the primary care-based HCV treatment program from 2015-2016. Main measures included costs and revenues associated with the treatment program. Our main outcomes were net cost with and without 340B Drug Pricing support.

Results
Total program costs were $942,770, while revenues totaled $1.2 million. With the 340B Drug Pricing Program the hospital received a net revenue of $930 per patient referred to the HCV treatment program. In the absence of the 340B program, the hospital would lose $370 per patient referred. Ninety-seven percent (68/70) of patients who initiated treatment in the program achieved a sustained virologic response (SVR), at a net cost of $4,150 each among this patient subset.

Conclusions
The 340B Drug Pricing Program enabled a safety-net hospital to deliver effective primary care-based HCV treatment using a multidisciplinary care team. Efforts to sustain the 340B program could enable dissemination of similar HCV treatment models elsewhere.

Introduction
Hepatitis C virus (HCV) infection is a leading cause of morbidity and mortality in the United States (US), with prevalence estimates between 1-2%, or 2.7-3.9 million individuals infected, [1] and in excess of 10,000 deaths annually. [2] Untreated, HCV infection can produce complications including cirrhosis, liver disease, and the necessity for liver transplantation, [3] at significant cost. [4] Historically, gastroenterology or infectious diseases sub-specialty practices have treated HCV. The emergence of directly acting, oral agents with improved efficacy and little toxicity enables HCV treatment in primary care settings. [5]

Treating HCV in primary care is important for many reasons. Firstly, sub-specialty treatment capacity is limited in the US [6] and, as a result, specialty practices may not be able to meet future demand for HCV treatment. Secondly, referral from primary to sub-specialty care is a well-documented point at which patients are lost to follow-up. [7] Treatment in primary care may offer an opportunity to address this loss to follow-up among difficult-to-treat patients.

Another impediment to HCV treatment is cost. [8] In 2013, total estimated costs associated with HCV treatment were $6.5 billion. [9] In an effort to combat drug price increases for Medicaid participants, in 1992, the US Congress created the 340B Drug Pricing Program as part of the Public Health Service Act. [10] Under this program, manufacturers provide medications at significant discounts to 340B covered entities. Organizations are able to dispense these discounted medicines, while Medicaid and private insurers reimburse at full-market price.
The 340B program can therefore generate net positive returns, which programs can reinvest in care delivery systems in an effort to improve outcomes. [11] Recently, intense policy debate has questioned expansion of the 340B program under the Affordable Care Act, its broader role, and the suitable application of 340B regulations. [12] Thus, the outlook for the 340B program is uncertain. Providing HCV treatment in primary care is resource-intensive, requiring significant staff support. [13] Uncertainty surrounding the future of 340B generates challenges for healthcare providers considering implementing primary care-based HCV treatment, as potential cost and revenue remain unclear. Therefore, our objective was to conduct a budgetary impact analysis of a primary care-based HCV treatment program from the perspective of a large safety-net medical center, [13] both with and without 340B program support.

Methods
We performed a budgetary impact analysis of the Boston Medical Center HCV Primary Care Treatment Program. Assuming the cost perspective of the medical center, we followed methodological guidelines recommended by Mauskopf et al [14] to analyze resource utilization and cost data for 302 patients referred to the program during 2015-2016. Our study involved analyses of retrospective, de-identified data and posed no more than minimal risk to subjects. The Institutional Review Board at Boston University School of Medicine approved the study protocol and did not require collection of written informed consent from participants.

Population, setting and program description
The Boston Medical Center HCV Primary Care Treatment Program is based in the Adult Primary Care (General Internal Medicine) Practice, an accredited patient-centered medical home situated in New England's largest safety-net hospital. Program details have been described elsewhere. [13] A multidisciplinary team, including a public health social worker ("case manager"), seven general internists trained to treat HCV ("HCV MD treater"), a pharmacy technician, and a pharmacist, staffed the HCV treatment program. The program receives referrals from general internal medicine primary care providers (PCPs); the laboratory, via automatic notifications of positive HCV antibody tests; electronic health record (EHR) reports; and peer referrals. The case manager performs many patient navigation functions (e.g. addresses insurance and transportation barriers, and connects patients to additional services), schedules and provides appointment reminders, and offers telephone support from patient referral to discharge. The case manager provides these services according to patients' specific barriers to engagement in HCV treatment. HCV MD treaters assess patient readiness for treatment and the appropriateness of the primary care setting for treatment, perform liver staging, and determine the course of treatment. The program has a dedicated pharmacy technician who manages prior authorizations for medications. During visits with the staff pharmacist, patients are provided education about medications and strategies to promote adherence, screened for medication side effects, and administered monitoring laboratory tests. Three months after the end of treatment, the HCV MD treater evaluates the presence of HCV RNA to assess for sustained virologic response (SVR; the measure of HCV cure) [15] and counsels patients about reducing the risk of reinfection.
Cost inputs
We identified four components of cost from the medical center perspective: 1) program staff, 2) medications, 3) laboratory tests, and 4) overhead. We excluded costs related to laboratory tests because that infrastructure already exists at our center and the marginal cost of an additional test is trivial relative to the costs of program staff and medications. [16] Similarly, we did not explicitly cost space and overhead, because the clinical space utilized by the program is already being used for the general internal medicine practice. The cost of using the space, therefore, is effectively zero, as it is simply the opportunity cost of using the space for delivery of HCV visits rather than primary care visits for the same patient population.

Program staff
We obtained costs related to salary support for the time spent on HCV care for each team member. Because HCV MD treaters are also primary care providers in the practice and spend much of their time caring for patients who do not have HCV, we estimated the proportion of the total full-time equivalent (FTE) that each provider spent on HCV care. We divided the total relative value units (RVUs) generated from evaluating HCV patients by the total RVUs generated over the study period. We estimated annual salaries of HCV MD treaters using the Association of American Medical Colleges report of the national median salary among general internal medicine Assistant Professors in the US. We used the real-world annual salary packages for non-MD program staff and included fringe benefit rates. To estimate personnel cost per patient evaluated, we assumed full program capacity and divided the total cost of personnel by the total number of patients served. Finally, we estimated costs (based on national median salary data) to support the salaries of the transient elastography (Fibroscan) technician and gastroenterologist for performing and reading Fibroscans, respectively, and the ultrasound technician and radiologist for performing and reading abdominal ultrasounds, respectively. We only included the effort needed to perform and read Fibroscans and ultrasounds completed for the 302 patients included in this analysis.

Medications
We obtained actual data on the number, type and costs of HCV medications from the hospital-based pharmacy. Boston Medical Center is a disproportionate share safety-net hospital and is thus eligible for the 340B Drug Pricing Program. We obtained the actual 340B "discounted cost" of each medication from the hospital-based pharmacy. Some patients treated in the primary care-based HCV program filled their medications at other pharmacies. We therefore only included medication costs from patients who filled HCV medications directly with the hospital-based pharmacy. A dispensing fee of approximately $10 is passed on to the patient when prescriptions are filled at the hospital pharmacy. However, we decided not to itemize this cost for two reasons: 1) the dispensing fee is built into the price of the medication, generally through patient copay, or is directly recovered from the patient's insurance provider; 2) when compared to the overall cost of HCV medications, we considered the dispensing fee to be trivial and a non-significant source of cost.
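A minimal sketch of the FTE attribution rule described under Program staff above: each physician's HCV effort share equals their HCV RVUs divided by their total RVUs over the study period. The RVU counts below are invented for illustration; only the rule itself reflects the paper's method.

```python
# FTE attribution: HCV RVUs / total RVUs per treater, summed across all seven.
hcv_rvus   = [120, 95, 80, 60, 150, 110, 75]     # hypothetical HCV-related RVUs
total_rvus = [4200, 3900, 4100, 3800, 4500, 4300, 3700]

fte_shares = [h / t for h, t in zip(hcv_rvus, total_rvus)]
print(f"total HCV MD FTE: {sum(fte_shares):.2f}")   # the paper reports 0.18 in total
```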
Revenue inputs
In order to assess program revenues, we performed manual chart reviews in the EHR. We collected data regarding payer-specific reimbursement for services delivered and calculated revenue on a per-patient basis. Beginning with the first HCV MD treater visit, in order to estimate total billable services ordered during each encounter, we assumed each patient received a standard list of services over a fixed number of visits. Practice guidelines for HCV management informed this list of services. [15] We adopted this approach in order to 1) provide information regarding pre-treatment intake visits and loss to follow-up and 2) increase the generalizability of our findings to other settings where program uptake may vary, yet a standard set of clinical services is delivered. We identified three domains of revenue: reimbursement for 1) clinical visits; 2) HCV medications dispensed; and 3) laboratory and diagnostic tests, including but not limited to Fibroscans and ultrasounds.

Reimbursement for clinical services provided
Physician visits prior to treatment and three months after treatment completed. Our approach to estimating reimbursement for clinical services provided by the HCV MD treaters was to cost resources consumed at each visit based on the patient's payer (Medicaid or Medicare). Medicaid or Medicare insured more than 90% of the patients in the program. For the 21 patients who had other types of insurance, we assumed reimbursement to be the weighted average of Medicare and Medicaid reimbursement, using the relative proportion of each payer type within the program as the weight. We stratified by payer type to account for the differential reimbursement strategies of Medicare (fee-for-service) and Medicaid (capitated). We collected information regarding reimbursement rates for each payer type and applied the appropriate dollar amounts for covered services on a per-patient basis. Medicare reimbursement rates included a payment of $122 for each level 4 patient intake visit and $75 for each of two level 3 visits with an HCV MD treater. In addition to this payment, Medicare provided reimbursement for all billable services, including labs and imaging, completed during the clinic visit. Medicaid reimbursed at a flat rate of $255 for each HCV MD level 4 patient intake visit, regardless of services delivered. Subsequent level 3 encounters with an HCV treater were reimbursed at a flat rate of $54 per visit.

Pharmacist visits on treatment
We estimated a total of three pharmacy visits over the course of treatment; the pharmacist billed each visit at $90 for Medicare patients and $254 for Medicaid patients.

Medication reimbursements
The hospital pharmacy provided actual estimates of revenue generated from the 340B program. We only included medication revenue from patients who filled HCV medications at the hospital-based pharmacy. Payers reimbursed the pharmacy at market price; we based estimates of reimbursements on the "Red Book" catalogue of medications. [17]
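For the 21 patients with other insurance types, the weighted-average rule described above can be applied directly. The sketch below uses the program's payer shares (77% Medicaid, 15% Medicare) and the quoted level 4 intake rates; the resulting blended figure is illustrative rather than a number reported in the paper.

```python
# Blended reimbursement for "other insurance" patients: weighted average of the
# Medicaid and Medicare rates, weights = relative payer shares in the program.
w_medicaid, w_medicare = 0.77, 0.15
intake_medicaid, intake_medicare = 255.0, 122.0    # level 4 intake visit, USD

w_sum = w_medicaid + w_medicare
blended = (w_medicaid * intake_medicaid + w_medicare * intake_medicare) / w_sum
print(f"blended intake-visit reimbursement: ${blended:.2f}")
```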
340B Budgetary Impact Analyses
We estimated budgetary impact in the absence of the 340B program, where payers would reimburse HCV medications at the cost of acquisition. As such, the "no 340B" scenario examined increased cost and reduced revenue related to HCV medications, along with the cost and revenue related to the provision of clinical care.

Results
Patient demographic and clinical characteristics
Among the 302 patients referred to the program, three-fourths were male, 53% were white, and the mean age was 45 years. Medicaid and Medicare patients comprised most of the observed payer mix, with Medicaid patients accounting for 77% and Medicare 15% (Table 1).

One hundred fifty-seven of the 302 patients (52%) referred for treatment attended an initial HCV MD treater visit (Fig 1). One hundred and forty-five patients (48%) did not engage in primary care treatment, were already receiving specialty care, or were referred to specialty care. Of the 157 patients who attended the initial visit, 136 (87%) completed a Fibroscan for liver staging. Among these 136 patients, the pharmacy technician submitted a prior authorization (PA) request for 93 (68%). Of these 93 patients, insurance companies approved treatment for 77 (83%), and 70 of these 77 patients (91%) initiated treatment. All 68 patients who attended a visit three months after completing treatment achieved SVR (68/70 of those initiating treatment; 97%).

Economic evaluation of the HCV treatment program: cost
We estimated the seven HCV MD treaters contributed a total of 0.18 FTE to the program. Based on a national median salary of $153,000, HCV MD treater salaries accounted for $27,000 of cost. Costs attributable to salary support for program staff included $120,000 for a full-time pharmacist, and $60,000 each for a full-time pharmacy technician and a full-time case manager. Costs for other staff members included $4,860 for a 1.3% FTE radiologist; $4,940 for a 1.3% FTE gastroenterologist; $2,270 for a 6.3% FTE ultrasound technician; and $3,700 for a 3.3% FTE Fibroscan technician. In total, we estimated salary costs at $282,770 annually, or approximately $940 per patient referred to the program, and $4,160 per patient who achieved SVR. The cost of medications dispensed to patients totaled approximately $660,000 (Table 2).

Revenue
340B Pharmacy revenue. Nearly half (28/68) of patients who received treatment in the program filled prescriptions at the hospital-based pharmacy. Reimbursement for these medications accounted for the largest portion of program revenue, at approximately $1.1 million (Table 3), resulting in a net revenue of approximately $440,000.

Provider services. HCV evaluation and treatment produced revenue of $1,600.00 per Medicare and $1,380.00 per Medicaid patient evaluated. The total cost of the program was $942,770 (Table 2), with revenue estimated at approximately $1.2 million (Table 3). Based on the total revenue and the number of patients evaluated, we estimate the program generated net revenue of $930 per patient referred to the program (Table 4).

340B budgetary impact analysis. Without the 340B benefit, we estimate a net cost of $370 per patient referred to the HCV treatment program. This net reduction is largely due to the fact that, in the absence of 340B specialty pricing, the costs of medications rose (Table 4), leading to net returns on medications of just over $62,000, an overall reduction of roughly $380,000.
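The per-patient figures above can be reproduced from the reported totals; because the source rounds several amounts, the results reconcile only approximately.

```python
# Back-of-envelope check of the per-patient figures quoted above.
patients_referred = 302
patients_with_svr = 68
staff_cost = 282_770          # annual salary costs, USD (Table 2)
drug_cost = 660_000           # "approximately $660,000" (Table 2)

print(f"total program cost:       ${staff_cost + drug_cost:,}")               # $942,770
print(f"staff cost per referral:  ${staff_cost / patients_referred:,.0f}")    # ~ $940
print(f"staff cost per SVR:       ${staff_cost / patients_with_svr:,.0f}")    # ~ $4,160
```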
Discussion
The 340B program enabled a safety-net hospital to deliver primary care-based HCV treatment using a multidisciplinary care team, and resulted in a net revenue of $930 per patient referred. Without the 340B program, the practice would experience a net loss of $370 per patient referred and would likely not be sustainable in resource-poor settings, which disproportionately care for HCV-infected patients. Further, if the 340B program were removed, patient adherence to treatment could decline, as the hospital might be unable to support the ancillary services provided by the case manager and pharmacist that facilitate adherence to treatment.

The hospital invested 340B revenue to support case management, including providing case management and navigation services to HCV-infected patients in settings outside of general internal medicine, as well as the general operating expenses of the safety-net hospital. Case management is of particular import, as patients at greatest risk for HCV infection often have comorbid substance use [18] and/or mental health disorders, [19] and are more likely to receive their care at safety-net hospitals. [20] Such patients may derive the greatest benefit from the services of a case manager who may identify and address barriers to engagement and retention in HCV treatment. We are unaware of other funding mechanisms that could support a multidisciplinary HCV treatment program such as the one described in this study. Further, the costs associated with these ancillary services may be too great for safety-net providers to subsidize long-term, highlighting the importance of financial support from the 340B program.

Given the projected increase in HCV-related disease, HCV infection will likely remain a significant burden to the US health care system. [21] Benchmarks for HCV treatment proposed by the Institute of Medicine and the Centers for Disease Control may be difficult to attain given the limited capacity of safety-net providers to treat HCV in specialty settings. [22] Expansion of HCV treatment into primary care is an ideal alternative and can be effectively delivered [23] in a cost-effective manner. [24-28]

Staffing costs in the model were substantial. Salary support for a case manager, pharmacy technician, and pharmacist may be an impediment to smaller or community-based practices seeking to replicate this program. Despite these costs, the multidisciplinary nature of the program staff is likely essential to its success. Similarly, smaller practices may lack access to a Fibroscan machine and may need to rely on blood testing (e.g. Fibrosure) for liver staging. Treatment of HCV in primary care likely requires significant up-front investment, which can be a barrier for resource-limited health care organizations. The consequences of failure to treat HCV are stark. These consequences include continued transmission, [29] re-infection, [30] and long-term health effects such as cirrhosis and end-stage liver disease, for which early treatment may be a cost-effective strategy. [31,32]

Our study has several limitations. First, it was conducted at a single site, so it may lack generalizability. Massachusetts has among the most generous Medicaid coverage for HCV treatment and does not restrict treatment to advanced fibrosis or specialty settings. Secondly, we assumed a fixed number of visits per patient evaluated in the program; it is possible that some patients required additional visits. We also assumed fixed costs for medications. Should drug costs change significantly, additional downstream effects on program costs, revenue and patient uptake are possible. Additionally, we have not included costs related to space and capital equipment, such as a Fibroscan machine, which may be of significant import to smaller practices. We did not consider costs from the patient perspective; the impact of such costs, though not directly related to the practice budget, warrants further investigation. Finally, insurers such as Medicaid may directly negotiate medication pricing with pharmaceutical companies, and therefore the 340B drug discount program may not be applicable.
We are unaware of previous studies which have examined budgetary impacts of 340B pricing on a multidisciplinary HCV care delivery model. Previous studies such as The Extension for Community Healthcare Outcomes (ECHO) project demonstrated clinical and cost-effectiveness of HCV care delivery in primary care; [23] however, the effect of 340B pricing was not examined. Expansion of treatment into primary care will likely be necessary to address the expected rise in incidence of HCV infection. At the same time, treatment models like those we describe could also be implemented into existing specialty HCV practices. Implementation and sustainment of multidisciplinary treatment models similar to the program evaluated here will likely depend upon maintenance of the 340B program. Future policy discussions should include consideration of ways to preserve the current 340B program or, alternatively, creation of funding mechanisms that assist resource-limited providers with care delivery to vulnerable populations at increased risk for HCV infection.
Responses to Water Deficit and Salt Stress in Silver Fir (Abies alba Mill.) Seedlings

Forest ecosystems are frequently exposed to abiotic stress, which adversely affects their growth, resistance and survival. For silver fir (Abies alba), the physiological and biochemical responses to water and salt stress have not been extensively studied. Responses of one-year-old seedlings to 30-day water stress (withholding irrigation) or salt stress (100, 200 and 300 mM NaCl) treatments were analysed by determining stress-induced changes in growth parameters and different biochemical markers: accumulation of ions, different osmolytes and malondialdehyde (MDA, an oxidative stress biomarker) in the seedlings, and activation of enzymatic and non-enzymatic antioxidant systems. Both salt and water stress caused growth inhibition. The results obtained indicated that the most relevant responses to drought are based on the accumulation of soluble carbohydrates as osmolytes/osmoprotectants. Responses to high salinity, on the other hand, include the active transport of Na+, Cl− and Ca2+ to the needles, the maintenance of relatively high K+/Na+ ratios and the accumulation of proline and soluble sugars for osmotic balance. Interestingly, relatively high Na+ concentrations were measured in the needles of A. alba seedlings at low external salinity, suggesting that Na+ can contribute to osmotic adjustment as a 'cheap' osmoticum, and its accumulation may represent a constitutive mechanism of defence against stress. These responses appear to be efficient enough to avoid the generation of high levels of oxidative stress, in agreement with the small increase in MDA contents and the relatively weak activation of the tested antioxidant systems.

Introduction
Drought and soil salinity are considered the most adverse and critical environmental factors for plants, causing massive losses in agricultural production worldwide and, at the same time, substantially affecting the distribution of wild species in nature [1,2]. Drought and salinity affect more than 10 percent of total arable land, and desertification and salinization are rapidly spreading globally [3-5]. Accumulation of salts dissolved in irrigation water leads to the progressive 'secondary' salinization of irrigated cropland, especially in arid and semiarid regions, and this problem will worsen in the near future due to the effects of the present climate change [6]. Plants can perceive abiotic stresses, such as water deficit and salt stress, and activate appropriate physiological, biochemical and molecular responses, with altered metabolism and growth.

Considering that the identification and evaluation of reliable biochemical stress markers will significantly contribute to elucidating the mechanisms of tolerance to drought and salinity, this work aimed to select the optimal indicators associated with silver fir's general responses to salt and water stress at the seedling stage. Improving our knowledge of the mechanisms of stress tolerance of silver fir, apart from its academic interest, may help to design and implement more efficient conservation programmes for this economically and ecologically important species, allowing an appropriate use of reproductive seed material on reforestation sites. For this study, A. alba seedlings were subjected to water deficit (withholding irrigation) and salt stress (watering the plants with increasing salt concentrations) treatments, under controlled greenhouse conditions.
After the treatments, the following variables were determined in control and stressed plants: growth parameters, levels of photosynthetic pigments, ion concentrations in needles and roots, the leaf contents of common osmolytes, the degree of oxidative stress, by quantification of malondialdehyde (MDA, a reliable oxidative stress marker), total phenolic compound and flavonoid contents, as representative non-enzymatic antioxidants, and the specific activities of major antioxidant enzymes.

Materials and Methods

Plant Growth and Stress Treatments
One-year-old seedlings of silver fir from the Romanian Carpathian Mountains (Gârda Seacă nursery, 46°31′ N/22°46′ E) were transferred to the greenhouse of the Institute for the Preservation and Improvement of Valencian Agrodiversity (COMAV), Universitat Politècnica de València, Valencia, Spain. After one week of acclimatisation, 175 seedlings, at approximately the same developmental stage, were selected, transplanted into 0.3 L individual pots containing 'Humin-substrat N3' (Klasmann-Deilmann, Germany) substrate, and randomly distributed into five groups of 35 seedlings, each group of pots placed within a plastic tray. The pots were maintained in a greenhouse with controlled temperature (minimum of 15 °C and maximum of 30 °C) under natural light and watered twice a week with tap water. Salt and water stress treatments started 21 days later. Salt treatments were applied by watering the plants twice weekly with NaCl solutions of 0 (for the controls), 100, 200 or 300 mM final concentration (in tap water), adding 1 L of solution per tray. The water stress (WS) treatment was performed by altogether withholding irrigation. Treatments were stopped after 30 days, before any seedling mortality was observed. Plant samples (needles and roots) were collected separately for the measurement of growth parameters and biochemical analyses. Seven replicates, each one consisting of a pooled sample of five seedlings, were used per treatment.

Substrate Analysis
The electrical conductivity of the substrate was measured after the treatments. Soil samples were collected from every pot, air-dried and passed through a 2 millimetre sieve. A soil:water suspension (1:5) was prepared in deionised water, mixed at 600 rpm for one hour at room temperature and then filtered through filter paper. Electrical conductivity was measured with a Crison 522 conductivity meter and expressed in dS m−1. The gravimetric method was used to determine soil moisture, as follows: a fraction of each soil sample was weighed (soil weight, SW), dried in an oven at 105 °C until constant weight and then weighed again (dry soil weight, DSW). The soil water content was calculated as:

Soil humidity (%) = [(SW − DSW)/SW] × 100

Plant Growth Parameters
Before starting the stress treatments (time 0), the number of needles and the stem length were determined for all A. alba seedlings. To analyse the effects of water and salt stress on A. alba at the stage of vegetative growth, the increases in the number of needles (Nno) and stem length (SL) with respect to the values measured at time 0, and the fresh weight (FW) of needles, were determined. Part of the needle material was weighed (FW), dried at 65 °C until constant weight, and weighed again (dry weight, DW), and the water content percentage (WC%) of needles was calculated as:

WC% = [(FW − DW)/FW] × 100
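The two gravimetric formulas above translate directly into code; the sample weights used here are invented, and the needle WC% formula is the standard gravimetric form (the source equation is garbled at this point).

```python
# Minimal helpers for the gravimetric calculations described above.
def soil_humidity(sw: float, dsw: float) -> float:
    """Soil water content (%) from fresh (SW) and dry (DSW) soil weights."""
    return (sw - dsw) / sw * 100.0

def needle_wc(fw: float, dw: float) -> float:
    """Needle water content (%) from fresh (FW) and dry (DW) weights."""
    return (fw - dw) / fw * 100.0

print(soil_humidity(10.0, 7.5), needle_wc(0.50, 0.20))   # 25.0 60.0
```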
Photosynthetic Pigments
Chlorophyll a (Chl a), chlorophyll b (Chl b) and total carotenoids (Caro) were determined following a previously described method [40]. Fresh needle material (0.05-0.10 g) was ground in the presence of liquid nitrogen. One ml of ice-cold 80% acetone was added to the sample, which was shaken overnight in the dark at 4 °C. Following a 10 min centrifugation at 12,000 rpm at 4 °C, the supernatants were collected, and the absorbance was measured at 470, 645 and 663 nm. Pigment concentrations were calculated according to the equations in [40], and chlorophyll and carotenoid contents were finally expressed in mg g−1 DW.

Ion Contents
Sodium (Na+), potassium (K+), calcium (Ca2+) and chloride (Cl−) ions were determined in roots and needles of all replicates, after the four-week treatments, according to Weimberg [41]. Dried plant material (0.05-0.10 g) was ground to a fine powder and extracted in 15 mL of Milli-Q water, by heating the samples for 1 h in a boiling water bath, followed by cooling on ice and filtration through a nylon filter of 0.45 µm pore size. Na+, Ca2+ and K+ were quantified with a PFP7 flame photometer (Jenway Inc., Staffordshire, UK), and Cl− was measured with a chloride analyser (Sherwood, model 926, Cambridge, UK).

Osmolyte Quantification
Two main types of osmolytes were analysed in silver fir needles, proline (Pro) and total soluble sugars (TSS). Pro content was measured by the ninhydrin-acetic acid method [42]. Briefly, needle extracts were prepared from fresh plant material in a 3% (w/v) sulfosalicylic acid solution, mixed with acid ninhydrin, incubated for one hour at 95 °C in a water bath, cooled and extracted with toluene. The absorbance of the organic phase was measured at 520 nm, using toluene as the blank. Pro concentration was calculated from a standard curve, prepared with known amounts of the osmolyte, and expressed in µmol g−1 DW. TSS contents were measured according to a published procedure [43]. Needle fresh material (0.05-0.10 g) was ground in the presence of liquid N2 and suspended in 3 mL of 80% (v/v) methanol. The samples were vortexed and centrifuged at 12,000 rpm for 10 min; the supernatants were collected, and a fraction of each extract was diluted 10-fold with water. The diluted sample (0.5 mL) was supplemented with concentrated sulphuric acid (2.5 mL) and 5% phenol (0.5 mL). Finally, the absorbance of the sample was measured at 490 nm. TSS contents were expressed as 'mg equivalent of glucose' (used as standard), mg eq. glucose g−1 DW.

Malondialdehyde (MDA)
Methanol extracts (80%, v/v, in water) were prepared by grinding 0.05-0.10 g of fresh needles, shaking the samples in a rocker shaker overnight at room temperature, followed by centrifugation at 12,000 rpm for 15 min. MDA was quantified in the supernatants, as previously described [44]. Each sample was mixed with 0.5% thiobarbituric acid (TBA) prepared in 20% trichloroacetic acid (TCA), or with 20% TCA without TBA for the controls, and then incubated at 95 °C for 15 min in a water bath. The reactions were stopped on ice, the samples were centrifuged at 12,000 rpm for 10 min at 4 °C, and the absorbance of the supernatants was measured at 532 nm. After subtracting the non-specific absorbance at 600 and 440 nm, the MDA concentration was calculated applying the equations described by Hodges [44], based on the molar extinction coefficient of the MDA-TBA adduct at 532 nm (ε532 = 155 mM−1 cm−1).
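A simplified version of the MDA calculation: a plain Beer-Lambert estimate from the corrected absorbance, using the extinction coefficient quoted above. The full Hodges correction for interfering compounds (including the 440 nm reading) is given in [44]; the absorbance values here are invented.

```python
# Simplified Beer-Lambert estimate of MDA in the extract (1 cm light path).
eps_mM, path_cm = 155.0, 1.0
a532_plus_tba, a600_plus_tba = 0.412, 0.021   # sample incubated with TBA
a532_minus_tba = 0.034                        # control without TBA, at 532 nm

corrected = (a532_plus_tba - a600_plus_tba) - a532_minus_tba
mda_mM = corrected / (eps_mM * path_cm)
print(f"MDA = {mda_mM * 1000:.2f} uM in the extract")
```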
Non-Enzymatic Antioxidants
Concentrations of total phenolic compounds (TPC) and total flavonoids (TF) were determined in the same 80% methanol extracts used for MDA measurements. TPC were determined according to the protocol of Blainski [45], which is based on the reaction with the Folin-Ciocalteu reagent in the presence of NaHCO3. The reaction mixtures were incubated at room temperature, in the dark, for 90 min, and the absorbance was then recorded at 765 nm. TPC concentration was expressed as equivalents of the standard, gallic acid (mg eq. GA g−1 DW). TF were determined by nitration of catechol groups with NaNO2, followed by reaction with AlCl3 under alkaline conditions [46]. The absorbance of the samples was read at 510 nm, using catechin as the standard. TF concentration was expressed as equivalents of catechin (mg eq. C g−1 DW).

Antioxidant Enzyme Activities
The specific activity of four major antioxidant enzymes, namely superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX) and glutathione reductase (GR), was determined at room temperature (~25 °C) in crude protein extracts prepared from silver fir needles, essentially as previously described [47]. The plant material was ground in liquid N2 and mixed with extraction buffer (20 mM Hepes, pH 7.5, 50 mM KCl, 1 mM EDTA, 0.1% (v/v) Triton X-100, 0.2% (w/v) polyvinylpyrrolidone, 0.2% (w/v) polyvinylpolypyrrolidone and 5% (v/v) glycerol). To improve protein extraction, 1/10 volume of 'high salt buffer' (225 mM Hepes, pH 7.5, 1.5 M KCl and 22.5 mM MgCl2) was added to the samples, mixed well by vortexing and kept for 15 min on ice. After centrifugation at 13,500 rpm for 15 min at 4 °C, the supernatants were collected, concentrated in U-Tube concentrators (Novagen, Madison, WI, USA), and centrifuged again to remove precipitated material. The supernatants, referred to as 'protein extracts', were frozen in liquid N2 and stored in aliquots at −75 °C. Protein concentration in the extracts was measured by the method of Bradford [48], using the Bio-Rad reagent and bovine serum albumin (BSA) as standard.

SOD activity in the protein extracts was determined by following, at 560 nm, the inhibition of nitroblue tetrazolium (NBT) photoreduction in reaction mixtures containing riboflavin as the source of superoxide radicals [49]. A SOD unit was defined as the amount of enzyme that causes 50% inhibition of NBT photoreduction under the assay conditions. CAT activity was assessed by the decrease in absorbance at 240 nm, which parallels the consumption of H2O2 added to the extracts [50]. A CAT unit was defined as the amount of enzyme that will decompose one mmol of H2O2 per minute at 25 °C. For ascorbate peroxidase (APX), the enzyme activity was determined by the decrease in absorbance observed at 290 nm as ascorbate becomes oxidised in the reaction [51]. One APX unit was defined as the amount of enzyme required to consume one mmol of ascorbate per minute at 25 °C. GR activity was quantified according to Reference [52], following the decrease in absorbance at 340 nm due to oxidation of NADPH, the cofactor of the GR-catalysed reduction of oxidised glutathione (GSSG). One GR unit was defined as the amount of enzyme that will oxidise one mmol of NADPH per minute at 25 °C.

Statistical Analyses
Data were analysed using the program Statgraphics Centurion XVI (Statgraphics Technologies, The Plains, VA, USA). Significant differences between treatments were tested by one-way analysis of variance (ANOVA) at the 95% confidence level, and post hoc comparisons were made using Tukey's HSD test at p < 0.05.
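The testing scheme just described (one-way ANOVA followed by Tukey's HSD at α = 0.05, with n = 7 replicates per treatment) can be sketched as follows, with invented trait values for the five treatment groups.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA across the five treatments, then Tukey's HSD post hoc test.
rng = np.random.default_rng(1)
groups = {t: rng.normal(loc=mu, scale=1.0, size=7)          # n = 7 replicates
          for t, mu in [("Control", 10), ("WS", 7), ("NaCl100", 8),
                        ("NaCl200", 6), ("NaCl300", 5)]}
print(f_oneway(*groups.values()))                            # global F and p-value

vals = np.concatenate(list(groups.values()))
labs = np.repeat(list(groups.keys()), 7)
print(pairwise_tukeyhsd(vals, labs, alpha=0.05).summary())   # pairwise comparisons
```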
All mean values throughout the text are based on seven biological replicates, each one corresponding to a composite sample of five pooled individual seedlings. Hierarchical cluster analysis (HCA) and the corresponding heatmap were performed using the ClustVis tool [53] for traits for which significant differences (p < 0.05) had been detected in the ANOVA, with the aim of identifying a signature of responses specific to the water and salinity stresses applied. Unit variance scaling of the normalised and centred data was used. Distance measures for the HCA were based on Pearson correlations, and the average clustering method, with the 'higher median value first' tree ordering option, was used.

Substrate Analysis

The electric conductivity (EC1:5) of the substrate increased in parallel with the increasing NaCl concentration in the irrigation solutions, reaching an almost eight-fold higher value in the presence of 300 mM NaCl than in the control pots, confirming the high correlation between EC and the concentration of the saline solutions used in the salt treatments. The water stress treatment also led to a significant increase in the substrate EC, probably due to the concentration of salts in the soil; logically, however, it remained lower than in the pots watered with NaCl solutions, even at the lowest concentration tested (100 mM). As expected, the humidity of the substrate was strongly reduced, by more than 40%, in the pots that were not irrigated (WS treatment), whereas those watered with saline solutions showed only slight reductions, as compared to the control (Table 1).

Plant Growth Analysis

Salt and water stress inhibited the growth of A. alba seedlings, as shown by the relative reductions in stem elongation, increment in the number of needles and biomass accumulation in the stressed plants, as compared to the non-stressed controls. For example, all stress treatments strongly reduced the mean stem elongation measured in the control seedlings, but no statistically significant (p < 0.05) differences were observed between the water deficit treatment and the three different NaCl concentrations (Figure 1a). In terms of the increment in the number of needles (Figure 1b) or the needles' FW (Figure 1c), growth inhibition of the salt-treated seedlings was concentration-dependent, with the strongest effects observed at the highest NaCl concentration tested. Under our experimental conditions, the water deficit treatment resulted in an inhibition of growth similar to that caused by the lowest salinity applied to the A. alba seedlings. Increasing external salinity led to a significant, concentration-dependent reduction in the needles' water content, down to about 45% in the presence of 300 mM NaCl (Figure 1d). Nevertheless, the stress-induced dehydration of the needles was much more pronounced in the plants subjected to water deficit, where water content dropped to 13% (Figure 1d). Therefore, the observed reduction of fresh weight in water-stressed seedlings is probably due, to a large extent, to loss of water by the needles.

Photosynthetic Pigments

Photosynthetic pigment (chlorophylls a and b, carotenoids) concentrations decreased under stress. Seedlings subjected to either water deficit or salinity showed lower chlorophyll contents than the non-stressed controls, and the differences were statistically significant in all cases, except for Chl a in plants watered with 100 mM NaCl. The strongest relative reductions were observed in water-stressed seedlings (Figure 2).
Needle levels of carotenoids were low and, in general, did not show significant differences between treatments, except for the increase observed in response to water stress (Figure 2).

Figure 1. Stem length and number-of-needles measurements were taken just before starting the treatments (time 0) and before collecting the samples (time 30). Bars represent means ± SE (n = 7). Different letters above the bars indicate significant differences between treatments, according to Tukey's test (α = 0.05).

Figure 2. For each of the three pigments, different letters above the bars indicate significant differences between treatments, according to Tukey's test (α = 0.05).

Ion Levels

As expected, there were no significant differences in the levels of Na+ and Cl− between control and drought-stressed plants, in either needles or roots. On the contrary, Na+ and Cl− levels were much higher in salt-treated plants than in the non-stressed controls. In the presence of 300 mM NaCl, Na+ contents were about three-fold higher than under non-stress conditions, both in needles and roots (Figures 3a and 4a). On the other hand, the salt treatment induced the accumulation of Cl− to concentrations seven-fold (roots) or five-fold (needles) higher than in the corresponding controls (Figures 3b and 4b).
For all treatments (controls, water deficit and each NaCl concentration in the irrigation water), both in roots and needles, Na+ concentrations were always higher than those of Cl− under the same conditions. When comparing Na+ and Cl− contents between roots and needles, they were always substantially higher in needles, for each treatment and both ions (compare Figure 4a with Figure 3a, and Figure 4b with Figure 3b). It is interesting to note the high Na+ concentration (about 400 µmol g−1 DW) measured in needles of control plants, in the absence of salt (Figure 4a).

Ca2+ and K+ levels were also higher in the needles than in the roots, in the controls and for all applied treatments (Figures 3c,d and 4c,d). Mean K+ contents generally decreased in roots and needles of water-stressed seedlings, and in response to salt stress in a concentration-dependent manner. Still, the differences with the non-stressed controls were statistically significant only in roots and in the presence of 200 or 300 mM NaCl (Figures 3c and 4c). As compared to the controls, mean Ca2+ concentrations increased in parallel with external salinity, both in roots and in needles, but significant differences were found only in needles. This effect was not observed under water deficit conditions (Figures 3d and 4d).

Osmolyte Contents

Osmolyte biosynthesis is a general response of all organisms, including plants, to environmental conditions that generate osmotic stress, such as salinity or drought. The accumulation of these compatible solutes helps maintain osmotic balance, minimising or even avoiding cell dehydration.
In this study, we measured the levels of the two main plant osmolytes, proline (Pro) and total soluble sugars (TSS), in needles of A. alba seedlings after the water and salt stress treatments. For this species, under our experimental conditions, Pro concentrations were low (less than 15 µmol g−1 DW) in the control plants, and did not vary significantly in the plants subjected for one month to the water deficit conditions; however, a significant increase of about two-fold was observed in the presence of salt (Figure 5a). The mean TSS level in non-stressed seedlings was approximately 20 mg eq. glucose g−1 DW, and increased significantly, both under water deficit and salt stress conditions, in the latter case in a clear concentration-dependent manner, reaching a maximum value of ca. 50 mg eq. glucose g−1 DW (Figure 5b).

Oxidative Stress

Malondialdehyde (MDA) is a product of lipid peroxidation, often employed as a reliable biomarker of cellular oxidative stress [54]. Needle MDA concentrations were determined in silver fir seedlings after the stress treatments. When compared to control values, mean MDA contents showed small increases under water deficit conditions, as well as with growing external salinity. The observed differences, however, were not statistically significant (Figure 6). Bars in Figures 5 and 6 represent means ± SE (n = 7); different letters above the bars indicate significant differences between treatments, according to Tukey's test (α = 0.05).
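For illustration, a minimal sketch of the MDA calculation in the spirit of the method described earlier; it applies the baseline and minus-TBA corrections with the stated extinction coefficient, but omits the 440 nm sugar correction of the full Hodges equations, and the example numbers, function name and assay volumes are hypothetical:

```python
# Simplified, Hodges-style MDA estimate: baseline-correct A532 with the
# 600 nm reading, subtract the -TBA control, then convert with the
# extinction coefficient given in the text. The 440 nm correction of
# the full Hodges equations is omitted here for brevity.
EPS_532 = 155.0  # mM-1 cm-1, MDA-TBA adduct (from the text)

def mda_nmol_per_g(a532_tba, a600_tba, a532_ctrl, a600_ctrl,
                   extract_ml, sample_g, path_cm=1.0):
    """MDA content in nmol per g of needle material."""
    corrected_abs = (a532_tba - a600_tba) - (a532_ctrl - a600_ctrl)
    mda_mM = corrected_abs / (EPS_532 * path_cm)    # mM in the extract
    return mda_mM * extract_ml * 1000.0 / sample_g  # mM * mL = umol; *1000 -> nmol

print(mda_nmol_per_g(0.210, 0.015, 0.040, 0.010,
                     extract_ml=1.0, sample_g=0.08))  # ~13 nmol/g
```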
Non-Enzymatic Antioxidants

No significant changes in the needle concentrations of total phenolic compounds (TPC, Figure 7a) or total flavonoids (TF, Figure 7b) were observed when comparing the control, non-stressed plants to those subjected to the water deficit treatment or grown in the presence of a low salt concentration (100 mM NaCl). At the higher salinities (200 and 300 mM NaCl), however, a significant increase in TPC and TF needle contents was observed. In any case, the absolute levels of the antioxidants and their relative accumulation compared to the control samples were small, with a maximum of about two-fold in the presence of the highest salt concentration tested for TF, and even less for TPC (Figure 7).

Antioxidant Enzyme Activities

The specific activities of some of the most important antioxidant enzymatic systems, SOD, CAT, APX and GR, were calculated in protein extracts prepared from all collected needle samples (Figure 8). A slight increase in the average values of the specific activities of all tested enzymes, except GR, was detected under water deficit conditions. However, the differences with the corresponding controls were statistically significant only for SOD. Regarding salt stress conditions, CAT-specific activity did not vary significantly for any of the applied treatments (Figure 8b), whereas for the other three enzymes a slight, but significant, increase was observed, either at all NaCl concentrations (SOD, Figure 8a) or only at medium and high salinities (APX, Figure 8c; GR, Figure 8d). These data are also in agreement with the relatively low level of oxidative stress induced by water deficit or salinity in A. alba seedlings under our experimental conditions. Bars in Figures 7 and 8 represent means ± SE (n = 7); different letters above the bars indicate significant differences between treatments, according to Tukey's test (α = 0.05). A sketch of how such specific activities are derived from raw absorbance slopes is given below.
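This sketch converts a recorded absorbance slope into specific activity via the Beer-Lambert law, here for the GR assay. The extinction coefficient is the commonly used literature value for NADPH at 340 nm; assay volumes and protein values are hypothetical, and the unit definition (µmol min−1 here) should be adjusted to match the one used in the text:

```python
# Sketch: specific enzyme activity from an absorbance slope (Beer-Lambert).
EPS_NADPH_340 = 6.22  # mM-1 cm-1, NADPH at 340 nm (standard literature value)

def specific_activity(dA_per_min, eps_mM, vol_ml, sample_ml,
                      protein_mg_per_ml, path_cm=1.0):
    """Return units per mg protein (1 U = 1 umol substrate per minute)."""
    rate_mM_per_min = dA_per_min / (eps_mM * path_cm)  # conc. change in cuvette
    umol_per_min = rate_mM_per_min * vol_ml            # mM * mL = umol
    protein_mg = sample_ml * protein_mg_per_ml         # protein added to assay
    return umol_per_min / protein_mg

# Example: GR assay with a 0.05 A340 decrease per min in a 1 mL reaction,
# 50 uL of a 2 mg/mL protein extract
print(specific_activity(0.05, EPS_NADPH_340, vol_ml=1.0,
                        sample_ml=0.05, protein_mg_per_ml=2.0))
```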
Hierarchical Cluster Analysis (HCA)

Growth, physiological and biochemical parameters that showed significant differences (p < 0.05) in the ANOVA were included in a hierarchical cluster analysis (HCA) of the data (Figure 9). The HCA allowed for establishing a signature of responses depending on the stress applied. Considering the different treatments, the two main branches of the HCA separated the control, water stress and 100 mM NaCl treatments, on the one side, from the 200 and 300 mM NaCl treatments, on the other. Besides, within the first cluster, the control and the 100 mM NaCl treatments were grouped together and separated from the water deficit conditions. In terms of the general profile of all analysed variables, the treatments at higher salinities, 200 and 300 mM NaCl, were the most closely correlated (Figure 9).

The water stress treatment had a characteristic signature in the response, which was quite similar to the control, but with lower needle fresh weight (NFW) and needle water content (NWC), lower levels of chlorophylls and higher levels of carotenoids, lower concentrations of K, and slightly higher concentrations of osmolytes (Pro and TSS), phenolics (TPC) and flavonoids (TF), as well as of the enzymatic activities SOD and APX. Regarding salinity, the two treatments with the highest NaCl concentrations (200 and 300 mM) were characterised by a specific signature, which was displayed with greater intensity at the highest NaCl concentration (300 mM), corresponding to high levels of Na, Cl and Ca, and low levels of K, as well as high levels of osmolytes and of non-enzymatic and enzymatic antioxidants. The 100 mM NaCl treatment was generally intermediate between the control and the 200 mM NaCl treatment, except for a remarkably high content of K in the needles (Figure 9).

Two major clusters were also distinguished regarding the different measured parameters. The first one includes needle fresh weight (NFW), K+ in roots (Kr) and total carotenoids (Caro); the two former variables were highly correlated and displayed the highest values in the control and the lowest in the presence of 300 mM NaCl. The rest of the traits form a second major cluster, in which several sub-clusters were identified (Figure 9). For example, needle water content (NWC), the chlorophylls (Chl a and Chl b) and K+ in needles (Kn) can be grouped in one of the major sub-clusters, all displaying the lowest levels in the water stress treatment.
The other major sub-cluster includes the rest of the variables: Ca2+, Na+ and Cl− in needles and roots (Can, Car, Nan, Nar, Cln, Clr), osmolytes (Pro, TSS), antioxidant compounds (TPC, TF) and the SOD, APX and GR activities. Generally, these traits displayed higher levels under salt stress, in most cases increasing in parallel with increasing salinity. SOD, however, appeared to be activated predominantly by water deficit. The highest correlations were observed between the ion concentrations, in both roots and needles, as well as between the ions, Pro and GR activity (Figure 9).

Figure 9. Hierarchical cluster analysis (HCA) and heatmap of growth, physiological and biochemical parameters displaying significant differences (p < 0.05) in analysis of variance (ANOVA) tests, in one-year-old A. alba seedlings after 30 days of water or salt stress treatments. NFW, needle fresh weight; NWC, needle water content; Chl a, chlorophyll a; Chl b, chlorophyll b; Caro, total carotenoids; Nan, sodium in needles; Cln, chloride in needles; Can, calcium in needles; Kn, potassium in needles; Nar, sodium in roots; Clr, chloride in roots; Car, calcium in roots; Kr, potassium in roots; Pro, proline; TSS, total soluble sugars; TPC, total phenolic compounds; TF, total flavonoids; SOD, superoxide dismutase activity; APX, ascorbate peroxidase activity; GR, glutathione reductase activity.
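A minimal sketch reproducing the clustering settings described in the Methods (unit-variance scaling, Pearson-correlation distance, average linkage) with SciPy in place of the ClustVis web tool; the trait matrix below is a random placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
treatments = ["Control", "WS", "NaCl_100", "NaCl_200", "NaCl_300"]
X = rng.normal(size=(5, 20))  # placeholder: 5 treatments x 20 traits

# Centre each trait and scale to unit variance ('unit variance scaling')
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Distance = 1 - Pearson correlation between treatment profiles;
# clustering method = average linkage, as in the Methods
Z = linkage(Xs, method="average", metric="correlation")
dendrogram(Z, labels=treatments)
plt.tight_layout()
plt.show()
```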
Effect of Salt and Water Stress on Seedlings' Growth and Photosynthetic Pigments

Growth inhibition is probably the first and most general response of plants to different types of stress, which induce a switch from primary to secondary metabolism as energy and metabolic precursors, normally used for biomass accumulation during the vegetative growth phase, are redirected to the activation of defence reactions against those stress factors [55,56]. In most plant species, therefore, the determination of growth parameters is commonly used to assess the effects of stress on the plants. This approach is not so useful for slow-growing species such as conifers, including the silver fir, especially in the first years of life [22,57,58]. Nevertheless, after one month of water and salt stress treatments, we could detect a significant inhibition of growth in one-year-old silver fir seedlings, indicating that at this developmental stage this species is relatively more sensitive to drought and high salinity than other conifers, for example, Picea abies [59].

Fresh weight and water content are probably the most precise parameters for assessing the degree of stress-induced growth inhibition, but their measurement requires the destruction of the plants. It is, therefore, essential to identify and characterise appropriate physiological and biochemical stress indicators that could be quantified through simple, sensitive and minimally invasive assays requiring small amounts of plant material. Photosynthetic pigments can be included among these putative stress markers, as inhibition of photosynthesis is also a common effect of stress [60–62] and is accompanied by a reduction of chlorophyll contents, through inhibition of chlorophyll synthesis combined with activation of its degradation [63]. Previous studies on plants of different conifer species exposed to salt or drought treatments have also reported a decrease in chlorophylls a and b and, in some cases, in total carotenoids [59,64,65]. Following this general trend, in the present study, water and salt stress caused a significant reduction of chlorophyll a and b contents in silver fir needles. On the contrary, carotenoid levels were very low and did not show any significant decrease under stress.

Effect of Salt Stress on Ion Accumulation

Salt-sensitive plants (glycophytes), which include all major crops, generally respond to salt stress by trying to keep low concentrations of toxic ions in the leaves, either through exclusion mechanisms at the root level or by blocking their transport to the aerial parts of the plant, mechanisms that are effective only at low soil salinities [66–68]. On the contrary, in A. alba, salt stress caused the accumulation of relatively high concentrations of Na+ and Cl− (and also Ca2+) in roots and needles, in parallel with the increasing NaCl concentration in the irrigation water. This behaviour has also been reported for other coniferous species [18,65,69]. What should be highlighted is that, for each NaCl concentration, the levels of all tested ions were substantially higher in the needles than in the roots, indicating the presence of active systems for their transport to the aerial parts of the plants. It is also worth noting the high Na+ concentration (not so much for Cl−) measured in needles of control seedlings. This result suggests that silver fir possesses active mechanisms for the uptake and transport to the needles of these ions, mainly Na+, to be used as 'cheap' inorganic osmolytes, even under low salinity conditions.
In terms of energy consumption, ion transport is less demanding than the synthesis of organic osmolytes [70]. Therefore, it can be concluded that the responses of A. alba seedlings to stress include constitutive mechanisms based on the accumulation in needles of Na+ and other inorganic ions, without reaching toxic levels. The increase in Ca2+ concentration in parallel with increasing soil salinity should also be considered a mechanism of tolerance against salt stress, as the role of calcium in counteracting the harmful effects of NaCl is well established [71]. Accumulation of Na+ in plant tissues is generally accompanied by a reduction of K+ levels, as both ions compete for the same membrane transporters [72,73]. Maintenance of relatively high K+/Na+ ratios is considered a relevant mechanism of salt tolerance [74]. Salt-induced reduction of K+ concentration was detected, indeed, in roots of silver fir seedlings, but not in needles, where no significant differences with the controls were observed. The active transport of K+ from the roots limited the reduction of K+/Na+ ratios in the needles, thus contributing to salt tolerance.

Osmolyte Synthesis

Environmental stress conditions that lead to cellular dehydration, including salinity and water stress, trigger the cytosolic accumulation of various compatible organic solutes or osmolytes. Proline (Pro) is a common osmolyte in plants, synthesised in many species in response to different abiotic stresses [75]. Previous studies have reported significant increases in Pro concentrations under water deficit and/or high salinity also in conifers, such as spruce [18,59,76] or pine [77,78]. In the present study, a significant increase in Pro concentration was detected in response to salt stress, although the differences with the non-stressed controls were small, as were the absolute Pro levels reached. Therefore, it is likely that the contribution of Pro to osmotic adjustment under salt stress conditions is also small; nevertheless, this does not exclude a role of Pro in the mechanisms of salt tolerance in A. alba, based on its additional functions as an 'osmoprotectant' [79]. On the contrary, Pro does not seem to play any role in the response of silver fir to water deficit, at least under our experimental conditions.

Soluble carbohydrates also play functional roles in the abiotic stress responses of many different plant species, contributing to osmotic adjustment and acting as osmoprotectants [80]. Accumulation of total soluble sugars (TSS) in response to salt stress has been reported, for example, in the needles, sapwood and inner bark of some coniferous species [81]. The experiments performed in the present study showed a significant increase in TSS levels under water deficit and salt stress conditions, suggesting the participation of these osmolytes in the responses of silver fir seedlings to both drought and salinity.

Oxidative Stress and Antioxidant Defence Mechanisms

Reactive oxygen species (ROS) are produced in plants as by-products of different metabolic reactions, for example, photosynthesis and respiration, and play essential roles in cell signalling and homeostasis [82]. 'Normal' physiological ROS levels can increase dramatically under stress conditions (drought, salinity, high temperatures or UV irradiation, among others), breaking down the balance between ROS production and scavenging and causing oxidative stress [83,84].
The harmful effects of excess ROS can be counteracted by the activation of antioxidant enzymes, such as superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX) or glutathione reductase (GR) [85,86], and there are many published reports demonstrating the increase in the specific activity of these enzymes under stress conditions. SOD is considered the first line of defence against oxidative stress, eliminating the highly reactive superoxide radicals [87]. SOD activity increases in response to high salinity stress, for example, in pea [88], maize [89], Morus alba [90] or Brassica napus [91]. The combined effects of drought and high light irradiation significantly increased SOD activity in spruce (Picea asperata Mast.) seedlings [92]. CATs are tetrameric heme-containing enzymes, mainly located in the peroxisomes, which convert the SOD-generated H2O2 into O2 and H2O, thus contributing to ROS scavenging [93,94]. APXs also eliminate hydrogen peroxide, playing an essential role in the antioxidant system of plants [95]. GR is a highly conserved enzyme localised mainly in the chloroplast stroma, but also present in mitochondria, cytosol and peroxisomes [83]. GR activity has been shown to increase in response to different stresses, for example, by chilling in cucumber (Cucumis sativus L.) leaves [96], at high temperature in Triticum aestivum [97], under salt stress in cotton calli [98] or in alfalfa nodules subjected to water deficit [99]. In addition to the antioxidant enzymes mentioned above (and other oxidoreductases), non-enzymatic antioxidants, such as phenolic compounds, and especially the subgroup of flavonoids, many of them showing strong antioxidant activities, are also involved in ROS scavenging and in maintaining cellular redox equilibrium. Therefore, the stress-induced biosynthesis of these metabolites can contribute to the mechanisms of defence against abiotic stress [100,101].

According to the results of the present study, the only antioxidant system that appears to be substantially involved in the response of silver fir seedlings to water deficit stress, under the specific experimental conditions used, is SOD. Neither the activities of the other three assayed enzymes (CAT, APX and GR) nor the needle contents of phenolics or flavonoids showed significant changes with respect to the values measured in the controls. On the contrary, salt stress induced significant increases in the specific activities of all tested enzymes, except CAT, as well as in the levels of total phenolic compounds and flavonoids. In all cases, however, the observed stress-induced activation of antioxidant systems was relatively weak, in agreement with the low degree of drought- and salt-induced oxidative stress suggested by the observed changes in MDA contents. It appears that other mechanisms, such as the regulation of ion transport (for salt stress) and the accumulation of specific osmolytes (for both salinity and water deficit), are enough to avoid higher levels of oxidative stress under the particular conditions used in our experiments. Stronger stress treatments would likely cause a higher degree of oxidative stress and, consequently, induce a more pronounced activation of the antioxidant enzymes and the accumulation of phenolic compounds and flavonoids to higher levels.
Hierarchical Cluster Analysis

The joint analysis by HCA of all growth and biochemical variables that showed statistically significant changes in response to the applied treatments generally confirmed and extended the information provided by the individual experiments and revealed specific signatures of the growth, physiological and biochemical responses to the different stresses. Concerning the responses of A. alba seedlings to water deficit, it clearly showed inhibition of growth (reduction of needle fresh weight), needle dehydration (decrease in water content), an increase in carotenoid contents, a decrease in chlorophylls and the activation of SOD, but also a weaker activation of APX and the accumulation of TSS and, to a much lesser extent, Pro. The analysis also confirmed the concentration-dependent reduction of fresh weight in the presence of increasing external NaCl, needle dehydration (although less intense than under water deficit stress), relatively strong activation of GR and APX and a weaker activation of SOD, and the accumulation, to a greater or lesser extent, of TPC, TF and all measured ions, except K+ in roots. As shown here, the responses of A. alba seedlings to drought and salinity partly overlap, although quantitative differences between the two treatments were observed. This is to be expected, considering that both environmental conditions cause osmotic and oxidative stress in plants. There are, however, mechanisms of response specific to salt stress, based mostly on the control of ion transport and ion homeostasis.

Conclusions

This work provided new experimental data on silver fir (Abies alba), an economically and ecologically important coniferous species for which very little information is available regarding its responses to drought and, especially, high salinity. Silver fir does not seem to be very resistant to water deficit or salt stress, at least at the seedling stage, since a one-month application of either treatment inhibited growth. Nevertheless, the results presented here allowed for establishing the most relevant mechanisms of (limited) tolerance to drought, mostly based on the accumulation of soluble carbohydrates as osmolytes/osmoprotectants. Tolerance to salt, on the other hand, seems to depend on the active transport to the needles of Na+, Cl− and Ca2+, the maintenance of relatively high K+/Na+ ratios and the accumulation of Pro and soluble sugars for osmotic adjustment. Interestingly, A. alba seedlings present relatively high Na+ concentrations in their needles in the absence of salt. As long as it does not reach toxic levels, Na+ can contribute to osmotic balance as a 'cheap' osmoticum, and its accumulation may represent a constitutive mechanism of defence against stress. These responses appear to be efficient enough to avoid the generation of high levels of oxidative stress; therefore, the activation of antioxidant systems most likely plays only a secondary role in the mechanisms of tolerance to drought and salinity in silver fir seedlings. In addition, from a practical point of view, we would like to suggest the increase in carotenoid contents, the decrease in chlorophyll a and b levels and the accumulation of soluble sugars as reliable biomarkers of drought stress in this species. Salt stress, on the other hand, is associated with a decrease in chlorophyll levels and the accumulation in the needles of TSS, Pro (as organic osmolytes) and, especially, cations such as Na+ or Ca2+.
These biochemical stress markers may be useful for the future management of silver fir forests and reforestation programmes.
Thioredoxin Reductase-2 Is Essential for Keeping Low Levels of H2O2 Emission from Isolated Heart Mitochondria

Respiring mitochondria produce H2O2 continuously. When production exceeds scavenging, H2O2 emission occurs, endangering cell functions. The mitochondrial peroxidase peroxiredoxin-3 reduces H2O2 to water using reducing equivalents from NADPH supplied by thioredoxin-2 (Trx2) and, ultimately, thioredoxin reductase-2 (TrxR2). Here, the contribution of this mitochondrial thioredoxin system to the control of H2O2 emission was studied in isolated mitochondria and cardiomyocytes from mouse or guinea pig heart. Energization of mitochondria by the addition of glutamate/malate resulted in a 10-fold decrease in the ratio of oxidized to reduced Trx2. This shift in redox state was accompanied by an increase in NAD(P)H and was dependent on TrxR2 activity. Inhibition of TrxR2 in isolated mitochondria by auranofin resulted in increased H2O2 emission, an effect that was seen under both forward and reverse electron transport. This effect was independent of changes in NAD(P)H or membrane potential. The effects of auranofin were reproduced in cardiomyocytes; superoxide and H2O2 levels increased, but similarly, there was no effect on NAD(P)H or membrane potential. These data show that energization of mitochondria increases the antioxidant potential of the TrxR2/Trx2 system and that inhibition of TrxR2 results in increased H2O2 emission through a mechanism that is independent of changes in other redox couples.

Reactive oxygen species (ROS) are continuously produced by respiring mitochondria through the reaction of molecular oxygen with respiratory complexes from the electron transport chain, generating superoxide (O2•−) (1,2). Subsequent dismutation of O2•− by superoxide dismutase generates hydrogen peroxide (H2O2), which can, in turn, affect cell function by reacting with thiol residues in redox-sensitive proteins in either the mitochondria or cytoplasm. Under redox-balanced conditions, mitochondrial H2O2 production is offset by the scavenging capacity of the GSH and thioredoxin-2 (Trx2) antioxidant systems. The Trx2 and GSH systems buffer H2O2 via peroxiredoxin-3 (Prx3) (3,4) and glutathione peroxidase-1 and -4 (5), respectively. The other H2O2 scavenger, catalase, is present in very low concentrations in the mitochondria (6,7). Both GSH and Trx2 require a continuous supply of NADPH-dependent reducing equivalents from GSH reductase and mitochondrial thioredoxin reductase-2 (TrxR2) (8–10). In addition to having common substrates and cofactors, GSH- and Trx2-dependent antioxidant systems may substitute for one another to maintain certain cell functions (11). Mounting evidence indicates that these two systems are not completely redundant, however, and that the flow of electrons through each system is independently regulated (12–14). When the rate of superoxide production exceeds the scavenging capacity of the antioxidant systems, H2O2 emission from the mitochondria will increase (9,15,16). The importance of the antioxidant systems is highlighted by the observation that H2O2 emission from the mitochondria into the cytoplasm will occur if TrxR2 is inhibited in rat liver (17). In the heart, there may be an even more dramatic impact of TrxR2 on ROS emission and overall cell function due to the high rate of mitochondrial respiration in this organ. However, this possibility has yet to be investigated.
This issue is relevant to human health, where uncontrolled mitochondrial ROS emission is increasingly recognized as a causative factor for many cardiac disorders (18–21), both acute (e.g. ischemia/reperfusion injury) (22) and chronic (e.g. congestive heart failure) (23). Intriguingly, although the expression of the enzymatic scavenger GSH reductase is highly conserved in aerobic life (24), its genetic knockdown results in viable mice (25). Conversely, ablation of either the Trx2 or TrxR2 gene confers a lethal embryonic phenotype (26,27), and cardiac tissue-restricted ablation of TrxR2 results in fatal dilated cardiomyopathy (26). That Trx2 contributes to protection against oxidative stress is epitomized by the fact that its overexpression in human HEK-293 cells results in enhanced ΔΨm and increased resistance to oxidative stress induced by tert-butyl hydroperoxide (28). Therefore, Trx2 and TrxR2 play crucial roles in development and survival, and changes in this system likely contribute to cardiac dysfunction in response to oxidative stress.

The relative contribution of the GSH and Trx2 systems in controlling ROS levels in the heart is still unclear. GSH is more abundant, with concentrations in the millimolar range (29), whereas Trx2 is present at low micromolar levels (3). Recently, it was shown in brain and liver mitochondria that the GSH redox state became much more reducing when respiratory substrates were provided, explaining at least in part why energized mitochondria were protected from an extrinsic oxidative stress (30). Whether mitochondrial energy status will alter the redox state of Trx2 (and thus the function of the TrxR2/Trx2 system) is currently unknown.

Here, we investigated the possibility that TrxR2 controls H2O2 emission by controlling the levels of reduced (active) Trx2 and Prx3. Trx2 became more reduced upon provision of respiratory substrates. The TrxR2 inhibitor auranofin (AF) blocked the reduction of Trx2, increased the oxidation of Prx3, and increased emission of H2O2 from mitochondria but had no effect on the redox states of GSH or NAD(P)H. Similar effects were obtained when ventricular cardiomyocytes were treated with AF. Incubation of mitochondria with 1-chloro-2,4-dinitrobenzene (DNCB) also increased ROS emission but did so by depleting GSH levels in the absence of any effect on Trx2 or Prx3. Our study demonstrates that inhibiting TrxR2 alters mitochondrial redox balance and increases H2O2 emission. This occurs without altering GSH or NAD(P)H levels and without affecting energetics. Energized mitochondria display more reduced Trx2. The latter appears as a means of self-protection against oxidative stress while rendering the organelle less prone to H2O2 emission.

EXPERIMENTAL PROCEDURES

Mitochondrial Isolation from Heart

Mitochondrial isolation and handling from guinea pig heart were performed as described (31). The same procedure was utilized for isolating mitochondria from mouse heart but with slight modifications (see supplemental "Methods").
Mitochondrial protein concentrations were determined using the bicinchoninic acid method (BCA protein assay kit, Thermo Scientific).

Cardiomyocyte Isolation and Fluorescent Probes

Ventricular cardiomyocytes from guinea pig or mouse were handled as described (32–34). Before imaging, cells were loaded with the fluorescent probes.

Determination of TrxR2 Activity

To prepare mitochondrial extracts, mitochondria were suspended in buffer A (100 mM potassium phosphate (pH 7.0), 2 mM EDTA, and protease inhibitors (Complete, Roche Applied Science)) and subjected to three sequential freeze/thaw cycles between an ethanol-dry ice bath and a 37 °C water bath. Membranes were sedimented by centrifugation at 14,000 × g for 2 min, and the supernatant was recovered for further analysis. The protein concentration was obtained using the BCA protein assay (Pierce). The activity of TrxR2 was determined using a modified version of the protocol published by Arnér and Holmgren (35), which excludes the use of BSA (BSA is known to bind and sequester AF). Briefly, ~20 µg of mitochondrial extract was added to buffer A containing 3.33 mM 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) and 200 µM NADPH. The relative rate of reduction of DTNB (ΔA at 412 nm) was recorded under baseline conditions or after the addition of increasing AF concentrations.

Processing Isolated Mitochondria for Trx2, Prx3, and GSH Detection

Mitochondrial suspensions (100–200 µg of mitochondrial protein) were precipitated with ice-cold TCA (10%, w/v) for 1 h on ice. Samples were then centrifuged at 21,000 × g for 15 min; the pellet was further processed for determining Trx2 and Prx3 redox status, whereas, after neutralization, the supernatant was utilized for quantifying GSH. The oxidized form of Prx3 (dimer) was confirmed by incubating samples with DTT (65 mM) and observing the collapse (~90%) of the two dimeric bands into the bottom migrating band, which corresponds to the monomeric form of Prx3 (supplemental Fig. S4). Membranes were scanned using an Odyssey scanner (LI-COR Biosciences), with band densitometry performed using NIH ImageJ software (version 1.452q). The levels of both Trx(SH)2 and TrxSS were determined by integrated intensity after background subtraction from a blank portion of the membrane. The portion of reduced Trx2 was determined as the integrated intensity of Trx(SH)2/(TrxSS + Trx(SH)2). A similar procedure was utilized to quantify Prx3.

Redox Potential of Mitochondrial Thioredoxin

The redox potential of the thioredoxin redox couple was calculated according to the Nernst potential. In the case of the couple TrxSS/Trx(SH)2, for the half-cell reaction Trx(SH)2 → TrxSS + 2H+ + 2e−, the redox potential is described by Equation 1 (37):

E = −304 + (61.51/2) × log10([TrxSS]/[Trx(SH)2])   (Eq. 1)

where E stands for the electrochemical potential (in millivolts) of the hemiredox reaction of oxidized over reduced thioredoxin, and −304 is the standard redox potential (in millivolts) at 37 °C and 1 bar, adjusting the published value of −292 (at pH 7.0) to reflect the pH (7.2) utilized in our experiments with isolated mitochondria (13). The Nernst slope derives from 2.303RT/nF, where R is the universal gas constant, T is the temperature in Kelvin, n is the number of electrons required to reduce the oxidized form of the couple, and F is the Faraday constant, the charge of 1 mole of electrons; 2.303RT/nF under our conditions is 61.51/2 mV. The redox status of thioredoxin as the ratio (in percentage) of TrxSS to Trx(SH)2 was determined by redox Western blotting as described above (36).
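As a cross-check of Equation 1, a minimal sketch (function and variable names are ours, not the authors') that converts the densitometry-derived oxidized fraction of Trx2 into a redox potential; it reproduces the 80/20% worked example that follows:

```python
import math

E0_TRX2_PH72 = -304.0      # mV, standard potential adjusted to pH 7.2 (text)
NERNST_SLOPE = 61.51 / 2   # mV per decade for a two-electron couple at 37 C

def trx2_redox_potential(frac_oxidized):
    """Nernst potential (mV) of the TrxSS/Trx(SH)2 couple, from the
    fraction of Trx2 in the oxidized form (redox Western densitometry)."""
    ratio = frac_oxidized / (1.0 - frac_oxidized)  # [TrxSS]/[Trx(SH)2]
    return E0_TRX2_PH72 + NERNST_SLOPE * math.log10(ratio)

# 80% oxidized / 20% reduced gives about -285 mV, as stated in the text
print(trx2_redox_potential(0.80))
```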
Thus, when the ratio is 80/20%, the redox potential is −285 mV.

GSH Detection

The supernatant of mitochondria after TCA precipitation was neutralized to pH 7.4 by the addition of 1 M Tris-HCl at pH 8.0. (Usually, 385 µl of 1 M Tris-HCl (pH 8.0) was added to 200 µl of TCA supernatant.) The pH was verified with a pH indicator because the alkaline condition is crucial for GSH detection later. To quantify GSH in these samples, we utilized fluorescent labeling of thiols with N-(4-(7-diethylamino-4-methylcoumarin-3-yl)phenyl)maleimide and a method described previously (38) that we adapted to our conditions. This highly sensitive, non-enzymatic assay allows detection of picomole levels of GSH. N-(4-(7-Diethylamino-4-methylcoumarin-3-yl)phenyl)maleimide (60 µM) reacts quickly (within seconds) and specifically with GSH (with GSSG remaining undetected), and the fluorescent product was monitored with the spectrofluorometer at λex = 384 nm and λem = 475 nm. The rate of reaction of N-(4-(7-diethylamino-4-methylcoumarin-3-yl)phenyl)maleimide with GSH is linear within 10–50 nM (20–100 pmol of GSH). Additional controls showed that >85% recovery of GSH was possible when known amounts of the latter were subjected to the same procedure as the mitochondria. Previously, we measured the mitochondrial volume with a radioactive tracer method under state 4 respiration (~2 µl/mg of mitochondrial protein) (31). Using this value and the amount of protein from which GSH was extracted, we could estimate its concentration in the matrix in the range of 1–1.5 mM. This value is similar to those found in brain mitochondria with glutamate/malate (~2 mM, calculated using the mitochondrial volume above) (31) and appears reasonable considering the 2.7 mM intracellular GSH determined before in ventricular cardiomyocytes (32).

Mitochondrial AF Uptake Determined by Inductively Coupled Plasma Mass Spectrometry (ICP-MS)

Mitochondria were isolated and handled under the same conditions described for analyzing H2O2 emission and the redox status of Trx2. Immediately after isolation, mitochondria (0.73 mg of mitochondrial protein) were incubated in the absence (control) or presence of 1 µM AF with assay buffer containing 5 mM glutamate/malate for 5 min at room temperature with gentle shaking. After incubation, mitochondria were centrifuged twice at 14,000 × g for 2 min, and the two supernatants (from AF incubation) were kept for further analysis. The mitochondrial pellets of both control and AF-incubated mitochondria were resuspended in water and subjected in parallel to three freeze/thaw cycles of 3 min each under the same conditions described above for preparing mitochondrial extracts. Thereafter, mitochondrial suspensions were centrifuged at 14,000 × g for 2 min, and the supernatant and pellet from both control and AF-treated mitochondria were collected for further analysis. Samples (100 µl) from the two washout supernatants and, after freeze/thaw, from the AF-treated mitochondrial pellets were diluted 1:10 with deionized water to give a working volume of 1 ml for ICP-MS analysis. The two pellets after freeze/thaw (both control and AF-treated) were dissolved in 2 ml of 60% aqua regia at 80 °C for 36 h and analyzed in full by ICP-MS. Gold analysis was carried out using a Thermo ICP-MS X7 series. The linearity of the gold signal was calibrated with a set of gold standards at nanograms/ml, and the samples were analyzed four times with the ICP-MS system.
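A minimal sketch of the matrix-concentration estimate just described, using the ~2 µl/mg matrix volume from the text; the example input values are hypothetical:

```python
# Matrix [GSH] from total GSH recovered and the protein mass extracted.
# nmol per ul is numerically mM, so no further unit conversion is needed.
MATRIX_UL_PER_MG = 2.0  # ul matrix volume per mg protein, state 4 (text)

def matrix_gsh_mM(gsh_nmol, protein_mg):
    """Estimated matrix GSH concentration in mM."""
    return gsh_nmol / (protein_mg * MATRIX_UL_PER_MG)

print(matrix_gsh_mM(gsh_nmol=0.5, protein_mg=0.2))  # 1.25 mM, in the 1-1.5 mM range
```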
We recovered 95% of the total gold (from AF) added initially (150 ng of AF). (The molecular weight ratio of AF to gold (679.5:197) is 3.45; thus, to determine the amount of AF, the gold in nanograms/ml is multiplied by 3.45.)

Statistical Analysis

Data were analyzed with GraphPad Prism software (version 3). The statistical significance of the differences between treatments was evaluated by one-way analysis of variance using Tukey's multiple comparison test. The kinetic parameters of the rates of H2O2 emission in the presence of inhibitors were determined through nonlinear regression.

Materials

AF ((1-thio-β-D-glucopyranosato)(triethylphosphine)gold 2,3,4,6-tetraacetate), DNCB, and DTNB were purchased from Sigma. CM-H2DCFDA, MitoSOX, tetramethylrhodamine methyl ester, AMS, N-(4-(7-diethylamino-4-methylcoumarin-3-yl)phenyl)maleimide, and Amplex Red were obtained from Invitrogen. All other reagents were purchased from Sigma.

RESULTS

We first assessed the effect of inhibiting TrxR2 imparted by the two inhibitors (AF and DNCB) in heart mitochondria. The relative TrxR2 activity as a function of dose is shown in Fig. 1. AF strongly inhibited TrxR2 activity, with a half-maximal inhibitory concentration (IC50) of 8.7 nM and noticeable effects starting at 1 nM. Conversely, the effect of DNCB on TrxR2 was much weaker; at 10 µM DNCB, TrxR2 still preserved >70% of its activity. Next, we tested the effects of inhibiting TrxR2 on H2O2 emission. Freshly isolated mitochondria from mouse or guinea pig hearts were preincubated with varying concentrations of AF or DNCB, and the rate of H2O2 release was determined (Fig. 2, A and C). In mouse mitochondria respiring under forward electron transport (FET; 5 mM each glutamate and malate), the presence of AF substantially increased ROS emission. The maximal rate of H2O2 emission (Vmax) and K0.5 (i.e. the amount of AF at which 50% of Vmax occurs) were 707 ± 65 pmol/min/mg of protein and 1.32 ± 0.5 nM, respectively (Fig. 2). Similar results were observed in mitochondria from guinea pig (Vmax = 994 ± 49 pmol/min/mg of protein and K0.5 = 4.8 ± 0.95 nM) (supplemental Fig. S1), where AF also produced a large increase in H2O2 emission under reverse electron transport (5 mM succinate) (supplemental Fig. S2). As a highly lipophilic agent, AF should be easily taken up by mitochondria. To ascertain this idea, we utilized ICP-MS to detect the presence of gold in the membrane and matrix fractions from isolated mitochondria treated with AF (1 µM). Following AF incubation, 80 ± 1.5% of the gold (from AF) was taken up by mitochondria. After ICP-MS analysis, we were able to recover 95 ± 2% of this gold fraction, which was partitioned as follows: 14 ± 1.2% in the mitochondrial matrix and 84 ± 5% bound to membranes. Thus, due to its lipophilic nature, AF (probably through gold) remains largely attached to the membrane, where it can inhibit TrxR2 vicinal to the membrane. In addition, a portion of it will enter the matrix, where it can interact with and inhibit the majority of TrxR2. DNCB also increased ROS emission from isolated mitochondria, albeit at concentrations much greater than for AF (Fig. 2C). Under FET conditions, DNCB produced ≈10-fold higher H2O2 emission from mouse mitochondria, with a K0.5 ~3 orders of magnitude greater than that of AF (0.93 ± 0.27 µM versus 1.32 ± 0.53 nM, respectively) (Fig. 2, B and D).
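A minimal sketch of the kind of nonlinear regression described under "Statistical Analysis", here fitting a simple hyperbolic dose-response to baseline-subtracted H2O2 emission rates; the model choice and the data points are our assumptions (chosen to roughly match the reported mouse values), not the authors' actual data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(af_nM, vmax, k05):
    """Baseline-subtracted emission rate as a function of [AF]."""
    return vmax * af_nM / (k05 + af_nM)

af_nM = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0])
rate = np.array([0.0, 194.0, 305.0, 426.0, 559.0, 625.0, 689.0])  # pmol/min/mg

(vmax, k05), _ = curve_fit(hyperbolic, af_nM, rate, p0=[700.0, 1.0])
print(f"Vmax = {vmax:.0f} pmol/min/mg of protein, K0.5 = {k05:.2f} nM")
```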
Next, we tested whether, in addition to exacerbated ROS emission, inhibiting TrxR2 via AF may lead to impaired mitochondrial energetics and affect other scavenging systems, namely GSH. The increase in ROS release elicited by AF had no consequences for mitochondrial energetics or redox status. The dynamic responses of ΔΨm (Fig. 3, A and B), NADH (Fig. 3C), and GSH (Fig. 3D) in guinea pigs (Fig. 3, A and D) or mice (Fig. 3, B and C) were unaffected by 5 nM AF. The observed increase in H2O2 emission appears to be catalase-independent. Indeed, activation of the H2O2 flux by AF was not affected by aminotriazole (supplemental Fig. S5), a well-known catalase inhibitor (39,40), thus ruling out a major role for this enzyme. These findings support previous reports showing that catalase is present in the low nanomolar range in mitochondria (6), thus ~3 orders of magnitude below the concentration of Trx2 (3).

Next, we tested the hypothesis that the redox status of Trx2 correlates with the energization level of the mitochondria and that this will be inhibited by AF. To this end, isolated mitochondria were treated with AF under non-energized (baseline), state 4, or state 3 conditions. We then analyzed the redox status of the Trx2 pool in intact functional mitochondria with respect to controls run in parallel by redox Western blotting. This approach allowed us to set precise physiological conditions (e.g. energized versus non-energized) and to determine the effects on multiple variables related to mitochondrial energetics and redox balance. Because the reducing power of Trx2 is determined by the ratio of the oxidized to reduced form (TrxSS/Trx(SH)2), we calculated this ratio and expressed it as redox potential according to the Nernst equation (see "Experimental Procedures"). The redox potential of Trx2 decreased (became more reducing) from −322 mV at baseline to −350 mV in state 4 and state 3 mitochondria (Fig. 4C). This corresponds to a 10-fold decrease in the ratio of oxidized to reduced Trx2 (Fig. 4A) and a 2.4-fold increase in the percentage of the Trx2 pool in the reduced form (Fig. 4B). In the mitochondrial matrix, Trx(SH)2 rose in parallel with NAD(P)H and GSH as well as ΔΨm after glutamate/malate addition (Fig. 3). Trx(SH)2 remained as high in state 3 as in state 4, although NAD(P)H was more oxidized in state 3.

Excessive H2O2 emission resulting from TrxR2 inhibition would likely be due to oxidation of its downstream effectors. As such, we tested whether varying the concentration of AF or DNCB would affect the oxidation of Trx2 or Prx3 in respiring mitochondria. Furthermore, we also examined the level of GSH in these same samples. AF displayed a strong effect on the redox state of Trx2 (Fig. 5B) and Prx3 (Fig. 6A) but had no significant effect on GSH (Fig. 5A). However, the AF concentrations that resulted in these changes were higher than those that increased H2O2 emission (Fig. 2). In contrast, DNCB did not affect Trx2 (Fig. 5E) or Prx3 (Fig. 6B) but significantly decreased GSH at a concentration of 1 µM (Fig. 5D), the same concentration that produced half-maximal ROS emission (Fig. 2D). At this concentration, DNCB inhibited only ~20% of TrxR2 activity (Fig. 1). This is consistent with earlier reports showing that DNCB can inhibit TrxR (41) and deplete GSH via conjugation (42).
Therefore, under the conditions used here, the impact of AF on H2O2 emission was mediated by Trx2 and Prx3, whereas the effects of DNCB appeared to be largely mediated by GSH depletion rather than Trx2/Prx3 oxidation. The results with isolated mitochondria predict that increased H2O2 emission from mitochondria will translate into an increase in H2O2 in the cytoplasm. To confirm that TrxR inhibition leads to higher intracellular ROS levels, intact cardiomyocytes were loaded with CM-H2DCFDA (an H2O2 probe) and MitoSOX (a superoxide sensor) and treated with AF. When these cells were imaged by two-photon laser scanning fluorescence microscopy, the expected increase in ROS levels was observed (Fig. 7). Also in agreement with the results in isolated mitochondria, AF had no effect on ΔΨm or NAD(P)H levels in intact cardiomyocytes (Fig. 7B).

DISCUSSION

The main goal of this study was to demonstrate that the TrxR2/Trx2 system is a major controller of the rate of H2O2 emission flux from mitochondria under physiological conditions in the mammalian heart. The second aim was to demonstrate that the energization status of mitochondria correlates with the redox status of Trx2. To this purpose, we examined the effect of inhibiting TrxR2 on mitochondrial H2O2 emission and the redox state of Trx2 in respiring mitochondria. Two inhibitors, AF and DNCB, were used for this purpose. AF is an established anti-rheumatoid gold(I) drug that also exhibits anticancer activity (43, 44). TrxR2 shares with TrxR1 the presence of selenium at the active site, although the two enzymes have different sequences (45). Selenols bind heavy metals efficiently, and this explains the targeting of the selenocysteine of TrxR1 or TrxR2 by organic gold inhibitors such as AF (46, 47). In contrast, DNCB is an alkylating agent that has been shown to inhibit TrxR irreversibly (41, 48) or to deplete GSH (49). We have demonstrated here that (at the concentrations used) the increased ROS due to AF results from its action on the thioredoxin system, whereas the ROS emission provoked by DNCB results largely from the depletion of GSH. Previous reports have shown that AF stimulates H2O2 emission under a combination of forward or reverse electron transport and respiratory inhibitors in mitochondria isolated from rat liver (17) or heart (50). However, this was not observed under FET alone, which represents the physiological mode of respiration. Here, we have demonstrated that in the heart and under this mode of respiration, the impact of inhibiting the TrxR2 system with AF is larger than reported previously. This is in keeping with our initial hypothesis that, owing to the sustained oxidative metabolism of the heart, the effect of TrxR2 inhibition would be more pronounced. The concentrations of AF necessary to increase H2O2 emission and to inhibit TrxR2 activity in mitochondrial matrix extracts were very similar, as shown by the respective IC50 and K0.5 values. This is further supported by our in vitro activity measurements on purified TrxR from rat liver (supplemental Fig. S3), which showed an IC50 of 5.6 nM AF, similar to the 4 nM determined for TrxR from human placenta (51) or the ~1 nM and 2.5 nM reported for TrxR1 and TrxR2, respectively (52). These data suggest that TrxR2 is the major antioxidant target of AF in the mitochondria.
Recent crystallographic data obtained with thioredoxin-glutathione reductase from Schistosoma mansoni incubated with AF have shown that the mechanism of inhibition is via gold released from AF and that the role of selenium at the onset of inhibition by AF is catalytic (53). Consistent with this mechanism, we have shown herein that AF has no effect on the redox state of Trx2 under baseline conditions, in which the mitochondrial NAD(P)H pool is oxidized and therefore unable to reduce the active site of TrxR2. In energized mitochondria, in which NAD(P)H was reduced, Trx2 was not completely oxidized when AF was added. Potential explanations of this result are that (i) TrxR2 inhibition was incomplete in the presence of 100 nM AF, (ii) Trx2 may be reduced by another system independent of TrxR2, or (iii) AF acted on a distinct submitochondrial pool of TrxR2 to increase ROS emission. At this stage, we cannot rule out a potential role of antioxidant systems within the intermembrane space, where TrxR2 (54), TrxR1 (55), and other cytoplasmic antioxidants such as the GSH system (56) and copper/zinc-superoxide dismutase (SOD1) (55) are represented. This certainly warrants future investigation. Regarding the redox status of Prx3 in response to AF and DNCB, earlier data obtained in cells showed that both AF and DNCB can inhibit TrxR and oxidize Prx3, but these results were obtained with micromolar concentrations of both inhibitors (57). Our results are in partial agreement with this report, in that in the 5-30 μM range DNCB severely depletes mitochondrial GSH (42), probably through GSH-DNCB conjugation catalyzed by glutathione S-transferase (49). However, our present data obtained in the nanomolar range show that whereas AF oxidizes Trx2 and Prx3 without affecting GSH, DNCB decreases GSH without affecting the Trx2 or Prx3 redox state (Figs. 5 and 6). Thus, under the physiological mode of respiration used in our assay conditions, it was possible to sort out the different inhibitory mechanisms exhibited by AF and DNCB in the nanomolar range. A caveat to this interpretation is that DNCB-modified TrxR has been reported to develop NADPH oxidase activity that produces superoxide and H2O2 in vitro (41). However, this is not likely to be the case under our conditions, as >70% of the enzyme activity remained following DNCB treatment, and as such, only a small fraction of TrxR2 may have this activity. Taken together, these data deepen our understanding of the relevance of the TrxR2/Trx2/Prx3 system in controlling mitochondrial redox balance. Moreover, we have shown that GSH levels did not change in response to the AF concentrations used herein. These results raise the question of the relative contributions of the GSH and thioredoxin systems in controlling the cellular and mitochondrial redox milieu. Previous studies have shown a clear link between GSH depletion and H2O2 and superoxide generation in mitochondria and cells (16, 32, 42). Apparently, a certain threshold of GSH depletion (≈30-40%) needs to be attained in heart mitochondria before increased H2O2 emission can manifest (42). This value is in keeping with our present measurements. Decreasing GSH regeneration through inhibition of GSH reductase (16) (or via direct GSH depletion with DNCB (42)) does not increase H2O2 emission under FET, a condition in which ROS generation is typically very low.
These results reinforce our interpretation of the main result of this work: that the TrxR2/Trx2/Prx3 system exhibits high control over ROS emission from mitochondria. This is particularly striking given the different abundances of the two antioxidant systems in mitochondria. In agreement with previously reported values (37), our measurements indicate that the mitochondrial GSH pool is 1-1.5 mM, whereas Trx2 has been estimated at ~10 μM (3). In conclusion, we have demonstrated here that the TrxR2/Trx2/Prx3 system is essential for controlling H2O2 flux from heart mitochondria under the physiological mode of respiration. This contributes to the maintenance of a redox environment compatible with signaling and energetics (16, 58). A high level of energization is necessary for maintaining Trx2 in its reduced and active form. The results presented here suggest TrxR2/Trx2/Prx3 as a new potential target for rescuing proper mitochondrial function and, in turn, cellular redox balance, both of which are crucial for cell viability and function.

FIGURE 7. Effect of acute addition of AF on ROS levels in intact cardiomyocytes. Cardiomyocytes isolated from mouse heart were simultaneously loaded with CM-H2DCF and MitoSOX, or with CM-H2DCFDA and tetramethylrhodamine methyl ester (TMRM; ΔΨm probe) together with NADH autofluorescence, and imaged by two-photon laser scanning fluorescence microscopy as described under "Experimental Procedures." The increase in whole-cell fluorescence shown after acute addition of 10 nM AF was obtained from four different cells in the same microscopic field. Similar results were obtained in three independent experiments.
2018-04-03T05:23:47.217Z
2011-08-05T00:00:00.000
{ "year": 2011, "sha1": "4437abe88d433266df3b8a72bf4f99d4631932d9", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/286/38/33669.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "d7fa155a5578780f5f45709f7a0b55a5b393ae19", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
229207418
pes2o/s2orc
v3-fos-license
Research Status and Development Trend of Self-Compacting Concrete Technology

The definition of self-compacting concrete is briefly introduced. The research process of self-compacting concrete in various countries of the world is reviewed, from the initial tests to the formation of preliminary application specifications and their continuous improvement. The mix proportion design, working performance, testing methods, applications, and research status of self-compacting concrete are discussed in detail, and the problems to be solved in the future and the development prospects are surveyed.

Introduction

Self-Compacting Concrete (SCC) has high fluidity, resistance to segregation, uniformity, and stability, and can fill the formwork space uniformly under its own weight without external vibration [1]. SCC can improve concrete quality, save cement, reduce noise, improve production efficiency, save labor, speed up project progress, improve the working environment, and reduce project cost. After years of research and engineering practice, good results have been achieved in the mix design, evaluation methods, and engineering application of SCC [2-4]. Research on the workability and other aspects of SCC has aroused great interest among researchers all over the world. The United States, Japan, and other countries have carried out systematic research on this kind of concrete, including workability measurement and the rheological properties of SCC, mix design, chemical and mineral admixtures, and the durability and structural properties of SCC. In March 2002, EFNARC published a guide on the design and application of SCC, the first design and application specification for SCC [5]. In the same year, the American ASTM C09 committee began to formulate a standard for SCC [6]. Central South University, Tsinghua University, Shandong Institute of Civil Engineering and Architecture, Suzhou Institute of Concrete and Cement Products, Fuzhou University, Wuhan Institute of Urban Construction, and other scientific research institutions have studied SCC, each with its own emphasis: Central South University mainly carried out research on admixtures, workability, and durability [7]; Tsinghua University carried out engineering tests of SCC with a compressive strength of 80 MPa [8]; Suzhou Institute of Concrete and Cement Products conducted research on preparation methods [9]; and Fuzhou University conducted research on mix design [10]. On the basis of a large number of studies, China has also issued codes and standards for the design and application of SCC: in 2004, the China Civil Engineering Society issued the Guide for the Design and Construction of SCC; in 2006, the China Engineering Construction Standardization Association issued the Technical Specification for the Application of SCC; and in 2012 the corresponding national industry standard, Technical Specification for the Application of SCC, was issued. SCC has been used in many projects since it was developed. A total of 240,000 m³ and 150,000 m³ of 25 MPa SCC were used for the two anchorages of the Akashi Strait Bridge in Japan [11]. In Taiwan's TC Tower, all 240 hollow columns below the 60th floor were poured with SCC from bottom to top to ensure the compactness of the concrete columns [12]. The 62-story concrete-filled steel tubular columns of Shuanglian Square in Seattle, USA, used 115 MPa SCC pumped from the bottom layer by layer, which ensured the pouring quality and integrity [11].
Owing to differences in raw materials and construction conditions, China could not simply copy the Japanese mix proportions. In 1995, Chen Enyi of Tsinghua University conducted experiments with raw materials available on the market and successfully poured C25 flowing concrete for wall construction on the site of the Qinghe 602 residential district in Beijing, raising the pouring height from 2 m to 4 m [11]. In September 1996, the mixing station of the Component Factory of Beijing Urban Construction Group Corporation carried out an appraisal of SCC technology, in which C30 concrete was used in a practical project with a pouring volume of 3,000 m³. In 1996 and 1997, Beijing Second Construction Company trialed SCC for pouring columns, beams, floors, and foundations in three projects [13]. After that, many scientific research institutions, enterprises, and universities in China took up its research, development, and application. Since 1995, the volume poured has exceeded 40,000 m³. SCC is mainly used for underground excavation, densely reinforced sections, complex shapes, and other parts that cannot be poured, or are difficult to pour, by conventional means. At the same time, it alleviates problems such as construction noise disturbing residents, shortens the construction period, and prolongs the service life of structures. Representative engineering examples include: the wall structures of the new terminal building of Beijing Capital Airport, the reconstruction project of the commercial district on the east side of Xidan North Street, the construction of nuclear waste containers at Daya Bay Nuclear Power Station, the protection of historical buildings in Jimei, Xiamen, the diversion tunnels of several hydropower stations such as the Three Gorges on the Yangtze River and the left-bank powerhouse dam section, the Runyang Yangtze River Bridge, and the Wansongguan tunnel in Fujian [14], all of which achieved good technical, economic, and social benefits. In the past 10 years, with the continuous improvement of China's technical specifications for the application of SCC, the use of SCC has entered a stage of full-scale growth, and its scope has further expanded to almost all engineering categories, including nuclear energy, railway, water conservancy, municipal, and civil works. Besides underground excavation, densely reinforced sections, complex shapes, and other parts that cannot be poured or are difficult to pour, applications now also include various strengthening projects, shield segments, centrifugally formed members, and other prefabricated components; SCC mixed with steel fiber and organic fiber, lightweight-aggregate SCC, rockfill SCC, airport pavement SCC, and limestone-powder SCC [15]; and "sandwich" systems, namely the SCC-NMC-SCC sandwich construction system [16]. Performance has also been continuously improved and diversified, for example toward no or low shrinkage, low heat of hydration (for mass concrete), and early strength.

Mix proportion design of SCC

The existing methods of preparing SCC are mainly based on experience, and mix proportion design mostly draws on the experience of Japan, the Netherlands, France, and Sweden. In the production of SCC, the mix proportion design should ensure that the concrete achieves the fresh and hardened performance specified in advance, and the components should coordinate with each other to prevent segregation, bleeding, and settlement.
From the literature on SCC research in China and abroad, the calculation methods for the mixture ratio can generally be divided into three categories: the first is based on selecting the volume content of sand and gravel; the second comprises design methods based on empirical parameters; the third directly adapts the mix design of high-performance concrete, as shown in Table 1. The first category includes: (1) the fixed sand and gravel volume content method: this method is adopted by the European code [17], the Guide of the China Civil Engineering Society [18], and the British standard [19], and its principle is an improvement on the mix design approach proposed by Okamura; (2) the simple mix proportion design method: this method was put forward by the Taiwanese scholar Nan Su et al. [20]; its basic principle is to fill the voids of loose aggregate with cementitious paste, and compared with other volume methods its innovation lies in the concept of a packing factor to control the aggregate consumption in SCC and thereby the fluidity and compactness of the mixture; (3) the four-layer system design method: this method was proposed by Ma et al. [21]; by treating SCC as a four-layer system composed of solid and liquid phases, it sets the coarse aggregate content per cubic meter of concrete and the sand volume content in mortar, and puts forward an improved theoretical calculation of fixed sand volume content according to the principle of maximum packing density. The four-layer system design method is generally aimed at medium- and low-strength SCC; for high-strength SCC, the volume content of coarse aggregate should be appropriately increased; (4) the aggregate specific surface area method: Wang [22] put forward this method based on the aggregate specific surface area and a theoretical study of excess paste volume. The second category includes: (1) the parameter design method of Wu [23, 24]: the basic idea is that concrete is a mixture of various raw materials, and several parameters can be defined during mix design to represent the influence of each raw material on the concrete; on this basis, the final consumption of each material is obtained by solving simultaneous equations; (2) the orthogonal test method [25-28]: the idea of the orthogonal test (or "factorial method") is to study the influence of factors such as the total amount of cementitious materials, mineral admixture content, sand ratio, water-binder ratio, paste volume, and superplasticizer dosage on the workability and strength of the concrete, determine a reasonable range for each parameter, and then calculate the mix proportion according to the ordinary concrete mix design method. Testing concrete mix ratios with orthogonal tables can objectively reveal the governing trends and conveniently optimize the mix; arranging tests with an orthogonal table also avoids blind trials, greatly reduces the number of tests, and shortens development time; (3) the empirical derivation method: a pure trial-mix method in which, based on empirical data, the unit coarse aggregate consumption, water consumption, and cementitious material consumption are fixed, and the unit fine aggregate volume is taken as the total volume minus the volume of the other materials (a minimal calculation sketch follows below).
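As an illustration of the volume-balance step underlying the empirical derivation method just described, the sketch below computes the fine aggregate content as the residual of the unit volume. All masses and densities are placeholder assumptions, not values taken from any cited mix.

```python
# Minimal sketch of the absolute-volume step in the empirical derivation
# method: fine aggregate volume = 1 m^3 - volumes of all other constituents.
# Masses are kg per m^3 of concrete; densities (kg/m^3) are assumed values.

def fine_aggregate_mass(cement, water, coarse_agg, mineral_admixture,
                        rho_c=3100.0, rho_w=1000.0, rho_g=2700.0,
                        rho_m=2200.0, rho_s=2650.0, air_fraction=0.02):
    """Return the fine aggregate mass (kg) needed to fill 1 m^3 of SCC."""
    other_volume = (cement / rho_c + water / rho_w +
                    coarse_agg / rho_g + mineral_admixture / rho_m)
    fine_volume = 1.0 - other_volume - air_fraction  # volume left for sand
    if fine_volume <= 0:
        raise ValueError("constituent volumes already exceed 1 m^3")
    return fine_volume * rho_s

# Hypothetical trial mix: 380 kg cement, 180 kg water, 800 kg coarse
# aggregate, and 170 kg fly ash per cubic meter.
print(f"fine aggregate: {fine_aggregate_mass(380, 180, 800, 170):.0f} kg/m^3")
```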
On this basis, an initial mixture ratio is determined for trial mixing, the workability and compressive strength are tested, and the final mixture ratio is obtained after adjustment. The third category includes: (1) the overall calculation method: first, calculation formulas for the water consumption per cubic meter of concrete and the sand ratio are obtained by mathematical derivation from an assumed concrete volume model; then, combined with the traditional water-cement ratio rule and the design of a compound superplasticizer, the admixture dosage is obtained. Finally, the amount of each component material in the concrete is obtained comprehensively and quantitatively, advancing the mix proportion design of self-compacting high-performance concrete from semi-quantitative to fully quantitative; (2) the improved overall calculation method [29]: when the overall calculation method is applied directly to SCC, the calculated sand ratio and paste ratio are both low and can hardly meet the self-compacting requirements, so the method needs improvement. According to the characteristics of SCC, and combined with the fixed sand and gravel volume content method, the improved overall calculation method is used to calculate the mixture ratio of SCC; it is a more scientific, reasonable, and accurate design method for SCC mix proportions.

Working performance of fiber-reinforced SCC (FR-SCC)

He and Yan [30] studied the influence of polypropylene fiber on the workability of SCC. Experimental tests found that the workability of polypropylene-fiber SCC can meet the self-compacting requirements when the fiber volume ratio is up to 0.1%; with increased dosages of cementitious materials and water-reducing agent, the maximum volume ratio of polypropylene fiber can reach 0.15%. Zhang et al. [31] studied the segregation resistance of SCC by adding wave-shaped steel fibers of three different lengths (15 mm, 20 mm, and 30 mm). The results show that when the steel fiber length increases to 30 mm, the fluidity and slump spread of the SCC can no longer meet the workability requirements. With increasing length and content, the dispersion coefficient of the steel fiber in the concrete decreases and agglomeration appears, which prevents uniform fiber distribution and worsens the fluidity of steel-fiber-reinforced SCC. The maximum workable volume contents for the three steel-fiber SCCs were 1.25% for 15 mm fibers, 0.5% for 20 mm fibers, and only a trace amount for 30 mm fibers. Hossain et al. [32] mixed polyvinyl alcohol fibers at different volume ratios into SCC and found that the workability of FR-SCC can meet the self-compacting requirements when the fiber volume ratio is less than 0.125%. Although adding fiber to plain concrete can improve strength, it correspondingly reduces the fluidity of the mixture and affects its self-compacting performance. Therefore, research on FR-SCC must satisfy one prerequisite: the self-compacting requirements must still be met. Table 2 lists the maximum volume contents of several fiber types that can be added while ensuring the self-compacting requirements of the concrete.
Working performance of SCC-filled steel tube

The pouring quality of the concrete filled into a steel tube, that is, the compactness of the concrete, has a great influence on the bearing capacity of concrete-filled steel tubular columns. Available test results indicate that, for long columns, members cast with mechanical vibration show about 30% higher strength and about 30% higher elastic stiffness than those compacted manually. It is therefore very desirable to obtain SCC with higher strength and stiffness by controlling the mixture ratio of the SCC placed in steel tubes [33]. For SCC filled into steel tubes, even slight bleeding of the concrete will form a considerable laitance layer at the top of tall or long-span members, which impairs the uniform stress transfer of the concrete; hence the bleeding of the concrete must be controlled. However, if bleeding is controlled too strictly, the SCC becomes too viscous, and an excessively viscous mixture sometimes entraps more air bubbles. For this reason, it is very important to test and study the mix proportion of the SCC before a steel-tube SCC project starts. Zhou [34] took the concrete encasing the penstocks in the dam section of the left-bank powerhouse of the Three Gorges Project as an example and designed an SCC suitable for hydraulic structures. According to the available experimental research, the working performance indexes of such SCC should reach: slump 240-270 mm, slump spread ≥600 mm, Orimet flow time 8-16 s, and a height difference between the middle and edge of the slumped concrete of no more than 20 mm. Regarding bearing capacity tests, Yu Zhiwu of Central South University studied, through axial compression tests on short columns, the influence of different concrete strength grades, the presence of small holes or transverse grooves at different heights in the middle of the steel tube, and different loading methods on the ultimate bearing capacity of SCC-filled steel tubes [35]. However, those tests did not focus on the change in bearing capacity of SCC relative to ordinary concrete, but mainly established the influence of grooving in the middle of the steel tube on the bearing capacity. Yao [36] carried out extensive research on the mechanical properties of SCC-filled steel tubes: experimental research and theoretical analysis on 18 axially compressed members and 20 compression-bending members, taking into account the vibration mode of the core concrete, section form, diameter-thickness ratio, and load eccentricity. The research shows that the values calculated with different codes (whether intended for self-compacting or ordinary concrete) for concrete-filled circular and square steel tubes basically agree with the experimental results. The existing provisions for concrete-filled steel tubes are all suitable for the design and calculation of the ultimate bearing capacity of SCC-filled steel tubes under axial compression and under combined compression and bending, and the mechanical behavior of self-compacting high-performance concrete-filled steel tubes under these loadings is basically similar to that of ordinary concrete-filled steel tubes. Yu [37] also conducted a series of tests on self-compacting high-strength concrete-filled steel tubes, considering the influence of parameters such as cross-section form, slenderness ratio, and load eccentricity on the bearing capacity of SCC-filled steel tubular columns, and compared the test results with values calculated by AISC, EC4, and DBJ 13-51-2003.
The tests show that local buckling occurs in square steel tubes, while shear failure occurs in circular steel tubes. Compared with ordinary concrete, the ductility of SCC-filled steel tubes is somewhat lower. The results calculated by each code are close to, or slightly higher than, the test results.

Test methods for the working performance of SCC

Workability is a key property governing the performance of SCC. Research on test methods for SCC has been carried out in China and abroad, and corresponding standards have been formulated. At present, there are many methods for testing the rheological properties of concrete. The commonly used ones are the slump and slump-flow methods, which measure the slump height and spread diameter of the concrete mixture as well as the outflow time from the slump cone [38, 39]. However, these methods cannot reflect the behavior of the concrete in actual applications, such as its ability to pass through reinforcement and to fill the formwork. In order to evaluate conveniently and effectively the high fluidity and high stability of SCC and its ability to pass through gaps between steel bars, new test methods were developed, such as the inverted slump cone, L-box, U-box, V-funnel, J-ring, and filling box. As each method has its advantages and disadvantages, no unified, mature detection method has yet been formed; test methods from China and abroad should therefore be combined to judge the workability of SCC comprehensively, that is, its fluidity, gap-passing ability, and segregation resistance. The fluidity and filling ability of self-compacting concrete can be evaluated by the slump flow and T500 flow time; the V-funnel can be used to judge the filling ability and viscosity of self-compacting concrete; the filling heights in the L-box and U-box indicate the ability of self-compacting concrete to pass through reinforcement; and the segregation resistance of self-compacting concrete can be judged from the stability of the mixture when jolted on the drop table [40].

Conclusions and Prospects

Owing to its many advantages, the application prospects of SCC are very broad, but its history of development and application is short, and some problems and topics still need further study: (1) Early-age shrinkage. Because of the low water-binder ratio and high dosage of cementitious materials, the early shrinkage of self-compacting concrete is large, especially early autogenous shrinkage. Current research focuses mainly on the influencing factors and extent of autogenous shrinkage, while its mechanism, calculation formulas, and test methods need further study. (2) Mix proportion design methods. SCC requires high workability, and many factors enter the mix calculation, so there is as yet no unified design and calculation method. With the spread of computers, and on the basis of extensive testing that accounts for the influence of various factors on the workability, mechanical properties, and economy of SCC, computer-based optimization of mix proportion design will become possible. (3) Understanding of physical and mechanical properties and durability. The construction performance of SCC has been studied thoroughly; however, whether and how the physical and mechanical properties and durability of SCC change after adding a large amount of superplasticizer is not yet well understood. (4) Seismic performance of SCC.
This is an important problem in the design of concrete structures and deserves further study. If fiber reinforcement is mixed in to make FR-SCC, it can play an important role in the seismic resistance of structures. (5) Economic issues. The material cost of self-compacting concrete is somewhat higher than that of ordinary concrete, which remains the main obstacle to the application of SCC. However, SCC offers performance that ordinary concrete cannot match, so SCC should be assessed together with environmental protection, ecological protection, and sustainable development when evaluating its economic indicators, so as to promote its wide application in China as soon as possible.
2020-10-29T09:07:41.372Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "9a47fdcf19172953726c72faa3869f88b19ac416", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/58/e3sconf_isceg2020_03001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7125380a0ca2e7320242e511251dca4a462b9534", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
32618816
pes2o/s2orc
v3-fos-license
Comparison of breast feeding practices among urban and rural mothers: A cross-sectional study

Introduction: Exclusive breast feeding practice ranks first among the most effective interventions to improve child health. The present study was undertaken to compare breast feeding practices among urban and rural mothers and the factors influencing these practices. Materials and Methods: A year-long community-based cross-sectional study was done in the villages of Vantamuri, Kakati (A and B), Honaga, and Bhutramanahatti, and in the urban area Khasbag, which are the field practice areas of the Department of Community Medicine, J. N. M. C., Belgaum. By random sampling, 380 rural mothers and 400 urban mothers having a 1-year-old child were selected. Information on sociodemographic variables and breast feeding practices was recorded. Results: In the present study, the majority of urban (65.00%) as well as rural mothers (64.21%) were between 20 and 24 years of age and were literate (90.25 and 77.89%, respectively). The majority of mothers in both urban and rural areas gave prelacteal feeds (54.25 and 57.11%, respectively). Many mothers in both areas discarded the colostrum (14.75% urban vs. 25.79% rural). Initiation of breast feeding after delivery was delayed by 24.50% of mothers in urban and 33.68% of mothers in rural areas. As many as 67.89% of rural mothers practiced demand feeding as opposed to 38.75% of urban mothers. Age of the mother, education, socioeconomic status, type of family, place of delivery, and education about the benefits of breast feeding influenced the breast feeding practices. Conclusions: Various inappropriate breast feeding practices are prevalent in both rural and urban areas. Elders' advice played an important role in shaping the breast feeding practices.

INTRODUCTION

Infants, that is, children in the age group of 0-1 year, constitute 2.92% of the total population in India [1]. The health of these infants is quite fragile, with increased vulnerability to infections and malnutrition. Hence, the major responsibility of a mother is to maintain and improve her child's health. After all, a well-nurtured healthy infant of today is the healthy workforce of tomorrow's nation. International

MATERIALS AND METHODS

A year-long cross-sectional study was carried out from January to December 2011 in the urban field practice area under the Urban Health Center, Khasbag, and the rural field practice areas of Vantamuri, Kakati (A and B), Honaga, and Bhutramanahatti under the Primary Health Center, Vantamuri. There were 17 villages under the Primary Health Center, Vantamuri; among these, the above-named 5 villages were chosen by simple random sampling. The urban and rural study areas were the field practice areas of J. N. Medical College, Belgaum. A multi-indicator coverage survey conducted jointly by the United Nations Children's Fund (UNICEF) and the Government of Maharashtra in all districts of the state showed that the prevalence of exclusive breast feeding was 49.00% in the urban area and 37.00% in the rural area [2]. An absolute error of 5.00% was considered, and using the formula n = 4pq/d² the sample size was worked out as 400 in the urban and 380 in the rural area. Mothers in the above-mentioned study areas having a child aged 1 year were included in this study.
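Plugging the survey prevalences into the stated formula reproduces the sample sizes (with q = 1 − p and d = 0.05; the rural figure appears to have been rounded up to 380):

```latex
n_{\mathrm{urban}} = \frac{4pq}{d^2} = \frac{4(0.49)(0.51)}{(0.05)^2} \approx 400,
\qquad
n_{\mathrm{rural}} = \frac{4(0.37)(0.63)}{(0.05)^2} \approx 373 \;(\text{taken as }380).
```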
In the urban area, information regarding births between January and December 2010 was collected in January 2011 from 39 Anganwadis. There were a total of 664 mothers having a 1-year-old child. By simple random sampling, using random number tables, 400 mothers were selected. They were interviewed in the month in which their infants completed 1 year, in order to minimize recall bias.

In the rural area, information regarding births between January and December 2010 was collected in January 2011 from the birth registers of the subcenters of the above-mentioned villages. There were 679 mothers, and 380 mothers were selected using a random number table. They were also interviewed in the month in which their infants completed 1 year. Residential addresses of these mothers were collected from Anganwadi workers in the urban area and from the female health workers of the subcenters in the rural areas. Mothers were interviewed using a predesigned, pretested questionnaire covering sociodemographic factors and breast feeding practices. If a mother was not present at the time of the visit, she was revisited up to a maximum of three times; if she remained unavailable despite three visits, the next number in the random number table was chosen.

The present study was approved by the J. N. M. C. Institutional Ethics Committee on Human Subjects' Research.

Analysis was done using rates, means, and the Chi-square test with Statistical Package for Social Sciences (SPSS) version 18.0 software.

RESULTS

A majority of 260 (65.00%) urban as well as 244 (64.21%) rural mothers were in the age group of 20-24 years, with a mean age of 23.45 ± 2.34 years in the urban and 23.20 ± 2.64 years in the rural area.

Among the study participants, as many as 331 (82.75%) urban and 322 (84.74%) rural mothers were Hindus. Literacy was higher among urban participants, with 361 (90.25%) urban and 296 (77.89%) rural mothers being literate; occupation-wise, 44 (11.58%) rural mothers were employed in various jobs as opposed to 23 (5.75%) urban mothers. In the urban area, 130 (32.50%) mothers belonged to families with a per capita income of less than Rs. 950/-, compared to 155 (40.79%) rural mothers. None of the urban participants had delivered at home, whereas 32 (8.42%) rural mothers had done so [Table 1]. Among those who had delivered at a hospital, as many as 370 (92.50%) urban and 294 (84.48%) rural mothers were told about the benefits of breast feeding in the hospital (χ² = 27.327, DF = 1, P < 0.001).

As many as 59 (14.75%) urban and 98 (25.79%) rural mothers discarded the colostrum [Table 2]. Most of these mothers had discarded the colostrum as per the elders' advice (22.03% of urban and 60.20% of rural mothers).

Initiation of breast feeding was delayed beyond 4 h by 98 (24.50%) urban and 128 (33.68%) rural mothers [Table 2]. The most common reason quoted by urban mothers for delayed initiation of breast feeding after delivery was physical inability such as pain or tiredness (38.78%), whereas in the rural area it was the advice of elders not to initiate breast feeding early (46.09%).

Ghutti is a paste of almonds, dates, and other medicinal plants. The practice of giving Ghutti to their infants was more common among mothers of the urban area: as many as 125 (31.25%) urban mothers compared to six (1.58%) rural mothers gave Ghutti to their infants before 6 months of age [Table 2]. Most of the urban as well as rural mothers did so as per the elders' advice (93.60 and 100.00%, respectively). As many as 295 (73.75%) urban and 286 (75.26%) rural mothers had given water alongside breast milk to their infants even before 6 months of age.
The differences between urban and rural mothers in the practices of discarding colostrum, delaying initiation of breast feeding beyond 4 h, demand feeding, and giving Ghutti to the infant before 6 months of age were statistically significant (P < 0.05).

Indicators of breast feeding were calculated. The early initiation of breast feeding rate was 42.50% in the urban and 42.89% in the rural area. The exclusive breast feeding rate under 6 months of age was 16.25% in the urban and 15.26% in the rural area. The continued breast feeding rate at 1 year was 100% in the urban and 99.21% in the rural area.

In the urban area, the practice of giving prelacteal feeds was significantly associated with religion, age of the mother, educational status of the mother, socioeconomic status, type of family, and the mother receiving information about the benefits of breast feeding in the hospital; in the rural area, the factors associated with this practice were the place of delivery and the information given to the mother in the hospital regarding the benefits of breast feeding (P < 0.05).

A significant association was found between the practice of discarding colostrum and the educational status of the mother, socioeconomic status, and being informed about the benefits of breast feeding in the hospital in the urban area. In the rural area, however, the only factor significantly associated with this practice was the information given to the mother in the hospital regarding the benefits of breast feeding (P < 0.05).

In the rural area, no sociodemographic factor was significantly associated with the practice of delaying the initiation of breast feeding beyond 4 h after delivery. Nevertheless, in the urban area more illiterate mothers (38.46%) delayed initiation of breast feeding beyond 4 h than literate mothers (21.37%), and this association was statistically significant (P < 0.05).

DISCUSSION

The purpose and benefits of breast feeding have been stressed all over the world by various health organizations and community-based programs and approaches. The present study was carried out to compare urban and rural areas to determine which has better breast feeding practices, and to identify other sociodemographic factors influencing these practices.

A national survey [3] had shown that the rural area fares better than the urban area in breast feeding practices, with exclusive breast feeding at 48.3% rural versus 40.3% urban. In the present study, the observation was contrary to that of the national survey: the urban area appeared better in all aspects of breast feeding than the rural area. However, breast feeding practices were still suboptimal in both areas.

The current study observed that prelacteal feeds were given by almost the same proportion of mothers in the urban (54.25%) and rural (57.11%) areas [Table 2]. On the contrary, in a study done by Qiu et al. [4], as many as 62.00% of mothers in the urban area and 39.00% in the rural area gave prelacteal feeds. Probably, a strong custom of sweetening the newborn's mouth prevailing throughout the district can be held responsible for the nearly equal proportions of study participants giving prelacteal feeds to their infants in the urban as well as rural study areas.
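The urban-rural comparisons above rest on 2×2 chi-square tests. As a check, the colostrum-discarding counts reported in the Results (59 of 400 urban vs. 98 of 380 rural mothers) can be re-tested directly; a minimal sketch follows. Note that SciPy applies Yates' continuity correction by default for 2×2 tables, so the statistic may differ slightly from SPSS output, but the conclusion (P < 0.05) is unchanged.

```python
# Minimal sketch: 2x2 chi-square test on the colostrum-discarding counts
# reported in the Results section.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[59, 400 - 59],    # urban: discarded, did not discard
                  [98, 380 - 98]])   # rural: discarded, did not discard

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```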
The practice of giving prelacteal feeds was significantly associated with religion, age of the mother, educational status of the mother, and socioeconomic status, indicating that a woman should bear a child at an appropriate age, only when she understands correct infant feeding practice and its importance. Apart from that, women's education and income generation raising their socioeconomic status are equally important. The mother receiving information about the benefits of breast feeding in the hospital was significantly associated with her practice of giving prelacteal feeds in the urban as well as the rural area, in addition to the place of delivery in the rural area. This observation stresses the fact that hospital deliveries are always preferable, and also that it is important for hospital personnel to emphasize the benefits of breast feeding to mothers in order to develop and sustain appropriate feeding practices. Consistently, several studies [5-8] have shown a statistically significant association of age, educational level, socioeconomic status, place of delivery, and not being informed about exclusive breast feeding with the practice of giving prelacteal feeds (P < 0.05).

The practice of discarding colostrum differed significantly between urban and rural mothers, with 14.75% of urban and 25.79% of rural mothers discarding the colostrum (P < 0.05) [Table 2]. The majority of these mothers had done so on elders' advice. Similarly, in a study carried out by Yadav and Singh [9] in Bihar, 62.50% of urban and 66.40% of rural mothers discarded colostrum, also on elders' advice. Hence, it is advisable to provide health education not only to mothers but also to the elders of the family.

A significant association was observed with the educational status of the mother, socioeconomic status, and being informed about the benefits of breast feeding in the hospital in the urban area, and with the information given to the mother in the hospital regarding the benefits of breast feeding in the rural area. Yet again, this observation underlines the importance of women's education, financial status, and health education given in the hospital.

Initiation of breast feeding was delayed beyond 4 h by 24.50% of urban and 33.68% of rural mothers, a statistically significant difference (P < 0.05) [Table 2]. Most commonly, urban mothers had delayed initiation of breast feeding after delivery due to physical inability such as pain or tiredness (38.78%), whereas in the rural area it was because elders advised against initiating breast feeding early (46.09%). This observation is supported by a study carried out by Gupta et al. [10] in an urban slum of Lucknow, which revealed that only 36.60% of mothers initiated breast feeding within 1 h of delivery, the most common reasons for delayed initiation being family custom/belief (52.10%), no secretion of breast milk (31.00%), and discomfort in the mother (16.90%). Hence, mothers need appropriate physical and mental support in the hospital after delivery, with breast feeding support groups encouraging them to breast feed more and more, along with apt health education for mothers and, most importantly, for the caregiving elders of the family.
Though demand feeding, which actively involves the infant in controlling breast milk intake, is desirable, only 67.89% of rural mothers practiced it; among urban mothers it was even lower, at 38.75% [Table 2]. The difference in the practice of demand feeding between urban and rural mothers was statistically significant (P < 0.001). It was observed that more rural mothers were working compared to urban mothers; hence it was probably not possible for them to practice scheduled feeding as urban mothers did. Apprehension among urban mothers regarding the growth of the infant may also explain their time-bound feeding. On the contrary, a study on infant feeding practices by Parekh et al. [11] in Parel, Mumbai, showed that feeds were given on demand by as many as 73.68% of mothers, and yet another study by Panda et al. [12] in Cuttack showed that 90.10% of mothers fed their infants on demand.

Nearly one-third of urban mothers, compared to only 1.58% of rural mothers, gave Ghutti to their infants before 6 months of age [Table 2]. This difference was probably due to the high cost of the ingredients of Ghutti, limiting rural mothers from using it. Since Ghutti is made of ingredients with high nutritive value, its health effects should be studied before labeling it an inappropriate infant feeding practice. However, it can be dangerous if not prepared hygienically, spreading gastroenteric infections and predisposing the infant to the vicious infection-malnutrition cycle. For the same reason, giving water to the infant to drink should also be discouraged, and mothers should be made aware that a breast-fed infant does not require water to drink, as breast milk provides enough hydration; extra fluids displace breast milk and do not increase overall intake [13]. The exclusive breast feeding rate at 6 months was as low as 16.25% in the urban and 15.26% in the rural area. The majority of mothers practiced predominant breast feeding in both the urban and rural areas (77.25 and 77.89%, respectively) [Table 2].

Women's education and improvement of socioeconomic status, which can be achieved through income-generating activities, are to be encouraged. Institutional deliveries must be scaled up to 100% through the involvement of grassroots-level workers, who can reach every corner of a village and promote institutional deliveries. IEC (information, education, and communication) campaigns, also involving grassroots workers and local leaders, must target not only the mothers but the entire family, particularly the elderly women of the family.

In the present study, since only mothers having children aged 1 year were included, in order to minimize recall bias regarding breast feeding practices, it was not possible to know the proportion of mothers who continued breast feeding until the recommended 2 years of age.

Further studies are needed to examine in depth the cultural factors leading mothers to adopt faulty breast feeding practices, for which targeted interventions can be planned, piloted, and tested. In addition, follow-up studies can be done to know the breast feeding continuation rate at the age of 2 years.

CONCLUSION

The present study revealed that various inappropriate breast feeding practices are prevalent in both urban and rural areas, though urban mothers had more favorable practices compared to rural mothers. The mother's education, her socioeconomic status, place of delivery, and receiving information in the hospitals about the benefits of breast feeding influenced the breast feeding practices. Elders' advice played an important role in shaping the breast feeding practices.
Table 1: Distribution of study participants according to sociodemographic variables (urban, N = 400; rural, N = 380). APL = above poverty line; BPL = below poverty line.

Table 2: Distribution of study participants according to various breast feeding practices. DF = degrees of freedom.
2017-10-10T23:28:09.073Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "d3224de045ae9fb2bf838aa3f855dac267b984fa", "oa_license": "CCBY", "oa_url": "https://ijmedph.org/sites/default/files/IntJMedPublicHealth_2014_4_1_120_127172.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d3224de045ae9fb2bf838aa3f855dac267b984fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267010577
pes2o/s2orc
v3-fos-license
Reactive Arthritis After mpox Vaccination

Autoimmune inflammatory reaction after vaccination is a rare clinical entity. Reactive arthritis has been described after various vaccinations, but not after mpox vaccination. Here we present a case of newly diagnosed reactive arthritis after mpox vaccination that presented in the context of unrelenting fever and diarrhea complicated by migratory arthritis and anterior uveitis. We have reported this case to the Vaccine Adverse Event Reporting System (VAERS).

Background

Two smallpox vaccines (ACAM2000 and JYNNEOS) are approved by the U.S. Food and Drug Administration for prevention of mpox [1]. ACAM2000 is administered using a two-pronged (bifurcated) needle to prick the skin several times with a droplet of vaccine; it should not be injected by the intradermal, subcutaneous, intramuscular, or intravenous route. JYNNEOS is administered in two doses of 0.5 cc each, spaced 4 weeks apart, and can be given either subcutaneously or intradermally [2]. JYNNEOS is a live, attenuated, non-replicating orthopox viral vaccine with only rare reports of side effects, the most common being injection site discoloration and scarring [2]. There have been no reported cases of reactive arthritis after mpox vaccination [3].

Case report

A 51-year-old man presented to an infectious disease clinic in the northeastern United States with continuous fever for 2 weeks and watery diarrhea for 1 week. His medical history was notable for asthma, a right rotator cuff tear status post arthroscopic repair 3 months earlier, and HIV with an undetectable viral load for 5 years. His family history was remarkable for premature coronary artery disease in his father. His home medications included gabapentin, emtricitabine/tenofovir, dolutegravir, and an albuterol inhaler. He stated that he had never missed his HIV medications since being diagnosed with HIV 10 years ago.

The patient had received the mpox vaccine (JYNNEOS) 2 weeks before presentation, and 2 h after vaccination he developed continuous fever with associated rigors despite taking acetaminophen every 4 h. Two days later, he developed severe watery diarrhea and mild discomfort during urination, which lasted for about a week. He was using bismuth subsalicylate for the diarrhea and ibuprofen for the fever. He denied nausea, vomiting, mucocutaneous rash, increased urinary frequency/urgency, recent travel, or sick contacts.

On presentation, the patient was noted to have a fever of 100.9 °F and sinus tachycardia (110-120 beats per minute) with a normal respiratory rate and oxygen saturation. Physical examination was unremarkable except for slight tenderness in the right shoulder joint at the site of the previous arthroscopy.
Laboratory diagnostics demonstrated no leukocytosis but elevated acute phase reactants (C-reactive protein 67.82 mg/L, reference range 0-10 mg/L; erythrocyte sedimentation rate 38 mm/h, reference range 0-30 mm/h). His HIV viral load was undetectable, with a normal absolute CD4 count (673 cells/μL, reference range 404-1612 cells/μL). Blood, urine, and stool studies for infectious pathogens were negative. Cytomegalovirus PCR, a respiratory viral panel, and tests for sexually transmitted infections, including nucleic acid amplification testing for chlamydia/gonorrhea, were negative. Chest X-ray was unremarkable. His electrocardiogram showed sinus tachycardia with a ventricular rate of 110 beats per minute, and an echocardiogram showed left ventricular function of 60%-65% with no obvious valvular vegetation or pericardial effusion. A 5th-generation troponin T test was negative. A 7-day cardiac monitor showed sinus rhythm with an average rate of 104 beats per minute.

The patient's diarrhea resolved within a week while he was under evaluation for tachycardia and fever, but he started having progressively worsening pain and swelling of the left knee and ankle joints, restricting mobility. On examination, his left knee was red, swollen, and warm, with tenderness over the left medial knee and 1+ pitting edema extending from the left foot to the knee. A complete duplex ultrasound of the left lower extremity was negative for deep vein thrombosis. MRI of the left knee without contrast showed a grade 2 tear at the origin of the medial collateral ligament with some underlying medial femoral condyle edema and a knee effusion. He underwent aspiration of the left knee effusion, yielding 50 cc of clear fluid with a slightly elevated white blood cell count and protein level suggestive of inflammatory arthritis (Table 1). Antinuclear antibody, rheumatoid factor, and HLA-B27 were negative. Joint fluid crystals, cultures, and Lyme PCR testing were negative.

The patient subsequently developed migratory arthritis (left elbow, right knee, bilateral temporomandibular joints) and redness in the left eye with floaters and decreased vision. Eye examination (Table 2) showed a normal fundus with reduced visual acuity, circumcorneal congestion, keratic precipitates, an irregular pupil, and anterior chamber cells suggestive of anterior uveitis (Fig. 1). He was started on prednisone eye drops every hour and cyclopentolate twice a day. It was considered unlikely that his extensive left knee swelling with effusion and distal pitting edema was associated with the small ligament tear. The patient's diarrhea, asymmetric arthritis, dysuria, and uveitis were believed to be due to reactive arthritis. After septic arthritis was ruled out, he was started on methylprednisolone for 2 weeks. His arthralgia improved while taking methylprednisolone but returned after he finished the course. He then took NSAIDs three times daily with minimal benefit.

Given the diminished visual field and migratory arthritis unresponsive to NSAIDs, he was restarted on prednisone 20 mg daily for 1 week, followed by 10 mg daily with meloxicam 15 mg daily. One month after treatment with steroids and NSAIDs, his arthralgia was well controlled and acute phase reactants had returned to baseline. His visual field defect persisted.
Discussion

Mpox is a highly contagious infectious disease with sporadic outbreaks outside its endemic areas on the African continent. It spreads through droplets, sexual contact, and direct or indirect contact with the rash or contaminated articles. Higher numbers of mpox cases have been documented among people with multiple sexual partners and men who have sex with men [4]. Mpox affects people of any age and has three phases: incubation, prodrome, and the eruptive stage. The incubation period ranges from 3 to 34 days, followed by a prodromal phase that lasts 1-4 days. Symptoms include fever, headache, fatigue, cervical lymphadenopathy, and centrifugal umbilicated rashes, with complications such as cutaneous scars, respiratory tract infections, corneal ulcerations, and even death [5]. The first suspected case of mpox in the US was reported on May 17, 2022, with a total of 30,505 probable and confirmed cases and 43 deaths by June 21, 2023 [6]. One study found that patients with mpox can be co-infected with hepatitis C, HIV, or other sexually transmitted infections [7]. The authors speculated that concomitant HIV infection alters the natural course of mpox infection due to the resulting immunosuppressed state [7]. In a series of 528 mpox cases, 216 (41%) had concomitant HIV, and 205 of those patients (95%) were on antiretroviral therapy [8]. Vaccination, quarantine measures, screening of travelers from endemic countries, personal hygiene, and the use of personal protective measures are the common approaches to reduce transmission and halt outbreaks [1].

Reactive arthritis is an inflammatory condition characterized by a constellation of symptoms including asymmetric arthritis, urethritis/diarrhea, and conjunctivitis/keratitis/uveitis [9]. It usually follows infection with Salmonella, Shigella, Campylobacter, Yersinia, or Chlamydia [10]. In our case, a diagnosis of enteritis-associated reactive arthritis was unlikely because of the absence of recent hospitalization or sick contact, antibiotic usage, prior gastrointestinal infection or associated gastrointestinal symptoms (abdominal pain, nausea/vomiting, loss of appetite), and negative stool culture and toxin assay.

HIV-positive patients have a high risk of developing rheumatic disease due to the twofold inflammatory effect of HIV on synovial tissue [11]. The incidence of reactive arthritis in HIV patients in the pre-combined antiretroviral therapy era ranged from 0% to 11% [12]. Our patient's HIV viral load had been undetectable for five years and he had been on daily antiretroviral therapy for 10 years, suggesting a low risk of reactive arthritis due to HIV.

Autoimmune inflammatory reaction is an uncommon adverse event after vaccination, occurring in less than 0.01% of people worldwide [13]. This reaction has been associated with a combination of genetic, familial, and environmental factors, molecular mimicry between human and viral proteins, and HLA-B27 positivity. The JYNNEOS vaccine is a replication-deficient virus vaccine, and only 13 (1%) of the 1350 JYNNEOS reports to the Vaccine Adverse Event Reporting System (VAERS) are for severe side effects [14]. We reported this case to VAERS because of the rapid onset of symptoms within 2 h of vaccination and because several clinical features (unrelenting fever, diarrhea accompanied by migratory arthritis and anterior uveitis, absence of rash on the body) were considered consistent with a diagnosis of reactive arthritis secondary to mpox vaccination.
There is no evidence that antibiotic therapy shortens the clinical course of the noninfective form of reactive arthritis. Sulfasalazine and NSAIDs may be used for chronic cases lasting more than 6 months, and other biological disease-modifying antirheumatic drugs (DMARDs) or anti-TNF agents can be used in cases of poor response. 11 In addition to other treatments, physical therapy and local measures like cold compression can be used as supportive modalities for joint pain. 13 Uveitis is treated with topical steroids, mydriatics, and, in severe cases, systemic corticosteroids. 9 Short courses of systemic corticosteroids (up to 4 months) are used for peripheral joint symptoms not responsive to NSAIDs. 15

Reactive arthritis has a good prognosis in general, with most people having a complete recovery within a year. 10 However, there is a high recurrence rate of joint and eye inflammation, and 20%–70% of patients develop other joint problems such as ankylosing spondylitis. 10

Conclusion

The advantage of vaccination in emerging outbreaks is considered to surpass the potential risk of an autoimmune inflammatory response in predisposed individuals. However, this case of autoimmune inflammatory response after mpox vaccination shows the importance of vigilance for this reaction, of continued study to identify individuals at higher risk of this complication, and of persistent reporting to VAERS to improve the data from which vaccine formulators may derive identifiable biomarkers to inform future improvements in vaccine safety.

Fig. 1. Left eye examination. (A) Gross eye examination demonstrates left eye circumciliary congestion (arrow). (B) Speculum examination demonstrates the same circumciliary congestion (arrow) in the left eye. (C) Slit lamp examination of the left eye demonstrates an irregular pupil (arrow) due to the presence of posterior synechiae. (D) Slit lamp examination of the left eye after release of the posterior synechiae demonstrates pigments (arrow) on the anterior capsule of the left lens.

Table 1. Left knee synovial fluid analysis.
Relationship between coagulopathy score and ICU mortality: Analysis of the MIMIC-IV database

Objective Coagulopathy score has been applied as a new prognostic indicator for sepsis, heart failure, and acute respiratory failure. However, its ability to forecast intensive care unit (ICU) mortality in patients with acute cerebral hemorrhage (ICH) has not been assessed. The purpose of this study was to clarify the relationship between the early coagulopathy score and ICU mortality. Methods Data from the Medical Information Mart for Intensive Care (MIMIC-IV) (v2.0) database were used in this retrospective cohort analysis. The association between the coagulopathy score and ICU mortality was examined using multivariate logistic regression. Furthermore, the impact of additional variables on the results was investigated by subgroup analysis. Results 3174 patients (57.3 % male) were enrolled in total. Overall ICU mortality was 18.2 %. After adjusting for potential confounders, ICU mortality rose with increasing coagulopathy score. The ROC curve was used to assess the accuracy of the coagulopathy score in predicting ICU mortality. The coagulopathy score had a lower AUC value (0.601, P < 0.001) than SAPSII (AUC 0.745 [95 % CI, 0.730–0.761]) and the combined indicators (AUC 0.752 [95 % CI, 0.737–0.767]), but a larger AUC than the single indicators platelet, INR, and APTT. In the subgroup analysis, most subgroups showed no significant interaction; only age showed a significant interaction in the adjusted model. Conclusion The coagulopathy score and ICU mortality were strongly positively correlated in this study, and the score's ability to predict ICU mortality was better than that of any single measure (platelet, INR, or APTT), but worse than that of the SAPSII score and the GCS system.

Introduction

Acute cerebral hemorrhage (ICH) is a severe type of stroke worldwide. Although it accounts for only 10–30 % of strokes, it has an extremely high death and disability rate [1]. The neuronal damage involved in brain injury caused by intracerebral hemorrhage includes primary damage caused by the space-occupying effect and secondary harm brought on by blood degradation products [2–4]. In addition, as a regulator of pathological brain inflammation, thrombin dysfunction can not only induce destruction of the blood-brain barrier (BBB) and degeneration of neural cells by accelerating oxidative stress and the inflammatory response, but can also lead to further expansion of the hematoma [5,6]. Systemic coagulation dysfunction usually occurs within a few minutes after brain injury, indicating that it is induced by brain-derived substances, which can be quickly released into the whole body owing to mechanical destruction of the BBB; these substances further increase the permeability of the BBB outside the damaged region through secondary ischemia and inflammatory injury, resulting in uncontrolled passage of macromolecules through endothelial cells [7–11]. These pathophysiological mechanisms often lead to adverse outcomes of cerebral hemorrhage.
A recent retrospective study reported that a scoring system for early coagulation disorders, built from the PLT, APTT, and INR parameters, was a valuable predictor of hospital mortality [12]. New biomarkers are still required to predict the development and severity of coagulopathy caused by brain injury in a timely, accurate, and reliable manner in order to develop preventive measures and targeted therapies. We speculated that the coagulopathy score may serve as an effective prognostic biomarker in patients with ICH. Our research aimed to describe the relationship between the coagulopathy score and the outcomes of individuals with ICH. We also sought to evaluate the coagulopathy score's capacity to forecast ICU death in patients with ICH in comparison to other markers.

Population

Patients diagnosed with intracranial hemorrhage or cerebral hemorrhage according to ICD-9/ICD-10 criteria were included. The following patients were excluded: (1) patients under 18 years old at the time of first admission; (2) 1685 patients who were not admitted to the ICU; (3) for patients admitted to the ICU several times, only the first-admission data were extracted; (4) 180 patients with missing clinical data, such as platelet, INR, and APTT, within 24 h of ICU admission; (5) 892 patients with an ICU stay shorter than 24 h. Finally, this study comprised 3174 patients (see Fig. 1).

Data extraction

All data were selected from the MIMIC-IV database, a large publicly available critical care database that includes inpatient information from 2008 to 2019 at Beth Israel Deaconess Medical Center, approved by the Massachusetts Institute of Technology (Cambridge, MA) and Beth Israel Deaconess Medical Center (Boston, MA). Navicat Premium (version 16.1.7) software and Structured Query Language (SQL) were used to collect the data for this investigation. The following information was collected: demographic characteristics, vital signs, diagnoses, comorbidities (including lung disease, heart failure, and liver and kidney disease), laboratory indicators, treatments, SAPSII, etc. This study is exempt from our agency's Institutional Review Board (IRB) approval since it involves an analysis of anonymized, publicly accessible third-party databases and has been approved by the local IRB. The actual identity data of the patients in the database are hidden; therefore, there was no need to obtain informed consent.
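To make the extraction step concrete, the sketch below shows one way the first-24-h coagulation values could be pulled from a local PostgreSQL build of MIMIC-IV and turned into a 0–6 score. The labevents itemids and, in particular, the scoring thresholds are illustrative assumptions (the paper adopts the score definition of ref. [12] but does not reproduce the cut-offs here), so both would need to be verified against d_labitems and the original definition before reuse.

```python
import pandas as pd
import sqlalchemy  # assumes a local PostgreSQL build of MIMIC-IV

# Hypothetical itemids for mimiciv_hosp.labevents (verify against d_labitems):
# 51265 = Platelet Count, 51237 = INR(PT), 51275 = PTT
QUERY = """
SELECT ie.stay_id,
       MIN(CASE WHEN le.itemid = 51265 THEN le.valuenum END) AS plt_min,
       MAX(CASE WHEN le.itemid = 51237 THEN le.valuenum END) AS inr_max,
       MAX(CASE WHEN le.itemid = 51275 THEN le.valuenum END) AS aptt_max
FROM mimiciv_icu.icustays ie
JOIN mimiciv_hosp.labevents le
  ON le.subject_id = ie.subject_id
 AND le.charttime BETWEEN ie.intime AND ie.intime + INTERVAL '24 hours'
WHERE le.itemid IN (51265, 51237, 51275)
GROUP BY ie.stay_id;
"""

def coagulopathy_score(plt, inr, aptt):
    """0-6 score: each parameter contributes 0-2 points.
    The thresholds below are illustrative placeholders, NOT the published cut-offs."""
    score = 0
    score += 2 if plt < 100 else (1 if plt < 150 else 0)   # platelets, x10^9/L
    score += 2 if inr > 1.4 else (1 if inr > 1.2 else 0)   # INR
    score += 2 if aptt > 60 else (1 if aptt > 40 else 0)   # APTT, seconds
    return score

engine = sqlalchemy.create_engine("postgresql:///mimiciv")
labs = pd.read_sql(QUERY, engine)
# Rows with missing first-24-h values were excluded in the study (exclusion 4).
labs = labs.dropna(subset=["plt_min", "inr_max", "aptt_max"])
labs["coag_score"] = labs.apply(
    lambda r: coagulopathy_score(r.plt_min, r.inr_max, r.aptt_max), axis=1)
```

Taking the minimum platelet count and the maximum INR/APTT (i.e., the worst values of the first day) is also an assumption of this sketch; the paper only states that values within 24 h of ICU admission were used.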
Statistical analysis

Normally distributed quantitative data are expressed as mean ± standard deviation (SD), skewed data as median [interquartile range (IQR)], and categorical data as number (percentage). The normality of the variables was assessed using the Kolmogorov-Smirnov test. Independent-sample t-tests, chi-square tests, and Mann-Whitney U tests were used to compare patient characteristics by ICU survival status. Risk factors for ICU death were analyzed by logistic regression, and the results are presented as odds ratios (OR) with 95 % confidence intervals (CI). Multivariate logistic analysis was carried out with adjustment for the related confounding variables. Model II was adjusted for age, gender, race, GCS, SAPSII, white blood cell count, hemoglobin, hematocrit, blood urea nitrogen, creatinine, sodium, glucose, diabetes, mean blood pressure, heart rate, respiratory rate, temperature, congestive heart failure, Sepsis 3.0, mechanical ventilation, myocardial infarct, antibiotics, vasoactive agents, and chronic kidney disease. ROC curves were drawn, and the AUCs of SAPSII and the coagulopathy score were compared. Logistic regression was used for the subgroup analysis to investigate the relationship between the coagulopathy score and ICU mortality; models I and II were presented as a forest plot, and P values for interaction were computed to show the results of the subgroup analysis. P < 0.05 was considered statistically significant. The software packages SPSS, GraphPad Prism, and MedCalc were used for all data analyses and computations.

Patient characteristics

This retrospective analysis comprised 3174 participants in total, grouped by final ICU survival status. The baseline characteristics are presented in Table 1. There were 1343 women and 1831 men among the patients, with a high median age (68.2 years), and 72.3 % of the patients in the study were white. Compared with the survival group, patients in the non-survival group were older; had higher heart rate, body temperature, and respiratory rate; had more comorbidities, such as myocardial infarct, congestive heart failure, diabetes, kidney disease, and sepsis; and received more treatment, including vasoactive agents such as milrinone and dopamine, mechanical ventilation, and antibiotics. The survivor group had lower white blood cells, creatinine, neutrophils, blood urea nitrogen, INR, APTT, and PT, and higher hemoglobin, hematocrit, and platelet count, and these patients were less likely to receive heparin anticoagulation therapy. Patients in the survivor group scored higher on the GCS and lower on the SOFA and SAPSII.

Association between coagulopathy score and coagulopathy endpoints

Overall, the ICU mortality rate was 18.2 %. Higher PLT, INR, and APTT subscores were strongly associated with ICU mortality. Moreover, the results demonstrated that ICU mortality rose as the coagulopathy score increased, with a maximum death rate of 46.2 % observed for those with a coagulopathy score of 6 (see Table 2).

Table 3 shows the analysis of ICU mortality using binary logistic models: model I (unadjusted) and model II (adjusted). When used as a categorical variable, the coagulopathy score was associated with a rising risk of ICU mortality in both model I and model II; when the coagulopathy score was 6, the OR values of the two models were 7.098 (95 % CI, 2.355–21.393) and 4.394 (95 % CI, 1.130–17.084), respectively. When used as a continuous variable, both models showed a higher risk of ICU death with increasing score, with unadjusted and adjusted ORs of 1.339 (95 % CI, 1.249–1.435) and 1.252 (95 % CI, 1.140–1.375), respectively.
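As a sketch of the modelling step described above, the fragment below fits the unadjusted and adjusted logistic models with statsmodels and converts the coefficients into odds ratios with 95 % confidence intervals. The data frame and column names are hypothetical, and the covariate list is abbreviated; the actual model II adjusts for the full set of variables listed in the statistical analysis section.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# df: one row per ICU stay; 'icu_death' is 0/1, 'coag_score' is the 0-6 score
# (e.g., the 'labs' frame from the extraction sketch merged with outcomes).
covars = ["age", "gcs", "sapsii", "wbc", "hemoglobin"]   # abbreviated Model II set

def fit_logit(data, predictors):
    """Fit a logistic model and return ORs with 95 % CIs per predictor."""
    X = sm.add_constant(data[predictors])
    fit = sm.Logit(data["icu_death"], X).fit(disp=0)
    ci = fit.conf_int()                       # columns 0 (lower) and 1 (upper)
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1])})

model_1 = fit_logit(df, ["coag_score"])            # model I (unadjusted)
model_2 = fit_logit(df, ["coag_score"] + covars)   # model II (adjusted)
print(model_1.loc["coag_score"])                   # expected OR ~1.34 per point
print(model_2.loc["coag_score"])                   # expected OR ~1.25 per point
```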
According to the ROC curve, the predictive accuracy of the coagulopathy score for ICU mortality in these patients is displayed in Fig. 2. The AUC value of the coagulopathy score was 0.601 (P < 0.001), which was less than those of SAPSII (AUC 0.745 [95 % CI, 0.730–0.761]), GCS (AUC 0.678 [95 % CI, 0.661–0.694]), and the combined indicators (AUC 0.752 [95 % CI, 0.737–0.767]), but larger than those of the single indicators platelet, INR, and APTT.

Subgroup analysis

Subgroup analyses were used to evaluate model I (unadjusted) and model II (adjusted). Adjustment factors included age, gender, race, GCS, SAPSII, white blood cell count, hemoglobin, hematocrit, blood urea nitrogen, creatinine, sodium, glucose, diabetes, mean blood pressure, heart rate, respiratory rate, temperature, congestive heart failure, Sepsis 3.0, mechanical ventilation, myocardial infarct, antibiotics, vasoactive agents, and chronic kidney disease. The subgroup analyses tested whether the relationship between the coagulopathy score and ICU mortality was affected by baseline characteristics. Fig. 3 shows the OR values and P values for interaction for the coagulopathy score and ICU mortality in each subgroup. The OR values of all subgroups were greater than 1, meaning that ICU mortality increased significantly with increasing coagulopathy score. Overall, there was no significant interaction in most subgroups. However, age showed a significant interaction in model II (adjusted) (P for interaction = 0.007), which may affect the relationship between the coagulopathy score and ICU mortality. Therefore, we re-analyzed the relationship between the coagulopathy score and ICU mortality in different age groups, as shown in Table 4. The results showed that higher coagulopathy scores increased ICU mortality in both age groups.
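The age interaction reported above can be probed by adding a product term to the adjusted model, as in the sketch below; the Wald P value on the product term plays the role of the P for interaction in Fig. 3. The 65-year cut-off and the variable names are assumptions for illustration (the paper does not state its age grouping here), and the AUC comparison at the end uses scikit-learn rather than the MedCalc procedure actually used in the study.

```python
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Dichotomize age (cut-off assumed for illustration) and test score x age interaction.
df["age_ge65"] = (df["age"] >= 65).astype(int)
m = smf.logit("icu_death ~ coag_score * age_ge65 + gcs + sapsii + wbc",
              data=df).fit(disp=0)
print(m.pvalues["coag_score:age_ge65"])   # analogue of the P for interaction

# AUC comparison as in Fig. 2 (discrimination of raw scores):
print(roc_auc_score(df["icu_death"], df["coag_score"]),   # expected ~0.601
      roc_auc_score(df["icu_death"], df["sapsii"]))       # expected ~0.745
```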
Compared with the complex SAPSII scoring system, this scoring system can quickly judge the patient's blood coagulation system only through the blood parameters obtained by routine admission, and the effect is also better than that of a single coagulation index.SAPSII is composed of up to 17 variables and takes the worst value within 24 h as the judgment index.Compared with ICH score, SAPSII has a partial timeliness, and comprehensive consideration of multiple systems such as vital signs, serum test indicators, GCS score, surgical selection, etc., so it has higher sensitivity and specificity.However, its scoring process is complicated, which is not conducive to rapid clinical evaluation.The GCS score has been widely used to assess the consciousness of patients with cerebral hemorrhage.Although it has the advantages of being fast and convenient, its score is highly subjective.Furthermore, in the intensive care unit, it is difficult to use GCS to accurately assess the consciousness of intubated and sedated patients.The coagulation function score is obtained by rapid serum index test.A large number of studies have confirmed the correlation between coagulation dysfunction and increased cerebral hemorrhage and poor prognosis [17][18][19][20].In addition, an increasing number of patients are taking long-term oral anticoagulants and antiplatelet drugs for cardiovascular and cerebrovascular diseases.Such patients often suffer rapid deterioration of their condition due to coagulation dysfunction [21,22].As we all know, the lethal triad includes hypothermia, acidosis, and coagulopathy.Therefore, clinicians cannot ignore the correction of coagulation disorders in patients with cerebral hemorrhage.This study also confirmed the correlation between coagulation function score and poor prognosis, aiming to comprehensively evaluate the coagulation function of ICH patients so that clinicians can take intervention measures as early as possible, such as fresh frozen plasma transfusion, reversal of anticoagulant drugs, Vitamin K, prothrombin complex, etc. Notably, the subgroup analysis results are shown in Fig. 3, and Model II (adjusted) shows a significant interaction between age and coagulopathy score and ICU mortality.Older age is closely related to changes in coagulation status, as confirmed in the study by Ochi et al. [23][24][25] Therefore, age may affect the relationship between coagulation score and mortality.In addition, the risk of ICU death increased with the rise of coagulopathy score in both groups. 
Furthermore, our study found that the baseline white blood cell level and the incidence of sepsis were significantly higher in the death group. Coagulation activation is an almost universal event in the initial stage of sepsis and deteriorates rapidly with the progression of the disease. Once DIC occurs, the mortality rate of sepsis increases significantly, because DIC is a systemic hypercoagulable reaction that hinders normal tissue circulation and leads to multiple organ failure [26]. The release of inflammatory mediators, endothelial injury, and platelet disorders also promote the progression of thrombosis [27]. Hemostatic imbalance is one of the main manifestations: in sepsis, even when fibrinolysis is hyperactive, it falls far short of compensating for the excessive hypercoagulability [28]. This may also be one of the reasons for the increase in ICU mortality in patients with ICH; therefore, blood coagulation needs to be monitored frequently during sepsis. Many studies have shown that coagulation disorder in patients with ICH is closely related to vascular endothelial cell injury, tissue factor release, platelet dysfunction, microvascular failure, and other factors [29,30]. It is usually characterized by over-activation and consumption of coagulation factors, interaction between coagulopathy and the inflammatory system, and the formation of microthrombi, which reduce the blood perfusion of brain tissue and ultimately reduce patient survival [31–35].

Sepsis-related DIC has a high mortality rate and is a dynamic process that starts with coagulation dysfunction and can develop into sepsis-induced coagulopathy (SIC). Therefore, to provide an early and valuable prediction of coagulopathy, PLT, INR, and APTT were selected as the main coagulation indices for evaluating early coagulopathy, with reference to the definitions of SIC and coagulopathy. The results of this study showed that the ability of the coagulopathy score to predict ICU mortality is better than that of any single indicator (platelet, INR, or APTT). The main strength of this research is that it confirms that an increased coagulopathy score is an independent risk factor for death in severely ill ICH patients in the United States. The coagulopathy score is convenient and has potential clinical predictive value in the process of clinical decision-making. However, our study also had several limitations. In light of its retrospective design, there may be other residual confounders. Consequently, it is necessary to conduct further high-quality prospective studies and to dynamically monitor changes in blood coagulation. Such work may improve the prompt identification of patients with severe ICH in a clinical setting by leveraging convenient biomarkers, thereby making it possible to use specialized treatment strategies meant to increase patient survival.

Conclusion

In summary, we found that the coagulopathy score was significantly positively associated with ICU mortality, and its ability to predict ICU mortality was better than that of any single measure (platelet, INR, or APTT), but worse than that of the SAPSII scoring system and the GCS. It is important for clinicians to manage severely ill ICH patients individually according to their early coagulation status.
Ethics declarations

Review and/or approval by an ethics committee was not needed for this study because the MIMIC database is a public, anonymized database that has been approved by the Institutional Review Boards of the Massachusetts Institute of Technology and Beth Israel Deaconess Medical Center, and it therefore did not require approval from our agency's ethics committee. The actual identity data of the patients in the database are hidden; therefore, there was no need to obtain informed consent.

Table 4. The relationship between coagulation disorder score and ICU mortality in different age groups.

Fig. 2. (A) ROC curves for the prediction of ICU mortality by the coagulopathy score, SAPSII, and the combined indicator. (B) ROC curves for the prediction of ICU mortality by the coagulopathy score, platelet, INR, and APTT.

Fig. 3. Subgroup analysis of the association between the coagulopathy score and ICU mortality.

Table 1. Baseline characteristics of the patients. Data are mean ± SD, median (IQR) for skewed variables, or number (percentage) for categorical variables. WBC, white blood cells; BNU, blood urea nitrogen; SBP, systolic blood pressure; DBP, diastolic blood pressure; MBP, mean blood pressure; SpO2, saturation of peripheral oxygen; INR, international normalized ratio; PT, prothrombin time; APTT, activated partial thromboplastin time; COPD, chronic obstructive pulmonary disease; SOFA, sequential organ failure assessment score; SAPS II, simplified acute physiology score II; RR, respiratory rate; HR, heart rate; GCS, Glasgow Coma Score.

Table 2. Association of the coagulopathy score with ICU mortality. INR, international normalized ratio; APTT, activated partial thromboplastin time.

Table 3. Association between the coagulopathy score and ICU mortality.
Changing tutor roles in online tutorial support for open distance learning through audio-graphic SCMC

The use of synchronous computer-mediated communication (SCMC) tools such as audio-graphic conferencing to provide tutorial support in Open Distance Learning (ODL) settings brings with it changes in roles and relationships between tutors and students which need to be researched to gain an insight into the learning experience of those teaching and being taught through the medium. In this paper we will report on a study of language tutors new to audio-graphic conferencing after they spent a year providing online tutorial support for a new beginners' Spanish distance learning language course. We will present how the tutors compare the online and face-to-face environments and how they perceive their roles as online tutors. We conclude that the success of tutorials in an audio-graphic environment depends on the individual tutor: their personality, warmth, and ability to communicate and manage learning become more relevant in the new environment; therefore, training that goes beyond the technical and pedagogical and includes these aspects is required.

[…] "separated from their communication partners" (Levy & Stockwell, 2006, p. 94). Audio conferencing systems can now feature additional tools such as whiteboards, text editors, text chat facilities and web browsers, which can aid communication and interaction. The inclusion of these visual, verbal and written elements into audio-graphic conferencing means that the tools become multi-modal environments which provide opportunities for language learners to interact with a tutor, other learners or with native speakers in the target language. This collaborative learning environment affords relationships between learner/learner or tutor/learner which fit the principles of social constructivism, where learning occurs through interaction: audio-graphic SCMC "is an ideal medium for collaborative learning through social interaction both with tutors and with peers" (Hampel & Hauck, 2004, p. 68). In the context of Open and Distance Learning (ODL), audio-graphic conferencing has been praised for its potential for "removing the distance from distance learning" (Kötter & Shield, 2000, p. 16), and ODL providers such as Högskolan Dalarna University in Sweden or the UK Open University have developed their own audio-graphic conferencing systems. But issues such as coping with the multimodality of the tools, the inclusion of contextual information, the narrowing of the range of symbolic cues, and the increased possibility of ambiguity (Erben, 1999) are still part of the research agenda of mediated interaction through audio-graphic conferencing. Colpaert (2004) argues that the following criteria of usefulness should be employed when evaluating online interactive language courseware: usability, usage, user satisfaction and criteria for optimizing didactic efficiency. The adoption of audio-graphic conferencing brings with it changes in roles and relationships for tutors and students alike, and these have an effect on usage, user satisfaction, and the learning process (didactic efficiency). These therefore need to be researched to gain an insight into the learning experience of those teaching and being taught through the medium.
Tutors' experiences of online environments are a key element in the Computer-Assisted Language Learning (CALL) research agenda (Warschauer, 1997), as "the teacher's point of view provides us with another vital perspective (…) and it is a view that must be carefully acknowledged if CALL is to be successful" (Debski & Levy, 1999, p. 10). The relevance of the tutor's role in audio-graphic learning environments has been acknowledged throughout. Early studies already established the tutor as a key figure in the favourable outcome of their project (Kötter et al., 1999). More recent research, such as a study of online tuition using OnLive Traveller, also identified "the close relationship which is created between the teachers and the students" (Eklund-Braconi, 2005) as one of the success factors. There is a call for research into the human side of teaching with audio-graphics to identify and share best practice: "This can be achieved by conducting research into tutor attitudes and teaching styles, tutors' use of the online media and tutors' awareness of the different interaction patterns of online and face-to-face communication - to name but a few of the areas where further investigation would benefit the development of best practice in online tuition" (Hampel & Stickler, 2005, p. 323). In a study of tutor attitudes towards teaching with the Open University's Lyceum audio-graphic software, Rosell-Aguilar (2006a) found that tutors, despite technical problems, had a very positive teaching experience and liked using the tool; however, some experienced technical problems and believed that these affect the learning experience.

The audio-graphic tutor: changing roles and skills

"A clearer understanding of the roles and skills required by online tutors will assist those already in the field who wish to improve their practice, and help those new to online teaching" (Cornelius, 2000, p. 1). The role of the tutor in CMC learning environments has been part of the research agenda since the late 90s: "What is the right role for teachers to play in the computer-mediated learning environment? How can teachers make the effective transition from 'sage on the stage' to 'guide on the side' (Tella, 1996, p. 6) that online education entails?" (Warschauer, 1997, p. 478). With continuing technological advances, the issue remains important: "instructors have an important role in technology enhanced learning environments, especially those that incorporate complex learning paradigms involving constructivist or whole language principles" (Stepp-Greany, 2002, p. 174). Goodfellow (1999), in the context of written CMC, noted the increased workload for tutors and the fact that the new context brings new expectations of them. Most of the published research on changing roles for online tutors refers to written asynchronous CMC, and focuses on the moderator and management roles that the medium requires (see McPherson & Nunes, 2004, for a recent review). One of the first findings of the research into audio-graphic environments was that tutors needed to adapt their teaching style to the learning environment to take into account the delay in response time and other limitations of the medium (Kötter, Rodine, & Shield, 1999).
The pressures on tutors, and the roles that they are asked to perform, change and grow with the addition of the CMC factor: the capable 21st-century teacher should be "able to move beyond the basic competence (knowledge and skills) towards a flexibility (coping with present twists and turns) and an adaptability (coping with uncertain futures) in a manner that demonstrates potential and professionalism" (Cairns, 1998, p. 49). The roles of ODL language tutors are very different from those of traditional tutors, but there is "little research that focuses directly onto the role of the distance language tutor" (Shelley et al., 2006, p. 2). When referring here to the face-to-face tutor, it is within the ODL setting, where most of the teaching is done through supplied materials and the tutor has other, more prominent roles. These include providing support for self-study and offering opportunities to practise what has been learnt at home, as well as other responsibilities also in the domain of traditional teaching: dealing with queries, helping with management of learning, stimulating, boosting confidence, developing learning strategies, and encouraging community building. In addition, the learning environment created by tutors in ODL "is a key factor in student retention" (Tait, 2004, p. 97).

The audio-graphic environment places ODL tutors in a new context. Context plays a very important role when defining tutor roles and activities (Cornelius, 2000), and "the teacher not only has to attend to the needs of the students in a CALL environment, but his or her choices will also be governed largely by the conditions set by the local context, especially the technological resources, levels of access to computers, technical support, and the institutional, educational and cultural priorities" (Debski & Levy, 1999, p. 10). In the case of ODL language tutors, the ways in which their attributes and expertise change need to be explored, especially "as they enter new environments, particularly online environments and virtual support networks" (Shelley et al., 2006, p. 12).

Early research into changing tutor roles in asynchronous text conferencing divided tutor roles into pedagogical, social, managerial, and technical (Berge, 1995). Later research defined two tutor types: the "social tutor", who focuses on fluency and allows social interactions, and the "cognitive tutor", who focuses on accuracy and is subject-knowledge oriented (Lamy & Goodfellow, 1999). In synchronous audio environments, it was found that the tutors "became 'co-ordinators or managers of learning events' rather than tutors in the 'traditional' sense" and also that "the tutor's presence proved to be invaluable both as monitor and facilitator" (Hauck, Hewer, & Shield, 1998, p. 4 of printout). Similarly, Vetter (2004) separated the roles of tutors using an audio-graphic conferencing tool into those of animator, who sets in motion the situations for oral interaction, and facilitator, who is available to help students produce the task outcomes. However, these classifications limit tutor involvement to the pedagogical side of the many roles that distance tutors carry out. The role of the tutor is much wider. Hauck and Haezewindt (1999) argued that the audio-graphic tutor needs to have:

• confidence when operating the software,
• skills to adapt their teaching style to the audio-graphic environment,
• strategies to encourage students to take more active roles in the learning process.
The last two skills are also referred to by Bennett and Marsh (2002), who state that, beyond the technical level, tutors need to identify the differences between the online and face-to-face contexts and identify the strategies and techniques that can facilitate online learning. It is not only the differences but also the similarities that must be identified: "it is the tutor's responsibility to amalgamate the forum, acknowledging the interest of those who take part freely and directly addressing the quiet ones, as the tutor would normally do in a face-to-face setting" (de los Arcos & Arnedillo Sánchez, 2006, p. 88). Hampel and Hauck (2004) identified five roles which applied to the audio-graphic environment, based on Dias's (1998) "ten teacher roles": "teacher as confidant", bringing to the students an insight into the rationale of the delivery; "teacher as nervous parent", coping with the new technology; "teacher as trouble shooter", providing technical advice; "teacher as student", teaming with the students in the learning process; and "teacher as human being", getting to know the students. These roles were described as fluid, and they changed throughout the activities. Shield, Hauck & Hewer (2001) state that these roles are mostly social, except for the troubleshooter role, and specify that the roles are not only fluid, but that the tutor may be filling more than one at the same time. They separate audio-graphic tutoring styles into three categories: cognitive, social, and administrative, and associate specific roles (some roles with more than one) with each style (Table 1). Their summary of the overarching tutor role is that of a "responsible adult" and "the lynchpin holding the whole enterprise together" (2001, pp. 82-83). In a later study, Hampel and Stickler (2005) developed a pyramid of skills required for online tutoring, with basic ICT competence at the bottom and other skills built on top of it: specific technical competence for the software, dealing with constraints and possibilities of the medium, online socialization, facilitating communicative competence, creativity and choice, and finally the tutor's own style at the top.

Tutors have limited affordances within an audio-graphic environment, and another factor that has been identified as an issue affecting online tutorials through audio-graphics is multimodality. Audio-graphic tutors (and students) need to cope with the multimodality of the medium: moving the mouse, writing and reading text in the chat section, talking, listening, opening and closing modules, placing images, or moving objects. All these additional roles place considerable demands on the tutor's time (Kötter, 2001; Hampel & Hauck, 2004) and require a good disposition from the tutor: "the best strategy to handle the enmity of the system in such terms is patience in capital letters: for it is also a grateful medium for carrying exceptional quantities of good manners and humour" (de los Arcos & Arnedillo Sánchez, 2006, p. 90).

The study

In this section we first present the context of a study of data collected from 12 tutors after they spent a year providing online tutorial support for a new Spanish beginners' ODL course. We then identify the roles they perceive themselves to be playing as audio-graphic tutors and examine the differences they report between providing tutorial support in the audio-graphic medium and face-to-face.
Context: course, software, and methodology

The tutors who took part in this study teach LZX194 Portales, the Open University beginners' Spanish distance course. To complete the course, students are expected to work independently through books, audio CDs, study guides, and assessment materials in approximately 300 hours of study (which include 21 voluntary hours of contact with a personal tutor). They are given access to a course website where they can find the electronic versions of many of the course materials, such as the main teaching and assessment books in e-book PDF format, or the audio files for the listening component of their assessment, as well as the course calendar, online resources and an asynchronous text CMC conference.

There are two versions of the course. In one, tutorials and the end-of-course oral assessment are face-to-face, whereas for the other students audio-graphic conferencing is used instead. Attendance at tutorials is voluntary, but the end-of-course assessment is compulsory. In the first year the course was offered (2003–4), 1694 students signed up for the face-to-face strand and 536 for the online strand.

The audio-graphic conferencing software used at the UK Open University for language courses is an in-house developed programme called Lyceum, which allows multiple users to meet online for plenary or small group work and includes synchronous audio conferencing, whiteboards, a text editor, text chat, and a voting facility among other tools. The software is available to all Open University students and many use it beyond tutorial time to meet socially or form study and revision groups, hence making the most of the affordances of the tool as a medium that allows them to collaborate and take responsibility for their own learning. A taster website is available (http://lyceum-taster.open.ac.uk).

The tutors were all experienced face-to-face teachers but, as this was their first experience of teaching in the audio-graphic environment, they received three training sessions before the start of the course. These focused on technical and pedagogical training, including how to promote peer work, community building, and strategies for inclusion of all students. A fourth training session took place before the end-of-course assessment, to ensure that tutors were familiar with the format, the marking criteria, and how to use the recording facility and send the assessment recordings to the examinations office. As this was the first time a Spanish course offered online tuition at the Open University, the tutors had experience as language teachers but no experience of teaching using audio-graphic conferencing when they started tutoring. Because of the technical and pedagogical challenges of the medium, online tutors were provided with materials which they had the choice to use, modify, or not use at all (for more details on the course, software and the materials provided see Rosell-Aguilar, 2005). It was up to each tutor to manage their tutorials according to their teaching style and the group of learners in their group, so the learning experience was different in each case. In all, 26 tutors taught the online strand of the course, with 15–20 students per tutorial group (although many tutors chose to teach more than one tutor group). All 26 online tutors were contacted by email at the end of the course and asked to take part in the study.
They were asked two questions:

• What do you think your roles as an online tutor are?
• What do you think the differences between being an online tutor and a face-to-face tutor are?

and were asked to send their comments, however brief. Twelve of them replied, and their responses are presented below. Although the sample is relatively small, it represents 46% of the total number of tutors teaching the course.

Results & Discussion

The responses gathered are both about differences in the roles that tutors perceive themselves to perform and about differences between face-to-face and online tutorials. They are presented here in two sections, each divided into parts to reflect the issues discussed above, and illustrated with quotes from the responses received. Together they provide a picture of the online tutorial and the skills required to deal with tutoring in the medium.

a) Differences in perceived roles

The roles the tutors perceive themselves to perform can be mapped within the previously identified cognitive, social, and administrative roles. The main role, identified in the responses of eleven out of the twelve tutors, is the traditional cognitive role: offering language support to students by providing communicative activities and opportunities for practice. To this, some add monitoring performance and offering feedback on pronunciation and accuracy, and developing materials. They also list some of the general roles associated with being a distance tutor, such as "helping them organize their independent learning," or "providing opportunities for reflection on their own learning, providing tools and resources for increasing autonomy". Within the social role, the respondents mention mentoring students and more affective roles: "to build up a more personal relationship with students," and "to make them feel at ease during the lesson and to help give them confidence" or "source of emotional support, to ease the anxiety of having to speak Spanish in a medium which in itself can be very cold; the tutor has to create a warm atmosphere, making a bigger effort than in a face to face classroom, precisely for the lack of visual information." Similarly, another tutor feels that he needs to enhance the medium and make up for its deficiencies: "Since there is no personal contact, body language, etc. e.g. in tutorials, the students have to feel that the medium is worth the experience, and that they wouldn't be better off in a face-to-face course. So, in my opinion, one of our roles is that of 'enhancer' of the medium." To one tutor, however, the lack of visuals is not a problem: "I never get to meet physically with my students, nor will I ever see them. This can be discouraging from some students, especially those new to distant education, but I don't find it is a problem." She is a very experienced OU tutor and compares the new medium with the traditional alternative to face-to-face tutorials before the adoption of the audio-graphic software: "Before Lyceum I had telephone tutorials, which I think worked well, but had issues and difficulties of their own. Lyceum has meant a tremendous improvement. It is definitely the best alternative to face-to-face tutorials and has some advantages, such as not having to move from your home to attend them and that students may be more ready to take risks." The tutors also refer to administrative issues related to teaching the course, such as booking rooms, or sending emails with reminders.
Software troubleshooting is grouped within these administrative roles, and it includes training the students to use the software and providing technical support, for which some state that they need to be confident: "we must be familiar and to a certain extent confident with the use of the technology involved (…) I find that students who join the tutorials do not need too much technical help from me. They pick up any new features during the lessons." One tutor, however, feels she is not competent enough to advise on technical matters, although she does refer students to the helpdesk; and another tutor states that it was time-consuming to practise new features of the tools. Two tutors comment on the different management of students in the environment: "in terms of moving from one activity to the other, deciding who goes with whom to work in a breakout room"; "I do feel like a conference moderator," states another.

b) Differences between face-to-face tutorials and online tutorials

Comparing online environments has been mostly restricted to studies of asynchronous text-based CMC (see Creanor, 2002, for a comparison between two online courses, or de Freitas & Roberts, 2004, for a comparison of online and face-to-face versions of the same course). Although some consider that "teaching a language over an audio-graphic conferencing system is a completely different experience from teaching a language in a face-to-face environment; any attempt at comparing the two should be disregarded as futile" (de los Arcos & Arnedillo Sánchez, 2006, p. 91), an initiative was undertaken by the Department of Languages at the UK Open University to compare the online and face-to-face tuition modes of the beginners' courses. A study of the possible differences between face-to-face and online students looked at their personal details, attendance at tutorials, perceived benefits of attending tutorials, assessment scores, and dropout rates, and found that there were not many differences between online and face-to-face learners except in course results. The responses suggested that despite some technical problems, the software was generally liked and perceived to provide a good, convenient learning environment (Rosell-Aguilar, 2006b).

In the study that this paper reports on, the tutors' responses are quite varied, but some point out that although the differences are "many on the surface," there are not that many differences "on the very basics: offering support and helping each student's learning process." Five main aspects can be extracted from the responses. Two of these are software limitations: the lack of paralinguistic clues, and the limitations that arise from only being able to have one person speaking at a time when everyone is in the same room. The other three are linked to one another and all have an effect on the tutors' roles and the tutorials: teacher talk, the atmosphere in tutorials, and contact with students.

A) The lack of paralinguistic clues means tutors are unsure of the students' reactions to what they are saying: "in face2face we tend to be very theatrical when using the target language but all that is lost in Lyceum," "it can be more difficult to appreciate what really is going on with the students (signs of boredom, puzzlement…)."

B) The limitation of one person being able to speak at a time (technically more than one can, but in practice the utterances can become unintelligible) affects feedback: "[it is] difficult to correct or advise discreetly as [it is] not possible to have a quiet word with one student."
C) Teacher talk is affected by the previous two factors. Because of the lack of visual clues, tutors comment "I spend more time talking / explaining than I would in face to face," and "my "teacher talk" has to be more controlled than it would in a classroom (as I communicate with gestures)." One tutor thinks that "this makes it particularly difficult to make a student feel at ease because we can't see reactions. Even the most competent of students still feel nervous in tutorials." Similarly, the speaking limitations mean that "the tutor has to moderate the speakers in a more formal/rigid way. Sometimes I feel that I use too much of their talking time." Another tutor thinks that this means that "students are not so likely to interact independently."

D) The knock-on effect continues to the atmosphere in tutorials and the sense of community among the students. Several tutors say they find it "more difficult to create a relaxed enjoyable atmosphere." One of them qualifies that "because your only tool is your voice, you cannot go into the classroom with a big smile on your face because nobody can see you, so that warmth has to come from the way you deal with the other people in the conference. Also Lyceum requires more patience from tutor and students alike, and a lot of understanding and respect for your fellow participants." Another tutor adds: "The online tutor must of course, provide these feelings of belonging and trust without physical contact." With regards to the sense of community, one tutor misses the degree of intimacy that socialising after the tutorial can bring, but another says: "I actually felt I 'knew' my on-line students better because we 'met' fortnightly and we did not have the long gaps between sessions." This is because face-to-face tutorials are usually longer and they are scheduled every five weeks, whereas most online tutorials take place every two weeks. She clarifies that "before and after the tutorials, the atmosphere was more relaxed, whereas the tutorial itself was very structured." Students are encouraged to meet online outside tutorials, and one of the tutors thinks this helps develop a learning partnership.

E) Finally, several tutors comment that their online students communicate outside tutorials via e-mail more than their face-to-face students. They hypothesise that this is because they are used to communicating with them via the computer. One tutor finds this has a beneficial effect: "Sending an email is much easier than making a phone call and I find that [online] students do send lots of emails, and respond with a short answer to emails sent to them. They use them for comments on forthcoming events, travels, and they drop a line on themselves, dog has died, on the weather... Many of them try to do this in Spanish, even at beginners' level; I am sure they wouldn't try in a letter or by phone. So this is the apparent contradiction I have experienced: online tutors may find that they actually have a much more fluid and friendly relationship with their students than face-to-face tutors relying on tutorials or the phone for contact."
In a comment that serves to provide an overall picture, a tutor who is new to ODL says: "I have been teaching the same module in face-to-face format and on its online version for about 18 months now, and I can't see online learning as an issue of one medium being better or worse than the other, online worse than face-to-face or the other way around, but rather as an issue of learning how to get the best of both mediums. In the best of the Open Uni ethos, both students and tutors are together in that process." One very experienced OU language tutor compares the roles using an entertainment metaphor: "I see the face to face tutor like a stand up show person, or musician with a live audience whereas the online tutor is running the show from a radio station." This is a suitable metaphor, similar to that of the online teacher as an orchestra conductor (Felix, 2003): the face-to-face tutor is present on the stage, while the online tutor is only a voice; but, like a radio presenter, it is the familiar voice that brings humanity and warmth to the environment, accompanies the students in their learning journey, carries the programme and is in charge of the content, focus and pace, based on an original plan but always shaped by the contributions of those listeners who become active and take part.

In summary, the tutors agree that different skills are needed and new demands are placed on them, but what seems to arise from the comments is that the differences between the online and face-to-face environments are not substantial: "I do not believe online tutoring to lack in anything, I just think that the tutor needs to adapt to the medium and makes the most of it. And when one does, s/he finds s/he gets to know their students well, can support them effectively and even create a group feeling among 'all'."

Other considerations for the use of audio-graphic conferencing in distance language learning

One key issue is that even though audio-graphic tutoring may not involve many more or different skills than tutoring face-to-face, and tutors who were inexperienced at using the tool report being able to adapt with relative ease, audio-graphic tuition is conceived as an alternative to, and not a replacement for, face-to-face tuition. In the context presented it has replaced telephone tuition: it is immensely more popular in numbers than telephone tuition was, and with the visual modules it can offer better stimuli for group interaction, with the exception of the possibility of technical and sound problems. These, however, should be no more of a deterrent to using audio-graphic SCMC than a crackling line should have put people off using the telephone years ago.

A matter that was referred to above is the multimodality that is inherent to the audio-graphic environment: managing all the different modes and signals (such as hands held up, votes, text chat, and objects, on top of voice), often at the same time, is a characteristic of the environment that online tutors need to get used to, or a skill that they need to develop; yet this was not mentioned in the respondents' replies. One possible explanation is that coping with multimodality is no longer limited to the realm of the computer, but is becoming commonplace. Consider television as another multimodal tool: 24-hour TV news channels show a newsreader making announcements while statistics, graphics, and images appear on the screen and the main headlines scroll as text. Users are asked to contribute by sending in photos, videos, and comments by email or SMS, which scroll or pop up onscreen. Many television viewers use shopping channels, where a demonstration is featured while further details and prices are shown at the side of the screen, and updates on stock amounts scroll, whilst in the corners we usually find the logo of the channel and the telephone number or URL required to make a purchase.
Digital television viewers have access to additional content on many channels. On children's television, programmes like the BBC's Level up present a completely multimodal approach to television, where presenters, text, images, and the children's contributions via their webcams form only part of a whole that incorporates a website, email contributions, and the viewers' blogs. Whilst not every user/viewer/learner will like this, multimodality is a part of modern life, and therefore its impact on a learning tool may not be as big an issue as had been anticipated, at least for students, who, although involved in the tutorial, are not part of its management. The tutor, however, could potentially be overwhelmed by the amount of actions required whilst online, although in a previous study of tutor impressions of teaching with audio-graphic software, the multimodal aspect of it was only perceived as a benefit of the tool, not a challenge or problem (Rosell-Aguilar, 2006a).

To address the issue of increased workload in the context of written CMC, Thorpe and Twining (2001) proposed separating the roles of the tutor and conference moderator. This is feasible in the written CMC context with one nation-wide moderator who does not have to reply immediately, but in the synchronous audio-graphic context it would be unmanageable or very expensive to staff. One option that might lighten the load for tutors would be to work in tandem. Tutorial groups with low attendance could be merged and the two tutors could have clear roles, deciding beforehand who leads each activity, for example. This way, while one talks the other could be managing different modules or typing new words into the chat, or they could separate into different breakout rooms and give more personalised feedback, among other possibilities. This option has been suggested to tutors, but there is no information available on the take-up or success of the proposal.

Finally, research into audio-graphic tools has mainly been exclusive to researchers in the few institutions that have access to these tools, as is the case of Lyceum at the Open University. Although that research is of interest to the wider research and teaching community, its applicable value has, because of that exclusivity, been quite limited. The Open University intends to phase out Lyceum in favour of a new Moodle-based open-content audio-graphic synchronous conferencing tool, expected to be available in 2008. The open content initiative means that the new software (which is expected to provide audio- and video-conferencing as well as other tools) should be available for use by other education providers. With this development, the insight into teaching with such tools becomes more valuable to other language learning professionals and institutions.

Conclusion

In this paper we have presented the data collected about tutor roles in the audio-graphic environment, and the differences between teaching face-to-face and online tutorials. The roles that the participants report as audio-graphic tutors match those previously identified for tutors of higher-level language tutorials. However, because the participants taught a beginners' course, some issues reported by previous research (such as preparation work between tutorials, or eliciting higher-level language) did not apply.
Instead, tutors rose to the challenge of engaging the learners with the limited language they could use, since, for many of the students, this was the first time they had studied or used the foreign language. Tutors do need specific skills to operate the audio-graphic conferencing software and to know its strengths and limitations; however, there is no need to go beyond "average" computer literacy skills and the available training to achieve this. The tutors did not need to develop their own materials, but developed a sense of what would work with the group and within the environment, and therefore made the necessary changes to the proposed lesson plans to fit their particular tutorials. However, as this survey was undertaken after the tutors had been teaching the online course for only one year, their skills with the tool and their views may have changed as they gained more experience of using it. Their management skills, including booking rooms or sending emails with the time of the tutorial, need not be any different from those of a face-to-face tutor; indeed, aside from setting procedures and differences in teacher talk, they did not find tutoring online differed much from the traditional face-to-face environment. The tutors' own style appeared at the top of Hampel and Stickler's pyramid of online tutoring skills. Whilst this is true of face-to-face teaching as well, it seems to become even more essential in an online environment. No matter how many tools, affordances, or opportunities for communication the software and environment provide, it is the tutor who will make the experience a failure or a success. Eklund-Braconi (2005) came to the conclusion that in an audio-graphic learning environment a success factor is the teacher, who has to "fill the gap in the [virtual] space." But the tutor's role goes beyond "filling the gap" to bringing humanity and warmth, having the ability to communicate and manage the environment, and keeping the momentum and also the focus, much like a radio DJ would. For this reason, tutor training and staff development that go beyond the technical and focus on the social aspects of tutoring are essential in the provision of audio-graphic online tuition.
Vibration protection of sensitive components of infrared equipment in harsh environments This article addresses the principles of optimal vibration protection of the internal sensitive components of infrared equipment from harsh environmental vibration. The authors have developed an approach to the design of external vibration isolators with properties to minimise the vibration-induced line-of-sight jitter which is caused by the relative deflection of the infrared sensor and the optic system, subject to strict constraints on the allowable sway space of the entire electro-optic package. In this approach, the package itself is used as the first-level vibration isolation stage relative to the internal highly responsive components. It was predicted analytically, and confirmed experimentally, that the proposed vibration isolation system would be capable of a sixfold reduction of the dynamic response of the infrared sensor as compared to the case of rigid mounting of the entire package. Introduction Infrared (IR) imagers enhance tremendously the ability to detect and track ground, sea and air targets, and also to navigate at nighttime [2,6]. Their operating principle is based on the simple fact that warmer objects radiate more and cooler objects radiate less. Since their noise figure strongly depends on the operating temperature of the IR detector, a high-resolution imager requires cryogenic cooling down to 80 K and a high level of optic stabilisation. Modern sophisticated airborne thermal imagers, which require compact design, low input power and long life-times, often rely on closed-cycle cryogenic coolers. Stirling coolers are especially suitable for such applications. Compared to Gifford-McMahon and Joule-Thomson cycles, Stirling offers more than twice the cooling performance in the cooling power range 1-100 W. The application of new technologies allows the life-time figures for Stirling coolers to be well beyond 40,000 hours [5]. Stirling coolers, which may be of both split and integral types [6,11], typically comprise two major components: a compressor and an expander. In a split cooler these are interconnected by a flexible gas transfer line (a thin-walled stainless steel tube of a small diameter) to provide for maximum flexibility in the system design and to isolate the IR detector from the vibration interference which is produced by the compressor. In the integral cooler these components are integrated in a common casing. The reciprocating motion of a compressor piston provides the required pressure pulses and the volumetric reciprocal change of a working agent (helium, typically) in the expansion space of an expander. A displacer, which is located inside a cold finger, shuttles the working agent back and forth from the cold side to the warm side of the cooler. During the expansion stage of the thermodynamic cycle, heat is absorbed from the cold finger tip (cold side of a cycle), and during the compression stage, heat is rejected to the ambient from the cold finger base (warm side of a cycle) [6,11]. It is a modern tendency to mount the IR sensor directly upon the cold finger tip. Such a concept, which is known as Integrated Dewar Cooler Assembly (IDCA), allows for practical elimination of the integration losses and better temperature uniformity across the IR sensor, as compared, for example, with the old-fashioned sleep-on design [6]. The typical layout of an IDCA, which relies on the integral Stirling cryocooler RICOR model K508A, is shown in Fig. 1.
Figure 2 shows the schematics of the integrated electro-optic package containing the integral cryogenic cooler 1 which carries the IR sensor 2 upon the cold finger tip 3. Both the cold finger and the IR sensor are located inside the vacuum dewar envelope 4. The cryogenic cooler, along with the appropriate optics 5, is mounted upon a single rigid structure (optic bench) which is required for proper optic alignment and stabilisation and also for placement of accompanying electronics. Figure 3 shows a typical electro-optic device which relies on the integral RICOR model K508A cryogenic cooler. The well-known drawback of electro-optic devices which rely on the IDCA concept is their high sensitivity to external broadband random vibration. This is because, since it is desirable to decrease the heat conductivity of the cold fingers, they are typically thin-walled and manufactured of low-conductive alloys such as stainless steel or titanium. A cold finger which carries an IR sensor may be treated as a cantilever beam with an end lump mass. As a result of the low stiffness and damping intrinsic in such a structure, the IR sensor behaves as a lightly damped dynamic system, the principal natural frequency of which falls into the frequency range of external disturbances. Wideband random excitation, therefore, may give rise to a quasi-resonant dynamic response of the sensor relative to the rest of the optic system. As the level of line-of-sight jitter becomes comparable with the spatial resolution of a particular sensor, the vibration components contaminate the video signal, causing drastic degradation in the performance of the IR imager. Figure 4 shows the experimentally measured universal relative transmissibility of the IR sensor in a typical IDCA design. From curve-fitting based identification, the natural frequency is 816 Hz and the loss factor is 3.2%. Such a low loss factor and natural frequency explain the high vulnerability of the IDCA design to external random wideband vibration, the excitation spectrum of which usually contains essential frequency components up to 2000 Hz. The known approaches to ruggedizing of vibration-sensitive components involve different combinations of stiffening and damping treatments. These methods aim to either reduce the resonant amplitudes by damping, or avoid them altogether by increasing the relevant resonant frequencies to above the excitation frequency. However, the traditional methods of ruggedizing become inadequate for the vibration control of the cold finger of a cryogenic cooler. An increase in the wall thickness of a cold finger and application of additional supports for extra stiffness lead to an excessive growth in heat loading through the increase in conductivity and shuttle losses. Since outgassing inside the high vacuum envelope is a concern, the application of the typical polymer materials for damping becomes practically impossible.
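To give a feel for the numbers, the cold finger can be modelled as a cantilever of tip stiffness k = 3EI/L^3 carrying the sensor as a lumped tip mass, so that f_n = (1/2pi)sqrt(k/m). A minimal sketch follows; the tube dimensions and tip mass are hypothetical, chosen only to show that a thin-walled stainless steel cold finger naturally lands in the several-hundred-hertz range reported here (816 Hz).

import numpy as np

# Hypothetical dimensions for illustration only (not taken from the article).
E = 193e9                  # Young's modulus of stainless steel, Pa
D_o, D_i = 5.0e-3, 4.7e-3  # outer/inner diameter of the thin-walled tube, m
L = 40e-3                  # cold finger length, m
m_tip = 5e-3               # lumped mass of the IR sensor at the tip, kg

I = np.pi * (D_o**4 - D_i**4) / 64        # second moment of area of the tube
k = 3 * E * I / L**3                      # tip stiffness of a cantilever beam
f_n = np.sqrt(k / m_tip) / (2 * np.pi)    # natural frequency, Hz
print(f"f_n = {f_n:.0f} Hz")              # ~550 Hz for these illustrative numbers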
To combat the problem of excessive response of the IR sensor the designers are now looking at using vibration isolators. As the static and dynamic alignment of the IR sensor relative to the rest of the optical system is a concern, the vibration isolation of the entire electro-optic package is the only option. It is important to note that the large dynamic response of the entire isolated package involves "solid body" modes of motion and takes place at typically low frequencies. Since the deflection of the IR sensor relative to the optic system is not involved in such motion, the quality of the IR imaging of remote targets is not affected. Figure 5 shows the schematics of such an isolated IR device. Vibration isolation is the simplest, most widespread and well-studied method of vibration protection [4,7]. As is known, the best isolation in the typical high frequency span may be achieved when the natural frequency and loss factor of the vibration isolator are low. Unfortunately, such an isolator is only feasible in applications where intensive shock, random excitation and constant accelerations (g-loads) are not typical. When equipment containing such a low frequency and lightly damped vibration isolator is exposed to the aforementioned harsh environmental conditions, which are typical for airborne applications, the problem of excessive deflections becomes a serious concern. These conditions encounter wideband random excitation (e.g. flight through turbulent flow) and high g-loads which are experienced by the airborne vehicle at take-off, climb, high-speed turn, speed-up, etc. It was recently revealed that newly designed jet fighters (e.g. Eurofighter) might develop accelerations up to 12 g. As a result, plenty of free "rattle space" must be allowed around the equipment, and the vibration isolators, thermal and electrical interfaces need to be of special design. Such an approach complicates the entire system and makes it both unreliable and cost-ineffective. An increase in the natural frequency and loss factor of an isolator allows for close control of the above deflections. However, even where analysis of the isolation system is carried out, it is traditionally the response of the entire package that is optimised [1,4,7]. Such a design eventually requires the application of a heavily (critically) damped vibration isolator. It is a widespread opinion, supported by the leading manufacturers of vibration isolators, that highly damped isolation materials provide the only choice for adequate protection of electronic equipment [1]. Such a concept completely misses the purpose of using isolators, namely to protect the sensitive internal components, and eventually calls for the application of inadequate, highly damped vibration isolators with poor vibration isolation in the high frequency range which typically contains the natural frequencies of the above critical components. The novel approach developed in this paper is the use of vibration isolators with properties to minimise the relative dynamic response of the internal sensitive components (the IR sensor relative to the optic system, in this instance), subject to the constraints imposed on the peak deflections of the entire IR package. Such a design approach uses the existing equipment as the first-level vibration isolation stage with respect to the internal sensitive components, and is based on the authors' ideas [8-10] in application to vibration protection of critical components in electronic equipment.
In this article, the authors develop the mathematical model of the two-degree-of-freedom (TDOF) vibration protection system and the procedure of its optimisation. The results of numerical analysis are backed up by experiment. Under the "white noise" harsh random vibration test (12 g rms; 10 to 2000 Hz) the dynamic response of the IR sensor was reduced sixfold (from 14.4 µ rms to 2.3 µ rms) as compared with the case of rigid mounting of the entire package. Such a level of line-of-sight jitter meets the customer specification of 3 µ rms. The use of such an isolation system is very attractive across the industry, as no redesign of the sensitive internal components of existing equipment is required. An additional benefit is that considerable vibration protection of the accompanying sensitive electronics and optics may be attained. Experimental study of dynamic properties of cold finger The experimental rig shown in Fig. 6 was created to study the dynamic properties of the system. The cryogenic cooler 1 (RICOR, model K508A), which carries the IR sensor dummy upon the cold finger tip, is mounted over the vibration exciter 2 (Ling Dynamic Systems, model V550). The control accelerometer 3 (Bruel & Kjaer, Type 4393) is mounted on the fixture. Its signal is fed through the charge amplifier 4 (Bruel & Kjaer, Type 2635) to the dual-channel vibration analyser 5 (SignalCalc Ace, Data Physics Corporation) and simultaneously to the vibration controller/power amplifier 6 (Ling Dynamic Systems, model DVC 48/PA550L). The dual-beam fibre laser vibrometer 7 (Polytec, model OFV 502) measures the velocity of the IR dummy relative to the cold finger base. In the experiments, the vacuum envelope covered the cold finger, and measurements were taken through the transparent quartz window. Figure 7 shows the layout of the experimental rig (a) with and (b) without the vacuum envelope. The cryogenic cooler was subjected to the "white noise" harsh random vibration test (12.4 g rms, 10 to 2000 Hz). It is important to note that prior to the experiment the cryocooler was pre-cooled to simulate the actual damping properties intrinsic to the cold finger. The experimentally measured relative transfer function (transmissibility) of the cold finger is shown in Fig. 4. This curve indicates that the dynamic system under investigation behaves as a single-degree-of-freedom (SDOF) system in the frequency range encountered. From curve-fitting, the modal natural frequency Ω and loss factor ζ were estimated to be Ω/2π = 816 Hz and ζ = 0.032, respectively. Figure 8 shows the actual PSD of dynamic deflection of the IR sensor relative to the base, and this indicates an overall level of 14.4 µ rms. From the spatial resolution of the typical IR sensor, only 3 µ rms of the overall relative deflection of the cold finger tip is allowed for smooth IR imaging. Mathematical model of cold finger As the cold finger with the mounted IR sensor behaves as an SDOF system in the frequency range encountered, the mathematical description of its dynamic behaviour under wideband random excitation may be carried out using the complex universal absolute transmissibility [3,4]

$$T_A(j\omega)=\frac{1+2j\zeta\,\omega/\Omega}{1-(\omega/\Omega)^2+2j\zeta\,\omega/\Omega}\tag{1}$$

and the universal relative complex transmissibility

$$T_R(j\omega)=\frac{(\omega/\Omega)^2}{1-(\omega/\Omega)^2+2j\zeta\,\omega/\Omega}\tag{2}$$

where ω is the angular frequency and j = √−1 is the imaginary unit. The PSD of the relative deflection of the IR sensor, which is defined by the function Z(t), may be calculated [3,4] in the form

$$S_Z(\omega)=\frac{|T_R(j\omega)|^2}{\omega^4}\,S_{\ddot{Y}}(\omega)\tag{3}$$

where S_Ÿ(ω) is the single-sided PSD of the base acceleration, given by the function Ÿ(t).
The RMS of relative deflection may be derived by the integration [3,4]

$$\sigma_Z=\left[\int_0^\infty S_Z(\omega)\,d\omega\right]^{1/2}\tag{4}$$

By making use of the values of the natural frequency and loss factor obtained from curve-fitting and the numerical value of the excitation PSD S_Ÿ(ω) = 0.08 g²/Hz over 10 to 2000 Hz, we calculate the PSD of relative deflection and the RMS value of relative deflection to be 14.9 µ rms, which is fairly close to the experimentally measured value. Model of vibration protection system and general relationships Figure 9 shows the model of a TDOF vibration protection system, where the primary sub-system has the modal natural frequency Ω₁ and loss factor ζ₁ and represents the vibration-isolated IR package. The secondary sub-system has the modal natural frequency Ω and loss factor ζ and represents the cold finger with the mounted IR sensor. The base vibration is given by the function Y(t). The absolute deflections of the primary and secondary subsystems are X₁(t) and X(t). The relative deflection of the primary subsystem to the base is Z₁(t) = X₁(t) − Y(t), and the relative deflection of the secondary subsystem to the primary subsystem is Z(t) = X(t) − X₁(t). It is important to note that the mass of the secondary subsystem is negligibly small as compared to the primary subsystem. In this particular case the mass of the sensor and the effective mass of the cold finger were approximately 200 times lighter than the entire system. Therefore, the dynamic response of the primary system may be considered to be independent of the secondary subsystem. It may also be thought of as a vibration input to the secondary subsystem. The absolute and relative complex transmissibilities of the primary subsystem are derived similarly to (1) and (2) in the form

$$T_{A1}(j\omega)=\frac{1+2j\zeta_1\,\omega/\Omega_1}{1-(\omega/\Omega_1)^2+2j\zeta_1\,\omega/\Omega_1},\qquad T_{R1}(j\omega)=\frac{(\omega/\Omega_1)^2}{1-(\omega/\Omega_1)^2+2j\zeta_1\,\omega/\Omega_1}\tag{5, 6}$$

The PSD and RMS of absolute acceleration of the primary subsystem are calculated in the form [3,4]

$$S_{\ddot{X}_1}(\omega)=|T_{A1}(j\omega)|^2\,S_{\ddot{Y}}(\omega),\qquad \sigma_{\ddot{X}_1}=\left[\int_0^\infty S_{\ddot{X}_1}(\omega)\,d\omega\right]^{1/2}\tag{7, 8}$$

Similarly, the PSD and RMS of relative deflection of the primary subsystem are obtained in the form

$$S_{Z_1}(\omega)=\frac{|T_{R1}(j\omega)|^2}{\omega^4}\,S_{\ddot{Y}}(\omega),\qquad \sigma_{Z_1}=\left[\int_0^\infty S_{Z_1}(\omega)\,d\omega\right]^{1/2}\tag{9, 10}$$

In applying the 3σ rule [7], which provides for the instantaneous level of normally distributed deflection to be less than 3σ with a probability of 99.73%, and accounting for the additional quasi-static relative deflection due to the g-loading, the total peak relative deflection may be derived by means of the expression

$$z_1^{peak}=3\sigma_{Z_1}+\frac{Gg}{\Omega_1^2}\tag{11}$$

where G is the specified level of g-loading and g is the gravitational acceleration. The vibration of the primary subsystem may be thought of as the excitation to the secondary subsystem, therefore the PSD and RMS of relative deflection may be calculated as follows:

$$S_Z(\omega)=\frac{|T_R(j\omega)|^2}{\omega^4}\,S_{\ddot{X}_1}(\omega)\tag{12}$$

and

$$\sigma_Z=\left[\int_0^\infty S_Z(\omega)\,d\omega\right]^{1/2}\tag{13}$$

Statement and solution to optimal problem As stated above, even large motion of the entire IR package, which does not give rise to excessive displacement of the IR detector relative to the optic system, does not affect the quality of imaging of remote targets. Therefore, the major objective of optimal design of such a vibration protection system is the minimisation of the relative deflection of the secondary subsystem to the primary one, subject to constraints on the peak deflections of the primary subsystem to the base. These constraints are typically imposed by the design of the electro-optic device enclosure, electrical harness and thermal interfaces, etc. Mathematically this may be expressed in the form

$$\sigma_Z(\Omega_1,\zeta_1)\to\min\quad\text{subject to}\quad z_1^{peak}(\Omega_1,\zeta_1)\leqslant\Delta\tag{14}$$

where Δ is the allowable peak deflection of the primary subsystem.
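Before solving (14), a quick numerical cross-check of Eqs. (3) and (4) is useful. The sketch below, written in Python and assuming the viscous form of the transmissibility reconstructed above, integrates the cold finger's relative-deflection PSD over the test band; with Ω/2π = 816 Hz, ζ = 0.032 and S_Ÿ = 0.08 g²/Hz it reproduces the quoted 14.9 µ rms.

import numpy as np

g = 9.81                                    # gravitational acceleration, m/s^2
f = np.linspace(10.0, 2000.0, 200_000)      # test band, Hz
w = 2 * np.pi * f
df = f[1] - f[0]
S_base = 0.08 * g**2 * np.ones_like(f)      # base-acceleration PSD, (m/s^2)^2/Hz

Wn, zeta = 2 * np.pi * 816.0, 0.032         # cold finger parameters from curve-fitting

# Eq. (3): PSD of relative deflection, S_Z = |T_R|^2 S_Ydd / w^4
S_Z = S_base / np.abs(Wn**2 - w**2 + 2j * zeta * Wn * w)**2

# Eq. (4): RMS by numerical integration over the band
sigma_Z = np.sqrt(S_Z.sum() * df)
print(f"sigma_Z = {sigma_Z * 1e6:.1f} um rms")   # ~14.9 um rms, as in the text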
Since the natural frequency and loss factor of the cold finger are not supposed to be altered, we consider that Ω/2π = 816 Hz and ζ = 0.032 (as obtained above from curve-fitting). The PSD of the excitation is S_Ÿ(ω) = 0.08 g²/Hz in the frequency range 10-2000 Hz. The maximum level of g-loading is 12 g. The allowable peak deflection of the electro-optic package is Δ = 0.5 mm. The remaining two variables, namely the natural frequency and loss factor of the primary subsystem Ω₁ and ζ₁, may be manipulated to meet the conditions of (14). The expressions (1), (2), (5), (7)-(14) were involved in the design of an appropriate MS Excel worksheet. The procedure of numerical optimisation relies on the application of the standard Solver add-in procedure. As a result of the solution to the optimal problem (14), the "optimal" primary vibration isolator was determined to have the parameters

$$\Omega_1/2\pi=110\ \text{Hz},\qquad \zeta_1=0.4\tag{15}$$

Under the condition that the primary vibration isolator is chosen in accordance with (15), the overall level of vibration experienced by the electro-optic package is σ_Ẍ₁ = 5 g (an attenuation factor of 2.9 as compared with the base overall vibration level). The peak deflection of the entire package relative to the base is z₁^peak = 0.5 mm, of which the dynamic component is 0.253 mm and the quasi-static component is 0.247 mm. At the same time, the overall level of relative dynamic deflection of the IR sensor to the cold finger base is σ_Z = 2.35 µ rms, which indicates a sixfold attenuation as compared to the case of rigid mounting of the entire IR package. Figure 10 compares the PSD of excitation acceleration and that of the vibration-isolated package. Figure 11 shows the PSD of the relative deflection of the electro-optic package to the base, showing an overall level of 0.084 mm rms. Figure 12 shows the PSD of relative deflection of the IR sensor to the cold finger base, showing the mentioned overall level of 2.35 µ rms (compare with Fig. 8). Analysis of sensitivity Since the vibration protection system relies on an optimised vibration isolator, it is important to determine the sensitivity of the dynamic response of the IR sensor to the properties of the primary isolator. Figure 13 shows, for example, the variation of the dynamic responses of both primary and secondary sub-systems in response to the variation in the loss factor of the primary isolator, the natural frequency of which remains constant, Ω₁/2π = 110 Hz. An analysis of this figure indicates the low sensitivity of the dynamic responses of both primary and secondary subsystems to relatively large deviations of the loss factor from its optimal value.
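The optimisation itself is easy to replicate outside MS Excel. The sketch below, again assuming the viscous transmissibility forms of Eqs. (5)-(13) and the optimum of Eq. (15) as reconstructed above, evaluates the package response, the peak deflection constraint and the IR sensor jitter; a coarse grid search over (Ω₁, ζ₁) stands in for the Solver add-in and lands at essentially the same optimum.

import numpy as np

g = 9.81
f = np.linspace(10.0, 2000.0, 200_000)
w = 2 * np.pi * f
df = f[1] - f[0]
S_base = 0.08 * g**2 * np.ones_like(f)       # base-acceleration PSD, (m/s^2)^2/Hz
G, delta = 12.0, 0.5e-3                      # g-loading; allowable peak deflection, m
W2, z2 = 2 * np.pi * 816.0, 0.032            # cold finger (secondary subsystem)

def tdof(W1, z1):
    """Package acceleration (g rms), package peak deflection (m), sensor jitter (m rms)."""
    r = w / W1
    S_x1 = np.abs((1 + 2j*z1*r) / (1 - r**2 + 2j*z1*r))**2 * S_base       # Eqs. (5), (7)
    S_z1 = S_base / np.abs(W1**2 - w**2 + 2j*z1*W1*w)**2                  # Eq. (9)
    z1_peak = 3 * np.sqrt(S_z1.sum() * df) + G * g / W1**2                # Eq. (11)
    S_z2 = S_x1 / np.abs(W2**2 - w**2 + 2j*z2*W2*w)**2                    # Eq. (12)
    return np.sqrt(S_x1.sum() * df) / g, z1_peak, np.sqrt(S_z2.sum() * df)

print(tdof(2 * np.pi * 110.0, 0.4))   # ~5 g, ~0.5 mm, ~2.4e-6 m: consistent with the text

# Coarse grid search replacing the Solver add-in, Eq. (14):
grid = [(fn, z1) for fn in np.arange(60.0, 200.0, 5.0) for z1 in np.arange(0.10, 0.95, 0.05)]
feasible = [(p, tdof(2 * np.pi * p[0], p[1])) for p in grid]
best = min((pr for pr in feasible if pr[1][1] <= delta), key=lambda pr: pr[1][2])
print(best)                           # optimum lands near 110-120 Hz with heavy damping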
Choice of vibration isolator In the experiments we used standard, commercially available Shock Tech Cable Mounts (see also Enidine, Barry Controls, Aeroflex International [12]) providing the desired loss factor and natural frequency. Such cable mounts are of all-metal design, constructed of stainless steel cable and aluminium bars, and are especially intended to withstand severe environmental conditions while demonstrating no outgassing and ageing, long fatigue life and persistence of parameters over a wider temperature range, as compared with polymer isolators. The wire rope cables in these isolators are inherently damped through internal wire flexure hysteresis, thus providing for the loss factor to be in the range of 30% (treated cable) [12]. Since the quality of the vibration protection system depends strongly on the parameters of the primary suspension, this feature is probably the most critical for the choice of a proper vibration isolator. Test rig Figure 14 shows the schematics of the experimental rig. In general, the notations are similar to those in Fig. 6. The cryogenic cooler is suspended from the vibration exciter table by means of two Shock Tech Cable Mounts 8. An additional accelerometer and charge amplifier are used for measuring the dynamic response of the cryogenic cooler, which, in this case, is definitely different from the motion of the vibration exciter. During the experiments the vacuum envelope was mounted to cover the cold finger and measurements were taken through the transparent window. Results of measurements Figure 16 shows the experimentally measured absolute transmissibility of the primary subsystem (label Experiment). From curve-fitting, the natural frequency and loss factor are estimated as 117.5 Hz and 0.26, respectively. These values are only slightly different from the optimal values (15). Figure 17 compares the PSD of excitation and acceleration of the primary subsystem. The obtained results are in close agreement with the analytical prediction (see Fig. 10). Figure 18 shows the experimentally measured PSD of the relative deflection of the primary subsystem, showing a 0.24 mm peak deflection (the 3σ rule was applied to the 0.08 mm rms value). These experimental results are also in good agreement with the analytical prediction in Fig. 11. Finally, Fig. 19 shows the experimentally measured PSD of the relative deflection of the cold finger tip, which indicates an overall level of 2.26 µ rms (compare this with 2.35 µ rms from the analytical prediction). The closeness of the obtained results is evident. Conclusions In this article the authors suggest that a relatively heavy electro-optic device should be used as the first-level vibration isolation stage relative to the sensitive internal components of the IDCA package. For this purpose they developed an approach to the optimal design of vibration isolators with properties chosen to minimise the response of the IR sensor relative to the rest of the optic system, subject to strict constraints on the allowable sway space of the entire electro-optic package. It was predicted analytically, and confirmed experimentally, that the proposed vibration protection system would be capable of a sixfold reduction in the relative dynamic displacement of the IR sensor as compared with the case of its rigid mounting.
The proposed approach may provide benefit across a wide range of applications. The vibration protection arrangement described may be applied widely when high-quality, rugged and inexpensive dynamic protection of sensitive IR equipment is required. The use of such an isolation system is very attractive across the industry, as no redesign of the sensitive internal components of existing equipment is required. Additionally, considerable vibration protection of the accompanying sensitive electronics and optics of the IDCA package may be achieved. Fig. 10. Excitation and dynamic response of the optimised vibration isolator. Fig. 16. Experimentally measured absolute transmissibility of the primary vibration isolator and estimation of its modal parameters by curve-fitting.
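The modal identification referred to in the Fig. 16 caption (117.5 Hz, loss factor 0.26) can be reproduced by least-squares fitting the SDOF absolute-transmissibility magnitude of Eq. (1) to the measured curve. A minimal Python sketch follows; the "measured" arrays here are synthesised stand-ins, since the experimental data are only shown graphically in Fig. 16.

import numpy as np
from scipy.optimize import curve_fit

def abs_transmissibility(f, fn, zeta):
    # Magnitude of the SDOF absolute transmissibility, Eq. (1), vs frequency in Hz
    r = f / fn
    return np.sqrt((1 + (2 * zeta * r)**2) / ((1 - r**2)**2 + (2 * zeta * r)**2))

# Stand-in "measurement": the Fig. 16 parameters plus 2% multiplicative noise
rng = np.random.default_rng(0)
f_meas = np.linspace(20.0, 500.0, 240)
T_meas = abs_transmissibility(f_meas, 117.5, 0.26) * (1 + 0.02 * rng.standard_normal(f_meas.size))

(fn_hat, zeta_hat), _ = curve_fit(abs_transmissibility, f_meas, T_meas, p0=[100.0, 0.2])
print(f"fn = {fn_hat:.1f} Hz, loss factor = {zeta_hat:.2f}")   # recovers ~117.5 Hz, ~0.26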
Memory matching features bias the ensemble perception of facial identity Introduction: Humans have the ability to efficiently extract summary statistics (i.e., the mean) from a group of similar objects, referred to as ensemble coding. Recent studies have demonstrated that ensemble perception of simple objects is modulated by the visual working memory (VWM) task through matching features in VWM. However, few studies have examined the extending scope of such a matching feature effect and the influence of the organization mode (i.e., the way of combining memory matching features with ensemble properties) on this effect. Two experiments were done to explore these questions. Methods: We used a dual-task paradigm for both experiments, which included a VWM task and a mean estimation task. Participants were required to adjust a test face to the mean identity face and report whether the irregular objects in a memory probe were identical to or different from the studied objects. In Experiment 1, using identity faces as ensemble stimuli, we compared participants' performances in trials where a subset color matched that of the studied objects to those of trials without color-matching subsets. In Experiment 2, we combined memory matching colors with ensemble properties in common region cues and compared the effect with that of Experiment 1. Results: Results of Experiments 1 and 2 showed an effect of the VWM task on high-level ensemble perception that was similar to previous studies using a low-level averaging task. However, the combined analysis of Experiments 1 and 2 revealed that memory matching features had less influence on mean estimations when matching features and ensemble properties were combined in a common region than when combined as parts of a complete unit. Conclusion: These findings suggest that the impact of memory matching features is not limited by the level of the stimulus feature, but can be modulated by the organization between matching features and ensemble target properties.
KEYWORDS ensemble perception, visual working memory, memory matching feature, high-level ensemble, Gestalt principle Introduction Humans have developed a crucial ability called ensemble perception, in which a group of stimulus properties, ranging from low-level features (e.g., orientation, size, and color) to high-level properties (e.g., facial expression, face identity), are rapidly extracted to form summary statistics, such as a mean or a variance of stimuli (Ariely, 2001;Chong and Treisman, 2005;Haberman and Whitney, 2012;Whitney and Yamanashi Leib, 2018). Past studies have shown that this ability has a mutual effect along with visual working memory (VWM) and have found that summary statistics interact with remembered items within the memory representation (Brady and Alvarez, 2011;Bauer, 2017;Corbett, 2017;Corbin and Crawford, 2018;Utochkin and Brady, 2020;Williams et al., 2021). With respect to the impact of ensemble coding on items stored in VWM, the findings are that estimates of a memorized item were readily biased toward the mean of a subset in the same color as this item (Brady and Alvarez, 2011), or a subset in conformity with Gestalt principles (Corbett, 2017), or a group of homogeneous items (Utochkin and Brady, 2020). Notably, recent work has focused on the influence of VWM on the process of ensemble coding (Bauer, 2017;Epstein and Emmanouil, 2017;Dodgson and Raymond, 2020;Williams et al., 2021;Jia et al., 2022). Several papers have explored whether VWM tasks affect ensemble coding and have reached differing conclusions regarding its impact (Epstein and Emmanouil, 2017;Dodgson and Raymond, 2020;Williams et al., 2021). For example, Epstein and Emmanouil (2017) showed that the precision of mean estimations remained unchanged when irrelevant items were remembered in a VWM task. However, a few studies have found that a bias in average estimates emerged when a VWM task (Williams et al., 2021) or a learning task (Dodgson and Raymond, 2020) was done prior to the averaging task and shared features with a part of the stimuli in an ensemble group. Williams et al. (2021) provided an essential explanation for these results: VWM influences perceptual averaging through a memory matching feature, while an averaging task without such matching features is unaffected by the memorized object. In the study of Williams et al. (2021), observers were asked to estimate the mean orientation of a stimulus set containing two subsets of similar lines in different colors while memorizing a colored irregular object. They found that average estimations of all lines were biased toward the subset with the same color as the irregular-colored item, suggesting the possibility that the VWM task affected mean estimations through matching features (Williams et al., 2021). However, participants may have deliberately devoted more attention to memory matching objects and thereby biased the ensemble estimations.
To rule out this possibility, Williams and colleagues compared ensemble biases between a brief duration (i.e., 150 ms) and a long duration (i.e., 500 ms) of the ensemble display in their Experiment 3 (Williams et al., 2021). Participants should have shown reduced bias in the brief-duration condition because voluntary attentional allocation would be difficult at short presentation durations. The results showed no difference in ensemble bias between the short and long duration conditions, thus excluding the possibility of attentional cueing. Furthermore, the importance of memory matching features in the influence of working memory on ensemble coding was further emphasized by the experiment of Epstein and Emmanouil (2017), who found no connection between the VWM task and the averaging task at the stimulus level without the presence of matching features. Results of their study showed that the accuracy of the size averaging task was unchanged across levels of working memory load (i.e., remembering zero, two, or four items), nor was it affected by the VWM task. Considering these findings together, we can conclude that the existence of an inter-task shared feature is crucial for the influence of a memorized item on averaging estimations for a group of properties (Epstein and Emmanouil, 2017;Williams et al., 2021). Overall, these findings align well with the amplification hypothesis of perceptual averaging (Kanaya et al., 2018), which states that physically salient elements are involuntarily and automatically weighted more than less salient elements in the contribution to average estimations (Kanaya et al., 2018;Iakovlev and Utochkin, 2021). More importantly, Williams's study (2021) expanded the scope of the amplification hypothesis by showing that memory matching items gained more attentional resources and became more salient compared to nonmatching items, which in turn weighted them more heavily in average estimations. All the aforementioned studies have discussed or confirmed the influence of memory matching features on low-level ensemble coding (Epstein and Emmanouil, 2017;Williams et al., 2021). However, few studies have focused on high-level ensemble coding. Thus, little is known about whether the matching feature effect from low-level ensemble coding extends to VWM tasks' influence on high-level perceptual averaging. As for ensemble perception and object memory, the discrepancy in properties of simple and complex stimuli leads to a hierarchical structure, including both low-level and high-level features (Haberman et al., 2015;Christophel et al., 2017;Ding et al., 2017;Whitney and Yamanashi Leib, 2018). High-level and low-level features differ in their functional roles in our environment (Ding et al., 2017;Whitney and Yamanashi Leib, 2018). Low-level features, such as color, orientation, spatial location, and motion, form a cornerstone for object recognition as well as for understanding a scene (Oliva and Torralba, 2006). With respect to high-level features, these are instrumental in offering significant social and emotional information (Cavanagh, 2011;Whitney and Yamanashi Leib, 2018). Having access to ensemble perception of high-level properties is integral to adapting in society, from identifying potential threats to perceiving the emotions of groups of individuals. Furthermore, Haberman et al.
(2015) revealed direct evidence that different feature levels might have dissimilar ensemble perception mechanisms, finding that the correlation of summary statistics between high-level (i.e., emotion, face identity) and low-level (i.e., color, orientation) ensemble perceptual tasks was significantly lower than the correlation between tasks from within the same level. That is, there is not a single, domain-general mechanism supporting all ensemble representation types; inversely, multiple domain-specific mechanisms work for various feature levels, suggesting a hierarchical structure for levels of ensemble perception. Accordingly, in consideration of this hierarchical structure, matching features through the VWM task would lead to different effects in high-level ensemble perception. However, this prediction seems to contradict previous literature in which a similar amplification effect driven by physical salience was evidenced in both low-level (e.g., orientations of lines; Williams et al., 2021) and high-level (e.g., facial expressions; Goldenberg et al., 2021;Goldenberg et al., 2022;Yang and Baek, 2022) ensemble coding. This suggests that the amplification effect is independent of ensemble perception levels. Therefore, we proposed that matching features in VWM would exhibit a similar bias effect on high-level ensemble coding as that on low-level perceptual averaging in Williams's study (2021). Additionally, past research has shown that memory matching features are presented as task-irrelevant attributes of ensemble stimuli for the averaging task (Epstein and Emmanouil, 2017;Dodgson and Raymond, 2020;Williams et al., 2021). Thus, it seems that the amplification effect could be driven as long as matching features and averaged properties belong to one object. It is not clear, however, whether having these two types of features belong to a single physical object is essential for the occurrence of the matching feature effect, or whether these two types of features could merely be perceived as one object based on a Gestalt principle (e.g., common region). It is well established that Gestalt principles, in a bottom-up manner, help individual objects to appear together as an integrated unit within VWM, integrating distributed discrete items into coherent visual information (Xu, 2002;Woodman et al., 2003;Xu, 2006;Xu and Chun, 2007;Hollingworth et al., 2008;Peterson and Berryhill, 2013;Gao et al., 2016;Kalamala et al., 2017;Montoro et al., 2017). In such a visual process, VWM stores items grouped by Gestalt principles as one object rather than as single elements (Woodman et al., 2003;Xu and Chun, 2007), and therefore elements grouped by Gestalt principles are remembered more easily than elements united without Gestalt principles (Xu, 2002;Woodman et al., 2003;Xu, 2006). For example, Xu (2002) found that the memorization of two properties combined as parts of one object was as effective as remembering two features seemingly belonging to two individual but connected parts whose combination conforms with a Gestalt principle (i.e., connectedness), indicating that the latter combination can be regarded as a complete unit. Based on the aforementioned influence of Gestalt principles over VWM, memory matching features and ensemble properties should be united together perceptually by Gestalt principles (i.e., common region), even when physically organized as two individual parts.
As a result, it is reasonable to presume that a similar memory matching feature effect would be observed as in the condition in which the matching feature stands as a part of the ensemble stimuli. To test these hypotheses about matching features, and following the experiment of Williams et al. (2021), we used a dual-task paradigm composed of a VWM task and a mean estimation task. In each trial of Experiments 1 and 2, participants were asked to memorize one colored object in a memory display, and then to estimate the mean of the averaging task while remembering the colored object's properties. Additionally, a common color between the memorized object and a subset of the ensemble display was set up as the memory matching feature. Numerous studies have demonstrated that color can be the basis of a grouping principle and help to form the hierarchical structure of ensemble representations, allowing for better observation of the bias effect on mean estimations if the memory matching feature functions (Brady and Alvarez, 2011;Luo and Zhao, 2018). The purpose of Experiment 1 was to explore whether the VWM task would influence high-level ensemble coding through the inter-task common feature. We selected face stimuli as the high-level ensemble materials. Face stimuli contain many high-level features used as stimulus properties for the averaging task, such as face similarity (Peng et al., 2021), face identity (Haberman and Whitney, 2009;Bai et al., 2015), and facial expression (Haberman and Whitney, 2007). Among these properties, facial identity was an appropriate choice for the perceptual averaging in our experiments and could be scaled to a 360° circular space. To combine color features (i.e., memory matching features) and ensemble properties belonging to one object, the facial skin color was considered the task-irrelevant property matching the color of the VWM object. In addition, in order to make the biasing dimension (i.e., the color of faces) independent of the estimation-task dimension (i.e., the identity of faces), as in previous studies (Iakovlev and Utochkin, 2021;Williams et al., 2021), the ensemble display set was divided into two facial-identity subsets with the mean of each subset either clockwise or counterclockwise from the global mean in circular identity space. We then manipulated the memorized color to match one of the two subsets (clockwise or counterclockwise), or to match neither subset. Consequently, an amplification effect would be manifested if the estimated mean was biased toward the mean of either the clockwise or the counterclockwise subset depending on the memory matching color. In Experiment 2, the crucial manipulation was to present the matching feature and the ensemble property as two individual, physically separated parts that could nevertheless be regarded as a unit through a common region, in this case by being connected within an identical space. More specifically, grayed-out faces carrying the identity information were placed inside boxes colored the same as the memorized individual. We predicted a strong bias effect on mean identity estimations, as seen in previous findings, even though matching features were physically separated from the identity face. Furthermore, we expected that our findings would support the assumption that Gestalt principles could be applied to the effect of VWM on ensemble perception.
Experiment 1 Method Participants Forty-two college students from Zhejiang Normal University were recruited for this experiment and given financial compensation for their participation. Five participants were excluded due to poor performance on the mean identity estimation task (i.e., overall bias < 2.5 standard deviations below the group mean) and one participant for poor performance on the VWM task (i.e., overall accuracy < 50% below guess rate). The final sample comprised 36 college students (four men; M = 20.14 years, SD = 1.85; age range = 18-25 years). All participants were right-handed and had normal or corrected vision. The sample size was determined by an a priori power analysis using G*Power software (Version 3.1) with a 0.05 criterion of statistical significance, power of 0.90, a 0.5 correlation between repeated measures, and an effect size (f) of 0.2. We used a conservative effect size of 0.25 because different and more complicated stimuli (i.e., face identity) were used in the present study. Stimuli and apparatus All stimuli were generated using MATLAB (MathWorks, Inc., Natick, MA, United States) with the Psychophysics Toolbox (Version 3 extension) and presented on a 21-inch LCD monitor with a resolution of 1,920 × 1,080 pixels and a refresh frequency of 60 Hz. All stimuli were shown on a uniform black background (RGB = 0,0,0). Each participant sat approximately 57 cm away from the computer monitor with their head on a desk-mounted chin rest. At this viewing distance, 1° of visual angle on the display was approximately 36.25 pixels. The experiment consisted of the irregular object memory task (i.e., the VWM task) and the mean identity estimation task. For the stimuli presented in a stimulus display, there were six distinctly separated colors selected from the RGB color space (i.e., blue = 63,108,151; purple = 142,80,141; red = 151,61,87; brown = 148,85,47; olive drab = 102,110,52; dark green = 57,114,105). In the VWM task, stimuli were irregularly shaped 2D objects generated for each trial according to the following restricted conditions. Following the experiment of Cohen and Singh (2007), irregular objects were first constructed with 12 evenly spaced angles from 1 to 360°. These angles were then used to generate the irregular object's vertices at random distances subtending 1.1° to 2.2° from the center. In the mean identity estimation task, four colored face morphs were presented in an ensemble display and one gray face was presented in an adjustment display (i.e., ensemble probe). The face morphs comprised a set of 360 face identities for each predefined color and in grayscale, created by morphing (MorphAge 3.0; Abrosoft Software Corporation) among three distinct neutral female faces from the NimStim Set of Facial Expressions (Tottenham et al., 2009), which are represented with schematics in Figure 1 (A-B-C-A). The identities of these face morphs formed a circular stimulus space of 360°. To minimize differences in the faces' physical features, the face morphs were scaled with luminance normalized using the SHINE toolbox (Dal Ben, 2019) in MATLAB. In the ensemble display, each of the four face morphs subtended 4.30° × 6.00° of visual angle, occupying each quadrant 5.18° from the fixation point (2 × 2 grid).
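The irregular-object construction described above is easy to sketch in code. Below is a minimal illustration (in Python rather than the authors' MATLAB/Psychtoolbox, with a hypothetical function name): 12 evenly spaced angles, each paired with a random radius between 1.1° and 2.2° of visual angle, define the vertices of one VWM object.

import numpy as np

def irregular_object_vertices(n_vertices=12, r_min=1.1, r_max=2.2, rng=None):
    # Vertices (x, y) of one irregular 2D shape; radii in degrees of visual angle
    rng = np.random.default_rng() if rng is None else rng
    angles = np.deg2rad(np.arange(n_vertices) * 360.0 / n_vertices)  # evenly spaced
    radii = rng.uniform(r_min, r_max, n_vertices)                    # random distances
    return np.column_stack((radii * np.cos(angles), radii * np.sin(angles)))

verts = irregular_object_vertices(rng=np.random.default_rng(42))
px = verts * 36.25   # degrees to pixels at the 57 cm viewing distance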
The entire set of four faces was separated into two distinct, equally numbered identity subsets by two predefined colors around the mean identity of the whole set (−36, −12, +12, +36 relative to the global mean of the four faces). The identity values of one subset were clockwise to the global mean while those of the other subset were counterclockwise to the global mean. In the adjustment display, a randomly selected gray face from the 360 identities subtended 4.30° × 6.00° of visual angle at the center. Procedure The VWM task and mean identity estimation task were combined in a single trial. Participants were asked to study the form or color of an irregularly shaped 2D object for the VWM task and to report the mean identity of a set of four faces for the mean estimation task. The experiment consisted of matching and mismatching trials. The irregular object's color in the VWM task matched the color of the clockwise subset for half of the matching trials (i.e., clockwise matching condition, CM), while the color of the VWM object matched that of the counterclockwise subset for the other half of the matching trials (i.e., counterclockwise matching condition, CCM). In the mismatching trials, neither of the two subset colors matched the color of the VWM object (i.e., mismatching condition, MM). Figure 2 illustrates the procedure of a single trial; the identity faces shown in the figure are represented with schematics. Each trial began with a central fixation cross (0.5° × 0.5°) presented for a randomly varied interval of 800 to 1,200 ms. A memory display then followed, presented for 500 ms. Participants were asked to memorize the form and color of the irregularly shaped object positioned at the center of the display. The color of the VWM object was randomly selected from the predefined colors. After a 1,000-ms blank screen, a set of four faces appeared in an ensemble display for 1,000 ms. Following a 900-ms blank interval, a gray face with a random identity selected from the 360 possible facial identities appeared centrally in the adjustment display. Participants were asked to place the face at the perceived mean of the identity-face set by pressing the "left" or "right" arrow keys to tilt the face clockwise or counterclockwise in the identity space, and to press the "space" key to lock the estimated identity as their answer. After the mean identity adjustment, recall of the studied individual item was tested. Participants were asked to judge whether the second object displayed in a memory probe was identical to or different from the object studied in the VWM display of the trial and to report their answer by pressing the "S" (same) or "D" (different) keys. The same trials and different trials occurred at an equal frequency. For different trials, either the color or the form could differ from the studied object (but both would never change simultaneously), with both options occurring at an equal frequency. Once the participant gave their answer by pressing the relevant key, the trial ended with a 500-ms blank inter-trial interval. Participants were instructed to be as accurate and as quick as possible, but there was no time limit for their response. Prior to starting the experiment, all participants completed a practice block consisting of 12 trials (four trials from each condition) to familiarize them with the VWM and identity estimation tasks.
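A compact way to see the subset design: each trial draws a global mean identity on the 360° circle, offsets the four faces by −36, −12, +12 and +36, and assigns subset colors according to the trial's condition. The sketch below (a hypothetical helper in Python, not the authors' code) illustrates this bookkeeping.

import numpy as np

COLORS = ["blue", "purple", "red", "brown", "olive drab", "dark green"]

def make_trial(condition, rng):
    # Face identities (deg on the 360-identity circle) and trial colors
    global_mean = rng.integers(0, 360)
    identities = (global_mean + np.array([-36, -12, +12, +36])) % 360
    ccw_color, cw_color, other = rng.choice(COLORS, size=3, replace=False)
    if condition == "CM":        # memorized color matches the clockwise subset
        memory_color = cw_color
    elif condition == "CCM":     # memorized color matches the counterclockwise subset
        memory_color = ccw_color
    else:                        # MM: memorized color matches neither subset
        memory_color = other
    return identities, ccw_color, cw_color, memory_color

ids, ccw, cw, mem = make_trial("CM", np.random.default_rng(7))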
The experiment comprised six blocks, each composed of 36 trials with an equal number of trials per condition, with a short break after every second block. Data analysis The purpose of both Experiments 1 and 2 was to measure whether mean identity estimations would be affected by the shared color of a matching subset when a colored irregular object was held in VWM, and further to explore the extending scope and organization mode of feature matching. First, we analyzed the accuracy of the VWM task across the three matching conditions (i.e., CM, CCM, MM), measured as the proportion of correct responses to the VWM object question (i.e., VWM accuracy). Analyses of mean identity estimations were limited to trials in which participants correctly recalled the first colored object that had been displayed in the VWM display. Following previous studies (Iakovlev and Utochkin, 2021;Williams et al., 2021), in each trial we calculated the response error, measured as the smallest difference between the participant's selected face in the ensemble probe and the actual global mean (response error = estimated response − actual mean) in the circular space of face identities. The resulting error distribution across trials was further employed to estimate two important indicators of ensemble perception via the CircStat toolbox in MATLAB (Bays et al., 2009;Berens, 2009): the circular standard deviation (CSD) of the error distribution as the estimation precision, and the mean of the error distribution (ensemble bias) as the tendency of the estimated mean. Accordingly, a positive bias would reflect a tendency to estimate the mean value toward the mean of the clockwise subset, while a negative bias would reflect a tendency to estimate the mean value toward the mean of the counterclockwise subset. Participants were excluded if their proportion of correct responses in the VWM task and/or response errors in the mean identity estimation task were more than 2.5 SDs above or below the overall mean. Participants' trials were excluded if their performance in the VWM task and/or the mean identity estimation task was more than 2.5 SDs above or below the mean. Lastly, classical and Bayesian statistical analyses were conducted using Jamovi (Version 2.2.5.0; The Jamovi Project, 2021), which is built on R (retrieved from https://www.jamovi.org), and JASP (Version 0.10.0.0; The JASP Team, 2022). We employed within-subject one-way analyses of variance (ANOVAs) to compare the performances in the three matching conditions on VWM accuracy, CSD, and bias, and then employed the Bonferroni correction to compare these conditions with each other. Results and discussion Visual working memory accuracy The studied irregular objects were recalled correctly and judged accurately on 78.74% of the trials (M = 0.787, SE = 0.013, ranging from 0.588 to 0.935). The performance on the VWM task differed significantly between the CM, CCM, and MM conditions (see Figure 3A), F(2,70) = 8.069, p < 0.001, η_p² = 0.187, BF_inclusion = 40.981, with the accuracy of the control condition lower than that of the matching conditions. Ensemble circular standard deviation For the mean identity task, there was a significant difference in the ensemble CSD among the matching conditions (see Figure 3B). Ensemble bias The critical question was: What happens to mean identity estimation when participants have an accurate representation of the studied object in VWM?
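The two indicators defined in the Data analysis section can be computed directly from the error distribution. The sketch below mirrors what circ_mean and circ_std from the CircStat toolbox return, written in Python for illustration: the bias is the angle of the resultant vector of the errors, and the CSD is sqrt(-2 ln R), where R is the resultant length.

import numpy as np

def ensemble_bias_and_csd(errors_deg):
    # Circular mean (ensemble bias) and circular SD of response errors, in degrees.
    # Positive bias = toward the clockwise subset; mirrors CircStat's circ_mean/circ_std.
    theta = np.deg2rad(np.asarray(errors_deg, dtype=float))
    z = np.mean(np.exp(1j * theta))          # resultant vector
    bias = np.rad2deg(np.angle(z))
    csd = np.rad2deg(np.sqrt(-2.0 * np.log(np.abs(z))))
    return bias, csd

# Example: errors clustered slightly clockwise of the true mean
bias, csd = ensemble_bias_and_csd([5, -2, 12, 8, -4, 10])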
After calculating the bias parameter, participants were found to perform significantly differently across the different conditions (see Figure 3C). Overall, our data and the comparisons between the three matching conditions indicated that there was a significant effect of the memorized objects studied in the VWM task on the mean identity estimations. Performances in the VWM task and the mean estimation task were consistent with the findings of previous studies on this effect at a low stimulus level in the ensemble coding task. Therefore, the influence of the matching feature does extend from low-level ensemble coding to high-level ensemble perception, suggesting that the inter-task shared feature effect is not limited by the stimulus feature level. As expected, although different ensemble levels might affect the precision and efficiency of summary statistics through distinctive visual processes (Haberman et al., 2015), our findings suggest that the amplification effect caused by memory matching features is independent of ensemble perception levels, and different ensemble levels might be identical in terms of their pattern of information storage and extraction. Additionally, we also found that the recall precision of a studied object improved due to the memory matching feature. (Figure 3 shows the results of Experiment 1 for the three matching conditions on (A) VWM accuracy, (B) the ensemble circular standard deviation (CSD) of the mean identity estimations, and (C) the ensemble bias parameter of the mean identity estimations. CM, clockwise matching condition; MM, mismatching condition or control condition; CCM, counterclockwise condition. *p < 0.05, **p < 0.01, ***p < 0.001.) This is consistent with previous studies showing that perceptual averaging influenced the representation of individual items in VWM (Brady and Alvarez, 2011;Corbett, 2017;Utochkin and Brady, 2020). For example, Brady and Alvarez (2011) found that the reported size of a memorized circle was biased toward the mean size of previously presented circles that matched the memorized circle in color. Experiment 2 Method Participants Thirty-eight college students from Zhejiang Normal University were recruited for this experiment and received financial compensation for their participation. Two participants were excluded due to poor performance on the mean identity estimation (i.e., overall bias < 2.5 standard deviations below the group mean). Therefore, the final sample contained 36 participants (8 men; M = 21.72 years, SD = 2.44), the same sample size as that of Experiment 1. Selection criteria and procedure were the same as in Experiment 1. As in Experiment 1, all participants were right-handed and had normal or corrected vision. Stimuli and procedure As shown in Figure 4, the task used in Experiment 2 was identical to that of Experiment 1, except that the stimuli in this experiment were identity face morphs in grayscale presented inside a colored box forming a common region, rather than face morphs in a colored mask. Each color box subtended 7.15° × 7.50° of visual angle with outlines of 0.14° width. A grayscale face was positioned in the center of each color box, in each quadrant 5.18° from fixation. An ensemble display contained four items distributed equally in a set. Participants were asked to complete the task in the same manner as in Experiment 1.
Ensemble bias
We tested for any matching feature effect on the bias parameter when facial items and matching colors were combined using the Gestalt principle of common region. Results showed that the main effect of the bias parameter was significant across the matching conditions (see Figure 5C).
Combined analysis of Experiment 1 and Experiment 2
We explored whether there was a significant difference between organization according to Gestalt principles (Experiment 2) and the combination of matching colors and facial identities on the same face (Experiment 1). We tested whether there would be differences in VWM accuracy, ensemble CSD, and bias between Experiments 1 and 2. As such, we used 2 (experiment: Experiment 1 vs. Experiment 2) × 3 (matching condition: CM vs. CCM vs. MM) repeated measures ANOVAs on these parameters; Figure 6 shows these comparisons between Experiment 1 and Experiment 2.
Visual working memory accuracy
Results showed a small main effect of experiment.
Ensemble bias
Finally, and of greatest importance, there was a significant effect on ensemble bias. The bias parameter differed between Experiment 1 (M = 0.064, SE = 0.027) and Experiment 2 (M = −0.015, SE = 0.027), F(1,70) = 4.359, p = 0.040, ηp² = 0.059, BF_inclusion = 1.101. Specifically, the bias effect in Experiment 1 was more pronounced than in Experiment 2. Additionally, there was a difference across the matching conditions, F(2,140) = 20.674, p < 0.001, ηp² = 0.228, BF_inclusion = 1.535 × 10^6.
Estimates of the CM
In these analyses, we found that the bias effect was still strong in Experiment 2, though weaker than in Experiment 1. From these results, we can conclude that the VWM task can affect mean estimations through inter-task shared features when matching colors and ensemble identities are combined according to Gestalt principles. Nonetheless, the mode of organization between matching features and the averaged properties modulated the extent of the memory matching feature effect on the estimated bias of perceptual averaging. In contrast to the stimulus organization in Experiment 1, the Gestalt-based combination in Experiment 2 weakened the bias effect on mean identity estimations but improved their estimated precision. Moreover, in Experiment 2, VWM task performance seemed unaffected by the matching color, and the accuracy of the matching conditions improved overall. These results indicate that the effectiveness of the memory matching feature may be somewhat reduced when facial identities and matching colors are physically separated, even when integrated according to the Gestalt principle.
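The 2 × 3 combined analysis reported above can be reproduced with an off-the-shelf mixed ANOVA, treating experiment as a between-subject factor and matching condition as a within-subject factor. Below is a sketch using the pingouin package (recent versions); the data file and column names are assumed for illustration and are not from the original study.

import pandas as pd
import pingouin as pg

# long-format data: one row per subject x condition, with columns
# 'subject', 'experiment', 'condition', 'bias' (hypothetical names)
df = pd.read_csv("ensemble_bias_long.csv")  # hypothetical file

aov = pg.mixed_anova(data=df, dv="bias", within="condition",
                     subject="subject", between="experiment")
post = pg.pairwise_tests(data=df, dv="bias", within="condition",
                         subject="subject", padjust="bonf")  # Bonferroni follow-ups
print(aov, post, sep="\n")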
General discussion
Although the impact of VWM on summary statistics has been varied (Bauer, 2017; Epstein and Emmanouil, 2017; Williams et al., 2021), memory matching features appear to play a meaningful and functional role in the influence of a single studied item over average estimations. Based on this, Experiment 1 explored the scope of the matching feature from a low-level feature (i.e., orientation) to a high-level property (i.e., face identity). The results of Experiment 1 extended the findings of Williams et al. (2021), revealing that the influence of the matching feature extends from mean estimations of low-level orientation to the perceptual averaging of high-level facial identity, and suggesting that the memory matching feature effect occurs across all levels of ensemble perception. That is, when a task-irrelevant individual item shares a stimulus feature with the subsequent averaging task, this shared information biases average estimations away from the summary statistics of the whole set and toward the mean of the subset sharing that feature, even though this is detrimental to the task's goal. According to the results of Experiment 1, shared features stored in VWM largely explain the bias effect on mean estimations, which can be observed across a wide range of properties. Experiment 2 examined whether the way matching colors and ensemble properties are combined modulates the influence of the memory matching feature on the averaging task. Results showed a similar effect as in Experiment 1, in that estimates were biased toward the average of the subset in the same color as the memorized object when the matching colors were integrated with ensemble identities under the common region cue rather than overlaid on the face stimuli. Such results extend the "matching features" effect to a broader context regarding the influence of matching features maintained in VWM over mean estimations. Moreover, the bias effect on mean estimations induced by features in VWM was unstable and varied according to the specific stimulus integration used in the averaging task. Taken together, the results of Experiments 1 and 2 support the notion that VWM tasks can influence average estimations via matching features, modulated by the stimulus combination between task-irrelevant colors in VWM and ensemble target properties.
Figure 6. Results of the combined analysis of Experiments 1 and 2 for the matching conditions on (A) VWM accuracy, (B) the ensemble circular standard deviation (CSD) of the mean identity estimations, and (C) the ensemble bias parameter of the mean identity estimations. CM, clockwise matching condition; MM, mismatching condition; CCM, counterclockwise condition. *p < 0.05, ***p < 0.001.
Our findings provide novel evidence that the impact of VWM on ensemble coding can be explained by matching stimulus features between tasks at the stimulus level, which supports the idea that averaging processing can be guided by top-down memory of a prior task or past experience in the visual field (Dodgson and Raymond, 2020; Talcott and Gaspelin, 2020; Ramgir and Lamy, 2022). In other words, participants remembered the irregular object's color in a goal-directed manner, and estimates were biased toward the subset in the same color as the studied object by amplifying the properties highlighted by VWM. This pattern in the averaging phase aligns with the amplification hypothesis, which states that physically salient items are weighted more heavily than less salient items in the determination of summary statistics (Kanaya et al., 2018). In our study, memory matching colors led to an amplification effect on mean estimations.
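As a toy illustration of the amplification account, consider mean estimation over circular face identities in which items matching the memorized color receive a larger weight. The weight value and example data below are arbitrary and purely illustrative, not fitted to the reported results.

import numpy as np

def weighted_circular_mean(identities_deg, matches_memory, match_weight=2.0):
    # items sharing the memorized color are over-weighted (amplification)
    w = np.where(matches_memory, match_weight, 1.0)
    rad = np.deg2rad(identities_deg)
    s, c = np.sum(w * np.sin(rad)), np.sum(w * np.cos(rad))
    return np.rad2deg(np.arctan2(s, c)) % 360.0

items = np.array([350.0, 10.0, 30.0, 50.0])   # four faces on the identity wheel
match = np.array([False, False, True, True])  # the clockwise subset shares the color
print(weighted_circular_mean(items, match))   # estimate shifts toward the matching subset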
Nonetheless, the matching feature effect can also be explained by the feature-weighting account (Maljkovic and Nakayama, 1994) or the episodic retrieval model (Thomson and Milliken, 2013), both of which stress the importance of shared features across consecutive phases. Both accounts refer to feature inter-trial priming, whereby attention may be automatically captured by the color or position remembered from the previous trial as a result of prior experience (Lamy and Kristjansson, 2013; Ramgir and Lamy, 2022). In the present study, these accounts apply to consecutive tasks (i.e., the VWM task and the mean estimation task) sharing the same color feature. The feature-weighting account suggests that estimates are easily biased toward a memory-matching subset because that subset becomes more activated by sharing the relevant feature with the previous target. The episodic retrieval model posits a lasting influence of shared features; specifically, it holds that remembered features in VWM are stored as episodic memory traces, which impair average estimates when these traces match parts of identifiable features in an ensemble display. In addition, our results also reveal that low-level memory features from a VWM task, as a non-negligible part of scene information, influence perceived high-level properties when low-level and high-level features are concurrently present. In the field of visual categorization and recognition, combining low-level features with high-level ones has been shown to have a strong effect within the visual perception hierarchy (Schindler and Bartels, 2016; Stoddard et al., 2019). Accordingly, in the present study, the matching colors were irrelevant and meaningless with regard to face identity recognition, but the results indicated that these task-unrelated low-level color features were included, rather than neglected or separated, in VWM when participants perceived high-level face identities. At the neural level, low-level and high-level features are not separated from each other in early visual processing (Yang et al., 2019). Thus, low-level features cannot be ignored when predicting visual scene information, even when the focus is on high-level features (Gelbard-Sagiv et al., 2016; Ibarra et al., 2017). Our findings offer evidence that the influence of low-level features in VWM extends to the ensemble perceptual task. In addition, these results do not contradict but rather supplement the findings of Haberman and colleagues (Haberman et al., 2015). That is, there might be a central mechanism corresponding to the observed matching feature effect in ensemble perception of both low- and high-level visual features, although the precision and efficiency of summary statistics differ across levels of visual features (Haberman et al., 2015). Based on our findings, disparate features mutually affect each other when a connection between low-level and high-level properties is constructed in the same physical environment. Another important aspect of our study is the reconstruction between memory colors and ensemble identities. As the results of Experiment 2 showed, integrating the features matching the VWM object with the averaged properties in a common region, in line with the Gestalt principle, led to a common feature effect similar to that of Experiment 1, in which faces with different skin colors were presented in the identity display.
These findings are consistent with previous studies showing that the Gestalt principle helps combine memory-stored features and ensemble properties into an integrated unit (Xu, 2002, 2006; Xu and Chun, 2007; Peterson and Berryhill, 2013; Luna et al., 2016; Kalamala et al., 2017; Montoro et al., 2017). Such integration as a whole unit is essential for the influence of memory matching features on perceptual averaging performance. It is worth noting that a smaller memory matching feature effect was found in Experiment 2 than in Experiment 1, as revealed by the combined analysis. The decline in this effect might be due to the distinctly perceived integration of the ensemble display in the two experiments. The combination in Experiment 2 might have seemed incomplete, but the Gestalt principle appeared to play a part in classifying and connecting different features distributed across different physical locations. Another possibility is that less attention spreads to the focal boundaries (i.e., the identity faces) when the matching color is an attribute of boxes positioned around the identity faces in the averaging task, as in Experiment 2. This would be in line with the findings of Nishina et al. (2007) that awareness of task-irrelevant visual features depends on the distance between tasks, with more attention required as the spatial distance between them grows. It is worth noting that there were a few inconsistent results for the CSD and the ensemble bias across Experiments 1 and 2. For the CSD, there was only a significant difference between the CCM and MM conditions in Experiment 1, while no difference was evidenced for the remaining comparisons in Experiments 1 and 2. Although significant differences were observed between the CCM and MM conditions, the effect size of these differences (Cohen's d = 0.304) was close to the effect size of the differences between the CM and MM conditions (Cohen's d = 0.233). Moreover, CSDs in the CCM condition did not differ from those in the CM condition. Therefore, the CSD may not be a sensitive indicator of the amplification effect due to matching colors in VWM. Indeed, Williams et al. (2021) also reported significant CSD differences in only one comparison (clockwise vs. control conditions) in their Experiment 1. Similarly, a recent study by Iakovlev and Utochkin (2021) found that the CSD did not differ among the clockwise, counterclockwise, and control conditions even when the set size increased from 4 to 16. Taken together, the inconsistent CSD results between the CCM and CM conditions in our study might be due to the insensitivity of the CSD to the amplification effect. For the ensemble bias, there were mutually significant differences among the CM, MM, and CCM conditions in Experiment 1. However, the difference between the MM and CCM conditions became nonsignificant in Experiment 2, while all other differences remained significant as in Experiment 1. On closer inspection, the bias in the CCM condition did not differ significantly between Experiments 1 and 2 (t(70)
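The effect sizes quoted above for the CSD comparisons are within-subject Cohen's d values. A minimal sketch of that computation, assuming the common convention of mean difference divided by the standard deviation of the paired differences (the toy data are illustrative only):

import numpy as np

def cohens_d_paired(x, y):
    # one common convention: mean of the differences / SD of the differences
    diff = np.asarray(x) - np.asarray(y)
    return diff.mean() / diff.std(ddof=1)

rng = np.random.default_rng(0)
ccm = rng.normal(40, 5, 36)  # per-subject CSDs, CCM condition (toy data)
mm = rng.normal(38, 5, 36)   # per-subject CSDs, MM condition (toy data)
print(cohens_d_paired(ccm, mm))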
Conclusion
This study demonstrates that VWM tasks influence mean identity estimations through the memory matching color. We found that shared features learned from the VWM task were activated for the mean estimation task and impaired performance of the high-level averaging perceptual task. The impact of the VWM task on average estimates was unstable and variable, affected by the integration of the memory matching feature and perceptual averaging.
Data availability statement
The data and experimental paradigms from all experiments are available on the Open Science Framework (https://osf.io/dc4qr/).
Ethics statement
The studies involving human participants were reviewed and approved by the research ethics committee of Zhejiang Normal University (ZSRT2022071). The patients/participants provided their written informed consent to participate in this study.
Author contributions
TP and JW designed the experiments and drafted the manuscript. TP contributed to the data acquisition. TP, JW, and FL contributed to the data analysis and data interpretation. TP, JW, ZZ, and FL revised the manuscript. All authors contributed to the article and approved the submitted version.
2022-12-02T15:31:37.532Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "c7cef507434d3bcd99d0b78e53256497ff8db5f4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "c7cef507434d3bcd99d0b78e53256497ff8db5f4", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
1507718
pes2o/s2orc
v3-fos-license
Regularized gene selection in cancer microarray meta-analysis
Background
In cancer studies, it is common that multiple microarray experiments are conducted to measure the same clinical outcome and expressions of the same set of genes. An important goal of such experiments is to identify a subset of genes that can potentially serve as predictive markers for cancer development and progression. Analyses of individual experiments may lead to unreliable gene selection results because of the small sample sizes. Meta analysis can be used to pool multiple experiments, increase statistical power, and achieve more reliable gene selection. The meta analysis of cancer microarray data is challenging because of the high dimensionality of gene expressions and the differences in experimental settings amongst different experiments.
Results
We propose a Meta Threshold Gradient Descent Regularization (MTGDR) approach for gene selection in the meta analysis of cancer microarray data. The MTGDR has many advantages over existing approaches. It allows different experiments to have different experimental settings. It can account for the joint effects of multiple genes on cancer, and it can select the same set of cancer-associated genes across multiple experiments. Simulation studies and analyses of multiple pancreatic and liver cancer experiments demonstrate the superior performance of the MTGDR.
Conclusion
The MTGDR provides an effective way of analyzing multiple cancer microarray studies and selecting reliable cancer-associated genes.
Background
Microarrays are capable of profiling human tissues on a genome-wide scale and have been used extensively in cancer studies, where expressions of thousands of genes are measured along with clinical outcomes. A major goal of such studies is to identify a subset of cancer-associated genes that can be used as biomarkers for cancer diagnosis and prognosis and as targets for therapy. Early studies have shown that gene signatures identified from the analysis of individual cancer microarray experiments often have low reproducibility. There are several reasons for this. A main one is that the sample size of a single microarray experiment, which is usually in the hundreds, is much smaller than the number of genes, which is usually in the tens of thousands. Within the field of clinical investigation, meta analysis has emerged as the gold standard for the comparison and combined analysis of clinical studies. It is generally accepted that only meta analysis can circumvent the problems inherent to studies with low statistical power due to small sample sizes [1]. With meta analysis, it is usually not the intention of researchers to analyze any new datasets. Rather, it provides an effective way of pooling and analyzing multiple existing datasets and generating results more reliable than those from the analysis of each individual dataset. Meta analysis of cancer microarray data is made possible by the many experiments conducted independently to measure the same set of genes and the same cancer clinical outcomes. As shown in [2][3][4][5], the meta analysis of cancer microarray data has achieved considerable success by identifying relatively reproducible, biologically meaningful gene signatures. We refer to [6] for more discussions of the merits of meta analysis in genomic studies.
Meta analysis of cancer microarray data is challenging because (1) microarray experiments usually measure a small number of samples and a large number of genes, with only a subset of those genes associated with cancer clinical outcomes. Gene selection is needed along with estimation; (2) the meta analysis of cancer microarray data and the identification of cancer-associated genes often require the use of original expression measurements. For this reason, the type of analysis conducted in this article has also been referred to as "integrative analysis". Such analysis differs significantly from conventional meta analysis, where the analysis is based on summary statistics (such as p-values) from each individual experiment; and (3) different platforms may be used in different experiments. Arrays that hybridize one sample at a time (e.g., synthesized oligonucleotide arrays) measure gene expression based directly on the signal intensity of each probe set. In contrast, spotted cDNA arrays hybridized with fluorescent-labeled targets typically measure the ratio of the signal from a test sample to the signal of a co-hybridized reference sample. It has been shown that data from Affymetrix GeneChip oligonucleotide microarrays correlate poorly with the data from custom-printed cDNA microarrays [7]. We note here that comparability of different platforms can be achieved by the transformation of the expressions. However, as noted in previous studies (such as [8]), such transformation needs to be conducted on a case-by-case basis. Several approaches have been proposed to analyze the marginal effects of genes using data from multiple microarray experiments. Examples of this include Fisher's approach (with application to breast cancer [9]); an intensity approach that transforms and directly integrates gene expressions [5]; a penalization approach [3]; a random effect model based approach [10]; a robust gene ranking approach [11]; and a Bayesian approach [12]. In light of the fact that cancer development and progression are caused by the effects of multiple genes, the following studies (which can account for the joint effects of genes) have been conducted. A majority voting (with impact factors) approach has been proposed by [13]. Gene shaving approaches based on random forest and Fisher's linear discrimination are applied in [14]. A computationally intensive Bayesian approach is proposed in [15]. We note that the focus of those studies has been predictive model building, not gene selection. On the other hand, there is rich literature on the analysis of a single cancer microarray dataset and gene selection. Examples include the parameterized classifier design approach in [16]; the penalization approaches in [17,18]; the Threshold Gradient Directed Regularization (TGDR) approach [19][20][21]; and the support vector machine approach [22]. We refer to [23] for more discussions of gene selection approaches with individual microarray datasets. We note, however, that those approaches have been designed to analyze a single dataset, and cannot be used to analyze multiple, heterogeneous datasets. The literature review suggests that (1) genes identified from the analysis of a single cancer microarray dataset may suffer from low reproducibility because of the small sample size.
Meta analysis pools multiple datasets, increases statistical power, and provides an effective way of improving reproducibility; (2) existing meta analysis approaches focus on either the investigation of the marginal effects of genes or the construction of predictive models with multiple genes; and (3) approaches exist that can select genes with joint effects on cancer in the analysis of a single dataset. However, these approaches cannot be used to analyze multiple, heterogeneous datasets. Thus, there is a critical need for approaches that can select genes with joint effects on cancer in the meta analysis of multiple microarray datasets. In this article, we propose the Meta Threshold Gradient Descent Regularization (MTGDR) approach for gene selection in cancer microarray meta analysis. The MTGDR takes advantage of recent developments in regularized gene selection with a single microarray dataset. Compared to such single-dataset gene selection methods, the MTGDR has the desired flexibility of accommodating multiple experiments with different setups. In comparison with the available meta analysis methods, the MTGDR can effectively select a subset of genes with joint effects on cancer.
Simulation study
We conduct simulation studies to investigate the performance of the proposed MTGDR. We generate M = 3 datasets. For dataset m = 1, 2 and 3, we generate n_m samples and expressions of d genes. Gene expressions are generated such that all expressions have marginally normal distributions with unit variance, and the correlation between the expressions of genes i and j is 0.4^|i-j|. In each dataset, the first 20 genes are associated with the cancer outcome. Specifically, for genes i = 1, ..., 20, the mean expressions of the n_m/2 cases (outcome Y_m = 1) are generated randomly from Uniform[l, u]. The mean expressions for those genes among the controls (outcome Y_m = 0) are zero. The mean expressions for the genes not associated with the outcomes are zero. The simulation setting here corresponds to logistic regression models for all three datasets. The regression coefficients for the cancer-associated genes vary across studies, which corresponds to different experimental setups (for example, different platforms) in different studies. We consider combinations of these simulation parameters. We employ the proposed MTGDR, with tuning parameters selected via 3-fold cross validation. For comparison, we also consider the following two alternative approaches: (1) the pooled TGDR approach. Other than the differences in regression coefficients (shifts of mean expressions), the three datasets are generated in a comparable manner. We pool all three datasets together, treat them as if they were from a single experiment, and analyze them with the TGDR approach; and (2) the meta analysis approach based on individual TGDR analysis. We first analyze each dataset using the TGDR approach. We then search for genes identified in all three studies. This corresponds to the meta analysis approach where each dataset is analyzed separately using the TGDR and the results are combined via a voting approach. We note that other alternative approaches exist. For example, it is possible to replace the TGDR approach with the penalization approaches discussed in [23]. Early studies have established the comparable performance of the TGDR with alternative approaches [19][20][21]. Since the proposed MTGDR shares a similar thresholding paradigm with the TGDR, we focus on the aforementioned two alternatives.
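The simulated-data design described at the start of this section can be sketched directly. The values of n, d, l, and u below are placeholders, since the specific settings are varied and combined across runs; the correlation structure and case mean shifts follow the description above.

import numpy as np

def simulate_dataset(n, d, l, u, n_signal=20, rho=0.4, rng=None):
    rng = rng or np.random.default_rng()
    idx = np.arange(d)
    sigma = rho ** np.abs(idx[:, None] - idx[None, :])  # corr(i, j) = 0.4^|i-j|
    x = rng.multivariate_normal(np.zeros(d), sigma, size=n)
    y = np.repeat([1, 0], n // 2)                       # n/2 cases, then n/2 controls
    x[y == 1, :n_signal] += rng.uniform(l, u, size=n_signal)  # mean shifts for cases
    return x, y

# three heterogeneous datasets: the shifts (regression coefficients) are redrawn per dataset
datasets = [simulate_dataset(n=50, d=200, l=0.5, u=1.5) for _ in range(3)]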
In Table 1, we show the mean (standard deviation) of the number of identified genes and the number of true positives based on 200 replicates. We can see that (1) the proposed MTGDR is capable of identifying the majority of the genes truly associated with the outcome and has very small false positive rates; (2) the performance of the pooled analysis is less satisfactory but still acceptable. We note that the three simulated datasets are more comparable than those encountered in practical studies. The regression coefficients differ across datasets, though the differences are small. This comparability explains the reasonable performance of the pooled analysis and should not be expected in general with practical data; and (3) the "individual TGDR + voting" meta analysis approach has inferior performance, which is caused mainly by the small sample size and the subsequent lack of reproducibility of each individual dataset. We have also conducted simulations under other settings and drawn similar conclusions (results not shown).
Pancreatic cancer study
Data
Pancreatic ductal adenocarcinoma (PDAC) is a major cause of malignancy-related deaths. Apart from surgery, there is still no effective therapy, and even resected patients usually die within one year postoperatively. Several experiments have been conducted using microarrays to identify pancreatic cancer genomic markers. In our study, we gather and analyze four studies, first reported in [24][25][26][27]. These four datasets have also been analyzed by [28], and it has been argued that the clinical settings in the four studies are comparable. Thus, it is reasonable to conduct meta analysis with such data. We show the data descriptions in Table 2. Two of the four studies use cDNA arrays, and two use oligonucleotide arrays. Cluster IDs and gene names are assigned to all of the cDNA clones and Affymetrix probes based on UniGene Build 161. The two sample groups considered in our analysis are PDAC and normal pancreatic tissues. Data on chronic pancreatitis are available for [25,27], but are not used in our analysis. For each dataset, data processing (including normalization) has been conducted separately by the researchers of each individual study. We identify a consensus set of 2984 UniGene IDs. We remove genes with more than 30% missingness in any of the four datasets, which leaves 1204 genes for downstream analysis. For each dataset separately, if Affymetrix arrays are used, we first add a floor of 10 and log2-transform the expressions. We then fill in missing values with medians across samples and standardize each gene expression to have zero mean and unit variance.
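The per-dataset processing just described (missingness filter, floor, log2 transform, median imputation, per-gene standardization) is mechanical; a sketch follows. The samples × genes array orientation and function names are our own choices for illustration.

import numpy as np

def preprocess(expr, is_affymetrix, max_missing=0.30):
    x = np.array(expr, dtype=float)                  # samples x genes, NaN = missing
    keep = np.isnan(x).mean(axis=0) <= max_missing   # drop genes with >30% missingness
    x = x[:, keep]
    if is_affymetrix:
        x = np.log2(np.maximum(x, 10.0))             # floor of 10, then log2 (NaNs pass through)
    med = np.nanmedian(x, axis=0)
    x = np.where(np.isnan(x), med, x)                # median imputation across samples
    x = (x - x.mean(axis=0)) / x.std(axis=0)         # zero mean, unit variance per gene
    return x, keep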
MTGDR analysis
In the MTGDR analysis, tuning parameters are chosen via 3-fold cross validation. Fifteen genes are identified as being associated with the risk of developing pancreatic cancer. We show the gene IDs and corresponding estimates in Table 3. We can see that if a gene has a nonzero coefficient in one dataset, then it has nonzero coefficients in all datasets (which indicates that this gene is identified in all studies). We also note that the estimated coefficients for one gene can differ across studies. This is the extra flexibility allowed by the MTGDR over the pooled analysis, which naturally accommodates differences among experimental setups in different studies. Furthermore, although the estimated coefficients may differ for one gene across experiments, their signs are the same. The same signs lead to similar biological conclusions (i.e., whether up-regulation of a gene is positively or negatively associated with the risk of developing cancer). We evaluate the biological implications of the selected genes by surveying [29] and other public databases. Among the 15 genes, several have been previously identified in independent studies. Specifically, gene Hs.107 (Fibrinogen-like 1) is a member of the fibrinogen family. In large-scale proteomic analysis of serum samples, certain members of the fibrinogen family have been found to be overexpressed in pancreatic cancer samples [30]. Gene Hs.12068 (Carnitine acetyltransferase) is a key enzyme in the metabolic pathway in mitochondria, peroxisomes, and endoplasmic reticulum. CRAT catalyzes the reversible transfer of acyl groups from an acyl-CoA thioester to carnitine and regulates the ratio of acyl-CoA/CoA in the subcellular compartments. In addition, CRAT has been found to be significantly under-expressed in PDAC samples [31]. Gene Hs.169900 (PABPC4) is localized primarily in the cytoplasm. It may be necessary for the regulation of stability of labile mRNA species in activated T cells. It is one of the pancreatic cancer biomarkers identified in [26], where it is down-regulated at least four-fold in four or more PDAC specimens. Gene Hs.180920 (RPS9 ribosomal protein S9) encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S4P family of ribosomal proteins. Crnogorac-Jurcevic et al. [32] were the first to identify the association between the dysregulated expression of RPS9 and PDAC. Gene Hs.287820 (fibronectin 1) encodes fibronectin, a glycoprotein present in a soluble dimeric form in plasma and in a dimeric or multimeric form at the cell surface and in the extracellular matrix. Fibronectin plays an important role in maintaining the structural integrity of the pulmonary epithelium and endothelium. Decreases in serum fibronectin and increases in pulmonary leukocyte margination during acute pancreatitis may compromise the integrity of the air-blood barrier and also increase the pulmonary uptake of circulating pathogenic materials. Gene Hs.317432 (BCAT1) encodes the cytosolic form of the enzyme branched-chain amino acid transaminase. This enzyme catalyzes the reversible transamination of branched-chain, alpha-keto acids to branched-chain, L-amino acids essential for cell growth. It is one of the broadly identified pancreatic cancer markers [33]. Gene Hs.5591 (MKNK1) belongs to the MAPK pathway, which has been identified to be associated with the development of multiple cancers. The protein encoded by gene Hs.62 (PTPN12) is a member of the protein tyrosine phosphatase (PTP) family. PTPs are known to be signaling molecules that regulate a variety of cellular processes including cell growth, differentiation, the mitotic cycle, and oncogenic transformation. As has been pointed out by [28], gene Hs.75335 (GATM) has been identified as a pancreatic cancer marker in multiple independent studies. Gene Hs.78225 (NBL1) is located at chromosome 1p36. Deletion of material from this region is common in neuroblastoma. It is possible that a tumor suppressor gene is present in this region. Ideally, statistical evaluations of the MTGDR should be based on independent data, though such data are often unavailable. As an alternative, we conduct evaluations using the following Leave-One-Out (LOO) approach, which has been adopted extensively in cancer microarray studies. We first remove one subject from the dataset.
With the reduced dataset, we compute the MTGDR estimate. We note that, to get a relatively fair evaluation, a new set of tuning parameters needs to be computed for the reduced dataset. With the MTGDR, we are able to obtain one regression model for each individual dataset. Then, using the model for the dataset that the removed subject belongs to, we are able to predict the probability and class membership (by dichotomizing the predicted probability at 0.5) for the removed subject. We repeat this procedure over all subjects and compute the classification error. With the LOO approach, the MTGDR misclassifies 2 subjects in dataset P3; otherwise, it achieves perfect classification.
Analyses With Alternative Approaches
To facilitate a more comprehensive understanding of the MTGDR approach and the pancreatic study, we conduct the following additional analyses.
ANALYSIS WITH THE POOLED TGDR APPROACH
As in the simulation study, we ignore the fact that the four datasets are from different studies that use different platforms. We pool the four datasets and analyze them using the TGDR approach. The sample size of the pooled dataset is 56. A total of 22 genes are identified using this approach. Specifically, this approach identifies 13 of the 15 genes identified by the MTGDR and misses genes BCAT1 and NBL1. As discussed in the above section, both of those genes have important implications in pancreatic cancer development. (More detailed information on gene identification using this approach is available upon request.) We also evaluate the performance of the pooled approach using the LOO. Two subjects (1 in P3 and 1 in P4) are not properly classified.
META ANALYSIS BASED ON INDIVIDUAL TGDR
We first analyze each dataset using the TGDR approach and then search for genes identified in multiple studies. This is a voting-based meta analysis approach. For the four datasets, the TGDR identifies 7 (P1), 10 (P2), 6 (P3), and 1 (P4) genes, respectively. The numbers of overlaps with genes identified using the MTGDR are 1, 1, 2, and 0, respectively. There is only 1 gene identified with both P2 and P3. Otherwise, there is no overlap between genes identified with the four datasets. Genes identified in one study cannot be used to satisfactorily predict subjects in other studies. For example, when we use the genes identified in P2 and the corresponding logistic model to make predictions for the remaining three datasets, 4 (P1), 6 (P3), and 4 (P4) subjects cannot be properly classified.
META ANALYSIS OF MARGINAL EFFECTS
With the MTGDR and the two alternative approaches, we search for genes with joint effects on pancreatic cancer development. To provide a more comprehensive analysis of the pancreatic data, we conduct the following analysis of marginal effects. Since the pancreatic data have the "normal versus cancer" binary setup, for each dataset and each gene, we conduct the two-sample comparison of expressions of normal versus cancer samples using the t-test and compute the p-value. For each gene, we combine the p-values across the four studies using Fisher's approach [1]. We then rank genes using the p-values from the meta analysis. Genes with smaller combined p-values have smaller ranks. We note that this is the conventional meta analysis approach for data with binary outcomes. With this approach, we investigate the marginal associations between each individual gene and the cancer outcome. We show the ranks for the MTGDR-identified genes in Table 3. We can see that several MTGDR-identified genes have very low ranks. Specifically, genes with marginal ranks 1-7 are identified using the MTGDR. However, there are also MTGDR-identified genes with very high ranks. For example, genes Hs.317432, Hs.5591, and Hs.62 have ranks 144, 56, and 50, respectively. Our analysis suggests that meta analysis and identification of genes with joint effects cannot be replaced with meta analysis of marginal effects.
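The marginal meta analysis just described (per-gene two-sample t-tests combined across studies with Fisher's method, then ranked) can be sketched as follows; dataset handling is simplified for illustration.

import numpy as np
from scipy import stats

def marginal_meta_ranks(datasets):
    # datasets: list of (expr, labels); expr is samples x genes, labels in {0, 1}
    log_p = 0.0
    for x, y in datasets:
        _, p = stats.ttest_ind(x[y == 1], x[y == 0], axis=0)
        log_p = log_p + np.log(p)
    fisher_stat = -2.0 * log_p                       # chi-square with 2 * n_studies df under the null
    combined_p = stats.chi2.sf(fisher_stat, df=2 * len(datasets))
    ranks = np.argsort(np.argsort(combined_p)) + 1   # rank 1 = smallest combined p-value
    return combined_p, ranks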
Liver cancer study
Data
Gene expression profiling studies have been conducted on hepatocellular carcinoma (HCC), which is among the leading causes of cancer deaths in the world. We conduct meta analysis using the four liver cancer microarray datasets described in [2]. Detailed data information is provided in Table 4, where the four datasets are referred to as D1-D4, respectively. The four datasets were generated in three different hospitals in South Korea. Although the studies were conducted in a controlled setting, Choi et al. [2] "failed to directly merge the data even after normalization of each dataset." In studies D1-D3, expressions of 10336 genes were measured. In study D4, expressions of 9984 genes were measured. We focus on the 9984 genes measured in all four studies. For each dataset, within-print-tip-group normalization is first carried out. We then process the data as follows: (1) Unsupervised screening: (1.1) if a gene has more than 30% missingness in any dataset, it is removed from downstream analysis. In total, 3122 out of 9984 genes pass this screening. (1.2) if a subject has more than 30% missing expressions for the 3122 genes, then this subject is removed. Eight subjects are removed, leading to an effective sample size of 125. We show the number of subjects actually used in the analysis in Table 4. (2) For each dataset, we fill in missing expression values with medians across samples. (3) Supervised screening: for each dataset, we compute the two-sample t-statistic for each gene. We then assign a rank to each gene based on the t-statistic. The overall rank for one gene is defined as the sum of its ranks across all four datasets. The one thousand genes with the lowest overall ranks are selected for downstream analysis. This rank-based screening is similar in spirit to the one in [11]. (4) For each dataset, we normalize each gene expression to have zero mean and unit variance. Gene screening is conducted to exclude genes that are very unlikely to be cancer-associated. Similar procedures have been adopted in [20] and others.
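The supervised, rank-based screening of step (3) can be written compactly: rank genes within each dataset by the two-sample t-statistic, sum the ranks across datasets, and keep the 1000 genes with the lowest sums. Ranking by the absolute t-statistic is an assumption on our part; the text says only that ranks are based on the t-statistic.

import numpy as np
from scipy import stats

def rank_screen(datasets, keep=1000):
    total_rank = 0
    for x, y in datasets:                            # x: samples x genes, y in {0, 1}
        t, _ = stats.ttest_ind(x[y == 1], x[y == 0], axis=0)
        # rank 1 = largest |t| (assumed convention)
        total_rank = total_rank + (np.argsort(np.argsort(-np.abs(t))) + 1)
    return np.argsort(total_rank)[:keep]             # indices of the genes kept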
MTGDR analysis
We employ the MTGDR approach with optimal tuning parameters selected using 3-fold cross validation. Thirty-four genes are identified as being associated with the risk of developing liver cancer. We provide information and corresponding estimates for the identified genes in Table 5. We draw similar conclusions from Table 5 as from Table 3. We note that, for a very small number of genes, the signs of the four estimates differ. For example, for gene 15.4.E1/Rab9 effector p40, three out of four estimated coefficients are positive and one is negative. The negative coefficient has a small absolute value and may be caused by random variation. Different signs may suggest conflicting biological conclusions. Without access to the original experimental setups or a gold standard, we are unable to further explain the conflicting signs. Although those genes have been identified with the MTGDR, they should be interpreted with extreme caution because of the conflicting signs. We search public databases for independent evidence of associations between identified genes and liver cancer development. Among the identified genes, gene KIAA0406 is one that constitutes the predictor of PI3 kinase activation. The PI3 kinase signaling pathway is emerging as a promising therapeutic target in a number of cancers as well as inflammation and heart diseases. It has been found in a rat experiment that the mRNA and protein levels of Cyt19 are higher in the liver than in other tissues. Gene Rab9 belongs to the RAS oncogene family, which is activated in multiple cancers. ATPases are a class of enzymes that catalyze the decomposition of adenosine triphosphate (ATP) into adenosine diphosphate (ADP) and a free phosphate ion. This dephosphorylation reaction releases energy, which the enzyme (in most cases) harnesses to drive other chemical reactions that would not otherwise occur. RalGDS is an oncogene and can induce transformation and gene expression by activating Ras-, Ral-, and Rho-mediated pathways. The combination of TPI and an antitumor nucleoside, FTD, not only enhances the antitumor efficacy and decreases the toxicity of FTD, but it also suppresses TP-induced angiogenesis. The protein encoded by ADFP is a major constituent of the globule surface. Increases in its mRNA levels are one of the earliest indications of adipocyte differentiation. The human G protein-coupled receptor has been found expressed in lung, heart, and lymphoid tumor tissues. MEN-1 is a cancer predisposition gene and has been found to be activated in pancreatic, ovarian, and male breast cancers. Polyspecific organic cation transporters in the liver, kidney, intestine, and other organs are critical for the elimination of many endogenous small organic cations as well as a wide array of drugs and environmental toxins. Gene SLC22A1 is one of three similar cation transporter genes located in a cluster on chromosome 6. Mutations of gene TUBB have been found in breast and non-small cell lung cancers. Gene H2AFZ encodes a replication-independent member of the histone H2A family that is distinct from other members of the family. Studies in mice have shown that this particular histone is required for embryonic development. We conduct statistical evaluations using the LOO approach described above. The MTGDR misclassifies 6 (D1), 8 (D2), 4 (D3), and 2 (D4) subjects, respectively, which leads to an overall classification error of 0.16. We note that supervised screening has been conducted prior to analysis. To make a fair evaluation, in the LOO procedure we carry out the supervised screening for each reduced dataset (with one subject removed) separately, so that the possibility of an overly optimistic evaluation is minimized.
Analyses with alternative approaches
As for the pancreatic study, we conduct the following analyses using alternative approaches.
ANALYSIS WITH THE POOLED TGDR APPROACH
We pool the four datasets, which have a combined sample size of 125, and analyze them with the TGDR approach. This pooled approach identifies 24 out of the 34 genes identified by the MTGDR, misses 10, and identifies 10 extra genes not identified by the MTGDR. (Detailed information on gene identification using this approach is available upon request.) We also evaluate the performance of this pooled approach using the LOO. Six (D1), 13 (D2), 11 (D3), and 6 (D4) subjects are not properly classified, which leads to an overall classification error of 0.29.
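The LOO evaluation used for both case studies follows the recipe given earlier: drop one subject, refit (including re-tuning and, where applicable, re-screening), predict the dropped subject with the model for its own dataset, and dichotomize the predicted probability at 0.5. A schematic version is below; fit_mtgdr is a placeholder for the actual fitting routine, which is not implemented here.

import numpy as np

def loo_error(x, y, fit_mtgdr):
    # fit_mtgdr(x_train, y_train) -> callable returning P(Y = 1 | z); placeholder
    errors = 0
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False
        model = fit_mtgdr(x[mask], y[mask])   # re-tune on the reduced data
        prob = float(model(x[i:i + 1]))
        errors += int(int(prob >= 0.5) != y[i])  # dichotomize at 0.5
    return errors / len(y)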
META ANALYSIS BASED ON INDIVIDUAL TGDR
We analyze each individual dataset using the TGDR approach and then search for overlaps among the identified genes. For the four datasets, the TGDR identifies 27 (D1), 10 (D2), 20 (D3), and 6 (D4) genes. The numbers of overlaps with genes identified using the MTGDR are 4, 4, 3, and 1. Among the identified genes, one is identified in three datasets, another is identified in two datasets, and the remainder are identified in only one. Genes identified using one dataset cannot be used to make satisfactory predictions for other datasets. For example, when the genes identified with D1 and the corresponding logistic regression model are used to predict subjects in the remaining three datasets, 20 (D2), 8 (D3), and 6 (D4) subjects cannot be properly classified.
META ANALYSIS OF MARGINAL EFFECTS
We conduct meta analysis of the marginal effects as described in the pancreatic cancer study. In Table 5, we show the marginal ranks of the MTGDR-identified genes. A few MTGDR-identified genes also have very strong marginal effects. Specifically, genes with marginal ranks 1 and 3 are identified with the MTGDR. On the other hand, there are several MTGDR-identified genes with very high marginal ranks.
Conclusion
For many types of cancers, multiple microarray experiments have been independently conducted to search for genes associated with the same clinical outcomes. Early studies have suggested that genes identified from the analysis of a single cancer microarray dataset may have low reproducibility. Among the several possible causes are the small sample sizes and the resulting lack of statistical power. A cost-effective solution is to pool multiple existing datasets with similar study designs and conduct meta analysis. The merits of meta analysis with cancer microarray data have been established in many early studies and are summarized in [6]. In this article, we have developed a new gene selection method for the meta analysis of multiple cancer microarray datasets. In terms of methodology, the MTGDR differs significantly from existing approaches. Compared to most existing meta analysis approaches, the MTGDR focuses on the selection of genes with joint effects on cancer and embeds gene selection in estimation. Thus, it can complement existing meta analysis of marginal effects and help to provide a more comprehensive description of the effects of genes. When compared to pooled analysis, the MTGDR allows for experiment-specific regression coefficients. Such a strategy is similar in spirit to the random effects approaches in conventional meta analysis. However, existing random effects approaches are designed for data with a small number of covariates and do not have built-in gene selection mechanisms. The MTGDR advances beyond such approaches by incorporating gene selection in modeling. It can automatically accommodate different experimental setups, especially different platforms. Compared to intensity approaches that seek transformations of gene expressions, the MTGDR does not need to be conducted on a case-by-case basis. In comparison to classic meta analysis approaches, the MTGDR pools and analyzes raw data instead of summary statistics and can be more informative. In addition, the MTGDR puts more emphasis on gene selection. Our simulation studies suggest that the MTGDR outperforms the meta analysis approach based on an individual-dataset gene selection method. More specifically, it is capable of identifying the same number or more of the true positives with a lower false positive rate.
In addition, the performance of the MTGDR is relatively insensitive to increases in the number of genes. Analyses of the pancreatic and liver cancer studies suggest that (a) the MTGDR is capable of identifying a small number of genes that show relatively consistent effects on cancer outcomes across multiple studies; (b) many of the identified genes have been confirmed in independent studies, and the LOO evaluation generates small classification errors; (c) the gene sets identified by the MTGDR can be considerably different from those identified by alternative approaches, and the alternative approaches have inferior performance in terms of inconsistency of identified genes across multiple studies and larger classification errors; and (d) genes identified using the MTGDR may differ significantly from genes with low ranks in the meta analysis of marginal effects. Despite its significant advancements over existing approaches, our study may have the following limitations. First, in the analysis of the liver data, inconsistent signs for a small number of genes are observed. Such inconsistency is not observed in the pancreatic data analysis or the simulation. It is possible to modify the MTGDR algorithm and force the signs to be the same across multiple studies. For example, for a specific gene, suppose that one gradient is small and negative, and the other three gradients are large and positive. We can add an additional thresholding step and set the negative gradient to zero. We choose to allow inconsistent signs, which may help raise an alarm about the comparability of the data and the applicability of the proposed approach when such inconsistency is observed. Second, in our data analysis, we are able to provide partial interpretations of the identified genes. Many of these have been confirmed in independent studies. However, for the liver cancer data, detailed information on several identified genes is not available. Since the focus of this study is to develop a new meta analysis approach, we do not further pursue the biological implications of the analysis results. Third, in the analysis, we evaluate the performance of the MTGDR using the LOO approach. With properly utilized cross validation, the evaluation and comparison with other approaches are expected to be reasonably fair. In standard logistic regression analysis, when the sample size is much larger than the number of genes, there are several other ways of evaluating the fitted model and selected covariates. For example, p-values and R² can be computed. However, we note that the validity of those evaluation criteria is established under the "sample size >> number of covariates" setting and is not applicable to microarray data, where the number of genes is much larger than the sample size. To the best of our knowledge, there is still no consensus on evaluation methods for cancer microarray meta analysis.
Methods
The MTGDR allows the regression coefficients to differ across experiments. Such a strategy has been motivated by the fixed effect models in meta analysis [10]. The rationale is that a one-unit gene expression change in experiment 1 (say, for example, a cDNA study) may not be equivalent to a one-unit change in experiment 2 (say, for example, an Affymetrix study). The regression coefficients, which measure the strength of associations, should be allowed to differ.
Data and model
We choose data with binary outcomes to describe the proposed MTGDR. We note that this method is also applicable to other types of cancer clinical outcomes, as long as statistical models and objective functions can be properly defined.
For experiment m and a binary outcome, Y_m = 1 and Y_m = 0 may denote the presence and absence of cancer or two different cancer stages, respectively. We assume the commonly used logistic regression model, which postulates that logit(P(Y_m = 1 | Z_m)) = α_m + Z_m'β_m, where α_m is the unknown intercept. Suppose that there are n_m iid observations in experiment m. The log-likelihood is R_m(α_m, β_m) = Σ_{i=1}^{n_m} [Y_{m,i}(α_m + Z_{m,i}'β_m) − log(1 + exp(α_m + Z_{m,i}'β_m))]. Since the intercept α_m is usually of little interest, for simplicity we write R_m(α_m, β_m) as R_m(β_m).
MTGDR method
The MTGDR is a gene selection method. It embeds gene selection in the construction of regression models. Gene selection then amounts to identifying the nonzero components of the regression coefficients β_m. The algorithm proceeds as follows:
1. Initialize β_m = 0 for all experiments m.
2. For each experiment m, compute the gradient of R_m(β_m) at the current estimate.
3. Compute the meta gradient as the sum of the gradients across the M experiments.
4. Compute the meta threshold vector from the meta gradient, with components equal to 1 for genes whose meta gradients survive the threshold determined by τ and 0 otherwise.
5. Update the estimates β_m for the selected genes only, moving each along its own gradient by a small increment.
6. Steps 2-5 are iterated k times, where k is determined by cross validation.
In Step 1, the MTGDR algorithm starts with the zero estimates (i.e., no gene is identified as cancer-associated). In Step 2, the gradients are computed for each individual dataset. Genes with stronger effects on cancer outcomes will have larger gradients. In Step 3, the meta gradient, which is defined as the sum across different experiments, is computed. It evaluates the overall effects of genes on cancer outcomes across multiple experiments. For example, consider that gene 1 shows only a large positive effect in experiment 1 and no effects in other experiments, whereas gene 2 shows moderate negative effects in all experiments. Then the sum of gradients for gene 2, which measures the overall effect across multiple experiments, may be larger than that for gene 1. Gene 2 is thus more likely to be selected, since consistent effects are demonstrated across experiments. In Step 4, a meta threshold vector is computed. With this vector, when a gene is selected, it is selected in all models across multiple experiments. In Step 5, we update the MTGDR estimates for only those selected genes. In addition, by allowing for different gradients across multiple studies, the MTGDR allows for different estimates (and, hence, different models) for different experiments. The tuning parameters τ and k jointly determine the properties of β and of the gene selection. When τ ≈ 0, β is dense even for small values of k (i.e., many genes are selected). When τ ≈ 1, β is sparse for small k and remains so for a relatively large number of iterations, though it eventually becomes dense. At the extreme, when τ = 1, the MTGDR usually updates the estimate for a single gene at each iteration, which is similar to stage-wise approaches. When τ is in the middle range, the characteristics of β fall between those for τ = 0 and τ = 1. For τ ≠ 0, gene selection can be achieved with cross-validated, finite k by having certain components of β exactly equal to zero. As can be seen, the MTGDR involves only simple calculations and can be implemented in many existing software packages. In our study, research software has been developed using R and is available at [34]. The MTGDR has been partly motivated by the TGDR [35]. The two approaches share a similar thresholding scheme. However, the MTGDR differs significantly from the TGDR by analyzing multiple datasets. When analyzing a single dataset with the TGDR, the effect of a gene can be represented by a single number: its regression coefficient. However, when multiple datasets are present, the effect of a gene needs to be considered across multiple studies and represented with a vector of regression coefficients. Loosely speaking, the TGDR conducts the selection of individual coefficients, whereas the MTGDR conducts the selection of groups of coefficients. Although intuitively simple, the extension from individual selection to group selection has been shown to be highly nontrivial.
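A compact sketch of the iteration is below. Two details are assumptions on our part rather than statements from this text: the meta gradient is taken as the sum of absolute per-experiment gradients, and the threshold keeps genes whose meta gradient is at least τ times the largest one (the usual TGDR-style thresholding). Treat it as an illustration of the scheme, not the authors' exact implementation.

import numpy as np

def logistic_grad(x, y, beta):
    # gradient of the logistic log-likelihood R_m (intercept omitted for brevity)
    p = 1.0 / (1.0 + np.exp(-(x @ beta)))
    return x.T @ (y - p) / len(y)

def mtgdr(datasets, tau=0.5, k=500, delta=1e-3):
    d = datasets[0][0].shape[1]
    betas = [np.zeros(d) for _ in datasets]
    for _ in range(k):                                     # k chosen by cross validation
        grads = [logistic_grad(x, y, b) for (x, y), b in zip(datasets, betas)]
        meta = sum(np.abs(g) for g in grads)               # meta gradient (assumed form)
        thresh = (meta >= tau * meta.max()).astype(float)  # meta threshold vector
        for m, g in enumerate(grads):
            betas[m] = betas[m] + delta * thresh * g       # update selected genes only
    return betas  # shared support across studies, study-specific coefficient values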
Tuning parameter selection
We use V-fold cross validation to select the optimal k and τ. For τ = 0, 0.05, ..., 0.95, 1, we search over k to maximize the V-fold cross validation objective function, which can be defined following [20]. The V-fold cross validation also provides partial protection against over-fitting. In this study, we set V = 3, mainly in consideration of the small sample sizes.
A graphic demonstration
We use the following numerical example to demonstrate the MTGDR parameter paths. For m = 1, 2 and 3, we generate data from
2014-10-01T00:00:00.000Z
2009-01-01T00:00:00.000
{ "year": 2009, "sha1": "b3b965c4daedc7980d595f189b622b416276bf23", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-10-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1be7b1e8c1e6b77d85684821b29d79360b5999ae", "s2fieldsofstudy": [ "Biology", "Computer Science", "Medicine" ], "extfieldsofstudy": [] }
196927080
pes2o/s2orc
v3-fos-license
RGD conjugated cell uptake off to on responsive NIR-AZA fluorophores: applications toward intraoperative fluorescence guided surgery
The tumour switches on the NIR-AZA emission for fluorescence guided surgery.
Introduction
Intraoperative fluorescence imaging to guide surgical resections in real-time has huge untapped potential. Advantages lie in its ease of use, enhanced safety profile over radiolabelling and the ability to acquire image data in real-time during surgical procedures. 1 Currently indocyanine green (ICG) is the sole clinically approved near infrared (NIR) fluorophore. 2 Clinical uses include vascularisation assessments during reconstructive 3 and bowel anastomoses 4 surgeries and lymph node mapping in digestive tract, 5 cervical 6 and breast 7 tissues. Due to its non-specificity and very short in vivo half-life, its use as an agent to demarcate tumour boundaries for surgical resection is limited to hepatocellular carcinoma of the liver. 8,9 As a result, new classes of NIR-fluorophore have recently emerged, several of which are bio-conjugated to enhance their affinity for specific cancer types. 10 However, a remaining complexity for in vivo fluorescence imaging using molecular fluorophores exists. Following intravenous administration, fluorophore distributes to all vascularised regions within seconds, resulting in a strong non-specific fluorescence. This necessitates an unpredictable time delay to allow background fluorophore clearance, following which imaging is achievable if sufficient fluorophore is retained in the region of interest (ROI) (Fig. 1a). This limitation applies irrespective of whether the fluorophore alone is used or is conjugated to a targeting group (e.g. antibody), as an initial broad distribution will still occur. The time between administration and imaging depends on several parameters such as rates of accumulation and clearance from both the ROI and surrounding tissues and elimination from the body via metabolic and excretion pathways. Each of these factors can be influenced by the structure of the fluorophore itself and by groups conjugated to it, but a time lag before imaging is unavoidable. To provide sufficient contrast for imaging, it is necessary to identify an optimal time point at which a maximum quantity of fluorophore is retained in the ROI with a minimum remaining in the surrounding tissues. For example, antibody conjugated fluorophores have been adopted in recent clinical trials for visualising breast and colorectal cancers utilising labelled bevacizumab and carcinoembryonic antigen (CEA) respectively. 11,12 Yet in spite of using these expensive cancer-specific antibody technologies, fluorescence images could only be acquired between two and four days post administration. The prolonged waiting period to achieve sufficient tissue contrast is due to the very long biological half-lives of antibody labelled agents. This time delay adds significant uncertainty to their practical use and raises doubts as to whether all of the cancer would then be detectable by the low levels of remaining fluorophore. In effect, what makes large molecular weight antibodies attractive for sustained drug delivery can work against them when used for the delivery of contrast agents (Fig. 1a). Thus, for rapid and accurate intraoperative imaging, innovative alternative approaches are needed to enhance the target-to-background signal ratio at early stages following fluorophore introduction.
One plausible solution is to exploit a mechanism of selective fluorescence quenching in the background areas, whilst first establishing the emitting potential of the fluorophore in the ROI (Fig. 1b). This overcomes the issue of waiting for background clearance and allows observation of dynamic tissue accumulation in real-time during the course of the surgical procedure. While beyond the scope of this work, it may become feasible that dynamic images are recorded continuously, with tissue classifications determined using in-line software image analysis. In our previous work, we have shown that bio-responsive NIR-AZA fluorophore 1 performs as an excellent probe capable of real-time continuous imaging of fundamental cellular processes such as endocytosis, lysosomal trafficking and efflux (Fig. 2). 13a Specifically, the highly photostable NIR fluorescent probe 1 has off/on fluorescence switching controlled by a reversible phenol/phenolate interconversion (Fig. 2). Emission from the probe was shown to be highly selective for cellular lysosomes and, as the off/on switching mechanism is reversible, it is capable of real-time continuous imaging of lysosomal trafficking in 3D or 4D over prolonged time periods without perturbing normal cellular function. 14 Preliminary in vivo imaging in a mouse tumour xenograft model showed good tumour discrimination 24 h post i.v. injection of 1 with no observable toxicity. 13a These positive in vitro and in vivo features are good indicators that bio-responsive NIR-AZA fluorophores warrant further investigation for translation towards clinical use in fluorescence-guided surgery. In recent preclinical tests, always-on NIR-AZA fluorophores have shown their potential for lymph node mapping and ureter identification using clinical instrumentation. 15 In this report, we describe the synthesis, photophysical characterisation, and in vitro and in vivo imaging assessment of bio-responsive NIR-AZA fluorophores conjugated to cyclic-RGD peptide sequences and a polyethylene glycol polymer, acting as active or passive targeting agents respectively.

Breast cancer is a key health concern for women, with over two million new cases diagnosed worldwide annually. Screening programs have resulted in most breast cancers being identified in the early stages, with over 80% of breast cancer patients undergoing surgery as part of their treatment. Numerous trials have shown that for patients with between zero and three node metastases, breast-conserving surgery has similar or superior outcomes to mastectomy. 16 As tumour-free surgical margins are critical to the success of breast-conserving surgery, utilising fluorescence guidance to improve surgical outcomes could have significant patient benefit.

Integrins are membrane bound cell adhesion receptors important for cell-cell and cell-extracellular matrix (ECM) interactions. They act as transmembrane linkers between extracellular ligands such as ECM proteins, growth factors, matrix degrading proteins and the cytoskeleton, which serves to modulate various essential signalling pathways in most cells. 17 Integrins such as αvβ3 and αvβ5 (among others) are known to play a key role in tumour angiogenesis and are associated with the metastasis of solid tumours. 18 Of the integrins, αvβ3 is one of the most studied, as it is the most prevalent integrin involved in the regulation of angiogenesis and is widely expressed on tumour blood vessels. 19
Over-expression of αvβ3 integrin has been associated with increased tumour growth in breast cancer, and it has been shown that the activation of αvβ3 is a contributing factor for metastasis in breast cancer models. 20 The tripeptide arginine-glycine-aspartic acid (RGD) sequence can recognise and bind αvβ3 and αvβ5 integrins and promote cellular internalisation, with conjugates of the more stable cyclic variant c(RGDfK) being widely investigated as a selectivity enhancer for tumour therapies and diagnostics. 21 The related iRGD peptide sequence (cCRGDKGPDC) has been reported to provide both specific integrin targeting and increased tumour uptake and penetration. It contains the RGD motif, which mediates binding to the endothelial cell membrane expressing the αv integrins, but upon proteolytic cleavage a second neuropilin-1 (NRP-1) binding motif (CRGDK) is revealed to promote internalisation. 22 Both RGD and iRGD conjugates have been investigated to improve drug selectivity, with chemotherapeutic conjugates such as RGD-doxorubicin and RGD-paclitaxel showing promising preclinical results in breast carcinoma mouse models. 23 RGD conjugates for fluorescent imaging using always-on probes have been explored in several preclinical models, including breast cancers. 24 For this study we have chosen one cRGD (c[RGDfK(PEG-PEG)]) and one iRGD (cCRGDKGPDC) peptide sequence for conjugation to the bio-responsive NIR-AZA imaging platform. It was hoped that these low molecular weight peptides would promote rapid uptake and switch-on of emission preferentially within tumours, allowing a high tumour to background ratio (TBR) to be established without waiting for prolonged clearance times. In practice, it is envisaged that they would be administered and visualised intraoperatively, thereby not impeding the normal surgical or hospital workflow.

Results and discussion

Incomplete tumour removal during surgical resection is closely related to cancer recurrence and patient survival rates. A major challenge in achieving cancer-free margins is to fully distinguish between all of the cancerous growth and normal tissue during surgery. While high definition images obtained by PET, CT or MRI scans identify and diagnose tumour growths prior to surgery, such images are not overly useful for guiding surgical resection during the operation. Currently, tumour margins are typically assessed by visual assessment and palpation of the tumour intraoperatively. However, the possibility of micro-invasion of the surrounding tissues can make it difficult to determine an adequate tumour-free excision margin. In this report, we have developed synthetic routes to RGD conjugated bio-responsive fluorophores, examined their photophysical and in vitro cellular emission profiles and tested their in vivo tumour imaging performance using a human breast tumour model in mice.

Synthesis and characterisation

Cell uptake responsive probes 2 and 3 were selected for synthesis, using activated ester/amine coupling to conjugate the cRGD sequence and cysteine to maleimide addition for the covalent linkage of the iRGD peptide sequence (Fig. 3). Two bio-conjugation approaches were adopted to confirm the synthetic flexibility of NIR-AZA bio-fluorophores towards functionalisation with targeting moieties. Synthesis of 2 required fluorochrome 4, which has been previously reported, though only in reaction with an amino-pegylated polymer to produce 1 (Scheme 1). 13a
For this study, the amino-pegylated substituted cRGD substrate 5 (cyclo[Arg-Gly-Asp-D-Phe-Lys(PEG-PEG-NH2)]) was selected, as it is known to be an ideal construct for housing the integrin recognising tripeptide sequence (Scheme 1a). The reaction of 4 and 5 in DMSO at rt was followed by HPLC and 1H NMR, which showed a clean conversion to conjugate 2 in 4 h. Transformation of activated ester 4 into conjugate 2 was clearly distinguishable by the shift of the key methylene 1H NMR peak from 5.53 to 4.61 ppm (Scheme 1b). Purification of the product was achieved using preparative reverse phase HPLC and the structure confirmed by high-resolution MS and NMR methods. The generation of iRGD conjugate 3 first required the synthesis of the corresponding maleimide-substituted fluorochrome 7 (Scheme 2). This was readily achievable starting from the previously reported derivative 6, which was subjected to the nitration conditions of KHSO4/KNO3 at reflux in CH3CN/H2O to yield the o-nitro phenol substituted substrate 7. 25 Synthesis of iRGD peptide 8 followed literature procedures to produce (cCRGDKGPDC), which was coupled with N-acetyl protected cysteine, through the amine of the lysine residue, to provide the final thiol substituted peptide. 26 Bio-conjugation via cysteine to maleimide addition was efficiently achieved by reaction of 7 with iRGD 8 in DMSO at rt for 30 min to produce the required derivative 3 (Scheme 2). Product purification utilised preparative reverse phase HPLC, with the structure confirmed by the usual analytical methods (ESI†). To allow comparisons to be made between bio-responsive and non-responsive conjugates, the iRGD substituted always-on control 10 was synthesised. Conjugate 10 is similar in structure to 3 but has the o-nitro phenol fluorescence switching substituent replaced by a water solubilizing alkylsulfonic acid group (Scheme 3). The synthetic route adopted to make this control utilised the reaction of the known fluorochrome 9 with peptide 8 to produce 10. 25 The cysteine/maleimide coupling proceeded smoothly in phosphate buffered saline (PBS) at pH 7.2, with iRGD conjugated 10 obtained in good yield (Scheme 3, ESI†). To test the extent of advantage gained from utilising integrin targeting RGD peptide conjugates such as 2 and 3 versus a passively accumulating agent such as a PEG group, the pegylated bio-responsive 1 was also included in the study as a comparative control (Fig. 2). This, we envisaged, would allow a direct imaging performance evaluation between bio-responsive fluorophores using either active targeting peptides or the passive enhanced permeability and retention (EPR) effect of a PEG group.

Photophysical properties

The photophysical properties of the bio-responsive fluorophores 2 and 3 and the always-on control 10 were studied in solutions of Dulbecco's modified eagle's cell medium (DMEM) containing 10% fetal bovine serum (FBS). Absorption and emission wavelengths are listed in Table 1 and are, as would be expected for the NIR-AZA class, in the 690-730 nm range. At pH 7.4 the fluorescence intensity of always-on 10 was 17- and 15-fold greater than that of 2 and 3 respectively (Table 1). This illustrates both the emissive potential of the peptide conjugates and the ability to quench the bio-responsive derivatives in relevant biological media (Table 1, spectra and inset). In order to display the responsive nature and full emissive potential of 2 and 3, their DMEM solutions were sequentially acidified (Fig. 4a and b).
This caused a successive increase in emission intensity as acidity increased, with a maximum intensity reached at approximately pH 4. Plotting the measured data revealed pKa values of 4.9 for both 2 and 3, which is consistent with the previously reported value of 4.6 for 1 (Fig. 4 insets, Fig. S1-S3†). 13a This shows that the conjugating group does not overly influence the important o-nitro phenol emission-controlling feature. While it is recognized that the extracellular matrix of a solid tumour can be more acidic than normal tissue, intracellular organelles such as late endosomes and lysosomes are also acidic, ranging between pH 4.5 and 5.5. 27 Encouragingly, the measured fluorescence enhancement factor (FEF) for 2 and 3 between pH 7.2 and pH 4.5 (as found in lysosomes) was 23 and 18 respectively (Fig. 4c). As such, a switch-on of emission could be expected to occur both in the localised extracellular tumour microenvironment and upon cancer cell uptake. As emission quenching in the off states of 2 and 3 at pH 7.2 is highly effective, good background to noise differentials could be anticipated. In contrast, control fluorophore 10 showed no absorption or emission spectral changes between pH 7.2 and 4.5 (Fig. S4†).

[Scheme 2: Synthetic route to bio-responsive iRGD NIR-AZA conjugate 3.]

[Table 1 footnotes (fragment): to allow a common comparison for each fluorophore solution across a wide acidity range, 4 mM Triton X-100 was included in each solution; fluorescence spectra taken at 5 μM concentration, 2 (black spectra), 3 (grey spectra), 10 (red spectrum); inset shows an expansion of the spectra for 2 and 3.]

The next stage involved testing 2, 3 and control 10 in live cell imaging using the epithelial human breast cancer cell line MDA-MB 231, which is known to express membrane integrins including αvβ3 and αvβ5. MDA-MB 231 is a highly aggressive triple-negative cell line, with its invasiveness mediated by proteolytic degradation of the extracellular matrix. 28 The metastatic invasive nature of MDA-MB 231 cells is closely associated with large acidic vesicles (LAV), in which endocytosed extracellular matrix can be digested by activated lysosomal proteinases such as cathepsin. 29 As these intracellular LAVs have a pH of approximately 4, they too could activate the bio-responsive fluorophores upon cancer cell uptake, in addition to lysosomes.
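The pH response just described is compactly summarised by a Henderson-Hasselbalch style fit. The sketch below is a minimal, hypothetical illustration rather than the fitting procedure actually used in this work: it fits a phenol/phenolate sigmoid to intensity-versus-pH data to extract an apparent pKa, then computes the fluorescence enhancement factor between pH 7.2 and 4.5. The titration values in the example are invented placeholders, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def phenol_switch(ph, i_on, i_off, pka):
    """Emission model for a phenol/phenolate off/on switch: the
    protonated (phenol) fraction carries the 'on' intensity."""
    frac_on = 1.0 / (1.0 + 10.0 ** (ph - pka))
    return i_off + (i_on - i_off) * frac_on

# Invented placeholder titration data (arbitrary intensity units).
ph = np.array([3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.4])
intensity = np.array([98.0, 95.0, 80.0, 55.0, 30.0, 14.0, 8.0, 6.0, 5.0])

(i_on, i_off, pka), _ = curve_fit(phenol_switch, ph, intensity, p0=[100.0, 5.0, 5.0])
fef = phenol_switch(4.5, i_on, i_off, pka) / phenol_switch(7.2, i_on, i_off, pka)
print(f"apparent pKa = {pka:.2f}, FEF(pH 4.5 vs 7.2) = {fef:.1f}")
```

With data of the shape reported here (a plateau near pH 4 and strong quenching near pH 7), a fit of this form returns an apparent pKa close to 5 and an FEF in the tens, in line with the values of 4.9 and 23/18 quoted above.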
Aer 60 min incubation, both cell membrane and intracellular vesicle staining of the cytoplasm could be observed, both of which persisted at 120 min ( Fig. 5b and c). A wider eld of view showing a larger number of cells and Z-stack images can be seen in Fig. S5, S6 and Movie S2. † Revealingly, the bio-responsive NIR-AZAs 2 showed no cell membrane staining in the rst 15 min of incubation and only following this time point could intracellular regions of uorescence be observed (Fig. 6a). The intracellular punctate staining pattern is consistent with those previously observed for 1 and are due to a selective bio-responsive switch-on of emission within the acidic vesicles of the cytoplasm. 13a,14 The images revealed two distinct vesicle sizes, the smaller of which are attributable to cellular lysosomes and the bigger LAVs specic to the metastatic nature of MDA-MB 231 cells (Movie S3 and S4 †). At 60 and 120 min the intracellular uorescence intensity increased, but at no point was membrane uorescence observed (Fig. 6b, c and Movie S5 †). This lack of plasma membrane uorescence shows that 2 can translocate across the membrane without uorescence being activated. A wider eld of view showing a larger number of cells can be seen in Fig. S7. † Z-Stack analysis of cells conrmed that regions of uorescence were within the cytoplasm (Fig. S8 and Movie S6 †). Similar results were obtained from imaging experiments using bio-responsive iRGD NIR-AZA 3 which can be seen in Fig. S9, S10 and Movie S7. † In addition, similar results were obtained from imaging experiments with HeLa Kyoto cells using bio-responsive 2 which can be seen in Movie S8. † The different cell staining patterns between 10 and 2, 3 over time shows the delity of the uorescence switching and the potential signal to background contrast advantage of the bioresponsive NIR-AZA probes. The next challenge of this work was to examine if a preferential in vivo switch-on of bioresponsive NIR-AZAs in cancerous tumour could allow both early (due to initial switch-on) and later (due to retention of switched on probe) stage discrimination of tumour from background. In vivo tumour imaging For this study, the human breast cancer cell line MDA-MB 231 was selected for its relevance to clinical forms of aggressive breast cancers for which the rst line of treatment is oen surgical resection. The ability of bio-responsive conjugates 1, 2 and 3 were tested using subcutaneous tumours grown in nude mice. Fluorophore 10 was also included as a positive control in the study to demonstrate the advantage of using off to on responsive uorescence over a constant emission. It was anticipated that 1, 2 and 3 would remain predominately uorescent silent until tumour uptake occurred causing a uorescence signal modulation to on. Experimental measurement of changes in tumor-to-background ratio (TBR) over time was the preferred method to quantify differences between bioresponsive and always-on uorophores. Pegylated bioresponsive 1 was included to compare the turn-on time differences between passive PEG and the active integrin targeting of 2 and 3. The expectation being that the larger EPR dependent PEG conjugate would be slower. Each uorophore was subjected to in vivo analysis using a standard dosing set at 2 mg kg À1 delivered by i.v. tail vein injection. Post injection, images were acquired initially at regular intervals between 10 min and 9 h and thereaer less frequently at 24, 48 and 96 h. 
The method used for image analysis was consistent across all experiments, with TBR values calculated by measuring tumour ROI fluorescence against the average of three equally sized background ROI regions, two of which were close to and one distant from the tumour (Fig. S11†). In previously reported preclinical studies, when imaging through the skin, a TBR of two was shown to be a clinically relevant threshold. 24a,30 As such, we adopted this value as a point of reference to compare results from different fluorophores and from different time points for individual fluorophores.

For the always-on iRGD control 10, from 10 to 60 min post i.v. injection an immediate strong and non-specific fluorescence was observable throughout the animals, with no discernible bias for tumour, as demonstrated by the measured TBRs of below 1.3 (Fig. 7a). The TBR marginally improved over the following 2 h, with a value of 1.5 achieved at 3 h post administration. While it is likely that 10 had begun to accumulate at the tumour site, the cancerous ROI is not readily distinguishable from the background fluorescence (Fig. 7c; see Fig. S12 in ESI† for additional time point images). By the 6 h time point the TBR had improved further to 1.9, though it was not until 24 h post administration that a TBR above 2 was obtained. By this time, the overall fluorescence intensity had dropped approximately 75% from its peak (Fig. 7b). The TBR value of 2 was maintained out to 48 h as the emission intensity further decreased (90% below peak), and by 96 h it had fallen below the threshold. This sequence of TBR values comes about due to an initial distribution through normal and cancerous tissues, followed by a faster clearance of 10 from normal tissue with retention within cancerous tissue. The sequence of images shown in Fig. 7c illustrates the general challenge facing always-on fluorophores, regardless of whether they are substituted with cancer specific targeting agents or not. As the accumulation and clearance of fluorophore from normal and cancerous tissues are both dynamic processes, the success or failure of an always-on probe relies on identifying the time point at which uptake and clearance for the different tissue types are most divergent from each other. This poses significant challenges for their use in surgical oncological practice with respect to patient-to-patient variances and complex hospital scheduling.

[Figure caption fragment (cf. Fig. 8 and 9): first image shows tumour ROI (solid circle) and three background ROIs (dashed circles).]
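For concreteness, the TBR calculation used throughout this section, the mean tumour ROI intensity divided by the average of the three background ROI means, reduces to a few lines of code. This is a minimal sketch under assumed inputs (a 2D intensity image and boolean ROI masks drawn by the analyst); it is not the analysis software actually used in the study.

```python
import numpy as np

def tbr(image, tumour_mask, background_masks):
    """Tumour-to-background ratio: mean intensity inside the tumour ROI
    divided by the average of the mean intensities of the background
    ROIs (per the text: two near the tumour, one distant)."""
    tumour_mean = image[tumour_mask].mean()
    background_mean = np.mean([image[m].mean() for m in background_masks])
    return tumour_mean / background_mean

# Hypothetical usage: `frame` is a 2D fluorescence image; the masks are
# boolean arrays of the same shape.
# ratio = tbr(frame, tumour_roi, [bg_roi_1, bg_roi_2, bg_roi_3])
# if ratio >= 2.0:  # the clinically relevant threshold cited above
#     print("tumour distinguishable from background")
```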
Analysis of the image timelines for bio-responsive RGD NIR-AZA 2 showed remarkable differences from control 10 (Fig. 8). As 2 is administered in solution at pH 7.2, it is non-fluorescent, and it remained virtually fluorescent silent within the vasculature immediately post injection (Fig. 8b; see Fig. S13 in ESI† for additional time point images). At 60 min post injection, 2 had begun to accumulate at the tumour region and the fluorescent signal had turned on, giving a measured TBR of 1.4, with some background also observed from the adjacent liver (Fig. 8a and c). By 3 h a significant 4.1-fold increase in tumour fluorescence intensity had occurred, along with a jump in TBR surpassing the threshold of 2. The TBR reached a maximum value of 2.5 at 6 h and maintained this level until 24 h. Importantly, emission intensity from the tumour reached a maximum within 3 h and there was no reduction in intensity between 3 and 6 h, coinciding with when the TBR was at its maximum (Fig. 8b and inset). Encouragingly, even at 9 h only a ~20% intensity reduction had occurred, which provides a wide time frame in which tumour visualisation could be achieved (Fig. 8a). The ability of 2 to effectively stain tumour shortly after administration can be judged by the sequence of images shown in Fig. 8c. This we view very positively, as not only is the threshold reached quickly, it is maintained for a prolonged time. This fits well with a clinical surgical workflow whereby the contrast agent could be administered at the start of surgery with intraoperative tumour visualization possible.

Similarly, responsive iRGD NIR-AZA 3 also had emission suppressed at the start of imaging, with very low fluorescence at 20 min and tumour accumulation evident at 60 min with a TBR of 1.4 (Fig. 9c; see Fig. S14 in ESI† for additional time point images). Between 20 min and 3 h there was a 4.7-fold increase in tumour fluorescence intensity (Fig. 9b and inset). Yet, in comparison to 2, the extent of background signal was larger, though not brighter than the tumour itself, and there was a longer delay, until 6 h, before the TBR reached 1.7 (Fig. 9a and c). Disappointingly, the TBR never exceeded the threshold value of 2, with the best value of 1.8 recorded 48 h post injection.

Finally, the pegylated bio-responsive 1 was studied to determine the influence of the conjugating group on the time taken to reach maximum tumour fluorescence and TBR. Again, emission remained off in the beginning, with a similar, but considerably time delayed, profile to that of 2. TBRs remained low (~1.2) for the first 60 min, then rose to 1.6 and 1.8 at 6 and 9 h respectively. It took 9 h to reach maximum tumour fluorescence intensity, with this level remaining relatively unchanged at 24 h. Pleasingly, the TBR threshold of 2 was reached at 24 h and was maintained for a further 48 h. Comparing the changing TBRs of pegylated 1 and RGD 2 over time illustrates that substituting with the peptide sequence provides considerably faster tumour accumulation. If clinically adopted, the slower time frame of 1 would most likely require its administration 24 h before surgery, though its long retention within the tumour may provide a prolonged window in which it could be imaged.

To complete this study, further in vivo tests were performed on the most promising derivative, RGD NIR-AZA 2. To gain insight into the initial rate of tumour fluorescence turn-on, images were acquired every 10 min for 3 h immediately after introduction of 2. To establish that the RGD substituent was influencing the rate of tumour uptake, competitive binding or blocking experiments were also carried out. Experimentally this was achieved by first administering an i.v. tail injection of the RGD peptide 5 (6.8-fold equivalence excess) and, following a short time period (5 min), 2 was then administered. It would be expected that the first administration of RGD 5 would result in the integrin receptors being bound by the free peptide, such that when NIR-AZA 2 was next introduced there would be a reduced fluorophore uptake and as such a lower switch-on of fluorescence. In order to make direct comparisons, pairs of mice with closely matching sized tumours were selected for each experiment (n = 4 pairs).

[Fig. 11 Competitive binding studies using RGD 5 and RGD NIR-AZA 2 (n = 4 pairs). (a) Plots of increasing tumour intensity over 3 h post administration for 2 (red trace) and for 5 followed by 2 (black trace). (b) Plots showing the rate of increasing fluorescence for the first 60 min post administration for 2 (red trace) and for 5 followed by 2 (black trace). (c) Imaging of resected tumours before and after dissection from animals treated with 2 and with 5 followed by 2.]
One animal was first given RGD 5, then both animals were administered with 2, following which fluorescence images of both animals were taken every 10 min for the following 3 h. Averaged results from the four experiments showed a 1.75-fold reduction in the total tumour fluorescence intensity after three hours for the mice which were first treated with the peptide 5 prior to receiving the NIR-AZA conjugate 2, versus the mice which received only 2 (Fig. 11a). Additionally, the rate of fluorescence turn-on in the first 80 min within the tumour that was not exposed to free RGD peptide was 1.9-fold higher than in that exposed to the competing peptide (Fig. 11b). These results confirm that the peptide substituent of the RGD-fluorophore 2 positively influences tumour accumulation immediately following administration, which facilitates early time point imaging. Upon completion of imaging, tumours were resected from an animal pair, with quantification of their fluorescence intensities showing a 3.9-fold suppression of emission intensity from the RGD peptide pre-treated animal (Fig. 11c and S15†). Encouragingly, dissection and imaging of the tumours showed that the fluorescence intensity was highest at the outer boundary of the tumour, which would be most beneficial for operative identification of the full extent of tumour margins (Fig. 11c). Finally, analysis of the fluorescence turn-on profile of 2 alone showed that intensity had reached a near maximum at 80 min (Fig. 11a and b, red traces). This indicates that 2 could be administered at the start of a surgical procedure, with intraoperative tumour visualization taking place without significantly impeding the normal surgical workflow.

Conclusion

In summary, three bio-responsive NIR-AZA fluorophore constructs have been synthesised, conjugated to either active (RGD) or passive (PEG) tumour targeting groups, and their photochemical, cellular and in vivo properties compared with an always-on fluorescent control. Each bio-responsive derivative showed excellent off to on fluorescence switching characteristics with large enhancement values. In vitro live MDA-MB 231 cell imaging experiments showed internal acidic organelle staining with the responsive probes 2 and 3, contrasting with the always-on derivative 10, which showed plasma membrane staining before internal organelle staining. This result proves that the fidelity of fluorescence switching is maintained in cellular experiments and is independent of the conjugating group. A comprehensive in vivo assessment of tumour imaging performance for the bio-responsive probes 1, 2 and 3 and the always-on derivative 10 was conducted, with monitoring of the fluorescence distributions over 96 h following administration. As anticipated, the always-on 10 gave an immediate, non-specific and very strong emission profile throughout the animals, whereas the bio-responsive 1, 2 and 3 displayed relatively very low initial fluorescence. In the case of 10, clearance from normal tissue, with accumulation and retention in tumour, allowed a TBR above 2 to be reached between 9 and 24 h. All three bio-responsive derivatives switched on within tumours at time points consistent with their conjugated targeting groups.
cRGD 2 and iRGD 3 both had effective switch-on in the first hour, though 2 had superior specificity for tumour compared with 3. Probe 2 achieved the threshold TBR value of above 2 within 3 h, and this was maintained for a further 24 h. The pegylated 1 had similar but slower turn-on characteristics, taking 9 h to reach maximum fluorescence from the tumour. Despite the slower accumulation, its retention was biased towards the tumour tissue, with the threshold TBR value being reached at 24 h and maintained out to 96 h. The side-by-side imaging comparison of 1 and 2 is an important and unique illustration of the dynamic differences between passive EPR and active targeting in action. Overall, the cRGD conjugate 2 has been identified as showing excellent potential for clinical translation for intraoperative fluorescence guided tumour margin identification. Its bio-responsive nature, with early accumulation at the tumour periphery, may overcome the inherent drawback of always-on fluorophores requiring prolonged clearance times. The pegylated derivative 1 also offers potential for clinical translation, though its slow switch-on rate may ultimately limit its clinical scope. The next ongoing stage of this research is to record continuous NIR-fluorescence video of the bio-responsive turn-on at tumour margins, to gather more kinetic data on the tissue dependent rates of emission increase over the first 90 min. This real-time data will be utilised in conjunction with specifically developed algorithms for dynamic image analysis that could provide the surgical team with an augmented reality (AR) heat map representation of the tissue to be excised during the operation. An intraoperative use of dynamic fluorescence tissue imagery combined with AI analysis and a clinical AR interface has the potential to transform surgical practice.

Conflicts of interest

DOS declares the following competing financial interest: patents have been filed on BF2-azadipyrromethene based NIR fluorophores (EP2493898 and US8907107) in which he has a financial interest.
Fluctuations in the momentum of growth within the capitalist epoch

This paper reviews the history of cyclical and long wave analysis and examines the evidence on changes in the momentum of economic growth in 16 advanced capitalist countries from 1820 to 2001. It assesses the work of the main Business Cycle Research Institutes in Western Europe, the USA and Russia, as well as that of Kondratieff, Kuznets, Schumpeter, Abramovitz and the long-wave revivalists: Rostow, Mandel and Mensch. It concludes that the existence of regular long-term rhythms in capitalist development is not proven, but distinguishes major changes in the momentum of growth due to disturbances of an ad hoc character. The role of system shocks and historical accidents is important, but the role of policy error and success is also emphasised. It identifies five major phases of capitalist development since 1820.

Analysis of business cycles

Cyclical analysis started in 1856 with Clement Juglar (1819-1905) and in 1862 with Jevons (1835-1882). Both emphasized periodicity in economic activity, whereas most earlier writers interpreted interruptions to growth as random financial crises. Jevons' work was largely concentrated on English experience and had an idiosyncratic emphasis on the influence of sun-spots (1). Juglar's analysis was comparative. He concluded that cycles were roughly synchronous in France, the UK, and the USA. The evidence he assembled related mainly to monetary phenomena: expansions or contractions in central bank activity, rates of interest, prices of key commodities, etc., plus narrative 'business annals'. It is frequently asserted that Juglar found cycles of a characteristic length of 9 years, but this is not in fact true. His cycles for France averaged 7 years with a range from 3 to 18 years, and for the UK 6 years with a range from 2 to 10 years. For several decades the quantitative indicators available to cyclical analysts were similar to those used by Juglar, though they were later augmented to include price indices and data on output and foreign trade. A more sophisticated causal analysis (under-consumptionist) was developed in 1894 by the Russian economist Mikhail Tugan-Baranowsky (1865-1919) in his analysis of cycles in the UK.

In the 1920s, institutes were set up in several countries to measure and interpret variations in current economic activity, in historical perspective. These included the Conjuncture Institute in Moscow and the National Bureau of Economic Research (NBER) in New York, both set up in 1920. The London and Cambridge Economic Service started its Monthly Bulletin in 1923. In 1925, Ernst Wagemann, the President of the German Statistical Office, set up an officially financed Institut für Konjunkturforschung in Berlin, with a staff of 50, to brief decision-makers in the public and private sectors with an up-to-date analysis of the economic situation (see Tooze 1999). In 1933 it became the Deutsches Institut für Wirtschaftsforschung. The Institute of Statistics in the Sorbonne in Paris, the Institutes of Statistics in the Universities of Rome and Padua presided over by Corrado Gini, and the League of Nations Committee of Experts on Economic Barometers started work in 1926. The Österreichisches Institut für Konjunkturforschung, directed by Friedrich von Hayek, started in Vienna in 1927 (see Mitchell 1927, pp 201-202).
At this time, most of these countries did not have national accounts (except Germany, where they were initiated in 1929, and the USA from 1934), so their basic analytical tools were not macro-measures of output, expenditure and income but miscellaneous indicators of prices, financial transactions, commercial activity, and the output of various agricultural, mining and manufacturing products. They concentrated on business activity and generally ignored the role of government. The heterogeneity of their indicators was a substantial problem in measuring the cross-country synchronicity of business cycle movements.

The ultimate refinement in this type of business cycle analysis was the massive effort of the NBER, under the successive leadership of Wesley Mitchell (1874-1948) and Arthur Burns (1904-1987). The first phase of their research was the preparation (a) of a huge databank of statistical indicators and charts on production, construction, transport and communication, wholesale, retail and foreign trade, wages and employment, currency, banking and financial transactions (see Mitchell 1927), and (b) of narrative chronologies of past cycles (business annals). The annals were compiled by Willard Thorp (1926), in liaison with associates in other countries, including von Hayek, Aftalion, R. Kuczinski, and Kondratieff; they provided a cyclical periodisation for 17 countries: for the UK and USA back to 1790, for other countries to the mid-nineteenth century. With this material, the NBER developed a series of 'reference cycles' for four countries (France, Germany, Great Britain, and the USA), based mainly on monthly quantitative data, which started in 1854 for the last two countries, in 1865 for France, and in 1879 for Germany. The number of monthly or quarterly series for the USA was 19 for 1860, rising to 811 in 1942 (plus 161 annual indicators). The monthly and quarterly series were seasonally adjusted. For the three European countries the number of indicators was much smaller. The NBER derived its 'reference cycles' by plotting most of the information on charts in de-seasonalized form and, by iterative procedures of inspection, identifying the turning points of cycles by the size of clusters of roughly concurrent fluctuations (Burns and Mitchell 1947, pp 20, 78-79 and 82). Thus its central concept of economic activity was a fuzzy cocktail rather than a clearly defined measure of aggregate economic activity. The main purpose was to develop sensitive warning indicators of turning points in business activity. These were classified as leading, coincident, or lagging. The reference cycle became part of the official statistical armoury of the USA for forecasting purposes, though it is now supplemented by more sophisticated econometric models of aggregate activity (quarterly GDP). For the period 1857 to 1978 the NBER discerned 28 successive peak-to-trough movements for the United States, i.e. a recession on average every 4 years, with a variation in their incidence from two-and-a-half to nine-and-a-half years. For other countries the average duration was found to be longer: 53 months for France, 62 for the UK and 64 for Germany for prewar years. The NBER cycles were not adjusted to eliminate trend, so they were not measures of oscillation in economic activity. They registered recessions only when there was an absolute fall in the relevant indicators (2).
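The iterative, chart-based NBER procedure resists exact codification, but its core idea, locating turning points in many de-seasonalized series and treating clusters of roughly concurrent turns as reference dates, can be caricatured in a few lines. The sketch below is a toy illustration under stated assumptions (simple local extrema, a fixed clustering tolerance, all series on a common time axis); it is emphatically not a reconstruction of the NBER's actual method, which relied on visual inspection and judgment.

```python
import numpy as np

def turning_points(series):
    """Indices of simple local peaks and troughs in one series."""
    s = np.asarray(series, dtype=float)
    peaks = [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] > s[i + 1]]
    troughs = [i for i in range(1, len(s) - 1) if s[i - 1] > s[i] < s[i + 1]]
    return peaks, troughs

def reference_peaks(all_series, tolerance=3):
    """Cluster peak dates that fall within `tolerance` periods of one
    another across series; the median date of each well-populated
    cluster serves as a crude 'reference' turning point. The same can
    be done for troughs."""
    dates = sorted(d for s in all_series for d in turning_points(s)[0])
    if not dates:
        return []
    clusters, current = [], [dates[0]]
    for d in dates[1:]:
        if d - current[-1] <= tolerance:
            current.append(d)
        else:
            clusters.append(current)
            current = [d]
    clusters.append(current)
    # Keep only clusters to which a majority of the series contribute.
    return [int(np.median(c)) for c in clusters if len(c) >= len(all_series) // 2]
```

Even this caricature makes the central difficulty plain: with heterogeneous indicators, the resulting chronology depends heavily on which series are included and on how tightly 'roughly concurrent' is defined.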
The NBER technique of using monthly and rather volatile series picked up more cycles than a GDP index based on annual data, and the 'reference cycles' that did correspond with GDP movements did not always have exactly the same dates (3). The NBER approach is a useful tool in interpreting quantitative economic history, but a major problem is that it yields no satisfactory measure of the amplitude of fluctuations, because of the difficulty of producing a meaningful summary measure from such heterogeneous data. Thus one cannot use the reference cycle itself to distinguish major and minor cycles, in the way that one can with simpler measures of industrial output or GDP. Arthur Burns' (1934) Production Trends in the United States since 1870 was an attempt to look at longer term 'trend cycles', based on the annual movement of commodity output indicators for 90 agricultural, mining and industrial items and 14 for transport and communications. It analysed patterns of growth and retardation for individual industries, but made no attempt to measure their aggregate movement.

Although the bulk of NBER research in the 1920s was concentrated on business cycles, there was also a feeble effort to measure the national income of the USA. Two studies were published in 1921, followed by Wilford King's unimpressive National Income and its Purchasing Power in 1930, which contained no aggregate estimate nor any concept of what such an aggregate would be. In 1930 Mitchell asked Simon Kuznets to take over the work on national income. Kuznets migrated to the USA in 1922, after doing his undergraduate studies in Kharkov. He did his Columbia Ph.D. thesis on cycles under Mitchell in 1926 and worked in the Bureau in 1927-1929 on his post-doctoral monograph Secular Movements in Production and Prices (1930). It resembled other work in the Bureau in type of indicator, but Kuznets used only annual data, had somewhat wider country coverage and a much more ambitious decomposition of time series, explored fluctuations over periods distinctly longer than business cycles, and his analytical approach was influenced by the Russian tradition. From 1930 onwards his professional interests were transformed. He blazed a new trail which he followed for the rest of his life, developing comprehensive measures of aggregate activity and economic growth, first for the United States, and then for the rest of the world. He lost interest in the fuzzy cocktail of indicators that continued to be Burns' delight, though he continued his close association with the NBER, and was successful in persuading it to undertake important studies on economic growth which buttressed his own work (4).

Long-wave analysis

Although there was some discussion of the Great Depression (in prices) in the last quarter of the nineteenth century, the idea of recurrent long waves in capitalist development did not arouse much interest until after the First World War, i.e., about 50 years later than cycle analysis, when the rhythm of development had been very dramatically broken. The main long-wave analysts were van Gelderen (1913), Kondratieff (1922, 1926, 1992, 1998), Kuznets (1926, 1961, 1965), and Schumpeter. All drew heavily on cyclical indicators to test their ideas quantitatively.

3 Van Gelderen (1891-1940)

Jacob van Gelderen was not the first to distinguish long waves, but he was the first to measure them (5).
He was a statistician in the municipality of Amsterdam, later (1919-1933) chief statistician in Indonesia, and worked thereafter in the Dutch Ministry of Colonies. As a civil servant, he wrote under a pseudonym (Fedder, his mother's maiden name). He was active in the Jewish labour movement (Poale Zion), and his article on long waves was published in 1913 in three successive issues of the Dutch Social-Democratic party's journal (De Nieuwe Tijd). This had been founded in 1896, on the model of Die Neue Zeit, the organ of the German SDP founded in 1883 by Karl Kautsky. Both journals had been vehicles for Marxist discussion of the dynamics and prospects of capitalism (crises, overproduction, likelihood of breakdown) and for reviews of books on stages of capitalist development by Werner Sombart in 1902 and Rudolf Hilferding in 1910.

Van Gelderen mustered a good deal of statistical material on wholesale price movements in Austria-Hungary, Belgium, Canada, England, France, Germany and the USA, foreign trade and gold production, stock market activity and interest rates. Without any decomposition of his time series, he concluded, by inspecting his evidence in graphical form, that there had been a long cycle of 45 years between 1850 and 1895. There were two phases, expansion and contraction. The former he called a "spring tide" (1850-1873), when prices rose. In 1873-1895 there was an "ebb tide", when prices fell. He discerned a second long wave beginning in 1895. He demonstrated his waves most clearly by a technique later adopted by Kuznets: he smoothed out shorter 10-year cycles in the period 1845-1911 by presenting averages for overlapping decades (p 269) of the Sauerbeck price index. He argued that movements of this kind were inherent characteristics of the capitalist system, associated with surges and recessions in capital formation, expansions of the European capitalist orbit to include North America and Australasia, fluctuations in gold production, etc. However, he did not suggest that the system was threatened with breakdown. One gets the impression that he felt these waves to be more or less ineluctable and innocuous. Van Gelderen and his family committed suicide in May 1940, when the Nazi army invaded the Netherlands.

4 Kondratieff (1892-1938)

Nikolai Kondratieff was a Russian economist (a pupil of Tugan-Baranowsky) who worked on agrarian problems and business cycles. His long wave analysis was much more sophisticated than van Gelderen's. He was very briefly vice minister of food supply in 1917, in the Kerensky government preceding the October revolution. Thereafter he taught in the Agricultural Academy, worked on agrarian reform with Chayanov and Makarov, and was the founder and director (1920-1928) of the Business Cycle Research Institute in Moscow. He was also a high-level policy advisor to the Ministry of Agriculture and the Ministry of Finance in the 1920s. The Institute's large staff (50 people) included distinguished economists and statisticians: Eugen Slutsky, the expert on mathematical decomposition of time series; A. A. Konus, who did pioneering work on consumer price indices; and Albert Vainstein, who survived and moved on from business indicators to macro-measurement. Its job was to monitor the economic situation in the USSR and the major capitalist economies and to make economic forecasts.
It was financed from the budget of the Ministry (People's Commissariat) of Finance. Several of Kondratieff's articles were published in translation, and he traveled abroad to establish contact with kindred researchers. From early June 1924, accompanied by his wife, he spent a month in Berlin, 10 weeks in England (Cambridge, Oxford and London) and 3 months in the USA on an official mission for the Ministry of Agriculture to gather information on the international competition which Soviet agriculture faced. His most intensive agricultural investigations took place in the USA, where he had many discussions with public officials in Washington, from the Secretary of Agriculture, Henry Wallace, downwards. He also took the opportunity to meet economists with kindred interests. In England he met Keynes and visited the London School of Economics and the Royal Statistical Society. He met Irving Fisher at a meeting of the American Economic Association in Chicago. He participated in a conference of agronomists at Cornell University in Ithaca, where he met Kuznets' older brother, Solomon, who was teaching there. His most extensive discussions, and subsequent correspondence, related to his work on cycles were with Wesley Mitchell in New York. Mitchell was briefed on Kondratieff's work by Simon Kuznets, who was his Ph.D. student at the time. It is not clear whether Kuznets talked to Kondratieff in New York, but they probably met at Cornell (6).

In the 1920s there was freedom to exchange opinions and criticise official policy in the USSR, which disappeared in 1928. Within the Institute, Kondratieff submitted his work to detailed comment by his colleagues. The report of a 1926 discussion reads like a session reported nowadays in Brookings Papers on Economic Activity. Kondratieff (1925b) presented a 67-page paper (a revision of his first long wave paper, Kondratieff 1925a). It was subjected to an 81-page critique by D. I. Oparin, 22 pages of comment by seven other colleagues, a 40-page reply by Kondratieff, a 17-page reply from Oparin, and 46 pages of tables and notes (see Kondratieff and Oparin 1928 and its full translation in Makasheva et al. 1998, vol. 1).

Kondratieff distinguished three kinds of cycle: long ones of 50-year duration, middle ones of 7 to 10 years, and short ones of 3 to 4 years. He measured the long cycles by a double decomposition of time series: eliminating the trend, then showing a secondary movement, a 9-year moving average of the deviations from trend (a minimal sketch of this procedure is given below). Nine-year averaging was enough to remove the influence of the two shorter types of cycle. His analysis covered the period 1770 to the 1920s, and the duration of his long cycles ranged from 40 to 60 years. Kondratieff also adjusted his series to eliminate the population component, in which some Kuznetsians (Brinley Thomas and Richard Easterlin) found the best evidence for their own long-wave analysis. Kondratieff's waves were most clearly demonstrated by long-term movements in wholesale prices, though some of the long-term oscillation was obviously attributable to wars (e.g., the peaks in the Napoleonic wars and 1914-1920). He analysed wholesale price developments for France, the UK, and the USA, and it is not surprising that in these relatively open economies he found price trends to be similar. After he adjusted to eliminate the effect of exchange rate changes, the individual country series gained greater synchronicity. On this basis Kondratieff claimed his waves to be an international phenomenon.
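Kondratieff's double decomposition is mechanical enough to restate in a few lines of code. The sketch below is an illustrative approximation under assumptions, not a reconstruction of his actual computations: the quadratic trend is an arbitrary stand-in for whatever trend curve the analyst fits, and the centred 9-year moving average smooths the deviations to suppress the two shorter cycle types. Van Gelderen's overlapping-decade averages of the Sauerbeck index amount to much the same smoothing, applied without trend removal.

```python
import numpy as np

def kondratieff_decompose(years, series, window=9, trend_degree=2):
    """Double decomposition: (1) fit and remove a long-run trend,
    (2) smooth the deviations from trend with a centred moving
    average to suppress short and middle cycles."""
    t = np.asarray(years, dtype=float)
    y = np.asarray(series, dtype=float)
    trend = np.polyval(np.polyfit(t, y, deg=trend_degree), t)
    deviations = y - trend
    kernel = np.ones(window) / window
    secondary = np.convolve(deviations, kernel, mode="valid")
    half = (window - 1) // 2  # the smoothed series loses `half` years at each end
    return t[half:len(t) - half], secondary

# Hypothetical usage on an annual wholesale price index:
# years, prices = load_annual_series("wholesale_prices_uk")  # placeholder loader
# wave_years, wave = kondratieff_decompose(years, prices)
```

The long-wave reading then rests on visual inspection of the smoothed deviations, which is precisely why the treatment of the discarded trend, discussed below, matters so much.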
Most of Kondratieff's other indicators contained a strong price element, because they were expressed in current values: e.g., wages, interest rates, the value of foreign trade, and bank deposits. Not surprisingly, the price component of these value series moved in a similar way to the general price indices (7). The only physical series were those relating to per capita coal production in England, coal consumption in France, and pig iron and lead production in England. Kondratieff concluded tentatively that, on the basis of the available data, it was very probable that there had been three long cycles in economic life (a rather vague term, but one that is clearly intended to include output as well as price movements). His chronology referred not to particular years but to spans, and he distinguished only two phases, the rise and the fall, in each wave. He did not discuss the amplitudes of these waves, which varied between series. His dating is shown in Table 1.

[Table 1 (only a fragment survives): rise from 1890-1896 to 1914-1920; fall from 1914-1920 to ?]

There are several problems with Kondratieff's approach. The first is his failure to establish that long waves exist as more than a monetary phenomenon. He failed to show the existence of broad movements in the volume of output that even remotely correspond to our present measures of aggregate economic activity. The second problem was that he eliminated the trend and discarded it as if it were irrelevant to the discussion. Between 1820 and 2001, British GDP rose 33-fold, and American 635-fold. This fact is left out when the time series are decomposed in Kondratieff fashion for wave analysis, but such very different trends transform the nature and operational significance of any long waves that may be discerned. The third problem is that double decomposition of time series to eliminate trend and smooth out cycles blurs the impact of major historical events. Thus Kondratieff's chronology pays no attention to the impact of the First World War, and later long-wave analysts tend to brush off the catastrophic 1929-1933 recession and the Second World War as well. Finally, Kondratieff threw out some hints on long-cycle causality, but did not adequately explain why capitalist development should involve long waves as a systematic phenomenon.

There is no doubt that Kondratieff's contribution to long-wave analysis was fundamental. He formulated the three-cycle schema adopted by Schumpeter, and his statistical technique was very similar to that which Kuznets used to distinguish 'secondary secular movements'. Furthermore, he pointed to the likelihood of poor terms of trade for agriculture in periods of decelerated development, a point given major stress later by Walt Rostow and Arthur Lewis (see Kondratieff 1928b). Outside his own Institute, Kondratieff was heavily criticised because his long cycle analysis seemed to conflict with the more fundamental Marxist expectation of the ultimate breakdown of capitalism (see Garvy 1943). He was dismissed from the Institute in 1928 "for introducing ideology alien to Soviet policy into his work". The Institute was merged with the Central Statistical Office in 1928 and abolished in 1929. Kondratieff was arrested and interrogated in 1930, sentenced to 8 years in prison in 1932, and executed after a second trial in 1938.
His alleged counter-revolutionary crimes were not attributable to his long-cycle analysis, but to his public opposition to the Stalinist collectivization of agriculture (in which he was associated with his friend Chayanov, who was also executed). Kondratieff's wife and daughter preserved his unpublished work and pressed for his rehabilitation. During Khrushchev's 1963 thaw the 1938 death sentence was repealed, and in 1987, with Gorbachev's glasnost, the 1932 sentence was also repealed. Menshikov (1984) published the first article on Kondratieff's work to appear in the USSR since the 1920s, and in the 1990s Kondratieff's work was republished in Russia (see Makasheva, p xxxiii).

5 Kuznets (1901-1985)

Chronologically, the next development in the long-swing literature was Simon Kuznets' analysis of 'secondary secular movements', published in 1930. Kuznets was educated in Kharkov and worked in the Ukrainian Statistical Office before emigrating to New York in 1922. He was already a sophisticated and well-trained economist. His research interest centred on business cycles and long waves in economic activity. At that time, analysis of this field was more sophisticated in Russia than in the United States. Tugan-Baranowsky and Eugen Slutsky were both teaching in the Ukraine when Kuznets was a student, and he was very familiar with Kondratieff's work on long waves. His basic technique for identifying long waves was similar to that of Kondratieff, i.e. it involved elimination of the trend from the basic annual data and a nine-year moving average to smooth the deviations from trend. The latter he called secondary secular variations. However, Kuznets made a special point of not eliminating population movements, his technique for eliminating trend was different from that of Kondratieff, and his conclusions were also different.

Kuznets' evidence was more detailed, involving careful analysis of 59 series, most of which represented annual movements in physical output and the relevant price variance for particular commodities. He presented 23 indicators for the USA, of which 16 were commodities with both price and quantity data and six were financial indicators (including the general price index). For the UK he had nine indicators, for France and Germany eight each, Belgium five, Canada and Japan two each, and Australia and Argentina one each. He did not claim that the individual indicators could be added to provide a meaningful picture of aggregate economic activity, and he did not use the aggregative sector indicators for agriculture or manufacturing that were available when he wrote. His major conclusions were: (a) that "secondary secular variations in production are in most cases similar to those in prices, the latter following a rather general course in agreement with the well-known historical periods of the rise and fall in the general price level" (p 197); (b) that there was a much shorter periodicity than Kondratieff found, "about 22 years as the duration of a complete swing for production and 23 years for prices" (p 206); and (c), most fundamentally, that there was not enough evidence to conclude that these secondary secular variations were systematic. They were "rather specific, historical occurrences" (p 258) and there was "an absence of factors that would explain the periodicity" (p 264). Kuznets did not attempt to cluster his individual series to present a global chronology of long waves in economic life, nor did he analyse the degree of synchronization of the series (8).
From 1930 onwards, Kuznets dropped his work on cycles and the indicator approach and moved on to growth analysis and aggregate measures of economic activity. He did fundamental definitional work on the scope and composition of aggregate GDP measurement and produced historical estimates of US economic development that conformed to his criteria. Thus he made it possible to analyse long-term movements in economic life on a much more satisfactory conceptual basis than the cocktail approach that he and virtually all cyclical analysts had previously been forced to use. Furthermore, Kuznets successfully stimulated and inspired replication of his work by scholars in many other countries. This aggregate accounting approach had some drawbacks for cyclical analysis before GDP estimates were available on a quarterly basis, but it revolutionized the study of long-term growth and greatly facilitated the testing of long-wave analysis. From time to time after 1930 Kuznets returned to long-swing analysis in a rather tentative way. Unlike his disciples, he himself never called them 'cycles', as the word implied greater certainty about such phenomena and their periodicity than Kuznets conceded. In Kuznets (1956, p 50) he showed an "internationally common chronology" of long swings for eight countries, applying a different analytic technique from that he used in 1930, and different indicators: decennial averages of population, GDP and GDP per capita. In Kuznets (1971, pp 43-50), a rewritten version of the 1956 paper, he was more cautious about synchronicity. His most affirmative position was in his 1958 essay on population growth, where he found the long-swing hypothesis plausible in relation to US population growth and to "population-sensitive" components of capital formation such as housing and railway construction (9).

6 Abramovitz (1912-2000)

Although Kuznets had abandoned long-swing analysis, he had several disciples with a continuing interest in what Arthur Lewis called 'Kuznets cycles' (10). Moses Abramovitz made the most ambitious attempt to discern long swings in aggregate US economic activity, and he veered between more positive affirmation of long swings than Kuznets and outright recantation, in the sense that he did not find valid evidence for the phenomenon in the postwar period. His work in this field was almost entirely concerned with the US economy. Abramovitz distinguished waves of acceleration and retardation in US growth with an average duration for the full swing of 14 years and a variance from 6 to 21 years, using NBER reference cycle indicators back to the 1820s. He used a cocktail of 29 indicators including GNP. He smoothed his series by a rather complicated procedure, designed to eliminate NBER reference cycles, before removing the trend. He found that the turning points of his different series "cluster in relatively narrow bands of years". He therefore produced a general chronology with nine swings between 1814 and 1939. Even at his most affirmative, Abramovitz was basically cautious about the nature of long swings. Thus in 1959 he wrote: "It is not yet known whether they are the result of some stable mechanism inherent in the structure of the US economy, or whether they are set in motion by the episodic occurrence of wars, financial panics, or other unsystematic disturbances". In 1968 he concluded that Kuznets cycles were "a form of growth which belonged to a particular period in history" (1840-1914), and had disappeared thereafter.
He was somewhat miffed by the fact that Kuznets showed little interest in his work on ''Kuznets cycles'' (see Abramovitz 2000, p 111).

7 Schumpeter

The most complex cycle system was propounded by Joseph Schumpeter. He incorporated Kondratieff long waves of 50 years, on each of which he superimposed 8 to 9-year 'Juglars'. Within each Juglar, he showed three 40-month 'Kitchin' cycles (see Chart I in Schumpeter 1939, pp 213 and 1051). He said nothing about the amplitude of these cycles. Schumpeter insisted on the empirical regularity of his schema as if the basic facts about the three cycle components were well established, whereas there is great doubt about all three, as well as about the legitimacy of his nomenclature. Kitchin's paltry contribution to the literature in 1923 was lean meat indeed compared with that of the NBER, and Juglar never claimed to have demonstrated the existence of an 8 to 9-year rhythm. In fact, the NBER had already demonstrated rather wide variance in the length of cycles, so that there was little ground for distinguishing Juglars and Kitchins. Schumpeter's treatment of statistical material was illustrative rather than analytic and was at times rather cavalier. In Schumpeter (1939) he used business annals of the type favoured by his former colleague Spiethoff, or by Tugan-Baranowsky, both of whom had an obvious influence on his views. He also used NBER-type statistical 'cocktail' material in pulse charts of industrial production, prices, interest rates, deposits, and currency circulation (p 465). He made passing reference to national income analysis (p 561), but elsewhere referred to the concept of total output as a ''meaningless heap'' (p 484) and national income as a ''highly inconvenient composite'' (p 561). Schumpeter's long-wave chronology (see Table 2) was rather similar to that of Kondratieff (see Table 1), though he gave each wave a name and divided each wave into four phases rather than two (11). Schumpeter's cycle analysis ran to 1,050 pages and was highly discursive. Judged on its statistical evidence alone, it would have been long discredited. Its power lies in the imaginative theory he supplied to explain long waves and the highly illuminating commentary on many aspects of German, British, and American economic history. He argued that each wave represented a major upsurge in innovation and entrepreneurial dynamism. Although writing in the late 1930s, he was remarkably sanguine about the long-run productive potential of capitalism. For him, depressions were a necessary part of the capitalist process. They were a period of creative destruction, during which old products, firms, and entrepreneurs were eliminated and new products were conceived. Schumpeter (1943, p 64) dismissed the 1929-1933 depression much too lightly: ''the depression that ran its course from the last quarter of 1929 to the third quarter of 1932 does not prove that a secular break has occurred in the propelling mechanism of capitalist production because depressions of such severity have repeatedly occurred-roughly once in every 55 years''. He then quoted the 1873-1877 period as if it were a precedent for 1929-1933. Such a comparison was totally misleading. In the earlier period the peak-trough fall in US industrial production was 14.8%; in the later one, 44.7%! There was no earlier parallel to the 1929-1933 collapse either in amplitude or international incidence.
Like most long-wave analysts, Schumpeter gave primary stress to autonomous features of the capitalist process and said very little about the role of government in economic life. When he did mention government, it was usually to scorn its perversity, as in his attack on Roosevelt's New Deal, though he regarded government as pretty impotent. For him the driving force in economic life was entrepreneurship, which he regarded as having been taken over more or less completely by large firms. The emphasis on entrepreneurship was present in his earliest work on capitalist development, written in 1911, and was obviously influenced by the ideas of Max Weber and Werner Sombart, which were popular at that time. The main weaknesses of Schumpeter's long-wave theory (ignoring his failure to demonstrate the waves' existence in the real world) were threefold: (a) he did not explain why innovation and entrepreneurial drive should come in regular waves rather than in a continuous but irregular stream, which seems a more plausible hypothesis for analysis concerned with the economy as a whole; (b) he made no distinction between the lead country and the others, but argued as if they were all operating on the same level of productivity and technological opportunity. Thus his waves of innovation were expected to affect all countries simultaneously; (c) he greatly exaggerated the scarcity of entrepreneurial ability and its importance as a factor of production. Schumpeter extended his analysis of capitalist development further in 1943 in Capitalism, Socialism and Democracy. It was not concerned with long waves but with capitalist breakdown. This was paradoxical coming from an analyst who had such great faith in capitalism's robust character. However, his breakdown theory was sociopolitical rather than economic. He argued that there were four major forces destroying capitalism. In the first place, entrepreneurship was likely to be stifled by the bureaucratization of management and decision-making in large firms. The second menace was the disincentive of progressive taxation and the increasing power of trade unions, which had already (he argued) retarded US recovery in the 1930s and were likely to become more stifling. The third threat came from the growing power of socialist ideas, and the fourth from the unpopularity of capitalism with intellectuals, who were continually engaged in denunciatory activities and harassments such as anti-trust suits. Schumpeter's approach to long waves and the breakdown of capitalism contained bold hypotheses and unsettling paradoxes, which gained in impact through his emotional detachment. His view of capitalist development was fatalistic, and he wrote as if he were charting destiny. He disliked most of what was happening in the real world, but did not advocate policies to remedy the predicted catastrophe. In fact, one is never sure with Schumpeter whether he was putting forward a specific hypothesis because he seriously believed it or because it stimulated interest in his fundamentally dynamic and original conception of capitalist development.

8 Long-wave revivalists

The significant slowdown in the momentum of economic growth after 1973 revived the notion of long rhythms in economic life, and a number of new long-wave pundits emerged, most of them neo-Schumpeterians. Some were vulgarizers of past long-wave theories, which they invoked uncritically in support of a fashionable gloom about the future (12).
Others deserve critical inspection, though I have not found much in their work to shake my scepticism about long waves as a systematic phenomenon affecting output.

9 Rostow (1916-2003)

Walt Rostow's interest in 'Kondratieff' movements was concentrated on swings in the terms of trade of primary producers against those selling industrial goods. From his viewpoint the 1951-1973 period was the 'downswing' of a fourth Kondratieff, and the OPEC-inspired price increases marked the upswing of a fifth Kondratieff. He produced 800 pages of empirical material to back his thesis, in welcome contrast to some of his earlier work. However, he complicated his argument by embedding his long waves in a loosely integrated framework that featured neo-Schumpeterian surges of innovation in leading sectors, demand changes as economies work themselves through a hierarchy of stages, and a reiteration of his earlier erroneous belief that there was a short, sharp take-off in Western countries which was staggered in time. Rostow (1978) placed great emphasis on a mishmash of sectoral and commodity indicators and had little time for broad aggregates such as GDP, which to my mind are the central indicators to be used in measuring acceleration or deceleration of growth.

10 Mandel (1923-1995)

Ernest Mandel was an erudite Belgian Marxist of Trotskyite persuasion. He asserted that there are long swings, roughly 50 years in length, caused by surges of new technology. In each swing there were two phases. In the first, profit rates rise as new technology is developed, and in the second they fall as technical possibilities are exhausted. The timing, like the causality, is similar to Schumpeter's. His first wave, from the 1780s to 1847, was attributed to the 'industrial revolution'; the second, from 1847 to the 1890s, to a technological revolution dominated by 'machine production of steam motors'; the third, from the 1890s to 1939, to 'machine production of electric and combustion motors'; and the fourth, from 1940 to a future unspecified date, to machine production of electronic motors and atomic energy. He suggested that the first phase of the fourth wave ended in 1967 and that we were in the second phase at the time he was writing. Unlike other writers in this vein, he did not refer to the waves as 'Kondratieffs'. He considered Kondratieff unoriginal as compared with van Gelderen, for whose work he had exaggerated respect (13). Mandel was mainly interested in theory and his empirical underpinning was very weak. He claimed (p 137) that 'economic historians are practically unanimous' in distinguishing expansions and recessions in the periods he used in his periodization, but the only justification he gave was an article by Hans Rosenberg (1943), which itself contained no empirical material and was written before quantitative economic history began. Mandel also presented estimates of world trade and industrial production indices for the UK, Germany, and the USA to buttress his argument. These were not deviations from detrended moving averages, but compound rates of growth between the years specified (which varied by type of indicator). Table 3 shows Mandel's indicators for his second-wave downswing, which he called a period of 'pronounced depression', and for his third-wave upswing, which he characterised as a period of 'tempestuous increase in economic activity'.
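Since Mandel's figures are compound growth rates between benchmark years rather than detrended deviations, it is worth being explicit about what that measure is. A one-line sketch (the numbers are hypothetical, not Mandel's):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two benchmark years, in % per year."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical: an output index rising from 100 to 180 over 20 years.
print(f"{cagr(100.0, 180.0, 20):.2f}% per year")  # about 2.98% per year
```

Because the measure depends only on the two endpoint years, the choice of benchmarks can make the same series look depressed or tempestuous, which is part of the trouble with the figures in Mandel's Table 3.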
Mandel's statistical evidence does not warrant such dramatic language. It is even less appropriate if one uses my alternative measures in the lower panel (from more recent sources referring to exactly the same periods and concepts). (Source: top five rows from Mandel (1975), pp 141-142, omitting his citation of Dupriez's 1947 estimates of world per capita output, as these were much too shaky for serious use in this context; bottom five rows from industrial production including construction for the UK, Germany, and the USA for Mandel's periods, from Lewis (1978); world trade volume from Maddison (1962).)

11 Mandel's stages of development

Mandel identified ''stages'' as well as ''waves''. Here he was influenced by Lenin's (1916) essay on ''Imperialism, the Highest Stage of Capitalism''. He was mainly concerned with ''late capitalism'' but, interestingly enough, he did not consider it to be a new stage but merely a development within imperialist monopoly-capitalism, which Lenin had distinguished from a first phase of ''free competition''. At first sight this is puzzling, for Mandel frequently refers to features of 'late capitalism' (the enhanced role of the state in the economy, the formal ending of colonialism, the importance of military spending, and the changed international power locus) that seem rather different from those described by Lenin. The reason for Mandel's position is explained in his 1975 book (pp 524-525), where he dissociated himself from ''revisionists'', such as John Strachey (1956), who claimed that there was a new-era mixed economy that could 'suspend the internal economic contradictions of capitalism'. Thus there was no real connection between Mandel's stages of growth and his long waves. The latter were the fruit of more or less exogenous technological development, and did not have the policy-institutional flavour that Schumpeter conferred on his by calling one 'bourgeois' and another 'neomercantilist'.

12 Mensch

Gerhard Mensch (1975), another long-wave revivalist, had a neo-Schumpeterian approach and a detailed catalogue of different types of innovation. He considered that the clustering of innovations determined the tempo of capitalist performance, and that the 1970s slowdown was due to a shortage of exploitable innovations and market saturation. He had interesting ideas about lags in the application of inventions, but lapsed frequently into apocalyptic sermonizing. He concentrated on illustrative examples of ''industrial evolution'' and presented almost no quantitative evidence on the variations in the pace of macroeconomic performance that he was presumably trying to explain. He did not discuss inter-country diffusion of innovations, and nowhere made the lead-follower distinction, which is fundamental in the analysis of technological diffusion.

Conclusions on long-wave theories

My basic conclusion is that the existence of regular long-term rhythms in economic activity is not proven, although many fascinating hypotheses have been developed in looking for them. Nevertheless, it is clear that major changes in growth momentum have occurred since 1820, and some explanation is needed. In my view it should not be sought in systematic waves, but in specific disturbances of an ad hoc character (14). Major system shocks have changed the momentum of capitalist development at certain points.
Sometimes they were more or less accidental in origin; sometimes they occurred because some inherently unstable arrangement could no longer be sustained and finally broke down (e.g., the Bretton Woods fixed exchange rate system). Changes in the institutional-policy mix play a bigger role in capitalist development than many long-wave theorists would admit. A system shock will produce the need for new policy instruments. These are not always selected on the most rational basis, and they may require a long period of experiment before they work properly. There may be conflicts of interest within and between countries which prevent the emergence of efficient policies. Hence there have been prolonged periods in which supply potential was not fully exploited. Some of these problems figure in Schumpeter's analysis but he usually sees the solution as a matter of destiny rather than choice. Capitalist development since 1820 has a certain unity because economic growth in all phases has been much more rapid than in the merchant capitalist epoch from 1500 to 1820. Nevertheless, there have been big changes which influenced the type of fluctuations that were experienced in the advanced capitalist economies. These changes have to be kept in mind in constructing any general theory of fluctuations or phases. Increased levels of income and changed patterns of demand and productivity have changed the structure of production and employment. In 1820, agriculture characteristically employed well over half of the labour force in these countries, whereas the average is now nearer 3%. Agriculture was and still is subject to erratic fluctuations in output owing to weather, and its products have generally been sold in flexprice markets in which prices go down as well as up. This erratic element in economic life is much smaller than it used to be. Industry provided about a quarter of total employment around 1820 and rose towards a peak of somewhere round 40% in 1970 in my 16-country sample. Hence the process of capitalist development is often referred to as industrialization. However, the industrial share of employment has been on the decline for more than 30 years, and has now regressed closer to the 1820 proportion than to its peak level. The big long-run gains have been in services, which accounted for a fifth of total employment in 1820 against three-quarters now. It was in the industrial sector that the business cycle was most marked in terms of demand fluctuation and stock-output supply adjustments, but in the service sector both demand and supply have been more stable, and this has dampened the amplitude of fluctuations in GDP. A second major change in economic life has been the growing role of government. In 1820 government spending was typically less than 10% of GDP, but the proportion is now much bigger. In the advanced capitalist countries, governments intervene on a massive scale to operate a vast network of social transfers, which change the distribution of income and the pattern of private spending. Total government spending (including transfers) in our 16 countries is now nearly half of GDP. The government regulatory role in the economy has greatly increased. One result of the latter is that the stability of financial institutions has improved. Before the Second World War, depressions were often reinforced by major bank failures, but these are now rarer and their impact is cushioned. 
As a result of these changes, government exercises both a propulsive and a compensatory role in economic life, which generally operates to stabilize the expenditure and income flow, and the aspirations of governments to act as managers of economic destiny have greatly increased. There are also other changes to keep in mind when developing hypotheses intended to cover the whole capitalist period. One important one is the change in the average size of firms, and in the role of trade unions. Hence the atomized market paradigm is less relevant in wage and price fixing, which explains some of the changes that have occurred in price behaviour. Another is the character of international linkages between countries, the degree to which trade, capital and migration are subject to restrictions, and the scope for international transfers of technology. These have varied a good deal over time and have been the most exposed to system shocks. There have also been big changes in the international monetary system, which have had a major impact on the type of policy weapons used domestically.

Phases of growth

Although I find no convincing evidence in the work of Kondratieff and Schumpeter to support the notion of regular or systematic long waves in economic life, there have nevertheless been significant changes in the momentum of capitalist development. These changes in momentum can be seen clearly in our first four graphs, which make binary comparisons of growth performance 1820-2001 in the biggest countries, i.e., France/USA, Germany/USA, Japan/USA, and UK/USA. Since 1820 one can identify separate phases which have meaningful internal coherence in spite of wide variations in individual country performance within each of them. Comparative performance is quantified in detail in Tables 4 to 13. Phases are identified, in the first instance, by inductive analysis and iterative inspection of empirically measured characteristics. Annual estimates were derived for as many years as possible since 1820, including war years. The aggregate performance of the sixteen countries is also shown, with both weighted and unweighted averages. For many purposes the unweighted average is the most relevant indicator of the characteristic experience of these countries. The weighted average is a useful supplement, but it should not be forgotten that the USA now has a very large weight in this measure. My preference is for measures of annual movements in aggregate activity (GDP), which reveal clearly the big changes in the severity of recessions that have appeared systematically across the sixteen advanced capitalist countries shown in Table 4.

[Table 4: Amplitude of recessions in aggregate output 1820-2001 (maximum peak-trough fall in GDP, or lowest rise; annual data), for the periods 1820-1870, 1870-1913, 1914-1919, 1920-1938, 1939-1949, 1950-1973 and 1973-2001; source: Maddison (2003).]

It is clear that peacetime business cycle history has been much milder since the Second World War than before, and that the 1920-1938 period was generally much worse than 1870-1913. Except in 1929-1933, when depression hit every country, the weighted average of cyclical movements for the sixteen countries as a group was dampened by the fact that individual country cycles were not synchronized. Table 12 shows the cyclical record for foreign trade.
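The amplitude measure used in Table 4, the maximum peak-to-trough fall in annual GDP, is mechanical to compute once annual estimates exist. A minimal sketch with a hypothetical series:

```python
def max_peak_trough_fall(gdp):
    """Largest percentage peak-to-trough decline in an annual series.
    Returns 0.0 for a series that never falls (the 'lowest rise' case
    in Table 4 would then be reported instead)."""
    worst, peak = 0.0, gdp[0]
    for value in gdp[1:]:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak * 100.0)
    return worst

# Hypothetical annual GDP index with a slump in the middle of the period.
print(max_peak_trough_fall([100, 104, 108, 95, 80, 85, 102, 110]))  # ~25.9
```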
It confirms the pattern shown by GDP movements, with notably smaller cycles since the Second World War. Table 10 shows the amplitude of annual changes in aggregate GDP for the sixteen countries taken together for the period 1871-2001. For this period we have complete information on the annual movement of GDP for all sixteen countries (2,096 readings). Table 11 shows the incidence of recession by country for every year between 1871 and 2001. The biggest interruptions to growth occurred in 1914-1919, the 1930-1932 depression, and the 1945-1946 period of demobilization, dismemberment, defeat, and victory. All other disturbances had a much milder impact on output.

[Table 6: Growth of per capita GDP at constant 1990 prices, 1500-2001 (annual average compound growth rate), for the periods 1500-1820, 1820-1870, 1870-1913, 1913-1950, 1950-73 and 1973-2001; sources: Maddison (1991, 2003) and Tables 4 to 13.]

The aggregate stability in the collective output of the group in peacetime has been quite impressive. In the 43 years from 1870 to 1913, there were only 3 years of recession in aggregate output; in the 27 years 1947-1973, none; and in the 28 years 1973-2001, only one occasion when aggregate output fell. However, it is clear from Table 11 that individual countries have been much more unstable than the group as a whole. Their cyclical experience has not normally been synchronized, but compensatory. Cyclical experience has been synchronized only when they have been subjected to 'system-shocks' such as wars, or the collapse of long-standing international payments mechanisms. In the 33 years 1914-1946, there were 9 years in which the aggregate GDP of the sixteen countries fell. For 1820-1870, the statistical coverage is weaker. The annual estimates are complete for Australia, Denmark, France, the Netherlands, and Sweden. For the UK there is coverage for 42 years, for Belgium for 27 years, 23 for Germany, and less elsewhere. Altogether there are 418 annual readings out of a potential 800. However, judging from the evidence we have, it seems that average cyclical experience in 1820-1870 was not too different from that of 1871-1913 (see Tables 12 and 13). My primary interest is in identifying major changes in growth momentum rather than shorter-term oscillations. There is a need for annual time series for major indicators of aggregate economic activity for our sixteen countries in as complete and comparable a form as possible, and a special need to get coverage for all of the countries for the initial benchmark year 1820. By inspection of the data and the graphs derived from them, one can identify fundamental turning points in growth momentum, and try to distinguish growth and cyclical behaviour patterns that differ significantly between phases. The technique is not unlike that of the NBER in its attempt to identify reference cycles.

[Table: phase averages of cyclical and price indicators for 1870-1913, 1920-38, 1950-73 and 1973-2001 (with sub-periods 1973-83 and 1983-2001); notes: a UK only; b UK and USA 1900-1913; c 1924-1938 for Austria and Germany, 1921-1938 for Belgium; sources: appendices to Maddison (1991; 1997, pp 472-473; 2001), OECD, Economic Outlook, 2002, and Table 11 below; UK price index 1820-1870 from Mitchell (1962, pp 471-474).]
Tables 5 and 6 show growth performance (GDP and GDP per capita, respectively) in the five phases (1820-1870, 1870-1913, 1913-1950, 1950-1973 and 1973-2001). 1913-1950 was the period when performance was worst. Extreme shocks struck three times between 1914 and the 1940s. Performance in the unprecedented postwar boom, the golden age of 1950-1973, was also very special in Western Europe and Japan. The nature of the growth process changed sharply after 1973. Kuznets (1963), in his critique of Rostow's stage schema (15), postulated five minimum requirements for acceptable stages of growth: (a) they must be identified by characteristics that can be verified or quantified; (b) the magnitude of these characteristics must vary in some recognizable pattern from one phase to another: ''stages are presumably something more than successive ordinates in the steadily climbing curve of growth. They are segments of that curve with properties so distinct that separate study of each segment seems warranted''; (c) there should be some indication of when stages terminate and begin, and why; (d) it is necessary to identify the universe to which the stage classification applies; (e) finally, Kuznets required that there be an analytic relation between successive ''stages'', which, optimally, would make it possible to predict how long each stage has to run. This fifth requirement seems too deterministic. It suggests that movements between successive stages are more or less ineluctable. I have tried to fulfill Kuznets' first four requirements, but cannot meet his fifth condition. For this reason, I have called my periods ''phases'' rather than ''stages''. Tables 7 and 8 summarise the main kinds of evidence I used. My growth phases fulfil the first four of Kuznets' requirements, as explained below. (a) They are identified by seven simple indicators: the rate of growth of the volume of output, output per head and exports; cyclical variations in output and exports; unemployment; and the rate of change in consumer prices. These are the conventional macroeconomic indicators one might use for growth accounting or conjunctural monitoring. The results are shown in very aggregative form in Tables 7 and 8. Each phase also has five non-quantifiable 'system characteristics', by which I mean the basic policy approaches and institutional environment that condition growth performance. Changes in these between periods are summarised in Table 9. These include the government approach to demand management (i.e., the kind of trade-off that is made between unemployment and inflation), the bargaining power of labour, the degree of freedom for trade and international factor movements, and the character of the international payments mechanism. (b) Most of these characteristics are systematically different in the phases identified. Generally, they were most favourable in the ''golden age'' 1950-1973, second-best in the latest phase (1973 onwards), third-best in the 'liberal' phase and worst in the 'beggar-your-neighbour' phase. (c) There is room for argument as to which years are turning points between phases. I picked 1820 as the starting point for capitalist development. The evidence now available suggests that the transition to the accelerated growth that distinguishes capitalist from merchant capitalist momentum took place after the Napoleonic wars rather than in 1760 as Kuznets thought.
There is also strong evidence that the acceleration of growth was synchronous in western Europe from this time, and not staggered throughout the nineteenth century as Gerschenkron and Rostow believed. The year 1870 is an appropriate turning point as it marked the emergence of Germany and Italy as integrated nation states, the emergence of the new growth-oriented Meiji regime in Japan, and a USA reunited after civil war. The year 1913 is clearly the last year of a 'liberal' phase, which ended with the outbreak of the First World War. The year 1950 was the point where recovery from the Second World War was completed and the previous peak in output for the sixteen countries as a whole was surpassed. However, five countries did not surpass their wartime output peaks until 1953 (Austria, Germany, Japan, UK, and USA), so one might well argue that 1953 rather than 1950 should mark the beginning of the postwar golden age. On the other hand, there is a case for starting in 1948, which is when the ground rules for international cooperation within the capitalist group were set up by the Marshall Plan. The year 1950 seems a reasonable compromise. It should be noted that use of 1948-1973 or 1953-1973 instead of 1950-1973 would not affect the analysis seriously. This golden age would still be a period of secular boom on an unparalleled scale, and the preceding phase, which encompassed two world wars and a world depression, would still have the worst performance. (d) The emergence of a new phase after 1973 is rather clear. The 1974, 1980-1982 and 1991-1993 recessions affected virtually all sixteen countries. They were by far the biggest breaks in the postwar growth momentum. The grounds for treating the post-1973 period as a new phase include price, unemployment and output behaviour, changes in the international monetary system, in government policy concerning the level of demand, in expectations in the labour market, and the greater openness of capital markets. The economic system behaves in a different way, which has created major new tasks for economic policy and makes it more difficult to reconcile different policy objectives. Table 8 shows a breakdown of inflationary and unemployment experience within the latest phase. A major reason for changes in macroeconomic objectives in this period was the sharp acceleration in the rate of price rise from 1973 to 1983, due to the two oil shocks and the breakdown of the Bretton Woods international payments system. This led countries to abandon Keynesian full employment objectives in favour of deflationary policy. These drastic policy changes were successful in cutting the pace of inflation after 1983, but much higher levels of unemployment became endemic, except in the UK and USA, where economic policy has been more expansionist than was the case in most west European countries and Japan. The income safety-net provided by extensive welfare payments in west European countries was an important influence cushioning demand in a situation where recessionary experience might well have been bigger.

Main conclusions on phases

(1) There have been five distinct phases of economic performance in the capitalist epoch, each with its own momentum. (2) Phases of growth are not ineluctable, and within each there is considerable scope for variation in country performance; but the policy-institutional framework and policy attitudes characteristic of each phase have had a striking distinctiveness and generality of acceptance.
The expectations of economic agents about growth and inflation have also had distinctive characteristics which differed between phases. (3) The transition from one phase to another was caused by system-shocks. Some were due to a predictable breakdown of a basic characteristic of a previous phase, but the timing of the change was usually governed by exogenous or accidental events which are not predictable. (4) The present phase generally ranks as second-best. Performance is well below that in 1950-1973 in almost all important respects, but the economies have been a good deal more stable in real terms than before 1950, and per capita output growth has been significantly better.

Endnotes

(1) William Stanley Jevons (1835-1882) initiated business cycle research in England in 1862, by adjusting time series on business activity to eliminate seasonal variation. He analysed longer-term price movements in a brilliant study (1863) of the impact of surging Californian and Australian gold production in the 1850s. In 1878-1879, he estimated the average periodicity of ''commercial crises'', and the influence of variations in solar activity (sun-spots) on agricultural output. His essays on these topics were collected and published posthumously in 1884 (see Keynes' 1936 assessment of Jevons' work on cycles). (2) Burns and Mitchell, op. cit., p 270, state their reasons for not eliminating trend: ''cyclical fluctuations are so closely interwoven with these secular changes in economic life that important clues to the understanding of the former may be lost by mechanically eliminating the latter. It is primarily for this reason that we take as our basic unit of analysis a business cycle that includes that portion of secular trend falling within its boundaries.''
A novel reliability index approach and its application to cushioning packaging design

Abstract: This paper proposes a new approach to Reliability-based Design Optimization (RBDO) and applies it to cushioning packaging design with a highly nonlinear system. The problem is formulated as an RBDO problem, which includes a cost function to be minimized and probabilistic constraints. Here, the thickness of the cushion material is treated as an uncertain and uncontrolled parameter. The traditional reliability index approach (RIA) has evolved as a powerful tool for solving the RBDO problem; however, due to its convergence problems, the modified reliability index approach (MRIA) was proposed. Although the MRIA solves the problems of the traditional RIA, it inherits its low efficiency in searching for the most probable point (MPP). Thus, we develop a novel RIA based on MRIA to improve efficiency and robustness during the RBDO process. An innovative active-set strategy, a strict inequality that determines whether the current constraint is active or inactive, is developed for the reliability assessment. An application example is presented, and the results are compared with MRIA to assess cost-effectiveness and efficiency. The results indicate that the proposed method is feasible for handling the manufacturing uncertainty of packaging materials and is also an efficient RBDO method.

Introduction

During handling and transportation, many goods are damaged by various uncontrolled uncertainties, including cushion material properties, drop height, and storage temperature. [1][2][3] Therefore, many methods have been developed and widely applied to protect products from the risk of damage. One of the best-known ways to protect products from damage is to provide more reasonable packaging based on the cushion curves of a given material, established according to ASTM D 1596. 4 Simple cushion curves are depicted in Figure 1; they provide a lot of information, including the G-value (the fragility of the product) versus static stress, the drop height (h), and the thickness of the cushion (t). However, generating all the curve information takes considerable experimental cost and time, and a cushion designed in this way is often over-designed: although it protects the product in the circulation environment, it does so at the cost of excess material. These drawbacks prompted many researchers to establish simplified cushion-curve methods based on the stress-strain curve of the material. 5,6 However, this simplified method of establishing cushion curves performs well for closed-cell cushion materials but has certain limitations for open-cell packaging materials. 7,8 Moreover, manufacturers are often unwilling to provide the stress-strain characteristics of packaging materials. Another way to protect the product from damage is to keep the peak impact acceleration of the product below the allowable G-value. Generally, a drop model of the packaging system, such as a single-degree-of-freedom or two-degree-of-freedom spring system, 9 is abstracted to assess the damage to the product. Besides, the Transport Packaging Laboratory of Kobe University has conducted a large number of drop tests and simulations 10 to predict whether the product is damaged, based on comparing the measured impact acceleration with the G-value of the product. Recently, Ge et al. 11 and Ge and Rice 12 have developed a new nonlinear Kelvin foam.
Studies of its cushioning properties suggest that it can be regarded as a new type of protective packaging material in the future. Although many efforts have been made to protect products from being destroyed, the uncertainties in the transportation process remain one of the great challenges faced by designers. Every year, the product damage and loss of commercial value caused by uncertainties in the circulation environment, largely due to uncontrolled manufacturing parameters of the cushion, remain immeasurable. Therefore, these uncertainties deserve special attention, especially the uncontrolled manufacturing parameters of the cushion. To date, reliability-based design optimization (RBDO) has emerged as a dominant framework for handling uncertainties and is widely utilized in various engineering fields. [13][14][15] The RBDO model integrates uncertain parameters into optimization problems with probabilistic constraints, seeking more reliable engineering designs while saving material at the same time. 16,17 Traditionally, two approaches, the reliability index approach (RIA) 18 and the performance measure approach (PMA), 19 are widely utilized for assessing the probabilistic constraints in RBDO. RIA was developed around the first-order second-moment concept 20 and is widely applied to solve RBDO problems. Conceição António 21 proposed a gradient algorithm for RIA based on the Hasofer-Lind method 22 and applied it to the RBDO of composite laminates. Tu et al. 23 emphasized that RIA has the disadvantages of convergence problems and numerical singularities, which promoted PMA as a more efficient and robust choice for dealing with RBDO problems. 24,25 Ting Lin et al. 26 proposed the modified reliability index approach (MRIA) to overcome the disadvantage of RIA through a new definition of the reliability index. Although MRIA can converge to the optimal solution efficiently and stably when assessing the active probabilistic constraints, it also inherits the low efficiency of searching for the most probable point (MPP); hence, a hybrid reliability method was presented. 27 das Neves Carneiro and António 28 developed a prominent reliability method based on RIA by introducing the genetic algorithm (GA) 29,30 with an elitist strategy into the reliability assessment process to improve the efficiency and convergence of RIA. Although these approaches enhance the efficiency of evaluating the failure probability in RIA, the computational cost of solving the RBDO problem remains huge. In general, the above methods fall into two strategies, the double-loop strategy (DLS) 26,31 and the single-loop strategy (SLS); 32 the decoupled-loop strategy 33,34 and the hybrid-loop strategy 35,36 were developed from these two. Of the two basic strategies, DLS is simpler and more stable, despite its nested nature. In general, these strategies for assessing probabilistic constraints perform a first-order or second-order approximate expansion at the MPP, i.e., the first-order reliability method (FORM) 37,38 or the second-order reliability method (SORM), 39,40 to convert probabilistic constraints into deterministic constraints. FORM has been widely used in RBDO procedures due to its simplicity and efficiency.
Besides, another method of probability assessment is Monte Carlo simulation (MCS), [41][42][43] which is regarded as a robust reliability analysis and is often applied as an auxiliary tool to verify the accuracy of the optimal solution. Nevertheless, if the performance function is highly nonlinear, MCS may require excessive computational effort because it needs many samples. Therefore, FORM approaches based on DLS exhibit good performance, both efficient and robust, in assessing the failure probability in RBDO. After in-depth study, the enhanced modified reliability index approach (EMRIA) is proposed to improve the efficiency of MRIA based on an innovative active set, and is applied to deal with the thickness uncertainty in cushioning packaging design. EMRIA not only improves the accuracy and efficiency of evaluating the failure probability but also provides an optimal and reliable design for cushion packaging. In the remainder of this paper, a brief review of reliability methods is given in Section ''Existing relevant reliability index.'' In Section ''Construct the search region of MPP,'' the innovative active set is defined. After that, the details of EMRIA are presented in Section ''EMRIA in RBDO.'' An application is carried out to demonstrate the superiority and efficiency of the suggested method in Section ''Application,'' followed by conclusions. The final section outlines further work.

Existing relevant reliability index

The classical RBDO mathematical model is 19

find d̄, min z(d̄), s.t. P_f[g_i(X) ≥ 0] ≤ P_fi, d̄^L ≤ d̄ ≤ d̄^U (1)

where X and d̄ are the vectors of random variables and design variables, respectively, the latter satisfying the lower (d̄^L) and upper (d̄^U) bounds. z stands for the cost or objective function to be minimized. P_f[g_i(X) ≥ 0] describes the i-th probabilistic constraint of the system, where g_i(X) greater than zero denotes the failure region, and P_fi is the target failure probability.

Reliability index approach

The system failure probability, P_f, is estimated in the statistical model as

P_f = ∫_{g(x) ≥ 0} f_X(x) dx (2)

where f_X(x) represents the joint probability density function (JPDF) of X. Obtaining the failure probability of the structure requires multi-dimensional integration; however, for most practical highly nonlinear problems, solving multiple integrals is very complicated. Thus, the reliability index β_HL was introduced by Hasofer and Lind 20 to obtain the failure probability of the system without solving multidimensional integrals. A graphical interpretation of β_HL is depicted in Figure 2: β_HL is the minimum distance from the origin to the performance function in standard normal space. Based on the above definition, RIA is formulated as

β_HL = ‖ũ*_i‖, with ũ*_i = arg min {‖ũ‖ : g_i(ũ) = 0} (3)

where ũ*_i denotes the MPP. Equation (3) is a sub-optimization problem that solves for the MPP and the corresponding reliability index. The reliability index obtained from equation (3) is always positive. If the design point falls within the failure region, the reliability index so defined is invalid and the iteration will not converge to a true MPP; this is where the convergence problem occurs.

Modified reliability index approach
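The sub-optimization in equation (3) can be sketched with a general-purpose constrained optimizer. The Python fragment below is a minimal illustration, not the paper's implementation; the limit-state function g is a toy linear example chosen so the answer is easy to check.

```python
import numpy as np
from scipy.optimize import minimize

def g(u):
    """Toy limit-state function in U-space; g(u) >= 0 is the failure region."""
    return u[0] + u[1] - 3.0

# Equation (3): find the MPP as the point on g(u) = 0 closest to the origin.
res = minimize(lambda u: np.linalg.norm(u), x0=np.zeros(2),
               method="SLSQP", constraints=[{"type": "eq", "fun": g}])
u_mpp = res.x                    # most probable point (MPP)
beta_hl = np.linalg.norm(u_mpp)  # Hasofer-Lind reliability index
print(u_mpp, beta_hl)            # ~[1.5, 1.5] and beta_HL ~ 2.121
```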
Li et al. 35 proposed a modified reliability index, β_M, to solve the convergence problems of RIA:

β_M = ∇_u g(ũ*_i)^T ũ*_i / ‖∇_u g(ũ*_i)‖ (4)

where ∇_u g(ũ*_i) denotes the gradient vector of g(ũ) at the point ũ*_i, ∇_u g(ũ*_i) = [∂g/∂ũ_1, ∂g/∂ũ_2, ..., ∂g/∂ũ_n]^T, and ‖∇_u g(ũ*_i)‖ is its modulus. This definition takes full advantage of the collinearity between the vector ũ*_i and the gradient ∇_u g(ũ*_i) to distinguish whether the current design point lies within the failure region or not. MRIA can thus converge accurately and stably to the optimal solution regardless of the location of the initial design point. However, when strongly nonlinear probabilistic constraints are involved, MRIA inherits some inefficient features of the MPP search. 26,27

Construct the search region of MPP

In this section, the EMRIA is presented, which inherits the robustness of the MRIA while improving the search efficiency of the MPP. Firstly, the innovative active set (Λ̃) is defined using an inequality to construct a region for the MPP search. In the innovative active strategy, we introduce the concept of ''6-sigma'' to solve the MPP sub-optimization efficiently and robustly, as shown in Figure 3. If the MPP lies in the defined set Λ̃, the corresponding constraint is active and is kept, as for the point ũ*_a1; otherwise, the constraint is inactive and discarded, and the procedure goes to the next iteration, as for the points ũ*_ina2 and ũ*_ina3. Thus, the innovative active set is defined by a strict inequality confining the MPP to the 6σ region (equation (5)), where σ denotes the standard deviation. A new reliability index, β̂_sj, is defined at the effective MPP ũ*_sj of the j-th active constraint (equation (6)), and ũ*_{i+1}(d̄) is the MPP of the next constraint. It is worth mentioning that the design and quality of ''6-sigma'' products have been recognized by the global market because of their high reliability and confidence level. 44,45 The ''6-sigma'' design level is equivalent to a 99.9999998% confidence level, significantly more reliable than the 3-sigma design (99.73%). Therefore, we adopt a ''6-sigma'' design level to assess the reliability of the system.

EMRIA in RBDO

The proposed EMRIA solves the RBDO problem in equation (1), mainly adopting DLS. In this section, we introduce the functions of the inner loop and the outer loop in detail.

FORM for constraint conversion

As mentioned before, a sub-optimization iteration is executed in the inner loop to solve for the MPP ũ*_sj and the reliability index β̂_sj in standard normal space (U-space). Thus, the basic independent random variables in the original space (X-space) are transformed into independent U-space using the mapping x̃ = d̄ + ũσ, as illustrated in Figure 4. The sub-optimization scheme is then normally expressed as

ũ*_sj = arg min {‖ũ‖ : g_j(ũ) = 0} (7)

where g_j(ũ) represents the j-th active performance function. With the help of the MPP ũ*_sj and β̂_sj, the probabilistic constraints of RBDO are converted into deterministic constraints via the Rosenblatt 46 transformation, which relates the failure probability to the reliability index through the standard cumulative distribution function (CDF) Φ and the cumulative distribution function of the performance function, F_G(g_j) (equation (8)). The transformation of the constraint is completed using two inverse Gaussian transformations, 47 involving Φ^{-1}, the inverse CDF of the standard normal distribution, and F_G^{-1}, the inverse CDF of the performance function (equation (9)).
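Once an MPP is available, the signed index of equation (4) is cheap to evaluate. A minimal sketch, with the gradient approximated by forward differences (the toy g from the previous fragment is reused so the result is easy to check):

```python
import numpy as np

def modified_beta(g, u_mpp, eps=1e-6):
    """Signed reliability index from the collinearity of the MPP and the
    gradient of g at the MPP; it comes out negative when the origin lies
    in the failure region, which is what restores convergence in MRIA."""
    grad = np.zeros(len(u_mpp))
    for k in range(len(u_mpp)):  # forward-difference gradient of g
        step = np.zeros(len(u_mpp))
        step[k] = eps
        grad[k] = (g(u_mpp + step) - g(u_mpp)) / eps
    return float(grad @ u_mpp) / np.linalg.norm(grad)

g = lambda u: u[0] + u[1] - 3.0
print(modified_beta(g, np.array([1.5, 1.5])))  # +2.121: the origin is safe
```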
According to equation (9), the probabilistic constraints of equation (1) are re-expressed in the deterministic formulation

β̂_sj(d̄) ≥ β_tj (10)

where β̂_sj(d̄) is a function of d̄ and ũ*_sj, and β_tj is the target reliability index.

DLS combining the EMRIA

In the outer loop, β̂_sj(d̄) is expanded in a first-order Taylor series at the current design point d̄*_k. For the k-th iteration, β̂_sj(d̄) is approximated as

β̂_sj(d̄) ≈ β̂_sj(d̄*_k) + ∇_d β̂_sj(d̄*_k)^T (d̄ − d̄*_k) (11)

where β̂_sj(d̄*_k) is obtained from equation (10). In equation (6), if ũ*_sj ∈ Λ̃, the partial derivative of β̂_sj(d̄*_k) with respect to d̄ is written as in equation (12). Combining equations (12), (10), and (6), the optimized iterative scheme of the outer loop is given by equation (13), where c(·) denotes the constraints. At the beginning, the initial values d̄^(0) and ũ^(0) are given arbitrarily. Equation (7) is applied to perform the inner loop and update the MPP ũ*_sj and the reliability index β̂_sj(d̄). When ũ*_sj ∈ Λ̃, the current constraint is active and the procedure goes to the outer loop of equation (13); otherwise, the current constraint is inactive and the next constraint is considered. The stopping criterion is ‖d̄^(k+1) − d̄^(k)‖ ≤ ε, where ε = 10^{-6} is a small positive number in this paper.

Application

To demonstrate the ability of the proposed method in RBDO, a practical problem is discussed here, comparing the convergence results of EMRIA with those of both MRIA and RIA. Function evaluations (FEs) are used as a prominent indicator of the efficiency of the methods over the entire solution process.

A drop packaging system

A package is depicted in Figure 5, where the product and cushioning material are idealized as a nonlinear mass-spring system with stiffness coefficient k. Here, the outer corrugated carton is ignored, and the mass of the product is denoted by m. The dropping process of the system is modelled with the nonlinear mass-spring system, as shown in Figure 6, where h denotes the drop height and t the thickness of the cushion. In the collision with the ground, the cushion material is compressed and undergoes a dynamic compression process. 8 In this study, closed-cell foam is used as the cushion material because of its good cushioning properties. The relationship between stress and strain was fitted in Matlab to data from an Instron 5566 static compressor, with the cushion material placed between two parallel plates, as shown in Figure 7. The fitted analytical formula is

σ(ε) = 6.7578ε − 43.02144ε² + 119.4211ε³ − 147.45467ε⁴ + 68.45325ε⁵ (14)

where σ and ε are the stress and strain of the foam, respectively. During the impact, we assume the impact energy is consumed entirely by the cushioning material (without energy loss), so that the cushion material reaches its maximum deformation. Thus, the energy balance is applied as 48

∫₀^εm σ(ε) dε = mgh / (At) (15)

where g denotes gravitational acceleration, A indicates the bearing area of the cushion, and ε_m is the peak strain. The integral expression of equation (15) can be better understood geometrically from Figure 8: the marked area corresponds to the energy consumed per unit volume of the cushion, ∫₀^εm σ(ε) dε, which equals the value mgh/(At). The peak impact acceleration then follows from the peak stress transmitted through the bearing area:

a_max = σ(ε_m) A / m (16)

It is worth emphasizing that the maximum impact acceleration is compared with the fragility value of the product to assess the possibility of shock damage to the product. 49 The fragility value is the critical acceleration at which the product is damaged.

Formulated RBDO

A packaged product with a mass (m) of 2.5 kg falls freely from a height (h) of 0.6096 m.
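Equations (14)-(16) can be chained numerically: solve the energy balance (15) for the peak strain, then evaluate (16). The sketch below assumes σ is in MPa and reads the coefficient 0.007 in the cost function of the next section as the bearing area A in m², an inference that reproduces the reported optimum well; both are assumptions, not values stated at this point in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def sigma(eps):
    """Fitted foam stress-strain curve, equation (14); stress in MPa (assumed)."""
    return (6.7578 * eps - 43.02144 * eps**2 + 119.4211 * eps**3
            - 147.45467 * eps**4 + 68.45325 * eps**5)

m, h, grav = 2.5, 0.6096, 9.81   # kg, m, m/s^2 (values from the paper)
A, t = 0.007, 0.0365             # m^2 (inferred), m (the reported optimum)

# Energy balance (15): integral of sigma over [0, eps_m] = m*g*h/(A*t), in MPa.
rhs = m * grav * h / (A * t) / 1e6
eps_m = brentq(lambda e: quad(sigma, 0.0, e)[0] - rhs, 1e-6, 0.6)

# Equation (16): peak transmitted force sigma(eps_m)*A divided by the mass m.
a_max_g = sigma(eps_m) * 1e6 * A / m / grav
print(f"eps_m ~ {eps_m:.3f}, a_max ~ {a_max_g:.0f} g")  # ~0.197 and ~106 g
```

With these assumed numbers the peak strain lands just under 0.2 and the acceleration near 106 g, consistent with the paper's finding that the strain constraint is the active one at the optimum while the acceleration constraint is not.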
The cost function, the volume of the cushion, should be minimized, while the random uncertain parameter is the thickness (t) of the cushion. In the reliability assessment, the uncertainty in cushion thickness plays a vital role in protecting the product from damage during transportation. This uncertainty arises from the manufacturing of the material itself and has greater importance for system reliability than the other design parameters. The random variable is assumed to follow a normal distribution, t ~ N(d̄, σ²), where the mean value d̄ is the design variable of the system and the standard deviation is kept constant during the RBDO. The parameters are detailed in Table 1. The RBDO model contains two highly nonlinear performance functions. The first constraint is that the maximum impact acceleration generated during the impact should be less than the G-value of 110g. The second is that the maximum strain of the cushion cannot exceed 0.2. For these two failure modes, the failure probability of the system shall not exceed the target of 3%. In addition, the iteration convergence criterion is set to 10^{-6}. The RBDO model is constructed as:

Min: z = 0.007 d̄
s.t.: P(g₁ = a_max − 110g ≥ 0) ≤ 3%
P(g₂ = ε_m − 0.2 ≥ 0) ≤ 3%
0.02 ≤ d̄ ≤ 0.1, σ = 0.00003

The results of RIA, MRIA, and EMRIA for this RBDO problem are tabulated in Table 2. All approaches reached the same optimal design point, 0.0365, and the same minimum objective (cost) function value, 2.5526e-4, under the same initial settings. However, the proposed EMRIA converged in 4 iterations (Iter.) and 158 function evaluations (FEs), followed by MRIA with 17 iterations and 264 FEs, and finally RIA with 35 iterations and 423 FEs. It is obvious that the proposed EMRIA is more efficient and faster than MRIA over the whole solution process. Figure 9 presents the iteration histories of the methods for this problem. In addition, the failure probabilities (Failure Pro.%) of the two constraints were evaluated by MCS at the optimal solution based on 10⁶ samples. The results are 0% and 2.762%, the latter almost identical to the allowable failure probability of 3%. This also shows that constraint g₁ is inactive while g₂ is active. In Table 3, reducing the design variable (optimal solution) by 0.5% raises the failure probability of the system to nearly 10%, which shows that the obtained result is the optimal and correct solution. Based on the above results, EMRIA finds the correct optimal solution and shows good efficiency and accuracy.

Conclusion

A novel reliability analysis method for cushioning packaging design, named the enhanced modified reliability index approach (EMRIA), is proposed based on MRIA. The problem is formulated as an RBDO model. The key step of EMRIA is to construct the region for the MPP search stably and robustly in the inner loop using an innovative active-set strategy. This strategy is a strict inequality used to determine whether the current constraint is active or inactive. An active constraint goes into the outer loop to complete the structural design optimization analysis, while inactive ones are discarded. This process is iterated until it converges stably to the optimal solution. An example of cushioning packaging design was studied.
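The MCS verification at the optimum is also easy to sketch: sample the thickness, solve the energy balance per sample, and count violations of g₂. The fragment below inlines the fitted curve (14) and keeps the same assumed bearing area A = 0.007 m² as the previous sketch; with these assumptions and the rounded optimum the estimate will not reproduce the reported 2.762% exactly, but the procedure is the same (the paper uses 10⁶ samples; fewer are drawn here to keep the sketch fast).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

coeffs = [68.45325, -147.45467, 119.4211, -43.02144, 6.7578, 0.0]
sigma = lambda eps: np.polyval(coeffs, eps)     # equation (14), MPa (assumed)

m, h, grav, A = 2.5, 0.6096, 9.81, 0.007        # A in m^2 is an inference
d_opt, sd = 0.0365, 0.00003                     # reported optimum and sigma

def peak_strain(t):
    """Solve the energy balance (15) for the peak strain at thickness t."""
    rhs = m * grav * h / (A * t) / 1e6
    return brentq(lambda e: quad(sigma, 0.0, e)[0] - rhs, 1e-6, 0.6)

rng = np.random.default_rng(1)
thicknesses = rng.normal(d_opt, sd, size=10_000)
failures = sum(peak_strain(t) >= 0.2 for t in thicknesses)   # g2 >= 0 events
print(f"P(g2 >= 0) ~ {failures / len(thicknesses):.3%}")
```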
The thickness of the cushion is an uncertain, random parameter related to the manufacturing of the cushion. The results show that EMRIA converges to the same optimal solution as both RIA and MRIA but with higher computational efficiency. In addition, the optimal solution is verified by MCS based on 10⁶ samples, which shows agreement with the target reliability. The studied case proves that the developed method not only has the capacity to handle the uncertainty in cushioning packaging design but also outperforms MRIA in terms of efficiency, stability, and accuracy.

Further work

In this work, we demonstrated that the proposed EMRIA has the following advantages: (1) EMRIA can be applied to solve RBDO problems in practical engineering, achieving a balance between cost and quality. (2) EMRIA is suitable for highly nonlinear systems and performs well on them. (3) EMRIA has prominent efficiency and stability in comparison with MRIA. The disadvantage of the proposed method is that the solution is not robust: a small disturbance of the design variables causes a much larger failure probability of the system. In Table 3, when the design variable (optimal solution) is reduced by 0.5%, the failure probability rises to nearly 10%. This shows that the obtained result is the optimal and correct solution, but not a robust one. We are currently investigating this important issue further. The future work of this paper is as follows: (1) the research will focus on large and complex engineering designs of RBDO with multiple design variables and constraints to test the proposed method; (2) the research will further compare the efficiency of the proposed method with other reliability methods based on RIA and PMA; (3) the research will further verify whether the proposed EMRIA applies to other cushioning materials, such as open-cell cushion materials.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the author gratefully acknowledges the support of the National Natural Science Foundation of China [grant number 51679056]. The author thanks the Packaging Laboratory of Rutgers, The State University of New Jersey.

Data availability statement

Data will be made available on reasonable request.
Wuzi Yanzong Pill for the treatment of male infertility

Abstract
Background: The incidence of male infertility is increasing worldwide and has become an important problem for many married couples. Half of infertility cases are attributable to male factors. Wuzi Yanzong Pill is a traditional Chinese herbal formula widely used in treating spermatorrhea, premature ejaculation, erectile dysfunction, lumbago and male sterility. Therefore, in this systematic review, we aim to evaluate the effectiveness and safety of Wuzi Yanzong Pill for the treatment of male infertility.
Methods: English and Chinese literature published before June 30, 2020 will be searched in PubMed, EMBASE and the Cochrane Library, and Chinese literature in the China National Knowledge Infrastructure, the Chinese biomedical document service system, the VIP Chinese Science and Technology Journal Database, and WANFANG data. All related randomized controlled trials that meet the eligibility criteria will be included, and other studies will be excluded. We will search the literature with the text keywords "male infertility" or "sperm" or "semen" and "Wuzi Yanzong Pill" or "Wuziyangzong" or "WZYZ". Progressive motility, sperm concentration, sperm morphology, sperm viability, sperm DNA fragmentation, sperm number per ejaculate and pregnancy rates will be evaluated. RevMan 5.3 and Stata 14.0 will be used to conduct this systematic review. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols statement is followed in this protocol, and the PRISMA statement will be followed in the completed systematic review.
Conclusion: The efficacy and safety of Wuzi Yanzong Pill in the treatment of male infertility will be evaluated. The results of this review may provide some help for clinicians' decisions.

Introduction
Infertility is a worldwide problem, defined as the failure to achieve spontaneous pregnancy after 1 year of regular intercourse without any contraception. [1] About 15% of couples are affected by infertility, and in general male factors account for nearly half of cases. [2] There are many causes of male infertility: less than 10% of patients have congenital or genetic abnormalities, 20% to 30% have pathogenic conditions such as anti-sperm antigen, varicocele, or infection, and in 30% to 50% of men no cause of infertility can be found (idiopathic infertility). Currently, treatment methods for male infertility are various, including antioxidants and assisted reproductive technologies such as IUI, IVF, and ICSI. [3-5] Nevertheless, there is a lack of effective and specific pharmaceutical treatments for male infertility. Based on traditional Chinese medicine (TCM) theory, infertility is classified under the categories of "no child" and "sterility". TCM holds that the kidney stores essence, which controls development and reproduction in human life. Physicians of different dynasties have believed that the prosperity and decline of kidney essence determine a male's fertility, and that kidney insufficiency is the main pathogenesis of semen abnormality. [6] Therefore, the therapeutic principle should focus on tonifying the kidney and nourishing essence. In China, traditional herbal prescriptions, the basic form of clinical application of TCM for thousands of years, have been proven by clinical practice to play a positive role in human fertility. Wuzi Yanzong (WZYZ) Pill is the most common Chinese herbal prescription for the treatment of male infertility.
This prescription, known as "the first prescription of ancient and modern infertility treatment", was first documented in the She Sheng Zhong Miao Fang edited by Shi-che Zhang in the Ming Dynasty. WZYZ Pill consists of Fructus Lycii, Semen Cuscutae, Fructus Rubi, Schisandra Chinensis, and Semen Plantaginis. Many clinical reports have found that WZYZ Pill has a significant therapeutic effect on infertility. For instance, it has been confirmed that WZYZ Pill can significantly elevate the semen volume and sperm density in infertility patients with low semen counts. [7,8] However, its quality and efficacy have not been systematically evaluated, which affects the reliability of the research conclusions and brings confusion to clinicians in clinical application. Therefore, it is necessary to carry out a systematic review and meta-analysis to fully evaluate the efficacy and safety of WZYZ Pill in the treatment of male infertility.

Review objectives
The purpose of this systematic review is to evaluate the efficacy and safety of WZYZ Pill for the treatment of male infertility, including sperm progressive motility, sperm concentration, sperm morphology, sperm activity rate, sperm DNA fragmentation, sperm number per ejaculate and pregnancy rates, and to provide evidence-based medical evidence and suggestions for andrologists and urologists in the future.

Methods
This is a systematic review, and the meta-analysis will be carried out as conditions permit. Since this is a systematic review based on original research, no ethics committee approval is required.

Protocol and registration
This protocol has been registered on the International Platform of Registered Systematic Review and Meta-analysis Protocols (registration number: INPLASY202070046), available at https://inplasy.com/. The preferred reporting items of the PRISMA statement for systematic review and meta-analysis protocols (PRISMA-P) will be followed in this protocol, [9,10] and the PRISMA statement will be followed when reporting the systematic review.

Eligibility criteria
The inclusion and exclusion criteria are summarized as follows.
3.2.1. Study designs. The study will include only randomized controlled trials (RCTs). All case reports, patient series, retrospective studies, self-controlled or before-and-after controlled studies, animal studies, reviews, laboratory researches, observational studies, meta-analyses, letters and other second-hand studies will be excluded.
3.2.2. Participants
3.2.2.1. Included population. The infertile patients must be older than 18 years, with at least 1 year of unprotected sexual intercourse without contraception, and with healthy female partners (tubal, uterine and cervical abnormalities and ovarian disorders excluded). The patients should conform to the diagnostic criteria established in the 2012 edition of the European Association of Urology guidelines [2] or other authoritative standards.
Excluded population. Healthy people; undiagnosed patients; female infertility patients; azoospermia; infertility due to obstructive diseases, hypothalamic-pituitary lesions, chromosomal or genetic lesions, endogenous or exogenous hormone abnormalities, or congenital abnormality.
3.2.3. Interventions. Treatment with WZYZ Pill alone or combined with Western medicine is accepted as the treatment intervention, limited to RCTs of drug therapy.
If WZYZ Pill is used as the control in a trial and another drug as the intervention, we will consider reversing the order of the two interventions in this systematic review; that is, WZYZ Pill will be regarded as the intervention measure and the other drug as the control measure.
Control interventions. Acceptable control interventions are simple Western medicine or no treatment as a blank control. However, trials in which participants received other traditional Chinese medicine treatments such as intravenous medication, acupuncture, or moxibustion will be excluded.
Primary outcome indicators. (1) Progressive motility: the proportion of grade A and B activity or forward-moving sperm in the World Health Organization classification, provided as a percentage (%). (2) Sperm concentration: number of sperm per milliliter (10^6/mL). [4] (3) Sperm morphology: proportion of normal sperm, provided as a percentage (%). (4) Sperm viability: proportion of all active sperm (including A, B, C or PR, NP), provided as a percentage (%). Assessment will be based on the results reported at the end of the included studies.
Secondary outcome indicators. (1) Sperm DNA fragmentation: DNA integrity damage reported in the study. The detection method may be the sperm chromatin structure assay (SCSA), terminal deoxyuridine nick end labelling (TUNEL) assay, Comet assay, sperm chromatin dispersion (SCD) assay, acridine orange (AO) test, aniline blue (AB) staining, toluidine blue, or chromomycin A3 (CMA3) staining. [11] (2) Sperm number per ejaculate: the total number of sperm contained in one ejaculation (10^6/ejaculation). (3) Pregnancy rate: defined as all pregnancies reported in the study. (4) Adverse events: all adverse events, including nausea, vomiting, facial flushing, increased heart rate and other adverse events in the study.
3.3. Data source
3.3.1. Electronic search. We will search PubMed, EMBASE and the Cochrane Library, and, for Chinese literature, the China National Knowledge Infrastructure (CNKI), the Chinese biomedical document service system (SinoMed), the VIP Chinese Science and Technology Journal Database (VIP), and WANFANG data. The literature publication deadline is June 30, 2020 in each platform or database, and the search will be done in July 2020. The literature search will be updated again before the systematic review is completed. Subject headings and free-text words will be used to search PubMed, EMBASE and the Cochrane Library. In the Cochrane Library and EMBASE, free words will be limited to title, abstract and keywords; in PubMed, to title/abstract. The "topic" field will be used for the search of CNKI and WANFANG, and the "title or keyword" field for the search of VIP. The subject heading plus free words form will be used to retrieve SinoMed. We will choose Medical Subject Headings or text keywords "male infertility" or "sperm" or "semen" AND "Wuzi Yanzong" or "Wuzi Yangzong Pill" or "WZYZ". The Chinese forms of the above terms will be used in the Chinese search. A specific search example for PubMed is shown in Table 1.
3.3.2. Other sources of search. Grey literature will be retrieved through OpenGrey. Full texts will be obtained through library interlibrary loan or purchase. A manual review of references in published articles will be conducted to identify other relevant studies.
3.4. Selection of studies and data extraction
3.4.1. Selection of studies. Document management will be conducted with EndNote X8 software.
The software will be used to filter duplicate studies first, and then duplicate researches will be deleted by reading titles, abstracts and other relevant information. According to the included and excluded population criteria, the literature will be further screened; controversial literature will be screened after obtaining the full text. Further detailed screening and data extraction will be carried out simultaneously by two professionally trained reviewers (Shanshan Yong, Yali Yang). Then, the articles that meet the inclusion criteria will be read in full text and re-screened. If two or more articles report repeated or staged research results, only the article with the largest sample size and the most complete intervention and follow-up time will be included. When the review team cannot confirm repeated studies, the original study author will be contacted for judgment. The flow chart of literature screening is shown in Figure 1.
Data extraction. Before the formal process of data extraction, the review group will discuss and produce a unified data extraction form. Two review authors (Shanshan Yong, Yali Yang) will independently conduct data extraction exercises. All differences will be discussed and resolved with the third reviewer (Fuhao Li). The content of data extraction is as follows. 1. General characteristics: name of first author, publishing year, study title, nation or country, execution time of the study, email or other contact information. 2. Information on studies: study design, sample size, randomization information, allocation concealment, blinding method, diagnostic criteria, outcome indicators, safety indicators, statistical methods, information on outcome indicators, follow-up. 3. Information on participants: age, severity of disease, course of disease, baseline level, comorbidity, health condition. 4. Information on the control group: the packaging, shape, taste and color of the oral drug should be consistent with those of the treatment group, such that neither the researchers nor the participants can distinguish them. 5. Outcome indicators: detailed statistics of sperm quality parameters, including sperm viability, progressive motility, sperm concentration, sperm morphology, sperm DNA fragmentation, sperm number per ejaculate and pregnancy rates; data on adverse events; specific information. When necessary, the review team will contact the original research author by email to obtain the full text or relevant results. If there are any questions or confusion about the original research in the process, the author will be contacted again for specific answers.

Risk of bias assessment
Two review authors (Shanshan Yong, Yali Yang) will independently evaluate and cross-check the risk of bias, including selection bias, performance bias, detection bias, attrition bias and reporting bias, based on the Cochrane Collaboration risk of bias assessment tool. Discrepancies between review authors on the risk of bias will be resolved through discussion with a third review author (Fuhao Li). Assessment items include random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other bias. Each item is rated as low risk, unclear or high risk. [12] Since the authenticity of blinding cannot be determined, and the outcome indicators of this systematic review are relatively objective,
we define the generation of the random sequence, allocation concealment and incomplete outcome data as the key domains of the risk of bias assessment. The risk of bias assessment chart for the included studies will be generated using Review Manager 5.3 software. [12]

Data analysis and synthesis
Descriptive analysis or narrative synthesis will be performed when there is clinical heterogeneity between the studies, or when the data cannot be synthesized or results data cannot be extracted. When the included trials are clinically homogeneous and the data are similar and synthesizable, a meta-analysis will be performed. [13,14] Dichotomous data will be analyzed by relative risk (RR) with 95% confidence interval (CI). [13] Continuous data will be analyzed using the weighted mean difference (if measurement methods are consistent) or standardized mean difference (if measurement methods are different). We will use Cochran's Q statistic and the I^2 statistic to test heterogeneity: P < .10 indicates heterogeneity, and I^2 > 50% indicates significant heterogeneity. A fixed-effects model (Mantel-Haenszel method for RR and inverse variance for MD) will be used for I^2 < 50%. A random-effects model (D-L method) [13] will be used when the heterogeneity is still significant after sensitivity analysis and subgroup analysis. P < .05 in the Z test will be considered statistically significant. The meta-analysis will be generated by Review Manager 5.3 software. [13,14]

Subgroup analysis
If the data are sufficient and there is heterogeneity between studies, we will perform subgroup analysis. Subgroup analysis will be conducted according to different ages, ethnic groups, male infertility types, comorbidity, interventions, control measures, measurement methods or measurement time.

Sensitivity analysis
Sensitivity analysis will be used to test the stability and reliability of the meta-analysis. It can be done by eliminating each study individually, or by using the random-effects model (D-L method) to re-test results obtained with the fixed-effects model.

Publication bias
We will use a funnel plot to test the risk of publication bias if a meta-analysis result contains more than 10 articles. Quantitative methods such as the Begg test and Egger test will be used to help assess publication bias where applicable.

Grading the quality of evidence
The quality of evidence in the systematic review will be judged with the GRADE tool, [15] according to 5 key domains: risk of bias, consistency, directness, accuracy and publication bias. The level of evidence for each outcome can be graded as high, moderate, low or very low quality. [16]
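As an illustration of the synthesis described in the data analysis section above, the following is a minimal sketch (not part of the protocol, which will use RevMan 5.3 and Stata 14.0) of how a fixed-effects pooled RR and the Cochran's Q and I^2 heterogeneity statistics could be computed with inverse-variance weighting; the 2x2 counts are hypothetical.

```python
import numpy as np

# Hypothetical 2x2 counts per trial: (events_tx, n_tx, events_ctrl, n_ctrl).
trials = [(30, 50, 20, 50), (25, 40, 18, 42), (45, 80, 30, 78)]

log_rr, var = [], []
for e_t, n_t, e_c, n_c in trials:
    rr = (e_t / n_t) / (e_c / n_c)
    log_rr.append(np.log(rr))
    var.append(1/e_t - 1/n_t + 1/e_c - 1/n_c)  # variance of log RR

log_rr, var = np.array(log_rr), np.array(var)
w = 1.0 / var                                  # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)        # fixed-effects pooled log RR

Q = np.sum(w * (log_rr - pooled) ** 2)         # Cochran's Q
df = len(trials) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

se = np.sqrt(1.0 / np.sum(w))
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"Pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), "
      f"Q = {Q:.2f}, I2 = {I2:.1f}%")
```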
4. Discussion
In recent years, infertility has received increasing attention. Due to environmental pollution, long-term lack of trace elements, unhealthy living habits, long-term mental stress, excessive smoking, drug abuse, abuse of hormone drugs, sexually transmitted diseases and other factors, the reproductive capacity of humans has shown a significant downward trend, particularly in men. [17] Currently, diverse drugs are recommended to treat male infertility, but medication remains mainly empirical therapy. Traditional Chinese medicine pays attention to the concepts of holism and syndrome differentiation and is considered a safe and effective approach in the treatment of infertility. In TCM, the pathological mechanisms of male infertility are asthenia, sthenia, cold, heat, phlegm, blood stasis and depression, which eventually lead to kidney essence deficiency. [18,19] WZYZ Pill is composed of 5 traditional Chinese herbs, which jointly exert the effect of nourishing kidney essence. WZYZ Pill primarily acts on testicular spermatogenic epithelial cells, directly affecting the differentiation and development of spermatogenic cells and restoring spermatogenic function. [20] It has been shown that WZYZ Pill can significantly improve semen quality and increase serum testosterone (T) and luteinizing hormone (LH) levels in male infertility patients, [7,21] and can remarkably enhance male semen parameters. Results show that WZYZ Pill combined with levocarnitine for the treatment of oligoasthenospermia could elevate sperm quality and sperm motility. [22] One study has shown that WZYZ Pill can improve semen parameter indexes of patients with sperm DNA damage, reduce the rate of sperm DNA fragmentation, and have a repairing effect on sperm DNA damage. [23] WZYZ Pill is a traditional Chinese formula that has long been used in the treatment of male infertility, and many clinical reports have testified that its clinical effect is remarkable. However, the experimental quality and conclusions of these researches are not well substantiated, which affects the reliability of the studies and makes them difficult for clinicians to apply in practice. Therefore, this meta-analysis aims to evaluate the therapeutic effect of WZYZ Pill on male infertility patients by selecting only randomized controlled trials (RCTs). We will assess the effect of WZYZ Pill on semen parameters, including progressive motility, sperm concentration, sperm morphology, sperm viability, sperm DNA fragmentation, sperm number per ejaculate and pregnancy rates, in infertile men. In conclusion, this systematic review will provide evidence-based medical evidence on the effectiveness and safety of WZYZ Pill in improving the reproductive outcome and reproductive capacity of men with infertility, and provide recommendations for further research. This systematic review also has some limitations: first, there may not be enough large-sample RCTs; second, the quality of some RCTs may not be high, which will affect the authenticity of the evidence. Therefore, we hope there will be more large-scale, rigorous, high-quality and reasonable multicenter randomized controlled trials (RCTs) to explore the clinical efficacy of the treatment of male infertility and provide a more objective and well-founded conclusion in the future.
2020-08-13T10:07:20.840Z
2020-08-14T00:00:00.000
{ "year": 2020, "sha1": "6fe2745b9cc9c4588a0531e66626a9b8806887ca", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000021769", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a767aeb9081cee861c34b82f81f00284986abf9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248253084
pes2o/s2orc
v3-fos-license
Evaluation of PPG Feature Values Toward Biometric Authentication Against Presentation Attacks

In this study, we examined information leakage in photoplethysmogram (PPG)-based biometric authentication and assessed an attack against such authentication based on the information leakage. Several approaches have been proposed to apply PPG to biometric authentication using a wearable device; however, there may be several attacks against PPG-based authentication. One of these attacks is a "presentation attack" (PA), which exploits the information leakage arising from the various PPG measurement sites on a body. The PA stealthily records the victim's PPG at non-genuine measurement sites and transmits it to the PPG sensor to break the authentication. We examined the information leakage and assessed the PA by evaluating feature values extracted from the PPG signals. We recorded the PPG signals of 12 participants at their fingertips and wrists. We compared the feature values extracted from the recorded PPG signals by computing the differences, correlation coefficients, and mutual information to examine the leakage of information required for PPG-based authentication. We then assessed the feasibility of a PA based on existing PPG-based authentication algorithms and evaluated the contribution of each value to authentication and PA by computing the permutation importance of all feature values. The experimental results indicated that there might be information leakage, and that selection of feature values can reduce the feasibility of the PA by up to 62.8%.

I. INTRODUCTION
With the growth of the Internet and wearable devices such as smartwatches, our bodies are becoming connected to networks [1]. Many devices provide users with applications via networks. The devices often retain and display confidential information in applications such as message exchange by connecting to smartphones. Some devices can even provide users with applications such as payment functions using stored personal wallet data without connecting to smartphones [2]. However, there may be attackers who try to access and steal confidential information to demonstrate their capability to hack or to profit from the information. Therefore, an authentication function must be added to the device to identify the user of the device and reject attackers in case it is stolen. There are several authentication methods for wearable devices such as smartwatches. The pervasive methods are password- and pattern-based authentication, which require touching the touchscreen mounted on the device [3]. However, these authentication methods can be vulnerable to identity spoofing such as shoulder-surfing and brute-force attacks [4]. Several approaches have been proposed to apply biometric authentication to wearable devices. For example, ring-type devices equipped with fingerprint sensors [5] and watch-type devices equipped with electrocardiogram (ECG)-based authentication modules [6] have been developed. These methods contribute to user convenience because there is no risk of forgetting or losing the information required for authentication [7]. However, many biometric authentication methods for wearable devices often restrict user activity. For example, fingerprint- or ECG-based authentication requires the user to adopt a specific posture, such as touching the sensor or the electrode with the finger.
Therefore, it is necessary to develop a biometric authentication method for wearable devices with few restrictions, such as posture limitations. Meanwhile, several approaches have utilized the distinctiveness of the photoplethysmogram (PPG) for biometric authentication [8]. A PPG signal is a noninvasive circulatory signal related to changes in blood volume in the tissue [9], which can be recorded by a sensor mounted on a smartwatch to provide health information such as SpO2 (arterial oxygen saturation) [10]. As shown in Fig. 1, the sensor comprises a light source, such as a typical light-emitting diode (LED), illuminating the tissue, and a photodetector, such as a phototransistor (PTr), sensing the arterial expansion and contraction in the intensity of the reflected or transmitted light [10]. PPG-based biometric authentication can be realized by extracting feature values from recorded PPG signals and comparing them to registered templates [11]. The advantage of a PPG is that its measurement can be performed at various sites, such as the fingers and wrists, using only one sensor, as shown in Fig. 1, whereas ECG measurement requires at least two electrodes at separated sites on the body. PPG is often an alternative to ECG for heart rate (HR) estimation because it can be recorded with fewer restrictions than ECG [12]. PPG is expected to seamlessly connect applications such as health monitoring and authentication with one sensor [13]. Therefore, wearable devices may provide PPG-based authentication with PPG sensors installed on them in the near future. However, there may be specific vulnerabilities in PPG-based authentication. Although the variety of measurement sites on a body with few restrictions is an advantage of PPG measurement, it may lead to a leakage of the information required for PPG-based authentication, because several biomedical approaches have investigated the similarity in PPG waveforms recorded at different measurement sites on one person [14]. If an attacker stealthily records a victim's PPG signal at non-genuine measurement sites and uses the signal as an input to the PPG sensor, the PPG-based authentication algorithm may accept the input as genuine. In this study, we investigated this vulnerability of PPG-based authentication in order to develop PPG-based authentication systems with countermeasures. We focused on the variety of PPG measurement sites on the body, and on the waveforms at each site, to examine the leakage of information required for authentication. Several studies have been conducted on PPG-based authentication and attacks against it. Our previous work also investigated attacks against PPG-based authentication [15]. The attack utilized a victim's PPG signal stealthily recorded at non-genuine measurement sites on the victim's body and transmitted the signal to the PPG sensor to break authentication. However, there is a paucity of studies investigating the effect of waveform variety on PPG-based authentication and attacks against it. Studying the relationships between PPG waveforms recorded at different sites may lead to the development of countermeasures against attacks. Therefore, we investigated the vulnerability by recording PPG signals at multiple measurement sites on the participants' bodies, examining the information leakage, and assessing the feasibility of the attack using the recorded PPG signals.

II. RELATED WORKS
There can be several attacks on biometric authentication systems.
A typical biometric authentication system comprises three components: a sensor that obtains biometric information from a user, a feature value extractor that obtains identifying information from the biometric information and stores it as templates, and a matcher that compares a new value with the stored templates [16]. Several attack vectors exist for the components of the system. One of the most well-known vectors is the presentation of fake biometrics to a sensor, which is referred to as a presentation attack (PA). PAs have been demonstrated against pervasive image-based biometric authentication systems utilizing commercially available products. For example, face recognition can be passed by presenting photographs or liquid-crystal displays to a camera [17]. Fingerprint recognition can also be passed by presenting silicone rubbers or gummi candies to a fingerprint sensor [18]. In general, most pervasive biometric authentication systems utilize physical images such as faces and fingerprints. Recently, an increasing number of approaches have utilized time-series physiological signals for biometric authentication because of their distinctiveness and difficulty of replication [8]. For example, a watch-type device called the Nymi Band, which provides authentication using an ECG signal derived from the electrical activity of the heart, is available [6]. An electroencephalogram (EEG) signal, which can be recorded by electrodes mounted on headband- and headset-type devices, is also utilized for biometric authentication [19]. In addition, many approaches utilize PPG signals that can be recorded by sensors mounted on smartwatches for biometric authentication [11]. However, several PAs against biometric authentication using time-series physiological signals, as well as against physical image-based authentication, have been proposed. For example, several approaches proposed PAs against ECG-based authentication and demonstrated them against the Nymi Band. Eberz et al. proposed a PA against the Nymi Band that focuses on the variety of ECG measurement sites on the body [20]. The PA maps the victim's ECG signal recorded by a device other than the victim's device to produce a genuine-looking signal, which is transmitted to the Nymi Band. Shukla et al. proposed a PA on EEG-based authentication [21]. The PA utilizes the correlation between the recorded EEG and the user's movement during EEG recording. In addition, several PAs against PPG-based authentication have been proposed, focusing on the various PPG measurement sites on the body. Seepers et al. investigated the possibility of passing heartbeat-based authentication by utilizing heartbeats estimated from blood circulation in the face using camera-based PPG as an attack vector [22]. Our previous work also proposed a PA against PPG-based authentication that utilizes multiple feature values [15]. We assumed that the victim wore a smartwatch that included a genuine PPG sensor and often logged into applications holding confidential information after PPG-based authentication. We also assumed that the attacker intended to steal the information through the PA, which was executed as follows:
Step 1: Install a malicious PPG sensor in daily necessities or office supplies, such as a mouse or a desk, that the victim may touch with the finger.
Step 2: Record the victim's PPG signal at the finger using the malicious PPG sensor.
Step 3: Generate an electrical signal or control the light intensity based on the recorded signal.
Step 4: Obtain the victim's smartwatch after he/she removes it to charge the battery.
Step 5: Transmit the signal to the PPG sensor installed on the smartwatch to break its authentication.
In that experiment, we investigated the feasibility of the proposed PA using the PPG signals recorded at multiple measurement sites on participants and an existing PPG-based authentication algorithm. The experimental results suggested that the PA could occur [15]. The PAs against biometric authentication using time-series physiological signals described in the previous paragraphs are based on a hypothesis about the possibility of information leakage: the biometric information required for authentication may be available at other measurement sites or from other sensing devices. For example, a feature value extracted from a signal recorded at one measurement site may be equal to the value from a signal at another site. However, there are few studies examining the leakage of information required for biometric authentication. To the best of our knowledge, no studies have examined the leakage of information, such as feature values required for PPG-based biometric authentication, to investigate attacks. Our previous work [15] also did not examine the leakage using the feature values in the PPG-based authentication algorithm. If we examine this and evaluate the contribution of information to authentication and PAs, we may derive an optimized authentication algorithm with countermeasures against PAs. For example, if the same feature values extracted from PPG signals recorded at different sites are equal, in line with the hypothesis, we should eliminate that value from the algorithm to counter the PA. Therefore, to develop PPG-based authentication with countermeasures, we investigated information leakage as a vulnerability in PPG-based authentication. We evaluated the feature values extracted from the PPG signals recorded at multiple measurement sites to examine information leakage. Then, we evaluated the contribution of each feature value to the authentication and the PA to derive an optimized authentication algorithm with countermeasures against the PA.

III. EXPERIMENT
A. OVERVIEW
We conducted an experiment to examine information leakage in PPG-based biometric authentication and assess the feasibility of the PA. Figure 2 presents an overview of the experimental protocol. We recorded PPG signals at two measurement sites on the participants using the developed sensing system. Then, we extracted feature values from the recorded PPG signals and compared them to examine information leakage relevant to authentication. Subsequently, we assessed the capability of the authentication and the feasibility of the PA against it using feature values and classifiers based on existing algorithms. At the same time, we evaluated the contribution of each value to the authentication and the PA and compared the results across combinations of the values.

B. SETUP AND RECORDING
We developed a PPG sensing system to record PPG signals at two measurement sites on the body. The system included two sensors, each consisting of an LED and a PTr (LED peak emission wavelength: 570 nm; New Japan Radio Co., Ltd., NJL5303R-TE1). Each output of the PTr was filtered with a low-frequency cutoff of 0.40 Hz and a high-frequency cutoff of 5.0 Hz, amplified with a gain of 47 dB, and recorded at a sampling rate of 1 kHz with a resolution of 16 bits using an AD converter (National Instruments, USB-6216).
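The band-pass characteristics described above can be reproduced in software. Below is a minimal sketch, not the authors' implementation, of applying an equivalent 0.40-5.0 Hz band-pass filter to a raw PPG trace sampled at 1 kHz; the Butterworth design and its order are assumptions, since the paper filtered in analog hardware before digitization.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0            # sampling rate [Hz], as in the recording setup
LOW, HIGH = 0.40, 5.0  # band-pass cutoffs [Hz] from the paper

def bandpass_ppg(raw, fs=FS, low=LOW, high=HIGH, order=2):
    """Zero-phase band-pass filtering of a raw PPG trace (assumed design)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, raw)

# Usage with a synthetic 30 s trace: 1.2 Hz "pulse" plus drift and noise.
tgrid = np.arange(0, 30, 1 / FS)
raw = (np.sin(2 * np.pi * 1.2 * tgrid)           # pulsatile component
       + 0.5 * np.sin(2 * np.pi * 0.05 * tgrid)  # baseline drift (< 0.4 Hz)
       + 0.1 * np.random.randn(tgrid.size))      # sensor noise
ppg = bandpass_ppg(raw)
```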
We recorded PPG signals from 12 participants (S1, S2, ..., S12; one female and 11 males, aged 25-30 years) who did not have any cardiovascular diseases. As illustrated in Fig. 3, the sensors were fastened using Velcro tape to record PPG signals on the fingertip and wrist. Although there are more candidates for PPG measurement sites, as shown in Fig. 1, we selected these two sites as our scope to examine the information leakage and assess the PA using PPG signals recorded at relatively close sites, which might resemble each other in waveform based on the blood vessel configuration and blood circulation. In addition, PPG signals are generally recorded at the fingertips in many clinical applications [23], whereas smartwatches record PPG signals at the wrist. The participants wore the PPG sensors at the two measurement sites and maintained a resting state for 30 s while the PPG signals were recorded. Five recordings (trials T1, T2, ..., T5) were obtained for each participant. The experiment was approved by the Ethical Committee of the Information Technology R&D Center (2020-B001), Mitsubishi Electric Corporation, Japan. Informed consent was obtained from the participants before recording started.

C. EXAMINATION OF INFORMATION LEAKAGE
1) FEATURE EXTRACTION
We extracted 43 feature values C_{i,1}, C_{i,2}, ..., C_{i,43} from PPG segments to examine the leakage of the information required for authentication, where i denotes the segment number. Each PPG segment contained a negative peak at the starting point, followed by at least one positive peak, followed by a negative peak at the end of the segment, as illustrated in Fig. 4. The values originate from previous works on PPG measurement, covering not only biometric authentication but also physiological studies [24]-[27], as follows:
• C_{i,1}, ..., C_{i,4} were peak-related values, such as the number of peaks, which might reflect arterial stiffness. Gu et al. proposed the first approach applying PPG to biometric authentication using these four values [24].
• C_{i,5}, ..., C_{i,15} were mainly statistics-related metrics, such as the mean and maximum value in a segment. The values also included the mean of the dynamic time warping (DTW) distance, which measures the similarity between one time-series segment and another. The DTW distance might reflect changes in arterial distensibility within one recording. Jindal et al. proposed the first approach applying a deep-learning technique to PPG-based authentication using these 11 values [25].
• C_{i,16}, ..., C_{i,39} were 24 Mel-frequency cepstral coefficients (MFCC1, ..., MFCC24), which are often used in audio signal processing systems. They were computed by applying a discrete Fourier transform, a logarithmic transform of the mel-scale warped spectrum, and a discrete cosine transform to an input PPG signal. These MFCCs might reflect the frequency characteristics of PPG signals. Siam et al. proposed the application of a deep-learning technique to PPG-based authentication using these MFCCs [26].
• C_{i,40}, ..., C_{i,43} were amplitude-related values, such as the peak-to-peak value in each segment. The values included those related to the dicrotic notch, which is a small and brief increase within a segment [28]. The dicrotic notch might reflect arterial stiffness. Hartmann et al. proposed these values for the evaluation of differences in PPG waveforms recorded at multiple measurement sites on the body, but not for PPG-based authentication [27].
Table 1 summarizes each feature value.
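As a concrete illustration, below is a minimal sketch of extracting a few of the peak-, statistics- and amplitude-related values above from one PPG segment; the feature names and helper function are ours, not the paper's, and only a handful of the 43 values are shown.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_features(seg, fs=1000.0):
    """Toy extraction of a few illustrative feature values from one segment.

    `seg` is one PPG segment (negative peak -> positive peak(s) -> negative
    peak). The full set in the paper also includes DTW distances, 24 MFCCs,
    and dicrotic-notch values, which are omitted here.
    """
    peaks, _ = find_peaks(seg)
    return {
        "num_peaks": len(peaks),                 # peak-related
        "mean": float(np.mean(seg)),             # statistics-related
        "max": float(np.max(seg)),               # statistics-related
        "min_time": float(np.argmin(seg) / fs),  # minimum value time [s]
        "peak_to_peak": float(np.ptp(seg)),      # amplitude-related
    }

# Usage: one synthetic 1 s segment sampled at 1 kHz.
t = np.linspace(0, 1, 1000, endpoint=False)
seg = -np.cos(2 * np.pi * t) + 0.1 * np.sin(2 * np.pi * 3 * t)
print(segment_features(seg))
```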
2) EVALUATION OF FEATURE VALUES
We used three evaluation metrics between the feature values, the mean absolute percentage error (MAPE), the correlation coefficient (CC), and the mutual information (MI), to investigate the relationship between a feature value C^{wr}_{i,m} from the PPG signal recorded at the wrist (PPG_wr) and a value C^{fi}_{i,m} from the PPG signal recorded at the fingertip (PPG_fi), where m denotes the identifier of the extracted feature values (m = 1, 2, ..., 43). We computed the difference between C^{wr}_{i,m} and C^{fi}_{i,m} as the MAPE, taking the former as the true value, as follows [29]:

MAPE_m = (1/N) Σ_{i=1}^{N} | (C^{wr}_{i,m} − C^{fi}_{i,m}) / C^{wr}_{i,m} |   (1)

where N denotes the total number of segments. We used the CC to describe the linear relationship between C^{wr}_{i,m} and C^{fi}_{i,m} as follows [23]:

CC_m = Σ_{i=1}^{N} (C^{wr}_{i,m} − C̄^{wr}_m)(C^{fi}_{i,m} − C̄^{fi}_m) / √( Σ_{i=1}^{N} (C^{wr}_{i,m} − C̄^{wr}_m)^2 · Σ_{i=1}^{N} (C^{fi}_{i,m} − C̄^{fi}_m)^2 )   (2)

where C̄^{wr}_m and C̄^{fi}_m denote the mean of C^{wr}_{i,m} and the mean of C^{fi}_{i,m}, respectively. We computed the MI between C^{wr}_{i,m} and C^{fi}_{i,m} to describe a nonmonotonic relationship as follows [31]:

MI_m = Σ_i Σ_j p(C^{wr}_{i,m}, C^{fi}_{j,m}) log [ p(C^{wr}_{i,m}, C^{fi}_{j,m}) / ( p_wr(C^{wr}_{i,m}) p_fi(C^{fi}_{j,m}) ) ]   (3)

where p_wr(C^{wr}_{i,m}), p_fi(C^{fi}_{j,m}), and p(C^{wr}_{i,m}, C^{fi}_{j,m}) denote the marginal probability mass function of C^{wr}_{i,m}, the marginal probability mass function of C^{fi}_{j,m}, and the joint probability mass function of C^{wr}_{i,m} and C^{fi}_{j,m}, respectively.

D. ASSESSMENT OF PA
1) CLASSIFIER GENERATION
To compute the capability of PPG-based authentication and the feasibility of the PA, we used four classifiers: the k-nearest neighbor classifier (kNN), multilayer perceptron (MLP), support vector machine (SVM), and random forest (RF), which have been used in existing PPG-based authentication algorithms [24]-[26], [32], [33]. kNN is a classifier based on the minimum distance from the sample feature vectors to the training feature vectors [32]. We selected the Mahalanobis distance (MD) as the distance function, which has been successfully applied in various biometric systems. The MD is defined as follows [34]:

MD(C_test, C_train) = √( (C_test − C_train)^T A^{−1} (C_test − C_train) )   (4)

where C_test, C_train, and A denote an input sample feature vector from each PPG segment, a trained feature vector, and the variance-covariance matrix of C_train, respectively. The MLP is a feedforward neural network classifier that can create nonlinear decision boundaries [35]. We used an MLP with three hidden layers consisting of 40, 40, and 10 nodes, estimated to be the optimal combination of layers and nodes using a grid search approach, as in the original article [25]. The SVM is a classifier that creates boundaries dividing the data into two or more classes so as to maximize the margin between the boundaries and the data, and it has been successfully applied in various biometric systems [36]. The RF is an ensemble classifier that generates many decision trees and aggregates their results; it can handle the many feature values and classes that biometric authentication requires [36]. We compared the performance of the four classifiers in the verification.

2) VERIFICATION
We computed the performance metrics of the authentication and the PA based on fivefold cross-validation using the feature values from the five trials. In the authentication, the validation used the feature values extracted from PPG_wr in all trials except one for training and those in the remaining trial for testing with each classifier to compute the equal error rate (EER) as the performance metric. The EER was computed by tuning parameters, such as the threshold for the predicted probability of each classifier, so as to equalize the false rejection rate (FRR) and false acceptance rate (FAR).
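A minimal sketch of the EER computation described above, assuming we already have classifier scores for genuine and impostor segments (the score arrays below are hypothetical, not measured data):

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep the acceptance threshold until FRR and FAR meet.

    Higher score = more likely genuine. Returns (EER, threshold).
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, best = 1.0, (0.5, thresholds[0])
    for th in thresholds:
        frr = np.mean(genuine_scores < th)    # genuine segments rejected
        far = np.mean(impostor_scores >= th)  # impostor segments accepted
        gap = abs(frr - far)
        if gap < best_gap:
            best_gap, best = gap, ((frr + far) / 2, th)
    return best

# Usage with hypothetical predicted probabilities.
rng = np.random.default_rng(1)
gen = rng.normal(0.8, 0.1, 500)   # genuine-user scores
imp = rng.normal(0.3, 0.1, 500)   # impostor scores
eer, th = equal_error_rate(gen, imp)
print(f"EER = {eer:.3f} at threshold {th:.3f}")
```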
In the PA, the validation used the feature values extracted from PPG_wr in all trials except one for training and the values from PPG_fi in the remaining trial for testing with each classifier to compute the segment acceptance rate (SAR) as the performance metric. For example, a classifier was generated using the feature values extracted from PPG_wr in T1-T4, and the values from PPG_fi in T5 were input to the classifier. The SAR is defined as follows:

SAR = (1/T) Σ_{t=1}^{T} ( M_{acpt,t} / M_t )   (5)

where T, M_{acpt,t} and M_t denote the total number of trials, the number of authenticated segments in one trial, and the total number of segments in one trial, respectively. We used the SAR to assess the feasibility of the PA by comparing it with the EER (= FAR), which describes the feasibility of simple identity spoofing using another person's authentication device. Each validation was repeated five times, and the average EERs and SARs were computed for each classifier.

3) FEATURE VALUE SELECTION
We evaluated the contribution of each feature value to the authentication and the PA by computing the permutation importance (PI). The PI is computed by shuffling one column of a dataset of feature values to generate a corrupted dataset and calculating the classification performance using the corrupted dataset. The PI of feature value C_{i,m} is defined as follows [37]:

PI_m = s − (1/K) Σ_{k=1}^{K} s_{k,m}   (6)

where s, K, and s_{k,m} represent the accuracy score of the trained classifier, the total number of repetitions, and the accuracy score using the corrupted feature values, respectively. We computed the PIs of all 43 values for both the authentication and the PA. If we exclude values with high PIs in the PA, we may reduce the SAR. Therefore, we selected values based on the PIs to reduce the feasibility of the PA. Several approaches have investigated the optimal number of feature values for PPG-based authentication, and their results showed that 15-24 values were effective in achieving a high accuracy of identification [26], [28], [38]. Based on the PIs, we formed four combinations of feature values: i) all feature values, ii) values that improved the authentication performance, iii) values that improved the PA performance, and iv) values that did not improve the PA performance. We compared the EER and SAR using combinations i)-iv) for each classifier.
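The permutation importance in Eq. (6) can be computed directly; below is a minimal sketch (our own illustration, using scikit-learn names) for one trained classifier and one feature column. The feature matrix is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def permutation_importance_one(clf, X, y, m, K=5, rng=None):
    """PI_m = s - (1/K) * sum_k s_{k,m}, per Eq. (6)."""
    rng = rng or np.random.default_rng(0)
    s = clf.score(X, y)              # baseline accuracy of trained classifier
    scores = []
    for _ in range(K):
        Xc = X.copy()
        rng.shuffle(Xc[:, m])        # corrupt feature column m in place
        scores.append(clf.score(Xc, y))
    return s - np.mean(scores)

# Usage with a hypothetical feature matrix (segments x 43 values).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 43))
y = (X[:, 7] + 0.1 * rng.normal(size=300) > 0).astype(int)  # column 7 matters
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("PI of column 7:", permutation_importance_one(clf, X, y, 7))
print("PI of column 0:", permutation_importance_one(clf, X, y, 0))
```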
Figure 5 shows an example of PPG signals recorded simultaneously at the two measurement sites on a participant's body (S6, T3). Table 1 presents the evaluation metrics (MAPE, CC, and MI) computed for each feature value; cases in which the denominators in Eqs. (1) and (2) become zero were excluded when computing the MAPE and CC. MAPEs smaller than 0.100, CCs larger than 0.300, and MIs larger than 0.300 are shown in bold in Table 1. Figures 6, 7, 8, and 9 show examples of the relationships between the feature values extracted at the two measurement sites, as proposed by Gu et al. [24], Jindal et al. [25], Siam et al. [26], and Hartmann et al. [27], respectively.

1) EXAMINATION OF INFORMATION LEAKAGE
Some of the relationships and evaluation metrics between C^{wr}_{i,m} and C^{fi}_{i,m} showed the possibility of information leakage. For example, Fig. 6(b) (C_{i,2}: positive slant), Fig. 7(a) (C_{i,7}: mean of DTW), Fig. 7(b) (C_{i,11}: minimum value time), and Fig. 9(b) (C_{i,42}: dicrotic notch time) show a weak correlation between the values extracted from PPG_wr and PPG_fi. Generally, if the CC between two values is greater than 0.300, we can assume they are correlated [30]. Therefore, CC_2 = 0.422, CC_7 = 0.389, CC_11 = 0.398, and CC_42 = 0.389 indicate that there is a correlation between the values from PPG_wr and PPG_fi. Additionally, MI_11 = 0.353 was the highest MI (Table 1). We believe this is because C_{i,11} usually lies at the starting point or the end of the PPG segment, and such values from PPG signals recorded at different sites would be close to each other if the PPG signals repeat the same period of the cycle, as shown in Fig. 5. Figure 6(a) (C_{i,1}: number of peaks) shows that there were only a few variations in the values from PPG_wr and PPG_fi; the difference between the values, MAPE_1 = 0.044, was the smallest, as shown in Table 1. Figure 9(a) (C_{i,40}: peak-to-peak) also showed fewer variations in the values from wrists than from fingertips. These relationships, such as the correlation and coincidence of the values, suggested that there was leakage of information required for PPG-based authentication. Therefore, an attacker might break the authentication by using PPG signals recorded at non-genuine measurement sites based on this information leakage. However, in some relationships and evaluation metrics, there were also feature values that did not indicate the possibility of information leakage. For example, Fig. 8(a) (C_{i,16}: MFCC1) and Fig. 8(b) (C_{i,17}: MFCC2) did not show a definite correlation between the MFCCs extracted from PPG_wr and PPG_fi. In addition, many MFCCs had MI_m = 0.000, as shown in Table 1. These relationships and metrics suggest that there is less information leakage than for the other values, and that it is difficult to estimate the value of an MFCC extracted from a PPG signal recorded at one measurement site using MFCCs extracted from PPG signals recorded at other sites. Therefore, the results suggest that the appropriate selection of feature values for the PPG-based authentication algorithm may contribute to a reduction in the feasibility of the PA.

Table 2 shows the EERs as the capability of the authentication and the SARs as the capability of the PA. The combinations of feature values i)-iv) were selected based on the PI ranks included in Table 1; smaller PI ranks indicate greater importance in either the authentication or the PA when using the combination of all feature values and each classifier. Table 2 i) indicates that the implemented authentication algorithm achieved an EER (= FAR) between 0.001 and 0.019 with each classifier. Meanwhile, it also indicates that the algorithm's SARs were between 0.197 and 0.279 with each classifier, which were greater than the EERs. These results suggest that PPG-based authentication algorithms might accept the signals available in the PA with a higher probability than an impostor's signal (false acceptance). As shown in Table 1, the PI ranks of the feature values often depended on the classifier. However, several values, such as C_{i,7}, C_{i,41} and C_{i,42}, shown in Fig. 7(a), Fig. 9(a), and Fig. 9(b), respectively, contributed to both the authentication and the PA in many combinations of feature values and classifiers. Using these values in the authentication algorithm is discouraged as a countermeasure against the PA, although they contribute to the authentication performance. Table 2 indicates that the selection of feature values changed the EERs and SARs, where i), ii), iii) and iv) denote the combination of all feature values, the values that improved the authentication performance, the values that improved the PA performance, and the values that did not improve the PA performance, respectively.
Although the same number of values was used in ii) and iv), the SAR in iv) was smaller than the SAR in ii) for each classifier. The EER in iv) was larger than the EER in ii); however, the EER difference between ii) and iv) was smaller than the SAR difference between ii) and iv) for each classifier. The results suggest that the selection of feature values based on the PIs would be effective for the PPG-based authentication algorithm against the PA. We succeeded in reducing the SAR by 35.8%, 52.4%, 38.6%, and 62.8% when comparing combination iv) to ii) for the kNN, MLP, SVM, and RF classifiers, respectively.

A. RELATIONSHIP BETWEEN INFORMATION LEAKAGE AND PA
Greater evaluation metrics did not necessarily lead to a higher PI rank in the PA for the feature values, depending on the classifier, as shown in Table 1, although several metrics suggested that there might be information leakage based on relationships such as the linear correlation between the values from PPG_wr and PPG_fi. We had expected that the PA might be more likely to succeed when many feature values C^{wr}_{i,m} and C^{fi}_{i,m} satisfying C^{wr}_{i,m} ≈ C^{fi}_{i,m} were used in the algorithm. We used the 43 values proposed for authentication and for the evaluation of differences in PPG waveforms in the four articles [24]-[27] and compared the same feature values, such as C^{wr}_{i,1} from PPG_wr and C^{fi}_{i,1} from PPG_fi. Although the values differ from each other in their computation methods, there might be correlations between values extracted from the same PPG signal, which might contribute to information leakage and redundancy in the algorithm. This redundancy cannot be resolved by computing and comparing the PIs of the feature values. For example, C_{i,8} (maximum value) and C_{i,40} (peak-to-peak, the difference between the maximum and minimum values), which originate from different articles [25], [27], may be correlated in a way that leads to information leakage, because both values are related to the maximum value of the PPG signal. An attacker may estimate C^{wr}_{i,m1} using C^{fi}_{i,m2}, where m1 ≠ m2, and generate PPG_wr to break the authentication. If we reduce the number of values used for authentication based on metrics such as the CC and MI between values extracted from the PPG signal, we can prevent such information leakage and realize efficient PPG-based authentication algorithms.

B. LIMITATIONS
We examined the information leakage and assessed the PA under limited experimental conditions. We address the following three limitations:

1) RECORDING CONDITION
We need to record the PPG signals of a larger number of diverse participants at more varied measurement sites. Although a larger number of participants is statistically required to ensure the reliability of biometric authentication systems [24], [25], we focused on the examination of the information leakage and the assessment of the PA rather than on the performance of the PPG-based authentication systems. If we record PPG signals at other measurement sites, there may be more differences in the waveforms than between the two sites studied here, which may not lead to information leakage and the PA. Meanwhile, the characteristics of blood vessels, such as compliance depending on age and measurement site, contribute to the PPG waveforms [39]. If blood vessels at different measurement sites of one participant have the same characteristics, the feature values may be close to each other and contribute to information leakage and the PA.
In addition, considering the time between registration and authentication, we need to record PPG signals over a longer duration, because PPG waveforms change gradually, which may also affect the experimental results.

2) AUTHENTICATION ALGORITHM
We need to examine the information leakage and assess the PA using other PPG-based authentication algorithms. We used only algorithms based on hand-crafted feature values, such as the maximum value, although there are many PPG-based authentication algorithms [11]. Some of them use hand-crafted feature values, including the values used in this experiment, while others use the entire PPG signal with deep-learning techniques, such as a combination of a convolutional neural network (CNN) and long short-term memory (LSTM) [11].

3) SIGNAL GENERATION
The PA was assessed by inputting the feature values extracted from the recorded PPG signals into the classifier, rather than by transmitting artificial signals, such as a modulated current or light intensity, to the input of the authentication system as proposed in our previous work [15]. Our future work includes the assessment of the PA by generating and transmitting such signals to the PPG sensor. Fujii et al. investigated a technique for modulating the light intensity presented to a PPG sensor to obtain a target HR from smartwatches [40]. We expect that a similar technique may contribute to a PA against PPG-based authentication that spoofs the sensor to impersonate a victim.

C. COUNTERMEASURES
More countermeasures against the PA should be considered, because Table 2 suggests that the PA could occur with a higher SAR than the EER (= FAR) in any combination of feature values and classifiers, although the selection of values contributed to reducing the SAR. For example, typical replay attack prevention, such as using one-time information, is effective because the PA replays the recorded signal. It is also effective to add liveness detection techniques, such as a humidity sensor, to the authentication device along with the PPG sensor to recognize the object of measurement as a human body, because we assume that the attacker sends artificial signals to the sensor.

V. CONCLUSION
We assumed that PPG-based authentication will be available in the near future and that there might be leakage of the information required for authentication. The leakage derives from a characteristic of PPG: the variety of measurement sites on a body. We examined the information leakage as a vulnerability in PPG-based authentication and assessed a PA against PPG-based authentication based on the information leakage. In the experiment, we recorded the PPG signals of 12 participants at their wrist and fingertip and examined the information leakage by computing evaluation metrics of the feature values extracted from the recorded PPG_wr and PPG_fi. The results indicated that there were relationships, such as linear correlation, between some feature values, which might lead to information leakage and a PA against PPG-based authentication. Then, we assessed the feasibility of the PA using the recorded PPG_wr and PPG_fi and evaluated the contributions of the feature values extracted from the signals to the PPG-based authentication and the PA by computing the PI of each value. The experimental results suggest that the PA might occur based on the contribution of the feature values, and that the selection of feature values could reduce the probability of the PA by up to 62.8%.
2022-04-20T15:08:30.383Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "bfb9c64caa50d6d7d5bb170d0ada6c5b4d6bc20c", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09758697.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "bc818a772cea426e79206e8d51ade73206d7f17e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
64310889
pes2o/s2orc
v3-fos-license
CARer-ADministration of as-needed subcutaneous medication for breakthrough symptoms in home-based dying patients (CARiAD): study protocol for a UK-based open randomised pilot trial

Background
While the majority of seriously ill people wish to die at home, only half achieve this. The likelihood of someone dying at home often depends on the availability of able and willing lay carers to support them. Dying people are usually unable to take oral medication. When top-up symptom relief medication is required, a clinician travels to the home to administer injectable medication, with attendant delays. The administration of subcutaneous injections by lay carers, though not widespread practice in the UK, has proven key to achieving home deaths in other countries. Our aim is to determine whether carer-administration of as-needed subcutaneous medication for four frequent breakthrough symptoms (pain, nausea, restlessness and noisy breathing) in home-based dying patients is feasible and acceptable in the UK.

Methods
This paper describes a randomised pilot trial across three UK sites, with an embedded qualitative study. Dyads of adult patients and carers are eligible where the patient is in the last weeks of life and wishes to die at home, and the lay carer is willing to be trained to give subcutaneous medication. Dyads who do not meet strict risk assessment criteria (including known history of substance abuse or carer ability to be trained to competency) will not be approached. Carers in the intervention arm will receive a manualised training package delivered by their local nursing team. Dyads in the control arm will receive usual care. The main outcomes of interest are feasibility, acceptability, recruitment rates, attrition and selection of the most appropriate outcome measures. Interviews with carers and healthcare professionals will explore attitudes to, experiences of and preferences for giving subcutaneous medication, and experience of trial processes. The study has obtained full ethical approval.

Discussion
This study will rehearse the procedures and logistics which will be undertaken in a future definitive randomised controlled trial and will inform the design of such a study. Findings will illuminate methodological and ethical issues pertaining to researching care in the last days of life. The study is funded by the National Institute for Health Research (Health Technology Assessment [HTA] project 15/10/37).

Trial registration
ISRCTN, ISRCTN 11211024. Registered on 27 September 2016.

Electronic supplementary material
The online version of this article (10.1186/s13063-019-3179-9) contains supplementary material, which is available to authorized users.

Introduction
Caring for the dying during their last few days of life, in a place of their preference, is an essential part of health and social care. The majority express a wish to die at home (79%); however, only half of those achieve this [1]. The likelihood of patients remaining at home often depends on the availability of able and willing informal carers [2-4]. These carers take on numerous care tasks, often including the responsibility of assisting patients to take their oral as-needed medications. Extending the role of carers to include administering subcutaneous (SC) injections has proven to be key in achieving home death in other countries [5]. Pain, nausea/vomiting, restlessness/agitation and noisy breathing (rattle) are common symptoms in the dying [6,7].
In addition to regular (background) medication, given via continuous SC infusion using a syringe pump, guidelines suggest using additional ('as-needed') medication for symptoms that 'break through' [8,9]. As dying patients are commonly unable to take oral medication, as-needed medication is most often given as a SC injection by a healthcare professional (HCP) [8], usually a district nurse (DN) in the UK. Medication for breakthrough symptoms is usually prescribed in advance (anticipatory prescribing) and kept in the patient's home. Medication administration can be severely delayed by HCPs' travel time to the home and/or the non-availability of anticipatory medication in the home. Delays happen even with dedicated out-of-hours (OOH) 'rapid response' nursing services for home-based dying patients. Our local audit revealed long waits. The median wait from call to OOH service for symptom control to as-needed medication administration by HCP was 86 min (mean = 99 min, range = 35-167 min), not including time from administration to onset of action or symptom control. Breakthrough pain is usually quick in onset with a median duration of 30 min [10]. Long waits mean that pain is often not adequately managed in the home setting, as shown in the National Survey of Bereaved People (VOICES) [1]. This project focuses on timely administration of as-needed medication for dying patients being cared for at home, particularly whether lay carer role-extension (to be trained to give as-needed SC injections) is feasible and acceptable in the UK. Rationale Although carer administration of medication (including strong opioids) is lawful and practical [11], it is not currently part of usual care everywhere in the UK. This practice is much-needed: the national Palliative and End of Life Care Priority Setting Partnership (PeolcPSP) surveyed 1403 people including patients within the last years of life, current and bereaved carers, and HCPs on their unanswered questions about palliative and end-of-life care. They accorded highest priority to research into the provision of palliative care, including symptom management, outside of working hours to avoid crises and help patients to stay in their place of choice. The survey noted the information and training needs of carers and families to provide the best care for their loved one who is dying, including training for giving medicines at home [12]. Data from the PeolcPSP indicated that UK patients are being denied the opportunity to die at home due to lack of access to adequate symptom relief [13,14]. Carers across the world embrace this as an option, as evidenced through the published literature as well as evidence from our Patient Public Involvement (PPI) group consultations. In Australia, the practice is well-established (> 30 years) and highly acceptable [5]. A manualised educational package and evidence-based guidelines are available. 
Successful carer-administration of as-needed SC medication for breakthrough symptoms in a dying patient may: ▪ improve the experience (and thus increase the likelihood of a 'good death') for the patient who chooses to be at home by providing speedier symptom control and supporting their wish to die at home; ▪ empower lay carers through the personal fulfilment of having supported a patient's wish to stay at home, increase satisfaction, and reduce anxiety and frustration related to poor symptom control; ▪ reduce inappropriate emergency (crisis) admissions due to uncontrolled symptoms and their associated costs [15,16]; ▪ free up community staff time to address other needs of patients and families, contributing to sustainability of services. This practice appears acceptable and has become embedded in Australia. However, as per our rapid review, there are no randomised studies testing carer-administered non-oral medication in the last days of life for home-based patients anywhere else in the world. Equipoise is emerging on this topic in the UK. Carer-administration of as-needed non-oral (including SC) medication for breakthrough symptoms in home-based dying patients is practised in a limited way in some areas in the UK and has been for a number of years. For this to be widely available to all carers who are considering supporting a loved one at home, it must be tested in a UK environment, with the support of an evidence-based carer education programme and resources. Not all family, carers or patients at home will want to be involved in this practice; the research will help to ascertain the proportion of those who are interested and how to train and support those who are willing. How does the existing literature support this project? A. Carers prioritise rapid symptom control and are willing and able to administer injectable drugs, including controlled drugs such as morphine: a narrative literature review of family carer perspectives on supporting a dying person at home illustrated the desire of families to provide immediate symptom relief [17]. Our review found that caregivers are willing to learn to overcome reservations about administering SC medications. The ability to alleviate their loved ones' symptoms and support them to stay at home was of paramount importance. B. There is an existing evidence-based education package and medication resources: a Brisbane group developed and evaluated an educational package [5,18] and a randomised trial of who prepares the SC injections (carer, nurse or pharmacist) was completed [11]. In Singapore, a colour-coded pre-prepared 'Comfort Care Kit' is in use [19], with oral and non-oral as-needed medication for caregiver administration. A telephone survey of 49 family carers showed that 67% used the kit, all family members found it easy and 98% found it effective for symptom management. All except one patient died at home. In Canberra, the provision of an Emergency Medical Kit (including for use by lay carers) was largely viewed as an effective strategy in giving timely symptom control and preventing inpatient admissions [20]. C. There is growing UK evidence about the carer-role for patients in the last months/year of life, but there are few studies focusing on the last days of life (as reiterated by the Neuberger Review into the Liverpool Care Pathway) [21]. The evidence that is beginning to accumulate mostly focuses on patients who have capacity within the last year of life. UK/Australian research includes 'Unpacking the home' [22,23], 
the Cancer Carers Medication Management work [24], the SMARTE study [25] and IMPACCT [26]. Our project, in contrast, focuses on the last few days of life, where capacity is likely to be absent, with very different implications and issues for carer-administration. Community receptivity The UK is ready for testing this extended lay carer role: ▪ primary care teams and families are used to similar practices in other areas of medicine (insulin for diabetes, intravenous antibiotics for children with cystic fibrosis); ▪ the PeolcPSP report incorporated the views of 1403 people across the UK and placed great emphasis on empowerment of family carers and symptom management during the last days of life [12]; ▪ the 'Ambitions for Palliative and End of Life Care: a framework for local action' was published in Sept 2015 [27]. It was jointly developed and published by the National Partnership for Palliative and End of Life Care (27 national organisations) and has widespread support, especially as the Partnership included the Patients' Association and charities with large PPI groups. They identified eight foundations for the six ambitions; one of these foundations relates to 'Involving, supporting and caring for those important to the dying person', acknowledging their importance in the caring team. Each ambition has a set of building blocks; the one on 'practical support' in ambition 6 is particularly applicable to CARiAD as it calls for finding 'new ways to give the practical support, information and training that enables families … to help'. There has been strong positive reception to the framework and many localities are using it to consider their local strategies. Its message about shared ownership and responsibility is particularly pertinent; ▪ in the UK, this practice is not widely accepted as usual care. However, over the past few years, we have identified a small number of geographically distinct sites where the practice occurs (< 10). Recently, the Lincolnshire project was showcased on national radio as part of a series of talks on dying [28,29]. Since the conception of the CARiAD study, other areas have expressed interest in exploring the practice, including joining as a site of a future definitive trial. Pressure on health and care services in the UK HCPs in all three sites have been universally positive towards testing the intervention; if found to be beneficial, this could make their patients more comfortable and their jobs more manageable. In the longer term, this innovation could relieve some pressure on Emergency Departments by reducing inappropriate emergency (crisis) admissions due to uncontrolled symptoms [15,16]. Pressure on DN time could also be relieved as extra visits (in addition to the daily check) to administer as-needed medication would decrease, contributing to sustainability of services. Phase 1 work Expert stakeholder workshops To inform the development of the intervention and specific processes at each site, three expert stakeholder workshops were conducted, one in each recruitment site. Half-day face-to-face workshops were convened, based on the successful model used in the El-CID trial [30]. Each workshop had 10-15 participants representing patients, carers, general practitioners (GPs), DNs, pharmacists and specialist palliative care (SPC) clinicians. Two research team members facilitated, setting the context and background to the proposed intervention. Notes were kept which allowed a report of proceedings to be generated. 
Participants discussed trial procedures, which were then developed based on the consensus reached. The issues covered were: Identification and risk assessment, and approach to potential participants; Consent; Prescription, supply and storage of drugs; Delivery of the intervention; Monitoring and accountability; Outcome measures collection; Post-bereavement interviews; and Ethical considerations. Trial-specific materials were also developed to reflect the consensus reached: for HCPs, prescribing advice (relating to patients and carers in the intervention arm), a competency checklist and a risk assessment; for carers, a Carer Diary, a carer information booklet 'Subcutaneous medication for breakthrough symptoms in the last days of life: a Guide for carers' and step-by-step guides. Study aims Research Question: Is carer-administration of as-needed SC medication for breakthrough symptoms in home-based dying patients feasible and acceptable in the UK?
P = Patients in the last days of life who are becoming unable to take their usual oral as-needed medication for breakthrough symptoms, being cared for at home, and their carers
I = Carer-administration of as-needed SC medication for common breakthrough symptoms such as pain, restlessness/agitation, nausea/vomiting and noisy breathing/rattle, supported by tailored education
C = Usual care (HCP-administration of as-needed SC medication)
O = Main outcomes of interest: Feasibility and acceptability, recruitment, attrition, contamination
We will undertake a randomised pilot trial of carer-administered as-needed SC medication for common breakthrough symptoms in home-based dying patients versus usual care, with an embedded qualitative component, to inform the design of a future definitive phase 3 randomised controlled trial (RCT). Methods/Design Trial design and setting The study will be a multicentre pilot RCT carried out in community settings in Gloucestershire, North Wales and the Vale of Glamorgan where patients are likely to die at home in accordance with their wishes. The three pilot study sites have been chosen as they are representative of the range of sites for a future definitive study. The trial was funded by the Health Technology Assessment Programme of the National Institute for Health Research (National Health Service). It has received a favourable ethical opinion from the Wales 1 National Research Ethics Committee (REC reference: 17/WA/0208, IRAS project ID: 227970) and the Bangor University Research Ethics Committee. The UK Medicines and Healthcare products Regulatory Agency (MHRA) has advised that this pilot RCT is not a Clinical Trial of an Investigational Medicinal Product (CTIMP). The study is registered on the International Standard Randomised Controlled Trials Number (ISRCTN) registry (ISRCTN11211024). Approval was granted from the Research and Development departments of all three sites. SPIRIT 2013 recommendations and CONSORT 2010 statements (including those specific to randomised pilot and feasibility trials) guided protocol development [31,32]. The current version of the protocol can be accessed via the National Institute for Health Research (NIHR) Journals Library [33]. Inclusion/exclusion criteria for participants and recruitment Inclusion criteria Dyads of ▪ an adult (aged ≥ 18 years) patient in the last weeks of life, who is likely to lose the oral route for medication and who has expressed a preference to die at home; and ▪ their adult (unpaid) lay/family carer, who is willing to have this extended role and have SC-injection training. 
Prognostication is reliant on the professional judgement of, and agreement within, the attending HCP team. There is an assumption that the carer will spend a significant amount of time with the patient. While Australian experience indicates that one lay person generally takes a lead role in this practice, where there is more than one suitable carer, we will ask the patient to identify which carer they would like to be included in the study. Exclusion criteria Patients who have only paid/formal care, or who have previously known adverse reactions to the 'usual' as-needed medications, will be excluded. Patients or lay carers who have not met the risk assessment criteria (see below) will not be approached for consent. Patient identification Patient/carer dyads will be identified through the hospice, SPC service or DN team. When a patient is perceived by the HCP team to be in the last weeks of life and they have expressed a wish to be cared for and die at home, they will be screened for approach. Screening To be eligible, dyads must satisfy the risk assessment criteria. A risk assessment screening tool has been refined for CARiAD, based on existing self-medication tools [34]. Risk assessment is based on clinical knowledge and judgement of the situation and will take into account several factors, including: ▪ the carer's mental state, vision and physical condition; ▪ the dyad's attitudes to medicines and willingness to engage with the healthcare team; ▪ relational issues including concerns about burden; ▪ history of substance misuse in the family. The risk assessment will be conducted by the healthcare team involved in the patient's care. If a dyad does not satisfy the risk assessment criteria, they will not be approached. Approach The patient will be approached with written material by a member of their healthcare team. The initial patient approach will be done separately from the carer, unless otherwise requested by the patient and if the attending HCP deems this appropriate, i.e. there is no perceived risk of patient-carer coercion. As the project involves sites in Wales, the Participant Information Sheets and consent forms will be translated into Welsh for the Welsh centres and offered bilingually to comply with the Welsh Language Act 1993. Dyads will be given as much time as they need to consider the information sheets and discuss with family, friends or the healthcare team until they decide whether to take part. They will be told that they can refuse participation without giving reasons. Informed consent A researcher will seek advance consent separately from both the patient and their lay carer at a time judged to be suitable by the attending HCP. This gives the patient and carer as much time as they need to understand the nature of the research, ask questions and make their feelings clear on trial participation [35,36]. If the patient is unable to consent, or once they lose capacity after they have previously given consent, the assent of a Personal Consultee will be sought (as required by the Mental Capacity Act 2005) to the patient's participation in the trial [35,36]. As the risk assessment will exclude dyads where there are concerns about relational issues between patient and carer, the carer can act as Personal Consultee. If the carer does not wish to act as the Personal Consultee and there is no additional family member or close friend to take on this role, we will appoint a Nominated Consultee (e.g. 
a HCP not associated with the research) who will act for all patients in this situation in the trial. Randomisation Once the dyad has consented and baseline data collection has been completed, the dyad will be randomised to one of the trial arms. Secure online randomisation hosted by the North Wales Organisation for Randomised Trials in Health (NWORTH) Clinical Trials Unit will be performed by the researcher who has obtained consent. The system will use a dynamic adaptive method of randomisation stratifying for recruitment centre and diagnosis (cancer/non-cancer) [37]; a minimal illustrative sketch of this style of allocation is given below. Confirmation of allocation will be sent only to those members of study staff who need to be aware of the result. Blinding CARiAD is an open trial where blinded outcome assessment is not feasible; therefore, it is important that outcomes are as robust as possible in light of the lack of blinding. Outcome assessors will be experienced research nurses. Data entry will be completed unblinded; the trial statistician providing data analysis will be the only individual blinded to randomisation allocation. Withdrawal criteria Participants remain free to withdraw at any time from the trial without giving reasons and without prejudicing their further treatment. This will be made clear to all potential participants at the time they consent to participation and throughout their time in the trial. Non-completion of the follow-up questionnaires will not constitute formal withdrawal from the trial; unless the participant requests withdrawal of their data completely, it may be used to impute values for the analysis. The risk assessment will be reviewed at intervals based on HCP judgement and if the criteria are not met the dyad will be withdrawn from the trial. Interventions Training of carers in the intervention arm will be supported by a manualised training package based on the Australian package 'Caring Safely at Home' [5]. Lay carers will receive training on: common symptoms that may occur in the last days of life and how to assess if their loved one needs medication for a particular symptom; how to prepare (draw up) medication and dispose of sharps (glass ampoules and drawing-up needles); how to administer SC medication by needle-less technique (utilising a 'butterfly' SC catheter); how to assess the effect of the medication; and support available, including primary care team as well as dedicated 24/7 SPC support. If a symptom occurs for which medication is deemed necessary (either as expressed by the patient, if able, or as assessed by the carer), the carer can use the training outlined above to administer the appropriate medication. Medication regimens Guidelines for anticipatory prescribing for last days of life care are in place across the UK [38,39]. They cover common symptoms in the dying phase: pain; nausea and/or vomiting; restlessness/agitation; and noisy breathing/rattle. CARiAD recruitment sites will be advised to follow usual prescribing practice. For patients in the intervention arm only, prescribers will be provided with specific additional advice, including instructions not to prescribe dose ranges/steps, and that dose changes can only be made after a face-to-face assessment (and not remotely, i.e. over the telephone). Care pathways The usual care arm has an unchanged care pathway for dealing with breakthrough symptoms at home for a dying patient, with usual palliative care and DNs administering as-needed SC medication. 'Usual routes' for support in each recruitment area are different. 
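As promised above, here is a minimal sketch of the kind of dynamic adaptive allocation described under 'Randomisation', written as Pocock-Simon-style minimisation over the two stratification factors named there (recruitment centre and cancer/non-cancer diagnosis). The function name, the biased-coin probability and the range-based imbalance metric are illustrative assumptions only; NWORTH's actual algorithm [37] may differ in both its imbalance metric and its random element.

import random
from collections import defaultdict

def minimisation_allocate(history, centre, diagnosis, p_favour=0.8, rng=random):
    """Allocate 'intervention' or 'control' for a new dyad.

    history: list of (centre, diagnosis, arm) tuples for dyads already randomised.
    With probability p_favour, chooses the arm that minimises total imbalance
    across the two stratification factors; otherwise allocates at random.
    """
    arms = ("intervention", "control")
    counts = {arm: defaultdict(int) for arm in arms}
    for c, d, arm in history:
        counts[arm][("centre", c)] += 1
        counts[arm][("diagnosis", d)] += 1

    def imbalance(candidate):
        # Range of the hypothetical per-arm counts if the new dyad joined `candidate`.
        total = 0
        for key in (("centre", centre), ("diagnosis", diagnosis)):
            hypo = {a: counts[a][key] + (1 if a == candidate else 0) for a in arms}
            total += max(hypo.values()) - min(hypo.values())
        return total

    scores = {arm: imbalance(arm) for arm in arms}
    if scores[arms[0]] == scores[arms[1]]:
        return rng.choice(arms)
    preferred = min(arms, key=scores.get)
    other = arms[0] if preferred == arms[1] else arms[1]
    return preferred if rng.random() < p_favour else other

In practice the allocation would of course be performed centrally by the secure online system, with the result released only to those who need to be aware of it.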
For some areas, there is direct access to a 24/7 SPC advice line for carers in addition to support from the patient's primary care team within or out of hours. In other areas support for the carer will be via their primary care team, while the GPs and DNs can call on advice from SPC clinicians. In the intervention arm, carers will be trained to administer as-needed SC medication, although they will not be obliged to do this. If the carer needs the support of a HCP, either because they would feel more confident having a HCP present when they administer medication or they wish the HCP to assess and give medication, they can obtain it via the usual routes in their area. If the carer has reached the limit of the number of injections which can be given in 24 h (maximum three injections for each indication per 24 h period unless the prescribing clinician advised a maximum of fewer than three), they will be asked to contact a HCP as review is indicated. Usual routes for support might include DN team, GP, GP/DN out-of-hours, Hospice at Home team or a hospice advice line. The use of such support will be captured in carer diaries (see 'Study procedures'). HCP training requirements In order for nurses to train carers, they will themselves receive training on: the standardised manualised education package (adapted from the Australian work); the legal framework (see Additional file 1); guidelines for medication handling and administration in a community setting; and on trial-specific materials and processes. Study procedures For an overview of study assessments, see Table 1. The main outcomes of interest will be those appropriate to a pilot trial, including feasibility, acceptability, recruitment rates, attrition and selection of the most appropriate outcome measures. Outcomes will be measured for patients, their lay carers and HCPs. System barriers will also be noted. These measurements will be made at baseline, on a daily basis for symptom control and lay carer confidence, at 6-8 weeks after bereavement and at 2-4 months for a sub-sample (carer interviews). Recruitment measurements are: the number of eligible patients who fulfil the inclusion criteria and are willing to be randomised expressed as a percentage of the numbers screened; the number who withdraw after baseline assessment and randomisation; the number who complete the various outcome measurements at baseline and at later time points; and reasons for any non-completion. Patient measurements are: baseline information (including demographic information, medical history, capacity assessment, preferred place of care in the last days of life, current drug management) and a daily Carer Diary during the study related to the presence and treatment of breakthrough symptoms (for use in both study arms). Data points in the diary will include: the time at which a breakthrough symptom first triggered a perceived need for an additional SC dose; whether noted by patient or lay carer; medication and dose, and time given; reason for medication (pain, nausea, restlessness, noisy breathing); symptom score before and 30 min after medication administration; and when symptom control/reduction of symptom to acceptable level was achieved. Hospital or hospice admissions during last illness and actual place of death will also be recorded. 
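To make the Carer Diary data points just listed concrete, here is a minimal sketch of how a diary entry might be represented, together with checks for the time-to-symptom-control interval and the default ceiling of three carer-given injections per indication per 24 h noted under 'Care pathways'. All field and function names are hypothetical and are not taken from the trial's actual data dictionary.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

INDICATIONS = {"pain", "nausea", "restlessness", "noisy breathing"}

@dataclass
class DiaryEntry:
    symptom_onset: datetime               # when the symptom triggered a perceived need for a dose
    noted_by: str                         # "patient" or "carer"
    indication: str                       # one of INDICATIONS
    medication: str
    dose: str
    time_given: datetime
    score_before: int                     # symptom score before administration
    score_after_30min: Optional[int]      # symptom score 30 min after administration
    time_controlled: Optional[datetime]   # when the symptom fell to an acceptable level

def time_to_symptom_control(entry: DiaryEntry) -> Optional[timedelta]:
    """Candidate secondary outcome: perceived need to acceptable symptom control."""
    if entry.time_controlled is None:
        return None
    return entry.time_controlled - entry.symptom_onset

def within_24h_limit(entries: List[DiaryEntry], indication: str,
                     now: datetime, limit: int = 3) -> bool:
    """True if another injection for this indication would stay within the
    ceiling of `limit` per indication per rolling 24 h; otherwise a HCP
    review is indicated."""
    recent = [e for e in entries
              if e.indication == indication and now - e.time_given <= timedelta(hours=24)]
    return len(recent) < limit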
Carer measurements are: demographic information at baseline; Quality of Life in Life Threatening Illness -Family Carer Version (QOLLTI-F) (at baseline, after the first as-needed SC medication, then every 48 h until the patient's death); whether HCP support was sought; Carer Experience Scale at baseline and after bereavement; Family Memorial Symptom Assessment Score -General Distress Index (MSAS-GDI) at 6-8 weeks' post-bereavement visit; and qualitative interviews for a sub-sample at 2-4 months after bereavement. Specific to the intervention arm, confidence (in administering injection) and competence at intervals after training will be recorded. HCP measurements are: baseline measurements of attending team structure; primary prescriber; carer trainer; and evaluation of the training package. Safety The CARiAD project contains a number of safety outcome measures at different stages of the clinical journey taken by patients, carers and HCPs. Safety outcome measures include the risk assessment, competency checklist and Significant Event reporting. Significant Event reporting will include the following: the appropriateness of administration (is administration accompanied by evidence of need?); proportionality (has the correct dose been administered?); side effects both anticipated and not anticipated; drug accountability (do stocks tally?); and carer events (e.g. distress, needle stick injury, accidental or purposeful self-administration). An adverse event (AE) is defined as any untoward medical occurrence in a trial participant (either patient or carer) and includes incidents that are not necessarily caused by or related to the trial. A serious adverse event (SAE) is any untoward occurrence that results in death, is life-threatening, requires inpatient hospitalisation or prolongation of existing hospitalisation, results in persistent or significant disability/incapacity or is otherwise medically significant. All AEs and SAEs will be captured via a Significant Event form. SAEs must be reported to the Principal Investigator (PI) and Sponsor within 24 h. As this is a study in patients who are terminally ill, death is an expected outcome. It will be recorded and reported to the sponsor, but will not be considered a SAE if, in the opinion of the PI, it was a natural conclusion to a patient's terminal illness. Due to the nature of the study, events of death will not require immediate reporting to the Data Monitoring and Ethics Committee (DMEC). Exploratory endpoints/outcomes for a future definitive trial The most likely candidates for primary outcome measures for a future definitive trial are: MSAS-GDI (a measure of overall symptom burden/distress in the last seven days of life) [10,[40][41][42] and QOLLTI-F (a measure of quality of life of carers looking after someone with a life-threatening illness, incorporating elements of control and self-efficacy) [43]. In addition, we will measure carer confidence using a 5-point Likert scale (where the carer is asked after administration of every as-needed SC injection to rate their level of confidence in administering this injection, 1 = not at all confident, 5 = very confident) and probe carer experience during qualitative interviews. Criteria for assessing feasibility as primary outcome measure: all outcome measures will be assessed on the same criteria (applicability, acceptability and level of completeness) for consistency. Once the feasibility of the outcomes is established, the design of the definitive trial will be confirmed. 
The potential suitability of the following secondary outcomes will be considered: 'Time to symptom relief' and Carer Experience Scale [44][45][46]. Embedded qualitative study The aim of the embedded qualitative component is to inform the design and assess the feasibility of a phase 3 trial of carer-administered medication. The study will collect interview data from HCPs and carers to: ▪ assess clinical willingness to randomise patients for a future full RCT; ▪ understand the experience of randomisation between intervention and control, to identify relevant patient-centred outcomes for a phase 3 trial and to consider time points for assessment. The qualitative study aims to include interviews with non-consenters to the trial, as well as in-depth qualitative exploration of carer and HCP acceptability of carer-administered SC medication, e.g. strong opioids, anti-emetics, sedatives. The study will use a phenomenological and pragmatic approach to understand the meaning that carer-administration of injectable strong opioids and other as-needed medication has for bereaved carers and HCPs and the practicalities involved. Sample Face-to-face qualitative interviews across the three recruitment sites will be conducted with: ▪ 6-10 carers who have experience of supporting a patient in the intervention arm; ▪ 6-10 carers who have experience of supporting a patient receiving usual care; ▪ 6-10 carers who declined to be randomised to the trial. For carers in all three groups, sampling criteria will include gender and rurality; ▪ up to 30 HCPs, to include prescribers (e.g. GPs and ANPs), administering HCPs (e.g. DNs) and SPC clinicians. Sampling criteria will include years since qualification, experience of supporting home deaths and practice characteristics. Consent Carers declining to take part in the trial will be approached upon declining and invited to participate in an interview about the reasons why they chose not to participate. They will be given a separate information sheet for this. Data gathering Interview topic coverage was informed by PPI input, the systematic review and the expert consensus workshop. Attitudes to and experiences of having administered medication including emotional, ethical and practical reflections will be explored, as will issues relating to trial recruitment and feasibility (supply and storage of medication, success of training and perceived competence of carer once trained, choice and recording of the primary outcome). Carers will be interviewed approximately 2-4 months after bereavement (as suggested by usual clinical follow-up and current literature) [1, [47][48][49]. Interviews will be face-to-face at carers' homes or alternative preference, or possibly by telephone, lasting 30-60 min. The interviews with carers who declined to be randomised to the trial will be shorter, lasting 15-20 min. HCP interviews will be by telephone and last around 30 min. All interviews will be audio recorded, transcribed verbatim and the carer interviews will be managed using NVivo. Participants will be asked to consent to publication of anonymised quotes. Analysis The analytic frameworks are selected to understand the meaning that carer-administration of injectable strong opioids and other as-needed medication has for bereaved carers and HCPs. Carer interviews will be analysed using Interpretative Phenomenological Analysis (IPA) to allow a deeper, inductive analysis of the data in the context of carers' and patients' daily lives and values [50]. 
This methodology focuses on the subjective experience of participants, as interpreted by the researcher. HCP interviews will be analysed using Framework Analysis with a deductive approach [51]. Framework analysis is commonly used in healthcare and is more appropriate for examining the specific aims and objectives of the HCPs. The data will be summarised thematically and displayed on a matrix linking to the original data. Identification of attributes for a future Discrete Choice Experiment (DCE) We have identified the need to determine carers' preferences for HCP versus own administration of medication to patients, using a DCE. The preferences of carers towards administering SC medications will have a bearing on their willingness to adopt this practice and the effectiveness of carer-administered medication. While the DCE (aiming to ascertain carers' preferences for their administration of SC medications) will be conducted as part of a future main study, the preparatory work required to identify relevant attributes and levels will be done as part of the embedded qualitative study component of the randomised pilot study. This will be done with each of the three carer groups in the second part of the interviews and will take up to 20 min. Attributes may feasibly include cost, time, perceived competency, confidence and potential risks. The process of attribute development will be informed by best practice [52]. The use of interviews for the determination of DCE attributes enables a greater opportunity for in-depth exploration of particular issues and concepts than would otherwise be possible in focus groups (which are more common in DCE development). Individual interviews are also better suited to discussions concerning sensitive topics. Within the first five interviews in each group, carers will be presented with a range of attributes, identified by the research team as being likely to affect carers' choice for own versus HCP administration of SC medications. Interviewees will have an opportunity to add other factors of their own choosing to the list and be asked to identify and rank attributes in order of importance. Thereafter, we will use the interviews to pilot the presentation of the highest ranked attributes. The ordinal ranking across each group will be determined and those ranked highest will be taken forward for DCE development. We have successfully implemented this method in previous DCEs [53] and it is consistent with the reductive approach of attribute development [52]. We will also pilot the Carer Experience Scale as a means to estimate carer utility [46]. The index values derived from this scale offer a preference-based approach to incorporate the effects on carers in economic evaluation, focusing on care (rather than health)-related quality of life. Statistical considerations Sample size A fully justified sample size is not required; size has been justified by estimating what a future definitive RCT will need. Assuming an important difference of 0.4 (SD=1) on the Family MSAS-GDI, a sample of about 216 is required to achieve 90% power to detect a difference of this size with a significance level of 0.05 using a two-sided test. Equivalently, a sample of about 550 would be required to detect a difference of 0.5 points (SD=2) using the QOLLTI-F. Using the larger of these estimates for the feasibility trial, we will assume about 9% of the main trial size, which, to give an 80% confidence interval to exclude a clinically important difference, requires ~25 in each group [54]. 
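The quoted figures follow the standard normal-approximation formula for comparing two group means; a minimal sketch is below. Note that the protocol's totals of about 216 and 550 also fold in its own rounding and attrition assumptions, so a bare application of the formula should not be expected to reproduce them exactly.

from math import ceil
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Per-group n for a two-arm comparison of means (normal approximation, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    d = delta / sd  # standardised effect size
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# The two scenarios quoted above (total sample = twice the per-group figure):
total_msas = 2 * n_per_group(delta=0.4, sd=1)    # MSAS-GDI scenario
total_qollti = 2 * n_per_group(delta=0.5, sd=2)  # QOLLTI-F scenario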
Sim and Lewis recommend a sample of about 50-55 to ensure robust estimates of the variance [55]. Using estimates of dropouts, we predict we need to approach 200 potential participants to achieve 100 randomised participants, with 50 completers. ('Completer' is defined as a dyad who completed all the study measures from baseline to follow-up at 6-8 weeks after bereavement.) We will therefore need to approach 5.5 dyads per month from each of the three sites and randomise 2.7 dyads per month from each of the three sites to meet our recruitment target. Assuming we will recruit equally between the three areas, we need to approach 66, randomise 33-34 and have 16-17 available per area for analysis (see Fig. 1: Trial flowchart). As per the 2013 Office for National Statistics data described earlier, we know that 8.6% of all deaths are home deaths due to neoplasms in those aged > 15 years [1]. Deaths due to neoplasms are seen as a useful proxy for expected deaths. Therefore, the three recruitment areas have the following numbers available per annum: North Wales = 653; Gloucestershire = 517; and Vale of Glamorgan = 349. Statistical analysis Primary analysis will be concentrated on the feasibility metrics and adherence outcomes based on the thresholds defined in Table 2. There will be limited preliminary analysis of intervention outcomes. Point and 95% confidence interval estimates will be calculated and used to estimate variability and direction of effect to further inform the sample size calculation for a definitive study. Summary statistics of all outcomes will be used to inform the approximate models of analysis that would be used in a full trial. Models will be specified once the data is better understood through the feasibility trial (e.g. numbers of episodes where as-needed medication was used, proportion of participants that never required as-needed medication). A preliminary analysis of the outcomes will be completed using an intention-to-treat approach. All analysis undertaken will be prespecified in a statistical analysis plan that will be written and agreed before data collection is completed. As this is a feasibility trial, there will be no imputation of missing data. Missing data will be considered as a criterion for assessing the suitability of measures. Descriptive statistics will be produced for each of the outcome measures, to evaluate the appropriateness of the measures for inclusion in a definitive RCT. Progression to full trial Clear progression rules are defined to determine whether an application for a future substantive trial powered to study effectiveness and cost-effectiveness should proceed. Our progression rules will relate to the following measures, which we consider important to feasibility: reaching our target (16-17) for the number of patients recruited per site. We have also established clear assessment criteria for establishing the acceptability of the potential primary outcome measures. The table below summarises the objectives, action plan and criteria for progression to a full trial. Governance Trial governance procedures adhere to the NIHR guidelines and include a Trial Management Group (TMG), an independent Trial Steering Committee (TSC) and an independent DMEC. SAEs will be reported to the TSC and DMEC in line with NIHR guidance [56]. Trial Steering Committee (TSC) The TSC nominees have been reviewed and appointed as members by the NIHR Programme Director. 
The independent members include a Chair, statistician, primary and palliative care clinicians and two public contributors. All TSC meetings have a minimum of 75% majority of independent members. TSC responsibilities include [56]: ▪ providing advice, through its Chair, to the Trial Funder, the Trial Sponsor, the Chief Investigators, the Host Institution and the Contractor on all appropriate aspects of the project; ▪ concentrating on progress of the trial, adherence to the protocol, patient safety and the consideration of new information of relevance to the research question; ▪ ensuring that the rights, safety and wellbeing of the participants are the most important considerations and should prevail over the interests of science and society; ▪ ensuring appropriate ethical and other approvals are obtained in line with the project plan; ▪ agreeing proposals for substantial protocol amendments and providing advice to the sponsor and funder regarding approvals of such amendments. Data Monitoring and Ethics Committee The DMEC nominees have been reviewed and appointed as members by the NIHR Programme Director. All DMEC meetings likewise have a minimum of 75% majority of independent members.
Table 2 Objectives, action plan and criteria for progression to a full trial
1. Objective: to refine the assessment and outcome measures to be used in any potential RCT. Action plan: qualitative feedback will be collected from participants 2-4 months after the intervention, regarding the acceptability of the measures, and will evaluate whether all of the intended information was captured.
2. Objective: to evaluate the acceptability of the manualised intervention (and potentially refine it). Action plan: an initial workshop with the Australian team was held (Nov 2015); expert consensus workshop discussions led to refined trial processes, education package and resources [30]; a detailed process is described in the study protocol clarifying the legal and regulatory framework for the practice. Threshold for progression: in the feasibility study the simplest method is for lay carers to draw up medications only in immediate form; a full trial would be more appropriate if able to extend this to advance preparation and labelling.
3. Objective: to evaluate the recruitment process. Action plan: referral sites and referral sources; where participants heard about the study; number and speed of referrals received and time elapsed between initial contact made with the study team (for information and consent form). Threshold for progression: in the feasibility we have assumed 50% recruitment; we would say a full trial is not possible if recruitment falls < 30%.
4. Objective: to estimate participant retention rate for the full RCT. Action plan: retention rates will inform the refinement of the sample size calculation for any potential subsequent RCT; participant engagement will be monitored throughout the pilot trial. Threshold for progression: in the feasibility we have assumed 50% retention; we would say a full trial is not possible if retention falls < 40%.
5. Objective: to test the assessment and outcome measures for suitability, relevant change factors and acceptability to participants. Action plan: data from the assessment process will be compared against raw data from the outcome measures to assess the outcome measures' sensitivity to identifying participant change.
6. Objective: to identify acceptability and collection of relevant data to inform the data collection and analysis plan for implementation in the subsequent RCT. Action plan: a review will be completed of each outcome measure for levels of missing data and stability to ensure that the information collected will allow any future main analysis to be feasible and appropriate; amendments can be suggested where appropriate to amend data collection for any potential future trial; the data available will also inform the details for the analysis plan of any potential full trial.
Peer review This protocol has had high-quality (independent, expert and proportionate) peer review through the NIHR HTA funding application process. The independent members of the TSC and DMEC will provide an element of continuous peer review. Bangor University is sponsoring the study, and Professor Chris Burton (Head of School, School of Healthcare Sciences, c.burton@bangor.ac.uk) is acting for and on behalf of the Study Sponsor. Sponsor responsibilities include: ▪ taking responsibility for putting and keeping in place arrangements to initiate, manage and fund the study; ▪ confirming that everything is ready for the research to begin; ▪ satisfying itself that the research protocol, research team and research environment have met the appropriate scientific quality assurance standards; ▪ satisfying itself that the study has ethical approval before relevant activity begins; ▪ allocating responsibilities for the management, monitoring and reporting of the research; ▪ ensuring that appropriate arrangements are in place to approve any modifications to the design, obtaining any regulatory authority required, implementing such modifications and making them known; ▪ satisfying itself that arrangements are kept in place for good practice in conducting the study and for monitoring and reporting, including prompt reporting of suspected unexpected SAEs. Quality assurance and quality control Monitoring, audit and inspection A Trial Monitoring Plan will be developed and agreed by the TMG and TSC based on the trial risk assessment. Site monitoring will be done by performing site visits (at least once per site, with a specific focus on consent recording and handling of data and site files) as well as remotely by exploring the trial dataset. The sites will be expected to assist the sponsor in monitoring the study. This may include hosting site visits, providing information for remote monitoring or putting procedures in place to monitor the study internally. Monitoring will be conducted across all sites and will include a focus on enrolment rates, numbers of withdrawals and numbers of reported AEs. Responsibilities for monitoring will be defined and documented in the Trial Monitoring Plan. Data handling Procedures are in place to protect participant confidentiality before, during and after the trial. Data collection tools and source document identification Source data will be captured on paper at the relevant time points. A study-specific MACRO database will be developed to allow researchers to enter data online. MACRO allows controlled access to the data by all centres and stores a full audit trail. The electronic data captured in the MACRO database will be stored on servers maintained by Bangor University and will be subject to the university IT disaster recovery procedures. Access to data and data management Paper data at sites will be stored in locked filing cabinets separately from identifiable participant data. Access to the MACRO site will be secure and password-controlled. Access to MACRO will be defined on two different levels: access to input (for researchers at sites) and access to the full dataset, the latter limited to those core team members involved in data and trial management. 
A detailed data management plan will include the definition of the data quality checks that will be performed on the data throughout the life course of the trial. These will include source data validation, random data checks and timelines for data entry. Access to the final trial dataset The trial statisticians will have full access to the dataset. The Chief Investigators (CIs) and trial manager will have access to the full dataset after the analysis has been completed. The DMEC will have access to the full dataset as required. The TSC will have access to the full dataset before the individual sites have access. Data sharing During the course of the trial, datasets may be requested from the trial team. A data request form will form part of the data management plan and will document the approval and retrieval process for datasets during the conduct of the trial. All requests will have to be approved by the CIs. All data requests and datasets issued will be retained for completeness. Data archiving Archiving of trial documents will be authorised by the Sponsor following submission of the end of study report. As per the sponsor's research data management policy, research data and records will be retained 'for as long as they are of continuing value to the researcher and the wider research community, and as long as specified by research funder, patent law, legislative and other regulatory requirements. The minimum institutional retention period for research data and records is five (5) years after publication or public release of the work of the research, unless required by the funder to retain for longer' [57]. In line with legal requirements, trial documents will be archived centrally at a secure facility with appropriate environmental controls and adequate protection from fire, flood and unauthorised access. Archived material will be stored in tamper-proof archive boxes that are clearly labelled. Electronic archiving will be provided by the sponsor for post-project deposit and retention of data. Destruction of essential documents will require authorisation from the Sponsor. Publication policy Dissemination plan The results of the study will be first reported to trial collaborators. The main report will be drafted and agreed by the trial coordinating team and the final version will be agreed by the HTA before submission for publication, on behalf of the collaboration. The study findings will be disseminated through publication in highly cited and open-access peer-reviewed journals and submissions to national and international conferences. In addition, dissemination of our work to clinical and academic colleagues will be via professional societies, newsletters, existing networks and professional websites. Relevant NHS organisations and healthcare providers, e.g. Clinical Commissioning Groups and National Institute for Health and Care Excellence (NICE), will be informed of the study outcomes. All carer participants, if they so wish, will be sent an accessible summary of the findings from the study within six months of study completion. The same summary will be made available to public/patient forums to inform patient groups across the area. It is expected that the TMG will ensure a high level of awareness of our work in the relevant media while exploring the use of social media to disseminate outcomes, encourage public/patient involvement and promote future research to improve patient care at the end of life. 
Authorship eligibility Authorship (individually named or group) on the final trial report and manuscripts submitted for publication will be in accordance with the authorship criteria defined by the International Committee of Medical Journal Editors [58]. Indemnity Bangor University has appropriate Clinical Trials Indemnity and Professional Indemnity insurance in place that will cover members of the research team to conduct the research as per protocol. Health and Care Research Wales staff have NHS contracts and will be responsible for ensuring that their work is appropriately insured. NHS staff working with patients involved in the intervention will not be expected to do anything that is not covered by their contracts and will remain covered by the NHS insurance arrangements. Discussion Empowering lay carers to support a loved one's wish to die at home is an important part of care in the last days of life. Role extension for lay carers allowing them the option to give SC PRN injections is already practised and valued in other countries. It is important and timely to study this in the UK setting. In order to design a definitive study of the clinical and cost-effectiveness of carer administration in this context, rehearsal of the study procedures and logistics is required. The study we describe will fulfil this aim and will provide an exemplar for conducting RCTs in the last days of life by contributing to the emerging methodological development of palliative care research. Specific design considerations resulted in the decision to propose a stand-alone (external) randomised pilot trial. These include: ▪ the current UK context (post-Shipman, post-Liverpool Care of the Dying Pathway and with the ongoing euthanasia public debate), calling for careful attention to its impact on consent mechanisms and attitudes of carers, patients and HCPs to this innovation; ▪ the lack of clear UK-wide guidance on carer-administration of as-needed SC medication to dying home-based patients: though the practice is lawful, current guidance is neither detailed nor specific enough for wide adoption; ▪ the lack of a clear and widely accepted training package for lay carers, adapted for the UK context; ▪ the uncertainty about the primary outcome measure for a definitive trial. These are unpredictable barriers until the re-worked Australian manualised intervention is introduced and trial processes are tested. If the intervention is proven feasible and acceptable, we anticipate a phase of ensuring new guidance is developed and put in place at national level in UK health systems to enable the practice before rolling out a full trial quickly. We have demonstrated a clear path towards a definitive RCT as per the Medical Research Council Framework for the evaluation of complex interventions principles, further informed by the Methods of Researching End of Life Care guidance [59][60][61]. The CARiAD team is aware of the sensitivities surrounding, and ethical and methodological challenges associated with, researching last days of life care. With this in mind, study processes were carefully developed with strong public contribution. Public contribution Our team is committed to meaningful involvement of patient representatives. Two service users are co-applicants. Insights gained from their experiences of giving injections to dying loved ones at home were crucial in designing the project and they have offered to be involved at all stages of its development. 
Their involvement will be fundamental in disseminating the research results to patients, carers and healthcare professionals. Two additional groups of bereaved carers have been consulted and their suggestions on consent mechanisms, drug safety, training and ongoing support have been incorporated into the study design. The recruitment of representatives with appropriate and explicit experience ensures that we fully understand the needs of our research participants. This work builds on previous published work in PPI for trials and academic units [62,63]. In line with the standards, the PPI representatives will be invited to join the Involving People network in order to benefit from its training portfolio and support systems. All usual arrangements, refreshments, travel, access and carer support will be in place for considerate inclusion of PPI representatives at meetings. The project will work to the NIHR's newly developed National Standards for Public Involvement in Research and an audit of the standards will be reported at the close of the study [64]. Trial status This manuscript is a summary of the current, HTA-agreed protocol (Version 5 [17 August 2018]), with regard for NIHR dissemination guidance. The study opened to recruitment in North Wales on
2019-02-07T18:13:38.572Z
2018-12-20T00:00:00.000
{ "year": 2019, "sha1": "91c1d2df1303d046eda24942a45945d1b880263d", "oa_license": "CCBY", "oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-019-3179-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "60a80d0b9d972d5467ecd06e1f17aa6aabe49aaf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211221978
pes2o/s2orc
v3-fos-license
Topological entropy of Bunimovich stadium billiards We estimate from below the topological entropy of the Bunimovich stadium billiards. We do it for long billiard tables, and find the limit of estimates as the length goes to infinity. Introduction In this paper, we consider Bunimovich stadium billiards. This was the first type of billiards having convex (focusing) components of the boundary ∂Ω, yet enjoying the hyperbolic behavior [6,7]. Such boundary consists of two semicircles at the ends, joined by segments of straight lines (see Figure 1). For those billiards, ergodicity, K-mixing and Bernoulli property were proved in [10] for the natural measure. We consider billiard maps (not the flow) for two-dimensional billiard tables. Thus, the phase space of a billiard is the product of the boundary of the billiard table and the interval [−π/2, π/2] of angles of reflection. We will use the variables (r, ϕ), where r parametrizes the table boundary by the arc length, and ϕ is the angle of reflection. We mentioned the natural measure; it is c cos ϕ dr dϕ, where c is the normalizing constant. This measure is invariant for the billiard map. As we said, we want to study topological entropy of the billiard map. This means that we should look at the billiard as a topological dynamical system. However, existence of the natural measure resulted in most authors looking at the billiard as a measure preserving transformation. That is, all important properties of the billiard were proved only almost everywhere, not everywhere. Additionally, the billiard map is only piecewise continuous instead of continuous. Often it is even not defined everywhere. All this creates problems already at the level of definitions. We will discuss those problems in the next section. In view of this complicated situation, we will not try to produce a comprehensive theory of the Bunimovich stadium billiards from the topological point of view, but present the results on their topological entropy that are independent of the approach. For this we will find a subspace of the phase space that is compact and invariant, and on which the billiard map is continuous. We will find the topological entropy restricted to this subspace. This entropy is a lower bound of the topological entropy of the full system, no matter how this entropy is defined. Finally, we will find the limit of our estimates as the length of the billiard table goes to infinity. The reader who wants to learn more on other properties of the Bunimovich stadium billiards can find it in many papers, in particular [2,4,6,7,8,9,11]. While some of them contain results about topological entropy of those billiards, none of those results can be considered completely rigorous. The paper is organized as follows. In Section 2 we discuss the problems connected with defining topological entropy for billiards. In Section 3 we produce symbolic systems connected with the Bunimovich billiards. In Section 4 we perform actual computations of the topological entropy. Topological entropy of billiards Let M = ∂Ω × [−π/2, π/2] be the phase space of a billiard and let F : M → M be the billiard map. We assume that the boundary of the billiard table is piecewise C^2 with a finite number of pieces. In such a situation the map F is piecewise continuous (in fact, piecewise smooth) with finitely many pieces. That is, M is the union of finitely many open sets M_i (of quite regular shape) and a singular set S, which is the union of finitely many smooth curves, and on which the map is often even not defined. 
The map F restricted to each M i is a diffeomorphism onto its image. This situation is very similar to that of piecewise continuous, piecewise monotone interval maps. For those maps, the usual way of investigating them from the topological point of view is to use coding. We produce the symbolic system associated with our map by taking sequences of symbols (numbers enumerating the pieces of continuity) according to the number of the piece to which the n-th image of our point belongs. On this symbolic space we have the shift to the left. In particular, the topological entropy of this symbolic system was shown to be equal to the usual Bowen's entropy of the underlying interval map (see [13]). Thus, it is a natural idea to do the same for billiards. Thus, for a point x ∈ M whose trajectory is disjoint from S, we take its itinerary (code) ω(x) = (ω n ), where ω n = i if and only if F^n (x) ∈ M i . The problem is that the set of itineraries obtained in such a way is usually not closed (in the product topology). Therefore we have to take the closure of this set. Then the question one has to deal with is whether there is no essential dynamics (for example, invariant measures with positive entropy) on this extra set. A rigorous approach to coding, including the definition of topological entropy and a proof of a theorem analogous to the one from [13], can be found in the recent paper of Baladi and Demers [3] about Sinai billiards. The Sinai billiard maps are simpler for coding than the Bunimovich stadium maps. There are finitely many obstacles on the torus, so the pieces of the boundary, used for the coding, are pairwise disjoint. This property is not shared by the Bunimovich stadium billiards. The stadium billiard is hyperbolic, but not uniformly. Moreover, here we have to deal with the trajectories that are bouncing between the straight line segments of the boundary. To complete the list of problems, the coding with four pieces of the boundary seems to be not sufficient (as has been noticed in [4]). The papers dealing with the topological entropy of Bunimovich stadium billiards use different definitions. In [4] and [11], topological entropy is explicitly defined as the exponential growth rate of the number of periodic orbits of a given period. In [8], first coding is performed in a different way, using rectangles defined by stable and unstable manifolds. This coding uses an infinite alphabet. Then various definitions of topological entropy for the obtained symbolic system are used. In [3], topological entropy is defined as the topological entropy of the corresponding symbolic system, that is, as the exponential growth rate of the number of nonempty cylinders of a given length in the symbolic system. As we mentioned, it is shown there that the result is the same as when one uses the classical Bowen's definition for the original billiard map. In [2], topological entropy is not formally defined, but it seems that the authors think of the entropy of the symbolic system. In this paper, we will be considering a subsystem of the full billiard map. This subsystem is a continuous map of a compact space to itself, and is conjugate to a subshift of finite type. Thus, whether we define the topological entropy of the full system as the entropy of the symbolic system or as the growth rate of the number of periodic orbits, our estimates will always be lower bounds for the topological entropy.
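For a subshift of finite type the two definitions just mentioned coincide, and the entropy reduces to the logarithm of the spectral radius of the transition matrix. The following minimal Python/NumPy sketch (our illustration, not part of the paper; the 3-state matrix anticipates the recoded stadium system of Sections 3 and 4) computes the entropy both as the spectral radius and as the growth rate of the number of nonempty cylinders.

```python
import numpy as np

# Transition matrix of a subshift of finite type: M[i, j] = 1 iff the
# transition i -> j is allowed.  States 0, A, B; only A -> A and
# B -> B are forbidden (this matrix reappears in Section 4).
M = np.array([[1, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# Topological entropy = log of the spectral radius of M.
rho = max(abs(np.linalg.eigvals(M)))
print(np.log(rho))                    # log(1 + sqrt(2)) ~ 0.8814

# Equivalently: exponential growth rate of the number of nonempty
# cylinders of length n, which equals the sum of entries of M^(n-1).
n = 50
count = np.linalg.matrix_power(M, n - 1).sum()
print(np.log(count) / n)              # approaches the same value
```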
Subsystem and coding We consider the Bunimovich stadium billiard table with the radius of the semicircles equal to 1 and the length of the straight segments ℓ > 1. The phase space of this billiard map will be denoted by M ℓ , and the map by F ℓ . The subspace of M ℓ consisting of points whose trajectories have no two consecutive collisions with the same semicircle will be denoted by K ℓ . The subspace of K ℓ consisting of points whose trajectories have no N + 1 consecutive collisions with the straight segments will be denoted by K ℓ,N . We will show that if ℓ > 2N + 2, then the map F ℓ restricted to K ℓ,N has very good properties. In general, coding for F ℓ needs at least six symbols. They correspond to the four pieces of the boundary of the stadium, and additionally on the semicircles we have to specify the orientation of the trajectory (whether ϕ is positive or negative), see [4]. However, in K ℓ this additional requirement is unnecessary, because there are no multiple consecutive collisions with the same semicircle. This also implies that in K ℓ for a given ℓ the angle ϕ is uniformly bounded away from ±π/2. While in [2] the statements about the generating partition are written in terms of measure-preserving transformations, the sets of measure zero that have to be removed are specified. In K ℓ the only set that needs to be removed is the set of points whose trajectories are periodic of period 2, bouncing between the two straight line segments. However, this set carries no topological entropy, so we can ignore it. Thus, according to [2], the symbolic system corresponding to F ℓ on K ℓ is a closed subshift Σ ℓ of a subshift of finite type with 4 symbols. We say that there is a transition from a state i to j if it is possible that ω n = i and ω n+1 = j. In our subshift there are some transitions that are forbidden: one cannot go from a symbol corresponding to a semicircle to the same symbol. There are, of course, also some transitions in many steps that are forbidden; they depend on ℓ. For every element of Σ ℓ there is a unique point of K ℓ with that itinerary. However, the same point of K ℓ may have more than one itinerary, because there are four points on the boundary of the stadium that belong to two pieces of the boundary each. Thus, the coding is not one-to-one, but this is unavoidable if we want to obtain a compact symbolic system. Another solution would be to remove the codes of all trajectories that pass through any of the four special points, and at the end take the closure of the symbolic space. This problem disappears when we pass to K ℓ,N with ℓ > 2N + 2. Namely, then the angle ϕ at any point of K ℓ,N whose first coordinate is on a straight line piece is larger than π/4 in absolute value. Let us look at the geometry of this situation. Let C be the right unit semicircle in R^2 (without endpoints), A ∈ C, and let L 1 , L 2 be half-lines emerging from A, reflecting from C (like a billiard flow trajectory) from inside at A (see Figure 2). Assume that for i = 1, 2 the angles between L i and the horizontal lines are less than π/4, and that L i intersects C only at A. Consider the argument arg(A) of A (as in polar coordinates or in the complex plane). Lemma 3.1. In this situation, | arg(A)| < π/4 and neither L 1 nor L 2 passes through an endpoint of C. Proof. If | arg(A)| ≥ π/4, then both lines L 1 and L 2 are on the same side of the origin, so the incidence and reflection angles cannot be the same. Therefore, | arg(A)| < π/4. Suppose that L 1 passes through the lower endpoint of C (the other cases are similar).
Then arg(A) < 0, so L 2 intersects the semicircle also at a second point, which contradicts our assumptions. In view of the above lemma, the collision points on the semicircles cannot be too close to the endpoints of the semicircles (including the endpoints themselves). Thus, the correspondence between K ℓ,N and its coding system Σ ℓ,N is a bijection. Standard considerations of the topologies in both systems show that this bijection is a homeomorphism, say ξ ℓ,N : K ℓ,N → Σ ℓ,N . If σ is the left shift in the symbolic system, then by the construction we have ξ ℓ,N • F ℓ = σ • ξ ℓ,N . In such a way we get the following lemma. Lemma 3.2. If ℓ > 2N + 2, then the map ξ ℓ,N conjugates the systems (K ℓ,N , F ℓ ) and (Σ ℓ,N , σ); in particular, these two systems have the same topological entropy. We can modify our codings, in order to simplify further proofs. The first thing is to identify the symbols corresponding to the two semicircles. This can be done due to the symmetry, and will result in producing symbolic systems Σ ′ ℓ and Σ ′ ℓ,N , which are 2-to-1 factors of Σ ℓ and Σ ℓ,N respectively. Since the operation of taking a 2-to-1 factor preserves topological entropy, this will not affect our results. With this simplification, Σ ′ ℓ is a closed, shift-invariant subset of the phase space of a subshift of finite type Σ. The subshift of finite type Σ looks as follows. There are three states, 0, A, B (where 0 corresponds to the semicircles), and the only forbidden transitions are from A to A and from B to B. The geometric meaning of the recoding is simple. We unfold the stadium by using reflections from the straight parts (see Figure 3). We will label the levels of the semicircles by integers. Our new coding translates to this picture as follows. We start at a semicircle, then go to a semicircle on the other side and m levels up or down, etc. For symbolic systems, recoding in such a way amounts to the topological conjugacy of the original and recoded systems (see [12]). This means that the system (Σ ′ ℓ,N , σ) is topologically conjugate to a subsystem of Σ N , which is the subshift of finite type defined as follows. The states are −N, −N + 1, . . . , N − 1, N , and the transitions are: from 0 to every state, from i to i + 1 and 0 if 1 ≤ i ≤ N − 1, from N to 0, from −i to −i − 1 and 0 if 1 ≤ i ≤ N − 1, and from −N to 0. Lemma 3.3. If ℓ > 2N + 2, then Σ ′ ℓ,N = Σ N . Proof. Both sets Σ ′ ℓ,N and Σ N are closed and Σ ′ ℓ,N ⊂ Σ N . Therefore, it is enough to prove that Σ ′ ℓ,N is dense in Σ N . For this we show that for every sequence (ρ 0 , ρ 1 , . . . , ρ k ) appearing as a block in an element of Σ N there is a point (r 0 , ϕ 0 ) ∈ K ℓ,N for which, after coding and recoding a piece of trajectory of length k + 1, we get (ρ 0 , ρ 1 , . . . , ρ k ). By taking a longer sequence, we may assume that ρ 0 = ρ k = 0. Consider all candidates for such trajectories in the unfolded stadium, where we do not care whether the incidence and reflection angles are equal. That is, we consider all curves that are unions of straight line segments from x 0 to x 1 to x 2 . . . to x k in the unfolded stadium, such that x 0 is in the left semicircle at level 0, x 1 is in the right semicircle at level n 1 , x 2 is in the left semicircle at level n 1 + n 2 , etc. Here n 1 , n 2 , . . . are the numbers of nonzero elements of the sequence (ρ 0 , ρ 1 , . . . , ρ k ) between a zero element and the next zero element, where we also take into account the signs of those nonzero elements. In other words, this curve is an approximate trajectory (of the flow) in the unfolded stadium that would have the recoded itinerary (ρ 0 , ρ 1 , . . . , ρ k ). Additionally we require that x 0 and x k are at the midpoints of their semicircles.
The class of such curves is a compact space with the natural topology, so there is a longest curve in this class. We claim that this curve is a piece of the flow trajectory corresponding to the trajectory we are looking for. If we look at the ellipse with foci at x i and x i+2 to which x i+1 belongs, then x i+1 has to be a point of tangency of that ellipse and the semicircle. Since for the ellipse the angles of incidence and reflection are equal, the same is true for the semicircle. Now we have to prove three properties of our curve. The first one is that any small movement of one of the points x 1 , . . . , x k−1 gives us a shorter curve. The second one is that none of those points lies at an endpoint of a semicircle. The third one is that none of the segments of the curve intersects any semicircle at any other point. The first property follows from the fact that any ellipse with foci on the union of the left semicircles at levels −N through N , which is tangent to any right semicircle, is tangent from outside. This is equivalent to the fact that the maximal curvature of such an ellipse is smaller than the curvature of the semicircles (which is 1). The distance between the foci of our ellipse is not larger than 2(2N + 1), and the length of the major semi-axis is larger than ℓ. Elementary computations show that the maximal curvature of such an ellipse is smaller than ℓ/(ℓ^2 − (2N + 1)^2). Thus, this property is satisfied if ℓ^2 − ℓ > (2N + 1)^2. However, by the assumption, ℓ > 2N + 2, so ℓ^2 − ℓ = ℓ(ℓ − 1) > (2N + 2)(2N + 1) > (2N + 1)^2. The second property is clearly satisfied, because if x i lies at an endpoint of a semicircle, then an infinitesimally small movement of this point along the semicircle would make both straight segments of the curve that end/begin at x i longer, contradicting the choice of the longest curve. The third property follows from the observation that if ℓ ≥ 2N + 2 then the angles between the segments of our curve and the straight parts of the billiard table boundary are smaller than π/4. Suppose that the segment from x i to x i+1 intersects the semicircle C to which x i+1 belongs at some other point y (see Figure 4). Then x i+1 and y belong to the same half of C. By the argument with the ellipses, at x i+1 the incidence and reflection angles of our curve are equal. Therefore, the segment from x i+1 to x i+2 also intersects C at some other point, so x i+1 should belong to the other half of C, a contradiction. This completes the proof. Figure 4. Two intersections. Remark 3.4. By Lemmas 3.2 and 3.3 (plus the way we obtained Σ ′ ℓ,N from Σ ℓ,N ) it follows that if ℓ > 2N + 2 then the natural partition of K ℓ,N into four sets is a Markov partition. Computation of topological entropy In the preceding section we obtained some subshifts of finite type. Now we have to compute their topological entropies. If the alphabet of a subshift of finite type is {1, 2, . . . , k}, then we can write the transition matrix M = (m ij ) k i,j=1 , where m ij = 1 if there is a transition from i to j and m ij = 0 otherwise. Then the topological entropy of our subshift is the logarithm of the spectral radius of M (see [12,1]). Lemma 4.1. Topological entropy of the system (Σ, σ) is log(1 + √2). Proof. The transition matrix of (Σ, σ) is
( 1 1 1 )
( 1 0 1 )
( 1 1 0 )
The characteristic polynomial of this matrix is (x + 1)(x^2 − 2x − 1), so the entropy is log(1 + √2). In the case of larger, but not too complicated, matrices, in order to compute the spectral radius one can use the rome method (see [5,1]). For the transition matrices of Σ N this method is especially simple. Namely, if we look at the paths given by transitions, we see that 0 is a rome: all paths lead to it.
Then we only have to identify the lengths of all paths from 0 to 0 that do not go through 0 except at the beginning and the end. The spectral radius of the transition matrix is then the largest zero of the function ∑ i x^(−p i ) − 1, where the sum is over all such paths and p i is the length of the i-th path. Lemma 4.2. Topological entropy of the system (Σ N , σ) is the logarithm of the largest root of the equation x^2 − 2x − 1 = −2x^(1−N). (4.1) Proof. The paths that we mentioned before the lemma are: one path of length 1 (from 0 directly to itself), and two paths of each of the lengths 2, 3, . . . , N . Therefore, our entropy is the logarithm of the largest zero of the function x^(−1) + 2(x^(−2) + x^(−3) + · · · + x^(−N)) − 1. Setting this function equal to zero, multiplying by x^N and summing the geometric progression, we obtain x^(N−1)(x^2 − 2x − 1) = −2, so our entropy is the logarithm of the largest root of equation (4.1). Now that we have computed the topological entropies of the subshifts of finite type involved, we have to go back to the definition of the topological entropy of billiards (and their subsystems). As we mentioned earlier, the most popular definitions either employ the symbolic systems or use the growth rate of the number of periodic orbits of a given period. For subshifts of finite type that does not make a difference, because the exponential growth rate of the number of periodic orbits of a given period is the same as the topological entropy (if the systems are topologically mixing, which is the case here). As the first step, we get the following result, which follows immediately from Lemmas 3.2, 3.3 and 4.2. Theorem 4.3. If ℓ > 2N + 2 then the topological entropy of the system (K ℓ,N , F ℓ ) is the logarithm of the largest root of the equation (4.1). Now, independently of which definition of the entropy h(F ℓ | K ℓ ) of (K ℓ , F ℓ ) we choose, we get the next theorem. Theorem 4.4. lim inf ℓ→∞ h(F ℓ | K ℓ ) ≥ log(1 + √2). (4.2) Proof. On one hand, K ℓ,N is a subset of K ℓ , so h(F ℓ | K ℓ ) ≥ h(F ℓ | K ℓ,N ) for every N . Therefore, by Theorem 4.3, h(F ℓ | K ℓ ) ≥ log y N whenever ℓ > 2N + 2, where y N is the largest root of the equation (4.1). The largest root of x^2 − 2x − 1 = 0 is 1 + √2. In its neighborhood the right-hand side of (4.1) goes uniformly to 0 as N → ∞. Thus, lim N→∞ y N = 1 + √2, so we get (4.2). If we choose the definition of the entropy via the entropy of the corresponding symbolic system, then, taking into account Lemma 4.1, we get a stronger theorem. Theorem 4.5. With this definition of the entropy, lim ℓ→∞ h(F ℓ | K ℓ ) = log(1 + √2). Of course, the same lower estimates hold for the whole billiard.
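As a quick numerical check of this limit (our Python/NumPy sketch, not part of the paper; equation (4.1) is rewritten here as the equivalent polynomial x^(N+1) − 2x^N − x^(N−1) + 2 = 0), the values log y N indeed approach log(1 + √2) from below:

```python
import numpy as np

def log_y(N):
    # Largest real root of x^(N+1) - 2x^N - x^(N-1) + 2 = 0,
    # an equivalent form of equation (4.1).
    coeffs = np.zeros(N + 2)
    coeffs[0], coeffs[1], coeffs[2], coeffs[-1] = 1.0, -2.0, -1.0, 2.0
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.log(real.max())

for N in (2, 5, 10, 20, 40):
    print(N, log_y(N))          # log 2 ~ 0.6931 for N = 2, increasing

print(np.log(1 + np.sqrt(2)))   # limiting value ~ 0.881374
```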
Hyperspectral Image Denoising Using 3-D Geometrical Kernel With Local Similarity Prior The majority of existing hyperspectral (HS) image denoising methods exploit local similarity in HS images by rearranging them into matrix or vector forms. As typical 3-D data, the inherent spatial and spectral properties of HS images should be simultaneously explored for denoising. Therefore, a 3-D geometrical kernel (3DGK) is developed in this article to describe the local structure. The proposed method assumes that a pixel can be represented efficiently by the other pixels within a 3-D block owing to the local similarity with adjacent positions. Then, the HS image is modeled by 3-D kernel regression with an L1-norm constraint, in which the local similarity is captured by the proposed 3DGK. To efficiently compute the parameters in 3DGK, geometrical structures, such as scale, shape, and orientation in the 3-D block, are estimated approximately from the gradient information. Finally, the noises are effectively removed while preserving the structures in HS images. Moreover, experimental results on simulated and real datasets demonstrate that the performance of 3DGK is better than that of methods based on the local similarity prior. I. INTRODUCTION With the rapid development of imaging sensors, hyperspectral (HS) remote sensing technology has been attracting much attention, and HS images have been applied in many scene interpretation tasks, such as land-use classification [1], target detection [2], and unmixing [3]-[6], due to their abundant spectral information. However, during acquisition or transmission, HS images are always corrupted by some undesired noises [7], which have an impact on the performance of the above-mentioned tasks. Therefore, it is vital to efficiently remove the noises and recover the HS image accurately. Over the past two decades, many methods have been proposed to achieve the denoising of HS images by adopting different tools. For example, Li et al. [8] proposed an HS image denoising method based on distributed sparse representation [9], which exploited the intraband and interband information for dictionary learning. Qian and Ye [10] introduced the spatial and spectral structure into sparse representation for HS denoising, in which similar patches are collected to ensure the sparsity. Fu et al. [11] learned an adaptive spatial-spectral dictionary from the noisy HS images for noise removal. Subsequently, Zhuang and Bioucas-Dias [12] combined low-rank and sparse representation to capture the spatial and spectral correlation in HS images. In [13], Xiong et al. embedded the low-rank prior into a tensor model to capture the correlations in both spectral and spatial domains, which is regularized by an L 0 gradient term. Besides, to capture the nonlocal self-similarity [14], Zhang et al. [15] proposed a total variation (TV) regularized nonlocal low-rank tensor decomposition model to restore the clean HS images. For accurate weight estimation, Hu et al. [16] proposed an adaptive nonlocal means denoising method for the denoising of HS images. Xue et al. [17] further explored the global correlation across the spectrum and the nonlocal similarity in HS images to regularize the low-rank tensor model. Besides, Xie et al. [18] proposed an intrinsic tensor sparsity measure to encode the low-rank prior in the Tucker decomposition model, which also considered the nonlocal similarity. Chen et al.
[19] utilized the tensor-ring decomposition model to construct high-order tensors, which improves the representation ability for noisy HS images. Moreover, other priors, such as nonnegativity [20], are also incorporated into the tensor model for HS image denoising. Besides, a large number of methods are also specifically developed for HS image denoising by integrating local information priors, which have been widely studied and extensively applied to HS image denoising. A local prior mainly captures the inherent information of an HS image locally, because it is difficult to describe a global prior over the whole HS image directly. As a popular model, TV [21] is proposed to explore the local gradient information in images, and it is extended to HS images by considering the spectral property. For example, a spectral-spatial adaptive hyperspectral TV (SSAHTV) is proposed in [22] to further exploit the noise intensity differences and spatial property differences. Further, an anisotropic spatial and spectral TV is designed to deal with the piecewise structures in [23], and low-rank tensor decomposition is considered to capture the global spatial and spectral correlation of HS images. Meanwhile, He et al. [24] also employed TV to regularize the local low-rank decomposition model for the denoising of HS images. Fan et al. [25] presented a spatial and spectral TV model to deal with the spectral structure loss caused by the bandwise TV. In [26], 1-D TV along the spectral dimension is introduced to preserve the spectral signatures. Besides, local similarity (LS) [27] is also an efficient prior commonly used for HS image denoising. It is assumed that the pixel values in a local region are similar and that they can represent each other due to the close locations among them. For example, Zhao and Yang [28] utilized sparse and low-rank constraints to capture the local redundancy and correlation in HS images, which can well preserve the spatial and spectral details in the denoised HS images. Lu et al. [29] divided all bands in HS images into several groups according to spectral correlation, and then spatially adaptive LS is considered to find related pixels. In [30], the locally similar pixels are defined by superpixel segmentation, which avoids dividing HS images into square patches. Recently, some HS image denoising methods based on deep neural networks [31]-[34] have also been proposed and acquire satisfactory results. These methods hierarchically extract image features from local regions, which is also inspired by the local prior. For example, the local structures in HS images are extracted in convolutional neural networks level by level [35]. As a powerful tool to model local relationships, kernel-based methods are usually employed to describe the LS prior among adjacent pixels and have been generally used for natural image denoising. For example, Takeda et al. [36] analyzed the local structures in images by kernel regression and obtained better denoising results. In [37], Bouboulis et al. adopted the semiparametric formulation to establish an adaptive kernel for image denoising. Subsequently, the local geometrical structures are further exploited in [38], and the sparsity of neighbors is cast on the steerable kernel. Then, the LS prior captured by the kernel is also applied to the denoising of HS images.
For example, Zhong and Wang [39] considered the conditional random field to handle the edges and textures in a local region, in which a filter bank is used to represent the structures. Moreover, kernel regression is also utilized in [40], and a structure tensor is defined to explore the local gradient orientation information. However, the spectral information is represented by a fixed vector in [40], which cannot adaptively estimate the features in the spectral direction. Although these methods obtain better denoised HS images, the inner structures in 3-D blocks of HS images are not analyzed carefully and deeply. The LS between different pixels is not only related to the distance in position but is also decided by the geometrical structures, such as edges and textures. As shown in Fig. 1(a), the pixel p located on the sharp edge is not similar to pixel p 1 although the distance in position between them is close, where the pixels are labeled by circles. Conversely, p looks very similar to the pixel p 2 in intensity because they are on the same edge structure. Moreover, we compute the correlation coefficients (CC) of the adjacent bands of the HS image in Fig. 1(a), and the CC curve is displayed in Fig. 1(b). From Fig. 1(b), one can see that the similarity in adjacent bands is high owing to the high spectral resolution. Thus, the similarity in the spectral domain should also be considered. Therefore, to simultaneously exploit the local geometrical structure and spectral correlation in HS images, we propose a new 3-D geometrical kernel (3DGK) to remove the noises in HS images from the perspective of kernel regression in this article. It is assumed that the other pixels in a 3-D block centered at the pixel to be denoised are similar to the center pixel. Then, the similarity weights between the center pixel and the other neighboring pixels are measured by the 3DGK. In the proposed method, the inner structures within a local 3-D block of HS images can be approximated adaptively by the 3DGK, where the scale, shape, and orientation of the kernel are deformable and can be adjusted by elongation, rotation, and scaling. The estimated kernel not only considers the distance information but also makes full use of the spatial and spectral structures. With 3DGK, the similarity weights can be inferred more reasonably. Through the geometrical structure prior, the noises in HS images can be reduced efficiently while preserving the spectral and spatial details. The contributions of the proposed method can be summarized as follows. 1) We present a novel 3DGK regression model for HS image denoising, which adopts the geometrical structures in 3-D blocks to capture the LS in the spatial and spectral domains. The geometrical structures are reflected by the weights assigned to each pixel within the 3-D blocks. 2) We analyze the geometrical structures in the 3-D blocks deeply and infer the scaling, elongation, and rotation factors from the local gradient information in the 3-D blocks to estimate more accurate weights. The rest of this article is organized as follows. In Section II, the 3DGK is constructed and its denoising process is described. In Section III, experimental results on different datasets and noise distributions are given. Finally, Section IV concludes this article. II. 3DGK DENOISING In this section, according to the LS assumption, the pixel to be denoised in a 3-D block is estimated by the neighboring pixels under the 3-D kernel regression framework.
Then, the inner geometrical structures are adaptively analyzed by elongation, rotation, and scaling. Finally, the denoised HS images are produced by the proposed 3DGK. A. Kernel Regression in HS Images Similar to the model in [36], the denoising model for the noisy HS image can be written, according to regression theory [41], as p i = f (x i ) + ε i , i = 1, . . . , N, (1) where p i is the noised pixel value and x i = [x h i , x l i , x v i ] T denotes the position of the pixel. f (·) is a regression function. Here, the two spatial dimensions correspond to the horizontal and longitudinal axes, respectively. The spectral dimension is viewed as the vertical axis. So, x h i , x l i , and x v i are the coordinate values of p i . ε i is the noise. N equals the number of estimated pixels, and N = b^3 for a 3-D block with size b × b × b. In a local 3-D block of an HS image, the center pixel to be denoised is similar to the neighboring pixels. Then, the center pixel can be well represented. Thus, in the local 3-D block, the Taylor expansion of (1) around the center position x is calculated as f (x i ) ≈ f (x) + ∇f (x) T (x i − x) + (1/2)(x i − x) T Hf (x)(x i − x), (2) where ∇ and H stand for the gradient and Hessian operators, respectively. Due to the symmetry property of the Hessian matrix, (2) can be rewritten as f (x i ) ≈ β 0 + β 1 T (x i − x) + β 2 T ltr((x i − x)(x i − x) T ), (3) where ltr lexicographically rearranges all elements in the lower triangular part of a matrix into a column vector. β 0 is equivalent to f (x), which is the pixel value to be estimated. β 1 and β 2 are computed by (4) and (5), respectively: β 1 = ∇f (x) (4) and β 2 collects the second-order derivatives, β 2 = [(1/2)f hh , f hl , f hv , (1/2)f ll , f lv , (1/2)f vv ] T . (5) Then, β 0 , β 1 , and β 2 can be obtained by minimizing (6). Besides, to produce clearer spatial details, we adopt the L 1 -norm in (6) to replace the L 2 -norm used in [36]; then, the optimization of (6) becomes the minimization of the weighted L 1 residual min b ∥w x ⊙ (p − X x b)∥ 1 , (6) where ⊙ denotes elementwise multiplication. The related symbols in (6) are defined as follows: b = [β 0 , β 1 T , β 2 T ] T , p = [p 1 , . . . , p N ] T , X x is the design matrix whose i-th row is [1, (x i − x) T , ltr((x i − x)(x i − x) T ) T ], and w x = [K H (x 1 − x), . . . , K H (x N − x)] T , where K H (·) is a kernel determined by a smoothing matrix H whose size is 3 × 3. To estimate the denoised pixel values, an auxiliary variable q is introduced for efficient optimization of (6), which is formulated as min b,q ∥q∥ 1 s.t. q = w x ⊙ (p − X x b). (11) Then, the augmented Lagrangian multiplier method (ALM) [42] is considered to solve (11). So, the augmented Lagrange function of (11) is written as L(q, b, y) = ∥q∥ 1 + y T (q − w x ⊙ (p − X x b)) + (ρ/2)∥q − w x ⊙ (p − X x b)∥ 2 2 , (12) where ρ is the penalty parameter and y is the Lagrangian multiplier. According to the iterative scheme, q is obtained by solving the subproblem q = arg min q ∥q∥ 1 + (ρ/2)∥q − (w x ⊙ (p − X x b) − y/ρ)∥ 2 2 . (13) Equation (13) can be optimized efficiently by the soft-threshold operator in [43]. For b, its subproblem is min b (ρ/2)∥q − w x ⊙ (p − X x b) + y/ρ∥ 2 2 . (14) By replacing the elementwise multiplication with matrix multiplication and simplification, (14) can be reformulated as min b (ρ/2)∥q − W x (p − X x b) + y/ρ∥ 2 2 , (15) where W x = diag(w x ). diag(·) transforms a vector into a diagonal matrix. This is a least-squares optimization problem, whose closed-form solution can be obtained by setting the derivative of (15) w.r.t. b to 0, which gives b = (X x T W x T W x X x ) −1 X x T W x T (W x p − q − y/ρ). (16) Finally, the multiplier y is updated by y ← y + ρ(q − w x ⊙ (p − X x b)). (17) Taking the computational complexity into consideration, the iteration number is set as 20 due to the efficiency and rapid convergence of ALM. It is obvious that the value of the estimated pixel is β 0 , which is the first element in b. Naturally, if the weights in W x are given, the denoised pixel value can be computed easily.
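A minimal NumPy sketch may clarify this ALM iteration; the design matrix X, the weights w, the penalty ρ, and the 20 iterations follow the text, while the concrete function and variable names are our own.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def kernel_regression_l1(p, X, w, rho=1.0, n_iter=20):
    """Weighted-L1 local kernel regression, a sketch of (11)-(17).

    p : (N,) noisy pixel values in the 3-D block
    X : (N, 10) design matrix [1, (x_i-x)^T, ltr((x_i-x)(x_i-x)^T)^T]
    w : (N,) kernel weights K_H(x_i - x)
    Returns beta_0, the denoised value of the center pixel.
    """
    W = np.diag(w)
    WX, Wp = W @ X, W @ p
    A = np.linalg.pinv(WX.T @ WX) @ WX.T   # reused in the b-subproblem (15)
    b = np.zeros(X.shape[1])
    q = np.zeros_like(p)
    y = np.zeros_like(p)
    for _ in range(n_iter):
        r = Wp - WX @ b                             # w_x ⊙ (p - X_x b)
        q = soft_threshold(r - y / rho, 1.0 / rho)  # subproblem (13)
        b = A @ (Wp - q - y / rho)                  # closed form (16)
        y = y + rho * (q - (Wp - WX @ b))           # update (17)
    return b[0]                                     # beta_0 = f(x)
```

In the full method this routine would be run independently for every pixel, which is why the computation parallelizes trivially, as noted in the complexity analysis below.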
B. 3-D Geometrical Kernel It can be found that the denoising performance of (6) depends upon the weight matrix W x , which is derived from the kernel K H (x). So, the choice of the kernel is critical. A typical kernel is the Gaussian function, which uses the information of the location distance between the center pixel and the neighboring pixels. For example, the Gaussian kernel is employed in [44] to find the smooth or texture components of HS images. It is obvious that the similarity may be low for pixels that are close in location distance if they have different spatial or spectral appearances. In addition to the distance information, the pixels in a 3-D block of HS images are very similar: they not only have similar intensities in the spatial domain but are also similar in spectral response due to the high spectral resolution of HS images. Besides, there exist abundant geometrical structures, such as edges or textures, in the 3-D block, which should also be considered for HS image denoising. As shown in Fig. 1(a), the pixels along the edges have more similar intensities and the pixel values are closer in the smooth regions. For the spectral domain, the structure similarity can also be found. Although the geometrical structure prior is utilized in [36] for image denoising, only spatial information is calculated to estimate the weights, which cannot take full advantage of the spectral information in HS images. For K H (x), its properties are packaged inside H. Therefore, H should carry the location distance, intensity, and structure information in a 3-D block of HS images. Considering this requirement, 3DGK is constructed to simultaneously capture the geometrical information of HS images in the spatial and spectral domains, which is formulated as K H i (x i − x) = (sqrt(det(C i ))/(2π h^2 μ i )^(3/2)) exp(−(x i − x) T C i (x i − x)/(2h^2 μ i )), (18) where h is a smoothing parameter and μ i denotes the local density. C i contains the geometrical information, such as the orientation and magnitude of structures in a 3-D block, which can be reflected roughly by the covariance matrix of the gradients in the spatial and spectral domains. Thus, C i is defined by C i = ∑ x j ∈B i g(x j )g(x j ) T with g(x j ) = [g h (x j ), g l (x j ), g v (x j )] T , (19) where g h (x j ), g l (x j ), and g v (x j ) are the gradients of the pixel at x j along the horizontal, longitudinal, and vertical directions, respectively. B i is the 3-D block whose central point is the pixel p i . However, it is time-consuming to compute C i directly. Fortunately, owing to its symmetry, C i can be decomposed approximately as C i = γ i U i Λ i U i T , (20) where γ i is the scaling factor of the 3-D kernel. According to the elongation matrix Λ i = diag([t i , v i , w i ]), (21) the principal axes of the 3-D kernel are elongated. t i , v i , and w i are the elongation factors on the principal axes of the 3-D kernel, respectively. The rotation of the 3-D kernel is denoted by U i , which can be modeled as U i = R x (θ i )R y (φ i )R z (ψ i ), (22) where R x (θ i ), R y (φ i ), and R z (ψ i ) are the rotation matrices along the different directions. Thus, the rotation of the principal axes is achieved by U i . θ i , φ i , and ψ i are the rotation angles on the horizontal, longitudinal, and vertical axes, which can be efficiently obtained from U i by dcm2angle in MATLAB. For an intuitive understanding, the scaling, elongation, and rotation of a 3-D kernel are shown in Fig. 2, which gradually matches the spatial and spectral structures in the 3-D block. The 3-D ball kernel is first resized by γ i . Then, the lengths of the three axes are adjusted by Λ i . Finally, the triaxial ellipsoid kernel is rotated along the different directions through U i and the 3DGK is obtained. Through the above operations, the structures in the 3-D block can be efficiently represented. According to the image local analysis in [45], the orientation of the structure in the 3-D block can be derived from its gradients. Let G i = [g h , g l , g v ] be the gradient matrix of the 3-D block, where g h , g l , and g v are column vectors comprised of the gradients of the 3-D block in the different directions.
The relationship between G i and C i can be written as C i = G i T G i . (23) Then, U i and Λ i can be calculated from the singular value decomposition (SVD) of G i , which is G i = V i Σ i U i T , (24) where V i , Σ i , and U i are the corresponding decomposed results of G i after SVD. U i in (20) is the same as U i in (24), and Σ i = diag([s i,1 , s i,2 , s i,3 ]). So, the parameters in (20) are efficiently inferred from (24) due to the numerical stability and efficiency of SVD. For example, Λ i can be regarded as the amplitude of the principal axes of the 3-D kernel, whose elements are calculated from the singular values by (25), where α is a regularization parameter to avoid meaningless values. By (25), the elongation factors can be normalized to avoid the large variation caused by pixel values. The shape of the 3-D kernel is controlled by Λ i . In homogeneous or smooth regions, the 3-D kernel is close to a ball in shape. In some heterogeneous or edge regions, the 3-D kernel is adjusted to capture the structure information. Therefore, the geometrical structure is embedded into the 3-D kernel naturally. Besides, the scaling factor γ i is defined by (26) in terms of the singular values, where M is the number of pixels in the 3-D blocks. γ i is also related to the intrinsic structure in the 3-D block. For smooth areas, the values of s i,1 , s i,2 , and s i,3 will be close because the pixel values are similar in smooth areas. So, the kernel size should be large, which is decided by γ i . Through a large kernel, more pixels can be used to reconstruct the central pixel in the 3-D block. In edge or texture areas, the pixel values in the 3-D block will have large differences, and s i,1 , s i,2 , and s i,3 in different directions are also quite different. Then, the kernel will be small. Fig. 3 displays the 3DGK estimated from some typical geometrical structures in the Pavia University (PaviaU) dataset, such as smoothness, edge, and texture, for visual analysis. It can be observed that the structure of the estimated kernel is consistent with that in the corresponding 3-D block in Fig. 3. Because the weights within the 3-D kernel are invisible in 3-D space, we project the estimated 3-D kernels into 2-D space along three straightforward directions. The horizontal and longitudinal directions are along the spatial dimensions of the 3-D blocks, respectively. The vertical direction is consistent with the spectral dimension. For example, we can see that larger weights mainly concentrate on the center. In smooth blocks, most pixel values are similar. Thus, the weights mainly depend upon the distance in position, which means the weights are larger for the pixels closer to the center. For the edge block, the projections on the horizontal and longitudinal directions reflect that the neighboring pixels are more similar to the center pixel along the vertical direction, namely the spectral dimension, because the similarity is high among adjacent bands. In the vertical projection, the structure looks like the spatial information in the edge block. For the texture block, larger weights are obtained along the spectral dimension due to the complex structures in the spatial domain. We can see that the structures in 3-D blocks are dominated by the edge for edge blocks. For texture blocks, their structures are complex and there are no obvious principal directions. Naturally, the kernel area in the vertical projection of the texture blocks is small. Through 3DGK, the structure in 3-D blocks can be well exploited. Therefore, β 0 , which is the desired pixel value, can be obtained through (6) once the 3-D kernel is inferred from the local structures of HS images. Finally, HS image denoising is realized by the proposed 3DGK.
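The following sketch assembles the whole kernel-estimation step in NumPy. Note that the explicit formulas (25) and (26) are not reproduced in the text above, so the elongation and scaling below use standard steering-kernel choices (determinant-normalized elongation, geometric-mean scaling) purely as placeholders; the function name and the toy usage are ours as well.

```python
import numpy as np

def estimate_3dgk(block, h=0.5, alpha=0.01):
    """Estimate steering-kernel weights for the center pixel of a 3-D block.

    Sketch of the gradient/SVD construction of Section II-B, with
    placeholder elongation/scaling formulas standing in for (25)-(26).
    """
    b = block.shape[0]                       # block is b x b x b
    gh, gl, gv = np.gradient(block)          # gradients along the 3 axes
    G = np.stack([gh.ravel(), gl.ravel(), gv.ravel()], axis=1)  # N x 3
    # SVD of the gradient matrix: G = V diag(s) U^T, so C = G^T G, cf. (23)-(24)
    _, s, Ut = np.linalg.svd(G, full_matrices=False)
    U = Ut.T
    M = G.shape[0]
    # Placeholder elongation (det = 1) and scaling, regularized by alpha.
    sa = s + alpha
    lam = np.array([sa[0] / np.sqrt(sa[1] * sa[2]),
                    sa[1] / np.sqrt(sa[0] * sa[2]),
                    sa[2] / np.sqrt(sa[0] * sa[1])])
    gamma = (np.prod(sa) / M) ** (1.0 / 3.0)
    C = gamma * U @ np.diag(lam) @ U.T       # cf. (20)
    # Kernel weights K_H(x_i - x) for every voxel, cf. (18) with mu_i = 1.
    c = (b - 1) / 2.0
    coords = np.stack(np.meshgrid(*[np.arange(b) - c] * 3, indexing="ij"), -1)
    d = coords.reshape(-1, 3)
    w = np.exp(-np.einsum("ij,jk,ik->i", d, C, d) / (2 * h ** 2))
    return w / w.sum()

# Toy usage: a nearly smooth block yields near-isotropic weights.
w = estimate_3dgk(np.random.rand(9, 9, 9) * 0.01 + 0.5)
print(w.shape)   # (729,)
```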
C. Complexity Analysis The proposed denoising method is composed of two parts: the estimation of 3DGK for each pixel and the reconstruction of the denoised image. In the first part, gradient calculation and the approximation of the covariance matrix are involved. The complexity is O(3W HB) for computing the gradients of an image with size W × H × B. W and H are the width and height of the HS images, respectively. B denotes the number of bands in the HS images. When estimating the covariance matrix, SVD is utilized, whose complexity is O(18NW HB) over all pixels. N stands for the number of pixels in the 3-D block. Thus, the computational complexity is O((3 + 18N)W HB). For the second part, the complexity is dominated by the update of q and the matrix multiplication in (15). The complexity of the update of q is O(NW HB) for the involved elementwise operation. For b, the complexity is O((10N^2 + 100N)W HB) to solve (15). Therefore, it can be found that the second part of 3DGK dominates most of the computing time, and the computational complexity increases with the 3-D block size for a given HS image. Fortunately, the method can be parallelized for acceleration because the pixels in the HS image can be estimated independently. III. EXPERIMENTAL RESULTS AND COMPARISONS In this section, experiments on simulated and real datasets are conducted to verify the effectiveness of the proposed method. The denoised results of the proposed method are compared with those of some kernel-based denoising methods, including SK [36], SDGNLM [38], and AKR [40]. Besides, some methods based on local or global similarity are also considered for comparison, such as SSAHTV [22], 3DNLM [46], LRMR [47], and E3DTV [48]. The denoised results of all methods are analyzed in visual quality, and then some numerical indicators are employed to evaluate the denoised images objectively. The results on simulated datasets are assessed by four indicators [49], i.e., mean peak signal-to-noise ratio (MPSNR), mean structural similarity (MSSIM), mean feature similarity (MFSIM), and spectral angle mapper (SAM). The first three are the averages of PSNR [50], SSIM [51], and FSIM [52] over all bands, e.g., MPSNR = (1/C) ∑ c PSNR c , where C equals the number of bands in the HS images; SAM is averaged over all pixels, with SAM(i i , j i ) = arccos(i i T j i /(∥i i ∥ 2 ∥j i ∥ 2 )), where i i and j i are the spectral vectors of the ith pixel in the reference HS image and the denoised HS image, respectively. For the first three indicators, the larger the values, the better the denoised results, whereas smaller SAM values indicate better spectral preservation. CEIQ [53] and blind score (BS) [54] are considered to assess the denoised results of all methods on the real datasets. BS extracts the statistical features in HS images for assessment and has been used in many fields [55], [56]. The denoised images will be better if CEIQ is larger. However, smaller values mean better quality for BS. In the following experiments, h in 3DGK has an important influence on the denoising results, which is related to the noise level. According to the formulation in [38], we find a linear function between h and the noise standard deviation σ, expressed as h = 4σ + 0.175. The local density μ i of the kernel is set as 1. The regularization parameters α and β are set as 0.01. The size of the 3-D block is 9 × 9 × 9 in 3DGK. For a comprehensive analysis, the influences of some key parameters on the denoised images are given in Section III-D for 3DGK.
For real datasets, the 3-D block size is set as 11 × 11 × 11. Because the noise level is unknown in real datasets, h is empirically set as 0.4 and 0.25 for the Indian Pines (IP) data and Urban data, respectively. A. Simulated Data Experiments In the simulated data experiments, the PaviaU and Washington DC Mall (WDCM) datasets are used to evaluate the performance of the proposed method visually and quantitatively. The PaviaU dataset is collected by the reflective optics system imaging spectrometer sensor, whose spatial resolution is 1.3 m. The spatial resolution of the WDCM dataset is 2.8 m, which is acquired by the hyperspectral digital imagery collection experiment (HYDICE) sensor in 1995. Owing to space limitations, a subimage with the size 256 × 256 × 191 is cropped from the WDCM data as the reference image. For the PaviaU data, some bands are corrupted by heavy noises and cannot serve as reference images. So, these bands in the PaviaU data are removed, and the size of the selected subimage is 256 × 256 × 98. The two datasets are shown in Fig. 4. For the simulated data, the pixel values of all bands are normalized into [0, 1]. In the simulated experiments, noises of different levels are added to the reference images, and the numerical results of all methods on the PaviaU dataset are reported in Table I when the noise level increases from 0.05 to 0.2. For the different noise levels, the proposed method 3DGK provides a better performance in the indicators on the whole. For the noise HS image in Case 2, the denoised results have a better performance than those for the noise HS images with noise levels 0.15 and 0.2 for almost all methods. Besides, Fig. 5 also presents the PSNR values of all bands of the denoised results from all methods. From Fig. 5, one can see that the proposed method has better performance for the first 70 bands. LRMR [47] behaves well on the last several bands. Fig. 6 compares the reference HS image and the denoised HS images from different methods on the 661-nm band, together with the corresponding error maps. The reference image and noise image are displayed in Fig. 6(a) and (b), respectively. The denoised images for all methods are shown in Fig. 6(c)-(j). From Fig. 6, we can see that the noises in the HS image are removed visibly for all methods when compared with the noise image in Fig. 6(b). For SSAHTV [22], the noises are suppressed but some artificial effects are introduced, especially in the smooth areas. For Fig. 6(d), the blurring of the image can be observed, and the noises in Fig. 6(e) are not eliminated well. For the result of SDGNLM [38], although the noises are efficiently removed, the spatial details and textures are also oversmoothed. In the denoised images in Fig. 6(g) and (h), some noises can still be found. The result of the proposed method has a better visual performance. Besides, a region containing buildings and vegetation is selected for comparison, which is magnified and put in the bottom right corner of the image. From the enlarged region, we can see that the details in Fig. 6(d) and (e) are more blurry than those in Fig. 6(g) and (h). For the region in Fig. 6(j), the spatial information is somewhat blurred, while the noises are suppressed well. Besides, we can see from the error maps in Fig. 6 that the reconstruction errors of the result from the proposed method are relatively small. Moreover, some pixels are chosen and their spectral residual curves are plotted in Fig. 7. From Fig. 7, it can be found that the spectral differences between the reference curve and that of SDGNLM [38] are considerable. The curve of 3DNLM [46] also has an obvious difference from the reference curve.
Compared with the other difference curves, the spectral residual of 3DGK is relatively smaller, which means that it preserves the spectral features more efficiently. For the WDCM dataset, the numerical indexes are reported in Table II. Similar results can be seen in Table II, where the denoising performance degrades with the increase of the noise level. Compared with other methods, the proposed method provides better performance in the indicators on the whole. The PSNR curves from different methods are also displayed in Fig. 8. In Fig. 8, it can be seen that the PSNR values of AKR [40] are lower than those of other methods. The proposed method can produce better performance. The denoised images of all methods are compared, and one band is selected and exhibited in Fig. 9, in which the false-color composites of HS images are made up of three bands (2320, 1910, 590 nm). Moreover, Fig. 9 also presents the error maps between the reference HS image and the denoised HS images from different methods on the 2320-nm band. The reference band and the noised band are placed in Fig. 9(a) and (b). In Fig. 9(d) and (e), the blurring effects are visible, which may be caused by improper spatial and spectral structure estimation, because the spectral structures of HS images are ignored in SK [36] and AKR [40]. For the result of SDGNLM [38], some textures in flat areas are obviously suppressed. In Fig. 9(g) and (h), the noises in some homogeneous regions are not eliminated adequately. For the proposed method, the spatial structures are maintained well. A homogeneous area is chosen and enlarged for visual analysis in Fig. 9, and the enlarged area is put in the bottom right corner of the denoised image. For the local area in Fig. 9(c), some spatial information is distorted. The results in Fig. 9(d) and (e) are also blurred. It can be observed that many details are lost in the selected area of Fig. 9(f) although the noises are removed. Besides, some noises can be seen in the area of Fig. 9(h) (Fig. 9 legend: (c) SSAHTV [22]; (d) SK [36]; (e) AKR [40]; (f) SDGNLM [38]; (g) 3DNLM [46]; (h) LRMR [47]; (i) E3DTV [48]; (j) 3DGK). For the proposed method, some blur effects can be found in the magnified region. From Fig. 9, it can be observed that smaller reconstruction errors are achieved for the proposed method. Fig. 10 displays the residual curves at different locations compared with the reference spectral signatures. From Fig. 10, we can see that although the noises are removed by SDGNLM [38], the spectral features are also damaged. For most methods, the consistency is better when the band number is larger than 100, except for SDGNLM [38]. For the proposed method, the residual curve is relatively stable when the band number is smaller than 100. B. Real Data Experiments In this part, two real datasets shown in Fig. 11 are utilized for performance analysis. The first one, the IP data, is collected by AVIRIS in 1992, whose size is 145 × 145 × 220. The second one, named Urban, is obtained by HYDICE over an urban area, whose size is 307 × 307 × 210. For the IP dataset, some bands, such as the 488-, 1772-, and 2418-nm bands, are selected from the denoising results of all methods and presented in Fig. 12, where the results in one column are produced by the same method and the images in one row have the same band index. From Fig. 12, it can be found that the results from SDGNLM [38] are severely blurry. For the images with heavy noises, the visual performance of the results from SSAHTV [22] is unsatisfactory. For the results in Fig.
12(c) and (d), the noises are removed but some blurring effects arise. Fig. 12(g) provides better denoising results, which may benefit from the low-rank constraint. For the results of the proposed method, spatial information is preserved, but some noises can still be found in the image in the middle row. Besides, different regions in different bands are considered, which are enlarged for a direct comparison. In the selected regions of the images in the first row, we can see that some spatial details or weak targets are oversmoothed in Fig. 12(f)-(i). For the images with heavy noises in the second row, the spatial information is enhanced well in Fig. 12(f)-(h), but the hue of Fig. 12(f) has an obvious difference from that of Fig. 12(a). In the last row of Fig. 12, the results of AKR [40], LRMR [47], and 3DGK behave well in visual performance. But the region in Fig. 12(d) is more blurry. The denoising results on the Urban dataset are shown in Fig. 13, and three bands (1470, 1550, and 1650 nm) are provided for comparison. There exist some striping noises and mixed noises in the Urban dataset. From Fig. 13, we can see that the striping noises are suppressed in the results of the different methods. But the spatial details are oversmoothed in the results of SDGNLM [38] and 3DNLM [46]. In Fig. 13(i), the stripes can still be seen in the denoised result in the first row. For the results of AKR [40], blurring effects can be seen in the building areas. For the images in the last row, better visual performance is provided. Similarly, some interesting regions are selected in the denoised results for comparison. From the different regions, we can see a performance similar to the results in Fig. 12. In the second row of Fig. 13, some subtle textures are smoothed in Fig. 13(e) and (f) when removing the strips. The results of LRMR [47] have a better performance in spatial details, but the noises are not suppressed well. The proposed method has a better performance in noise removal. Besides, Table III lists the numerical results of these methods on the IP and Urban datasets. A larger CEIQ represents better denoised results. For BS, a higher score means lower denoising quality. For the IP dataset, the proposed method provides better numerical values in CEIQ and BS, which means the denoised result of the proposed method behaves well in general. On the Urban dataset, the best CEIQ is from the proposed method, but LRMR [47] provides the best BS. C. Robustness of 3DGK To verify the robustness of the kernel estimation under different noise levels, we use the simulated PaviaU and WDCM datasets and calculate the differences between the rotation angles θ i , φ i , and ψ i of the kernels estimated from noisy pixels and the corresponding angles of the kernels estimated from clean pixels. Then, the means of all relative errors for the different noise levels are computed and plotted in Fig. 14. Fig. 14(a) demonstrates the estimation errors of the PaviaU dataset on the rotation angles θ i , φ i , and ψ i when compared with its clean HS image. It can be observed that the relative errors vary with the increase of the noise level. However, the relative errors of the different angles are less than 0.07, which indicates that the estimated kernel is robust to the noises. The relative errors of the WDCM dataset on the rotation angles are shown in Fig. 14(b). From Fig. 14(b), we also can see that the relative errors are small.
Although the relative error of φ is greater than 0.1 for the noise level of 0.2, being affected by the heavy noises, the relative errors of θ and ψ are still less than 0.1. Compared with the PaviaU dataset, the relative errors of the WDCM dataset are greater, which may be caused by the complex structures of the different objects in the WDCM dataset. E. Running Time In Table IV, we report the running time of all methods on the different datasets for a comprehensive analysis. All methods are implemented on the same computer with an Intel Core i7-6700 processor, 3.4 GHz, and 16 GB memory in MATLAB R2017a. From Table IV, it can be observed that regression-based methods, such as SK [36] and AKR [40], spend more time. Besides, 3DNLM [46] is also time-consuming because of the neighbor search in local areas. SSAHTV [22] and LRMR [47] are faster when compared with the proposed method. Compared with SDGNLM [38] and 3DNLM [46], 3DGK has lower computational complexity. IV. CONCLUSION In this article, an HS image denoising method is developed by extending the kernel regression model to a 3-D formulation, which makes effective use of the spatial and spectral geometrical structures in HS images. In the proposed method, 3DGK is established by estimating the structure and orientation in a 3-D block, whose information is inferred from its gradients. Then, 3DGK is adjusted adaptively by scaling, elongation, and rotation to approximate the inner structure of the 3-D block. So, more proper weights can be assigned to the adjacent pixels in the 3-D block. Finally, the denoised pixels are obtained by efficiently minimizing the weighted L 1 -norm objective through ALM. Experiments are conducted on simulated and real datasets. Compared with some methods based on the LS prior, the proposed method has a better performance in qualitative and quantitative evaluations. In future work, the geometrical structure captured by 3DGK will be further explored and combined with other efficient priors, such as TV.
F-Gases: Trends, Applications and Newly Applied Gases in the Czech Republic Emissions of fluorinated greenhouse gases (F-gases), which are used as replacements for ozone-depleting substances, have risen sharply since 1995. The rapid increase in F-gas emissions coupled with their global warming potential (GWP) has led to increased worldwide attention to monitoring emission levels and subsequently regulating the use of F-gases. These restrictions apply in particular to applications for which alternative technologies are available that are more economically efficient and have minor or no impact on the Earth's climate system. This paper brings new information about changes in the composition of consumed F-gases in the Czech Republic. Since no F-gases are produced in the country, data about F-gas consumption are obtained from three resources which give information about import and export. The paper also describes the implementation of newly used F-gases, which serve as replacements for specific F-gases, into emission calculation models. Emissions are estimated according to the methodology developed by the Intergovernmental Panel on Climate Change (IPCC). Although consumption of F-gases with high GWP has already started decreasing, it will have no effect on F-gas emissions for several years. Introduction Fluorinated greenhouse gases (F-gases) are anthropogenic gases used mainly as substitutes for ozone-depleting substances. Although F-gases do not damage the atmospheric ozone layer, they contribute significantly to the global greenhouse effect [1]. Two main groups of F-gases can be distinguished: hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs) [2]. The difference between the HFC and PFC groups is in the degree of fluorination. The first group consists of partly fluorinated F-gases, whereas the second group contains fully fluorinated molecules [3]. There is an important difference between these two groups, especially in terms of the greenhouse effect, i.e., their lifetimes in the atmosphere. While the lifetime of HFCs varies between a few days and 250 years, most PFCs can remain in the atmosphere for thousands of years [4]. F-gases are used, for instance, as fire suppressors, aerosols, refrigerants in refrigerators, in air conditioning systems, etc. Since the global warming potential (GWP) of F-gases is many times greater than that of carbon dioxide (CO 2 ), the EU is taking regulatory action to control them [4]. GWP measures how much energy 1 ton of a gas will absorb over a certain time period, relative to 1 ton of CO 2 [5]. Regulation (EU) 517/2014 outlines guidelines and information about reporting by companies. Directive 2006/40/EC (also known as the MAC (mobile air conditioning) Directive) is focused on the prohibition of using F-gases in mobile air conditioning, but it does not suggest any way to accomplish this goal [6]. F-gases are subject to a reporting duty. Facilities that produce, import or export 1 metric tonne or 100 tonnes CO 2 eq. or more of F-gases must report such information [7]. In the following chapters, European legislative measures are described in more detail, it is shown how they affect the composition of consumed F-gases in the Czech Republic, and information about newly used F-gases is added, along with information about their integration into the F-gas emission calculation model PHOENIX, which is a country-specific estimation model for F-gas emissions.
Newly applied F-gases change the results of the F-gas emissions in the Czech Republic and in this way also affect the total emissions of greenhouse gases. The PHOENIX model is continuously improved to provide accurate and transparent results. The paper provides an overview of the trends of F-gases used in the Czech Republic, including recent updates and changes following application of the newly developed F-gases with low GWP. Regulation (European Union, EU) No 517/2014 and Mobile Air Conditioning (MAC) Directive This regulation was published after approval by the European Parliament (EP) and the Council in April 2014. The Regulation is based on Regulation (EC) No 842/2006 on certain fluorinated greenhouse gases. The aim of the Regulation is to reduce emissions of F-gases, which have to be decreased by 72-73% by 2030 against the reference year of 1990. The regulation therefore provides rules on containment, use, recovery and destruction of F-gases and conditions on the placing on the market, complemented by quantitative limits [8]. The objective of the regulation could be achieved, e.g., by replacing current gases with high GWP by gases with lower impact or, preferably, with no impact on the climate. In this context, the regulation also provides rules for training responsible persons in the safe handling of alternative refrigerants, because some alternatives to F-gases have undesirable properties, such as a lower flash point and toxicity [8]. The measures which entered into force on 1 January 2020 affect a wide range of equipment. From this date, it is prohibited to place on the market: • Mobile room air-conditioning equipment that contains HFCs with GWP of 150 or more; • Refrigerators and freezers for commercial use that contain HFCs with GWP of 2500 or more (starting in 2022, the GWP limit will be 150); • Stationary refrigeration equipment that contains HFCs with GWP of 2500 or more. These measures have an impact on the use of the mixtures R-404a (GWP 3943 [4]) and R-507a (GWP 3985 [4]), which are mainly used in stationary commercial refrigeration [9]. The subject of the MAC Directive is to lay down the requirements for air-conditioning systems fitted into vehicles. The directive applies to passenger cars comprising no more than eight seats in addition to the driver's seat and to vehicles designed and constructed for the carriage of goods and having a maximum mass not exceeding 1305 kilograms. Since 1 January 2017, mobile air-conditioning systems in new vehicles, which are brought into service in the EU, must not contain gases with GWP of 150 or more [6]. Trends and Applications in the Czech Republic The base year for the reporting obligation of F-gases in the Czech Republic is 1995. F-gases are not produced in this country and all of them are imported. Data about direct import/export, use and destruction are obtained from ISPOP (Integrated system of reporting obligations), the F-gas register (Questionnaire on production, import, export, feedstock use and destruction of the substances listed in Annexes I or II of the F-gas regulation) and the Customs Administration of the Czech Republic [10]. The major share of F-gases (about 99%) is used for refrigeration and air conditioning systems; approximately 75% of F-gases are used for refrigeration and stationary air conditioning and 25% are used in mobile air conditioning. Manufacturers use a wide range of mixtures containing HFCs. In the past, mixtures containing PFCs were also used, but these mixtures have not been used since 2010 [10].
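As an illustration of how a blend such as R-404a relates to its component gases, the sketch below disaggregates it by mass. The 44/52/4 wt% composition of HFC-125/HFC-143a/HFC-134a is the standard published blend recipe and the component GWPs are IPCC AR5 values, neither of which is stated in this paper; reassuringly, the mass-weighted GWP reproduces the value of 3943 quoted above for R-404a.

    # Disaggregate the blend R-404a into its component gases, as is done
    # for mixtures in the national inventory. Composition and component
    # GWPs are assumed (standard recipe, AR5 values), not from this paper.
    R404A = {"HFC-125": 0.44, "HFC-143a": 0.52, "HFC-134a": 0.04}
    GWP = {"HFC-125": 3170, "HFC-143a": 4800, "HFC-134a": 1300}

    def split_blend(blend, total_mass_t):
        # mass of each component gas in tonnes
        return {gas: frac * total_mass_t for gas, frac in blend.items()}

    print(split_blend(R404A, 10.0))
    # ~{'HFC-125': 4.4, 'HFC-143a': 5.2, 'HFC-134a': 0.4} tonnes
    print(round(sum(R404A[g] * GWP[g] for g in R404A)))  # 3943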
Refrigeration and Stationary Air-Conditioning Systems ISPOP provides information about import, export, regeneration, destruction and the first placing on the market of F-gases. The ISPOP database contains the EU market data. The threshold for submitting data to ISPOP by importers, exporters and users is 0.1 metric tonne of F-gases. The F-gas register provides data about the imported, exported and disposed amounts of F-gases and also contains information about the average specific charge of equipment, the amount of imported, exported or disposed equipment and information about the specific use of the equipment. Information in the F-gas register is related to the trade between EU countries and non-EU countries, and the threshold for submitting data to the F-gas register is more than 1 metric tonne of F-gases. The threshold refers to the sum of F-gases, not each imported/exported gas separately. Customs data provide information about trading between the Czech Republic and the global market. These data provide information about imported/exported products and containers of fluorinated greenhouse gases; the information is classified according to the combined nomenclature, which is regularly updated [10]. The global market is covered in the inventory since the data sources cover trade between the Czech Republic and EU countries as well as non-EU countries. Verification of the data by each importer/exporter/user of F-gases in all the data sources is a very important step in the process of inventory preparation, because it is necessary to avoid double counting [10]. In the national inventory, mixtures are disaggregated into their individual gases. The trend of use of particular F-gases is depicted in Figure 1. As can be seen, there was a significant decrease in the use of HFC-125, and HFC-143a did not even appear on the market in 2018. These two gases have high GWP, and their decrease is based on the fact that manufacturers are preparing for the limitation of these gases and their mixtures. As can be seen in the second graph in Figure 1a,b, although the amount of F-gases used in 2018 is similar to that in 2017, the gases used have a smaller impact on global warming. The EU regulations force manufacturers to find alternatives to F-gases with high GWP. One option for them is to develop new mixtures with lower GWP. Another option is to use natural refrigerants, such as ammonia, isobutane, propane and CO2, which have already been used in the past [11]. The coming years will show which option is more convenient for manufacturers; nevertheless, we can expect a gradual decrease in the use of F-gases in this field. Mobile Air Conditioning A different approach than for refrigeration and stationary air conditioning is used for estimation of the amount of F-gases employed in mobile air conditioning. The data collection is based on knowledge of the number of vehicles, the percentage of vehicles with air conditioning and the average amount of refrigerant in these vehicles. Data about production are obtained from the Automotive Industry Association. These data contain the production figures for the Czech automobile industry since 1995. Three car producers (ŠKODA AUTO Inc., Hyundai Motor Manufacturing Czech Ltd. and TPCA), bus producers (SOR Libchavy Ltd., Iveco Czech Republic Inc. and others) and one truck producer (TATRA TRUCKS Inc.) are currently operating in the Czech Republic. More detailed data about production (e.g., production of particular models) are obtained directly from ŠKODA AUTO Inc.
and TPCA, whose production covers approximately 85% of all new passenger cars produced in the Czech Republic. Knowledge of the production of particular models makes determination of the average initial charge more accurate. The initial charge of passenger cars decreased over the years from 750 g per unit to 500 g per unit [10]. The only refrigerant used for mobile air conditioning from 1995 to 2015 was HFC-134a (GWP 1300 [4]). Since 2015, it has been prohibited to use this gas in mobile air conditioning; manufacturers have started filling cars for the EU market with hydrofluoroolefin (HFO)-1234yf, and this gas has been the main refrigerant in mobile air conditioning since then. This change is the result of implementation of the MAC Directive. It is assumed that HFO-1234yf will not be replaced by some other refrigerant in the next few years, so its consumption will depend solely on the number of cars produced. As can be seen from the second graph in Figure 2a,b, the impact of HFO-1234yf on global warming is more than a thousand times smaller than the impact of HFC-134a. A more detailed comparison of these two gases is presented in the following section. Newly Employed F-Gases in the Czech Republic HFO-1234yf and HFO-1234ze are newly used as alternatives to HFC-134a. Both new gases have an olefinic structure and low GWP; because of their structure, they are called hydrofluoroolefins (HFOs) [12]. There are some apparent similarities between HFO-1234ze(E), HFO-1234yf and the older F-gas HFC-134a, summarized in Table 1. Their chemical structures are also similar; a slight difference is caused by the position of the F atom in the olefinic part of the molecule (which is shown in Figure 3). Because of their harmful potential, the American Industrial Hygiene Association (AIHA) set workplace environmental exposure levels (WEEL) of 800 ml·m−3 for HFO-1234ze and 500 ml·m−3 for HFO-1234yf per 8-hour shift [13]. Other thermodynamic properties of the mentioned F-gases are captured in Table 1 [14][15][16][17]. HFO-1234yf belongs to the fourth-generation refrigerants. Its GWP value is smaller than 1 [4], which is very low compared, e.g., with HFC-134a, whose GWP is 1300 [4]. Both these F-gases have quite similar vapour pressures under laboratory conditions and also similar boiling points and critical points. HFO-1234yf is only poorly combustible but is classified as slightly flammable. The flammability of the gas is a reason for concern, and this behaviour of HFO-1234yf is the reason why greater specialisation is required of employees working with it, such as safety training in the handling of flammable substances [18]. HFO-1234yf shows slight activity in the Ames test, which is used to test substances for mutagenic and carcinogenic activity. Further investigations did not find any mutagenic activity. Thus, HFO-1234yf does not cause any genetic changes at the doses used in all the tests performed in the study by [19]. It has very low GWP and consequently it is used more often, as it has only a small or no impact on the Earth's climate. HFO-1234ze has been tested for its toxic and harmful potential for humans. The research of Rusch and colleagues (2012) [22] addressed the inhalation toxicity of HFO-1234ze, showing that the effective dose is very high, i.e., HFO-1234ze has very low toxic potential. Nonetheless, they found that its target organs are the liver and kidneys, in which some histopathological changes were found.
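Returning to the first-fill estimate for mobile air conditioning described above, the calculation reduces to simple arithmetic. In the sketch below, the production count and air-conditioning share are hypothetical placeholders; only the 500 g average initial charge per unit comes from the text.

    # Illustrative estimate of refrigerant consumed for first fills of new
    # passenger cars. Vehicle count and AC share are made-up placeholders;
    # the 500 g average initial charge per unit is taken from the text.
    cars_produced = 1_400_000   # hypothetical annual production
    ac_share = 1.0              # assume every new car has air conditioning
    charge_kg = 0.500           # average initial charge per unit

    first_fill_t = cars_produced * ac_share * charge_kg / 1000
    print(first_fill_t)         # 700.0 tonnes charged into new cars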
Mixtures At the present time, manufacturers can choose between mixtures which were used less often in the past, or mixtures with new gases. R-407 represents mixtures which have long been available but were of little interest to manufacturers until the present time. Its GWP is still rather large; however, this mixture has a broad range of applications. It is used in commercial, industrial and transport refrigeration. R-449a, which includes HFO-1234yf, is used in the same area. Mixture R-452a, also with HFO-1234yf, is used chiefly in transport refrigeration. Mixtures that replace HFC-134a still contain this gas, to which new kinds of gases are added. These include HFO-1234yf for R-513a and HFO-1234ze for R-450a. These mixtures are used in medium- and high-temperature commercial and industrial refrigeration, air conditioning and heat pumps. Table 2 shows an overview of the most widely used replacement mixtures in the Czech Republic. Other newly developed mixtures, which have not yet been introduced on the market, commonly combine formerly used HFCs with new HFOs in various ratios. Calculation of Emissions from Refrigeration and Stationary Air Conditioning in the Czech Republic Emissions from refrigeration and stationary air-conditioning systems are estimated with the national PHOENIX calculation model defined according to the methodology developed by the Intergovernmental Panel on Climate Change (IPCC). In the inventory, five sub-applications are defined: commercial, domestic, industrial and transport refrigeration and stationary air conditioning [24]. When calculating emissions of F-gases, the time lag between consumption and emissions is taken into account. The time lag results from the fact that a chemical placed into a new product may leak out only slowly over time, possibly not being fully released until end-of-life. Because of this, and depending on data availability, emissions can be estimated in various ways with varying degrees of complexity and data intensity [24]. In the Czech Republic, Tier 2a, called the emission-factor approach, is used. It takes into account the national and regional regulations governing the use of F-gases and defines the emission factors for the refrigerant charge, during operation, at servicing and at equipment end of life [24]. The calculation model is divided into four main parts: input, divider, emission estimates and output. The input contains emission factors, legislative measures and annual data about consumption of F-gases for the initial filling of new equipment and for servicing equipment in use. Collection of the annual consumption data is described in Section 3.1 and the legislative measures are described in Section 2. Emission factors are defined for each sub-application and life-cycle stage of the gas. The emission factors used for emission estimates are shown in Table 3. Their selection should be based on national information provided by manufacturers, service providers, disposal companies and other organizations. However, obtaining such detailed information is very difficult under the current state of administration in the Czech Republic, and thus the emission factors are based on expert judgement within the default ranges proposed by IPCC 2006 Gl., Table 7.9 [24]. Each gas is divided into six groups according to its area of application. The percentage share of each gas in the area of application, as can be seen in Table 4, is currently based on sectoral expert judgement, which is supported by the data obtained from the Association of Refrigeration and Air Conditioning.
As we can see, F-gases are mostly used for commercial refrigeration and have not been used for domestic refrigeration since 2015. For the 2018 emission estimates, two new gases were included in the calculation model: HFO-1234yf and HFO-1234ze. Their distribution by application area is based on information about the distribution of the gases and mixtures which these two gases replace. Emissions from the first fill of new equipment, E first fill,t, are calculated according to the equation: E first fill,t = M t × k, where M t is the amount of chemical used for the first fill and k is the emission factor for assembly losses charged into new equipment (see Table 3). Emissions during the lifetime, E lifetime,t, are calculated according to the equation: E lifetime,t = B t × x, where B t is the amount of chemical banked in the system and x represents the annual emission rate (see Table 3). B t is calculated as: B t = Σ S t, summed over the years the equipment remains in operation, where S t is the amount of chemical charged into equipment (amount of gas consumed in year t). Emissions at the end of life, E end of life,t, are calculated according to the equation: E end of life,t = H t × (1 − η rec,d), where η rec,d is the recovery efficiency at disposal (see Table 3). H t is the amount of chemical remaining in the system at decommissioning and is calculated by using a Gaussian model with mean at the lifetime expectancy [25]. As can be seen in Figure 4, although the trend in F-gas consumption varies, emissions are consistently increasing, which is caused by the time lag mentioned above. In the coming years, we can expect no change in the increasing trend, since the effect of reducing F-gas consumption will appear with a delay. Concluding Remarks Because F-gases have been used as substitutes for ozone-depleting substances, their consumption has been increasing strongly since 1995. The main sector where F-gases are employed is refrigeration and air-conditioning systems. Because of the large global warming potential of F-gases, the EU has adopted legislative measures to prevent a further increase in their emissions. Alternative solutions to F-gases are already on the market. On the basis of the collected data, it can be seen that consumption of F-gases stopped increasing even before the main part of the EU legislative measures came into force. Therefore, in the coming years, especially from 2020, we expect a gradual decrease in the consumption of F-gases in refrigeration and stationary air-conditioning systems. No decrease is expected in mobile air-conditioning systems. A more important factor than consumption itself is the composition of the consumed F-gases, because there can be big differences in the impact on global warming between individual gases. Here, we can see great progress. In recent years, manufacturers started using new gases, especially HFO-1234yf with very low global warming potential, which is the main refrigerant in mobile air conditioning at the present time, and we can expect that it will also be used more widely in other sectors in the coming years. Currently, HFO-1234yf is mainly used in transport refrigeration (except mobile air conditioning), whereas HFO-1234ze is mainly used in commercial refrigeration. Since HFO-1234yf and HFO-1234ze will certainly become an integral part of the refrigeration industry, these two gases were implemented into the PHOENIX model, the country-specific estimation model for F-gas emissions. The base year of their use in refrigeration and stationary air conditioning is 2018. Newly applied F-gases change the results of the F-gas emissions in the Czech Republic and in this way also affect the total emissions of greenhouse gases.
However, since the time lag between consumption and emissions is taken into account, changes in the emission trend will appear with a delay. The new F-gases have lower global warming potential than those used earlier, which is a great benefit. But they are also subject to some scepticism, since their characteristics have not yet been examined in sufficient detail. One already known problem is their flammability, which ranks them in category A2L. Category A2L compounds are slightly flammable and exhibit low toxicity. Their low toxicity has been demonstrated by their high effective doses in toxicological experiments conducted on mammals (mice, dogs, etc.). This toxicological information was also employed for adjustment of WEEL values, which are higher than 400 ml·m−3 for the newly employed F-gases [26]. Despite their low toxic potential, they should not be taken lightly. Harmful effects could still emerge in the future, and better information will be gained from future epidemiological studies.
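To make the Tier 2a bookkeeping described above reproducible, the following minimal Python sketch implements the four emission equations from the calculation section; all numerical inputs are placeholders, since the Table 3 emission factors are not reproduced in this text.

    # Tier 2a emission bookkeeping for one sub-application in year t.
    # k, x and eta_rec_d stand in for the Table 3 emission factors.
    def e_first_fill(M_t, k):
        return M_t * k                  # E_first fill,t = M_t * k

    def e_lifetime(B_t, x):
        return B_t * x                  # E_lifetime,t = B_t * x

    def e_end_of_life(H_t, eta_rec_d):
        return H_t * (1 - eta_rec_d)    # E_end of life,t = H_t * (1 - eta)

    # Placeholder data: tonnes of chemical and fractional emission factors.
    M_t, B_t, H_t = 50.0, 400.0, 20.0
    k, x, eta_rec_d = 0.005, 0.05, 0.7
    total_t = e_first_fill(M_t, k) + e_lifetime(B_t, x) + e_end_of_life(H_t, eta_rec_d)
    print(round(total_t, 2))            # 26.25 t emitted in year t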
2020-05-07T09:17:17.472Z
2020-04-30T00:00:00.000
{ "year": 2020, "sha1": "02b696ecae36350f215be57f2c6f8c5bed844775", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4433/11/5/455/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a9bc0e824547e47fa4fe6c27d77585d7c5bb0a0a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
246541754
pes2o/s2orc
v3-fos-license
A Generalized Kinetic Model for Compartmentalization of Organometallic Catalysis Compartmentalization is an attractive approach to enhance catalytic activity by retaining reactive intermediates and mitigating deactivating pathways. Such a concept has been well explored in biochemical and, more recently, organometallic catalysis to ensure high reaction turnovers with minimal side reactions. However, a scarcity of theoretical frameworks for confined organometallic chemistry impedes a broader utility for the implementation of compartmentalization. Herein, we report a general kinetic model and offer design guidance for a compartmentalized organometallic catalytic cycle. In comparison to a non-compartmentalized catalysis, compartmentalization is quantitatively shown to prevent unwanted intermediate deactivation, boost the corresponding reaction efficiency (γ), and subsequently increase the catalytic turnover frequency (TOF). The key parameter in the model is the volumetric diffusive conductance (F_V) that describes catalysts' diffusion propensity across a compartment's boundary. Optimal values of F_V for a specific organometallic chemistry are needed to achieve maximal values of γ and TOF. Our model suggests a tailored compartment design, including the use of nanomaterials, is needed to suit a specific organometallic catalysis. This work provides justification and design principles for further exploration into compartmentalizing organometallics to enhance catalytic performance. INTRODUCTION Compartmentalization has been well documented in the biochemical literature as one method for achieving efficient in vivo tandem catalysis by encapsulating enzymes in well-defined micro- and nano-structures. [1][2][3][4][5][6][7] By controlling the diffusion of species in and out of compartment boundaries, nature is able to retain reactive or toxic intermediates, increase local substrate concentration, and mitigate deactivating or competing pathways. [1][2][3][4][5][6][7] For example, carboxysome microcompartments enhance the rate of CO2 fixation by encapsulating the cascade of carbonic anhydrase and ribulose 1,5-bisphosphate carboxylase/oxygenase to generate a high local concentration of CO2 and exclude deactivating O2 within their polyhedral structures. 8,9 Also, the last two steps of tryptophan biosynthesis -the conversion of indole-3-glycerol-phosphate to indole and then to tryptophan -take advantage of the substrate-channeling effect bestowed by compartmentalized subunits of tryptophan synthase. 10,11 Here, a hydrophobic tunnel between the two subunits retains the indole intermediate, which prevents its free diffusion and participation in deactivating side reactions. 10 With billions of years of evolution, compartmentalization appears to be the mainstay of biology to manage the complex network of biochemical reactions that are frequently competing and incompatible with each other in a homogeneous solution. The success of natural compartmentalized enzyme cascades inspires the development of bio-mimetic synthetic catalysis, with organometallic chemistry being the latest frontier. Multiple groups have employed well-defined spatial organization at the nano- and microscopic levels to construct in vitro biocatalytic and organometallic cascades with enhanced catalytic performance. 2,3,[12][13][14][15][16] Encapsulating NiFe hydrogenase in virus capsids improves its proteolytic and thermal stability as well as enhances the rate of H2 production.
12 Confining a biochemical cascade of β-galactosidase, glucose oxidase, and horseradish peroxidase in metal-organic frameworks led to an enhancement of reaction yield in comparison to a freely diffusing analogue. 13,14 The extent to which reaction yields are enhanced in confined enzyme cascades is reported to correlate with the distance among active sites, suggesting that spatial organization or localization of catalysts is beneficial in tandem or cascade reactions. 15 In addition to biocatalysis, compartmentalization of organometallic catalysts has recently been experimentally demonstrated. [17][18][19][20][21][22][23] For example, our group employed a nanowire-array electrode to pair seemingly incompatible CH4 activation based on an O2-sensitive rhodium(II) metalloradical (Rh(II)) with O2-based oxidation for CH3OH formation. 17,24 The application of a reducing potential to the nanowire array electrode created a steep O2 gradient within the wire array electrode, such that an anoxic region was established at the bottom of the wires. As a result, a catalytic cycle was formed in which the air-sensitive Rh(II) activated CH4 in the O2-free region of the wire array electrode, while CH3OH synthesis proceeded in the aerobic domain. Moreover, when a planar electrode (no anoxic region) was substituted for the nanowire array, CH4 activation and CH3OH generation were negligible. 17 The retainment of the ephemeral Rh(II) intermediate by the nanowire electrode for catalytic CH4-to-CH3OH conversion 17,24 encourages us to further explore the design principles of compartmentalizing organometallic cascades for higher turnovers with mitigated deactivation pathways. We envision that a theoretical framework for organometallic catalysis will expand the use of compartmentalization for organometallic chemistry. In biochemistry, mathematical modeling of confined enzyme cascades has been well developed and offers design principles for natural systems 11,25 and for engineered bio-compartments. 11,16,25,26 The models pinpoint a key parameter, the volumetric diffusive conductance (F_V), which describes the diffusion propensity across a compartment's boundary. F_V is determined by a compartment's surface-to-volume ratio and its boundary's permeability. 26,27 An optimal value of F_V tailored to the specific biochemical reactions is needed in order to achieve better reactivity in comparison to the non-compartmentalized alternative. Similarly, we note that further development of compartmentalized organometallic chemistry demands a quantitative design principle applicable to a model catalytic cycle that includes oxidative addition (OA), isomerization/migratory insertion (Iso/MI), and reductive elimination (RE) along with undesirable deactivation pathways (Figure 1A). Yet there has been a paucity of theoretical treatment despite progress in experimental demonstration. 17-23 Such a lack of theoretical treatment motivates us to establish a general kinetic model and quantitatively investigate how compartmentalization will affect the competing reaction pathways and the corresponding turnover of the desired organometallic catalysis. Here we report a general kinetic model and offer design guidance for a compartmentalized organometallic catalytic cycle. We took advantage of the established theoretical frameworks in biochemistry 16,25,26 and applied such kinetic frameworks to a model compartmentalized organometallic cycle with competing deactivation pathways (Figure 1A), 28 and an analogous non-compartmentalized cycle (Figure 1B).
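For a rough sense of scale, the following sketch estimates F_V for a spherical compartment using the simplified form F_V ≈ P·(A/V); the permeability and radius below are hypothetical values, chosen only to show that nanoscale dimensions naturally give F_V of order 10^2 s−1, comparable to the 320 s−1 case discussed later in this paper.

    # Order-of-magnitude estimate of the volumetric diffusive conductance
    # of a spherical compartment, F_V ~ P * (A/V). Both inputs are
    # hypothetical and are not taken from this paper.
    P = 1e-3                     # boundary permeability, cm/s (assumed)
    r = 1e-5                     # compartment radius, cm (100 nm; assumed)

    surface_to_volume = 3.0 / r  # A/V of a sphere = 3/r
    F_V = P * surface_to_volume  # units of s^-1
    print(F_V)                   # 300.0 s^-1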
We examined three metrics of the catalytic cycle in Figure 1C: 1) the reaction efficiency (γ) that gauges the percentage of intermediates funneled towards desirable catalytic turnover over deactivation pathways, 2) the flux of catalytic intermediates out of the compartment to be deactivated (J_d), and 3) the turnover frequency (TOF) that measures the steady-state catalytic rate despite intermediate deactivation. A compartmentalized system can significantly outperform a homogeneous counterpart with respect to γ and TOF, with a lower value of J_d, at F_V values smaller than the intrinsic kinetics of the organometallic cycle in question. We showcased how the developed model can serve as a guiding design principle for specific organometallic catalysis for maximal γ and TOF. The established kinetic model can be adapted to suit a plethora of catalytic cycles or materials-based compartments, offering a framework to be expanded on for advanced compartmentalization of chemical catalysis. Establishing a general kinetic framework of compartmentalization for an organometallic catalytic cycle We modelled a three-step catalytic cycle consisting of oxidative addition (OA), isomerization/migratory insertion (Iso/MI), and reductive elimination (RE) steps in the context of a compartmentalized system with multiple deactivation pathways in the solution (Figure 1A). We assign the volumetric diffusive conductance (F_V) to quantitatively describe the extent of mass transport, predominantly diffusion-based, between the compartment and the surrounding bulk solution. As a measure of diffusion across the compartment's boundary, F_V equals the product of the compartment boundary's permeability (P) and its total surface area (A), normalized by the volume (V) of the corresponding compartment and Avogadro's number (N_A), i.e., F_V = P·A/(V·N_A) (Figure 1C). 26 F_V describes a molecule's tendency to contribute a diffusion flux across the compartment under a given concentration difference across the compartment's boundary. As P is linearly proportional to the species' diffusion coefficient (D) and inversely proportional to the length of the diffusion path across the boundary, 35 P, A, and V depend on not only the compartment's geometric dimensions (for A and V) but also the materials' properties of the compartment (for P). F_V thus governs both the retention of catalytic intermediates within the compartment and the deactivation rate of intermediates that escape (J_d). Moreover, in both compartmentalized and non-compartmentalized scenarios, we aim to analyze the rate of reaction (in the form of TOF) and the efficacy of transforming the substrate into the targeted product (in the form of γ), the latter defined as the percentage of intermediates funneled towards desirable catalytic turnover. 16,26 In both cases, γ is calculated as the ratio between the formation rate of the product and the consumption rate of the substrate. In the case of pseudo-first-order kinetics towards the substrate in oxidative addition (m = 1), γ, J_d, and TOF in a compartmentalized system can be expressed analytically, while the corresponding quantities in a non-compartmentalized scenario are denoted γ′, J_d′, and TOF′. When the rate constant of oxidative addition k_1 is small relative to the deactivation rate constant k_d1, deactivation dominates, leading to a higher J_d (Figure 2B) and a lower γ value (Figure 2C); alternatively, when k_1 is much larger than k_d1 and the deactivation step is less relevant, γ plateaus towards unity with a concomitant increase in TOF. Despite the dominant role of k_1, whether or not the system is compartmentalized strongly affects the values of γ, J_d, and TOF (Figure 2D-F). While the trend is generally applicable for all values of F_V, a specific case when F_V
= 320 s−1, corresponding to the nanowire array electrode for CH4-to-CH3OH conversion in our previous work, 17 illustrates under which situation the advantages of compartmentalization will be observed. As the value of k_1 increases, the reaction efficiency (red trace in Figure 2D) increases in a sigmoidal fashion as k_1 approaches the value of F_V with compartmentalization, while in the non-compartmentalized case (black trace in Figure 2D) γ remains limited by the deactivation pathways. The above-noted effects can be mathematically justified based on our derived equations. When the value of F_V is similar to or even larger than k_d1 or k_d2 (F_V ≳ k_d1 or k_d2), this will lead to γ ≈ γ′, i.e., the reaction efficiency is not significantly altered with compartmentalization in comparison to the non-compartmentalized case. Alternatively, when F_V ≪ k_d1 or k_d2, the limiting forms of the derived equations suggest that optimal, near-unity reaction efficiency γ, high TOF, and low J_d values would be obtained when F_V ≪ k_1 and k_2, which is consistent with our observations in Figure 2. Lastly, we explored J_d and TOF as functions of F_V and k_1 when accounting for the total catalyst concentration (model_total), as well as γ as a function of F_V and k_1, and of F_V alone, in this alternate scenario. In comparison to the plots in Figures 2C and 3C, Figures S5B and S6B also predict γ to decrease exponentially with F_V and increase with k_1, with a difference in the actual value of γ when accounting for model_total. Figure S7 displays J_d and TOF as functions of k_1 alone; the corresponding non-compartmentalized metrics for model_total are also plotted. Similar to Figure 3B, the compartmentalized J_d (Figure S7A) is predicted to be much smaller than the non-compartmentalized J_d at high k_1. However, at low k_1, it is predicted that a compartmentalized system will have a greater J_d when accounting for model_total, suggesting that compartmentalization may be marginally detrimental in this regime. Compartmentalization should be considered when k_1 < k_d1 and k_2 < k_d2, i.e., when the intrinsic catalytic reactivity cannot outcompete the deactivation pathway. The efficacy of compartmentalization will be observable as long as the compartment's volumetric diffusive conductance F_V is much smaller than k_d1 or k_d2 (F_V ≪ k_d1 or k_d2). Nonetheless, one interesting conclusion from our analysis is that maximal efficacy of compartmentalization (reaction efficiency γ → 1) demands F_V to be smaller not only than the rate constants of the deactivation steps (k_d1 and k_d2) but also than the rate constants of the steps in the catalytic cycle (k_1 and k_2). This requirement for maximal γ stems from the fact that a "leaky" compartment with large F_V is not sufficient to conserve the yielded intermediates, which remain prone to deactivation. Practically, such a requirement is indeed a blessing for organometallic chemistry. As typical organometallic studies do not commonly characterize the deactivating side reactions, detailed kinetic information for compartment design is often lacking, since the values of k_d1 or k_d2 would be needed to determine the range of desirable F_V values. However, because we posit that designing a confined catalytic cycle to have F_V < k_1 and F_V < k_2 is sufficient for a compartment to "revive" a proposed, unfunctional catalytic cycle, future design of compartmentalization can be simplified. The feasibility of obtaining the range of F_V from the kinetics of the proposed catalytic cycle offers more guidance for the materials design of the compartment. As F_V equals the product of the compartment boundary's permeability (P) and its total surface area (A), normalized by the volume (V) of the corresponding compartment, 26 multiple synthetic handles could be applied to achieve a desirable F_V
value. A less permeable interface at the boundary of the compartment, as well as a smaller surface-to-volume ratio, will help to reduce the mass transport and hence the value of F_V. Characterization techniques that help determine encapsulation geometry and assess permeability, such as electron microscopies and chromatographic methods, should be welcomed for more detailed mechanistic investigations in experimental demonstrations. [46][47][48][49] One interesting result from this argument is that a compartment of extremely small dimension, for example of nanoscopic scale, may not necessarily be beneficial, since nanoscopic dimensions can create an equivalently "leaky" compartment when normalized to the compartment volume. Careful design is recommended before experimental implementation. Last, we caution that our established model only considers the mass transport of catalysts and assumes an unconditionally fast supply of the substrate and quick removal of the product. While such assumptions have their real-life correspondence under certain circumstances (vide supra), the established model is incapable of accounting for possible mass-transport limitations from substrate and products, which could be induced by a small F_V value recommended by the model. Given that, we caution that a lower bound of F_V exists for optimal performance in practical applications, and an unnecessarily small value of F_V could be detrimental to the compartment design. CONCLUSION Here we have developed a kinetic framework for compartmentalizing organometallic catalysis, using a classical three-step cycle consisting of OA, Iso/MI, and RE in that order. Under the same kinetic and diffusive parameters, the kinetic model predicts that key reaction metrics, derived from solving steady-state equations of the catalytic species, are significantly enhanced versus a homogeneous counterpart. Furthermore, we demonstrated that careful design of a structured material to produce ideal volumetric diffusive conductance (F_V) values in relation to the kinetic parameters is a viable approach to optimization, by plotting key reaction metrics as functions of F_V and the rate constants k_1 and k_2. From this, we conclude that confinement essentially induces a reaction to compete with influx and outflux instead of deactivation, provided deactivating media are adequately barred from the compartment. As diffusion into and out of a compartment can be tuned by confinement geometry, this offers a clear handle for optimization that freely diffusing systems do not possess. We also derived an additional kinetic framework accounting for both the total catalyst concentration and the catalyst in the bulk (model_total), which ultimately yielded the same conclusions as when not accounting for the total catalyst concentration. Lastly, we offered insight into accounting for various approaches to compartmentalization, where a rigorous definition of confinement will be instrumental. The results from this study will assist in the a priori design of compartmentalized organometallics for enhanced catalytic performance.
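As a deliberately simplified, self-contained illustration of the Conclusion's central point (that confinement makes the productive step compete with escape at rate F_V rather than with deactivation), the following Python sketch uses a single-intermediate branching approximation with entirely hypothetical rate constants; it is not the authors' full steady-state model.

    # Branching approximation: inside a compartment that excludes the
    # deactivating species, an intermediate either turns over (k2) or
    # escapes (F_V); in bulk solution it either turns over (k2) or is
    # deactivated (kd2). All rate constants below are hypothetical.
    def gamma_compartment(k2, F_V):
        return k2 / (k2 + F_V)      # escape is the only loss channel

    def gamma_bulk(k2, kd2):
        return k2 / (k2 + kd2)      # deactivation competes directly

    k2, kd2 = 1.0e3, 1.0e5          # s^-1; deactivation outcompetes turnover
    for F_V in (1.0e1, 3.2e2, 1.0e4, 1.0e6):
        print(F_V, round(gamma_compartment(k2, F_V), 3))
    # gamma approaches 1 only when F_V << k2 (~0.99 at F_V = 10 s^-1),
    # while the homogeneous value stays pinned near 0.01:
    print(round(gamma_bulk(k2, kd2), 4))  # 0.0099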
2022-02-05T16:43:40.024Z
2021-09-08T00:00:00.000
{ "year": 2021, "sha1": "adce9a1f1ce2a78e9026be1b5d46fdaf15802c3b", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/sc/d1sc04983f", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4fa2dfed92fccea1fe5db75eebbe84f29efc8d9e", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
232128764
pes2o/s2orc
v3-fos-license
Assessment of areca nut use, practice and dependency among people in Guwahati, Assam: a cross-sectional study Background Areca nut is the fourth most commonly used psychoactive substance worldwide after tobacco, alcohol and caffeine. In India, it is perceived in various ways, ranging from a 'fruit of divine origin' in Hindu religious ceremonies to a mouth freshener. Areca nut use, both on its own and with tobacco additives, is addictive. The aim of this study was to understand the pattern of areca nut consumption, to determine the Knowledge, Attitude and Practices (KAP) among areca nut users and the dependency associated with areca nut use. Methods A cross-sectional study was conducted in Guwahati, Assam using a self-administered questionnaire eliciting the pattern of areca nut consumption, KAP among users and their dependency, assessed using the Betel Quid Dependence Scale. Chewers of areca nut alone, with or without betel quid, gutkha and tobacco, participated in the study. Areca nut users were selected using a purposive sampling method from the vendor shops of all four assembly areas of the city. Participation was voluntary, and respondents were free not to answer or to quit the survey. The data were analysed using SPSS software. Results A total of 500 participants were approached in all four areas, of whom 479 completed the survey (response rate 95%). The people who participated in the study were mostly male with an average age of 40 years, educated to secondary level or higher, married and self-employed. Betel quid with tamul was the most prevalent form of areca nut chewing in both men and women. About 441 (92%) participants experienced pleasure when chewing areca nut and 327 (68%) chewed it to relieve stress. Only 86 (18%) of subjects had ever tried to quit chewing areca nut and 387 (81%) thought that it was highly addictive. The results revealed relatively high levels of endorsement for 'physical and psychological urgent need' (mean = 43%) and 'increasing dose' (mean = 50%), whereas the endorsement level for 'maladaptive use' was low (mean = 16%). Conclusion Areca nut use (tamul) is of major concern in India and many Southeast Asian countries, and its use has been increasing across the globe. The evidence suggests a dependence similar to tobacco use, and policy makers need to refine their strategies for controlling its use by engaging with multiple stakeholders and adapting them to the local context, with surveillance and cessation guidelines, in order to address this issue. Background Areca nut is estimated to be used by 600 million people, particularly among South-east Asian communities [1][2][3][4]. Epidemiological surveys have estimated areca nut use in 20%-40% of the population above the age of 15 years in India, Nepal and Pakistan [5]. Areca nut, often referred to as the betel nut (as it is commonly chewed along with the Piper betel leaf), is the seed of the fruit of the oriental palm [4,6]. The areca nut and juice play an important ceremonial and cultural role in many countries including Myanmar, the Solomon Islands and Vietnam. It is common practice to offer these products to guests at important social gatherings, weddings and other religious events [3,4]. Due to this cultural tradition, the use of the areca nut is widespread and considered a part of daily life, even among women and young children. In southern Asia, it is perceived to have medicinal values [4], including as an aphrodisiac, appetite suppressant, digestive aid and diuretic.
Areca nut is often used in India as an alternative means of treatment in many diseases such as asthma, cough, dermatitis (topical), fainting, glaucoma, impotence, intestinal worms, leprosy, toothache, leucorrhoea (vaginal discharge) and vaginal laxity [4]. It is often used as self-medication for the alleviation of symptoms. Areca nut can be chewed raw or processed by roasting, drying in the sun, soaking or boiling prior to chewing [6]. Other than betel nut, areca has also been known as 'kwai' and 'gue' in Meghalaya. In Assam and Nagaland, it is known as 'tamul', whereas in Manipur and Mizoram it is 'kua' and 'kuhva' [6]. In India, different varieties of areca nut preparations include Neetadaka, Chali or kottapak, Parcha (Pareha), Kalipak, Iylon, Nayampak, Nuli, Supari, Tamol and Bura Tamul. Chikani and Bura tamul are commonly used forms in Assam. Bura Tamul is prepared by preserving ripe fresh nuts for 3 to 4 months covered with bark from the betel tree, cow dung and soil, which results in a moist chew [6,7]. In this paper, the terms tamul and areca nut are used synonymously. In its most basic form, betel quid is a combination of betel leaf, areca nut and slaked lime (aqueous calcium hydroxide paste). Betel quid has a relaxing and stimulant effect by acting on the autonomic nervous system [1]. If tobacco is not added, areca nut is the main psychoactive substance in the betel quid [5]. Areca nut contains several alkaloids belonging to the pyridine group, the most important being arecoline. The others are arecaidine, guvacine and isoguvacine [4]. Nitrosated derivatives of areca alkaloids are associated with an increased risk of cancer [8]. Betel quid has been classified as a Group 1 carcinogen by the International Agency for Research on Cancer, while chewing areca is the single most important aetiologic factor for the development of oral submucous fibrosis. It has also been shown to increase the risk of development of oral squamous cell carcinoma, especially when the quid contains tobacco [4]. Malignant transformation of Oral Submucous Fibrosis (OSF) has been observed in 3% to 7.6% of cases [9,10]. The stimulant and anxiolytic effects of areca have been associated with escalation of its use and dependency. However, disentangling the independent effects of areca addiction is challenging, given that many users concomitantly use tobacco [2]. Overall, areca nut plays multiple important roles in the socio-cultural context and economic conditions of the people in Assam. The aim of the study was to understand the pattern of areca nut consumption in Assam, India, to determine the knowledge, attitude and practice (KAP) among areca nut users and their dependency on areca nut. Methodology A cross-sectional study was conducted among areca nut users in the city of Guwahati (Assam) during March-April 2018. Guwahati (Dispur) is the capital of Assam and the largest city in North-East India. The city is divided into four legislative assemblies (Dispur, Jalukbari, Guwahati East and Guwahati West). Participants were included from all these four areas. A non-probability purposive sampling method was planned to produce a sample that can be assumed to be representative of the population having a shared characteristic. The sample size was calculated based on the following assumptions: 50% prevalence, 95% confidence level and 5% margin of error, using the formula N = 4pq/d². The estimated sample size required was 384 subjects.
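As a quick check on the stated figure (the conventional Z = 1.96 for a 95% confidence level is assumed here, as it is not spelled out in the text), note that the '4' in N = 4pq/d² is the usual rounding of Z² = 3.84:

    # Sample size for estimating a proportion: N = Z^2 * p * q / d^2
    Z, p, q, d = 1.96, 0.5, 0.5, 0.05
    N = Z**2 * p * q / d**2
    print(N)   # 384.16 -> 384 subjects, matching the figure quoted above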
Chewers of areca nut alone, with or without betel quid, gutkha and tobacco, were considered for this study. Twenty-six vendor shops across all four areas were approached, and users who agreed to participate in the survey were included. All the participants were briefed about the aims, objectives and tools for around 2 minutes and were told that they were free to quit at any time. Participants' correct understanding was confirmed, and they were encouraged to complete every item in order without skipping any. The principal investigator, assisted by three interns from the Tata Institute of Social Sciences, Guwahati, collected the data. It took 20-25 minutes to complete one questionnaire. A pre-tested close-ended questionnaire was used to elucidate the pattern of consumption. Special focus was on assessing three domains of learned behaviour towards tamul, viz. knowledge (cognitive), attitude (affective) and practice (psychomotor). The questionnaire was used to understand the prevalence and KAP pertaining to areca nut use, with dependence assessed using the Betel Quid Dependence Scale (BQDS). The validated BQDS is a 16-item scale that employs a dichotomous outcome (Yes/No), used to assess dependence on areca nut use. The BQDS has a three-factor structure focussing on 'physical and psychological urgent need' (seven items), 'increasing dose' (five items) and 'maladaptive use' (four items) [9]. Factor scores on the BQDS were scaled to range from 0 to 1, so that each score represented the proportion of items endorsed (e.g. a score of 0.50 would mean that half of the items were endorsed). The questionnaire was translated into Assamese and Hindi. Informed consent was sought from the eligible participants before data collection in their preferred language, either Assamese or Hindi. Confidentiality (no identifiers) and auditory privacy were maintained during and after the study. Statistical analysis Survey data analysis involved descriptive statistics of the participants' demographics, areca nut use, knowledge and attitude, and BQDS score. Point estimates were calculated with 95% confidence intervals for all measures (percentages). SPSS version 15 was used to analyse the data. BQDS score analysis The total score of the BQDS ranged from 0 to 16 and the median (interquartile range) score was calculated for the study population [11]. A score of ≥ 4 indicated dependence on areca nut. The percentage of participants who endorsed each item in the current study is presented in addition to the rank order of each item (the most frequently endorsed item is ranked as '1'). Results The questionnaire was administered to 500 participants, of whom 479 agreed to participate, generating a response rate of 96%. The results of the study revealed that areca nut was consumed in four different formulations, namely, tamul/areca alone, betel quid (tamul with betel leaf), betel quid with tobacco and gutkha (gutkha = pan masala (nut) + tobacco). The study participants' ages ranged between 18 and 80 years (mean = 40 years), and the age at initiation of the areca habit ranged from 12 to 20 years (mean = 15 years). The duration of use of the product ranged from 10 to 30 years (mean = 18 years) and the frequency of areca nut use ranged from 3 to 6 times per day (mean = 4) (Table 1). On average, Rs 100 ($1.4) was spent daily, with a range from Rs 70 to 200/day ($1-$2.8) (Table 1). Betel quid with tamul was the most prevalent form of areca chewing in both men and women, followed by tamul alone.
Gutkha with tamul was found to be the least common habit among all age groups. Among the four different formulations of areca mentioned above, betel quid with tamul was the most common variant chewed among all the occupational groups (Table 2). Knowledge pertaining to areca use among the study participants Tamul was perceived to have beneficial effects by 108 (23%) participants, while 208 (43%) perceived tamul to have harmful effects on health (Table 3). Most of the participants felt that tamul did not cause staining of teeth and agreed that tamul was addictive. Approximately a quarter of participants had noticed sores, white patches or gum problems at the site of tamul placement. Participants believed that tamul can cause oral cancer and almost half of them were aware that oral cancer is preventable. In all, less than half of the total participants were aware of a government programme for prevention of cancer. Attitude pertaining to areca nut use among the study participants The majority of the participants experienced pleasure when chewing areca nut and chewed it to relieve stress (Table 3). Only 18% of participants had ever tried to quit chewing areca nut, while 78% were prepared to quit if they were made aware of the harmful effects of areca nut use. Practices pertaining to areca nut use among the study participants In all, the majority of users indicated that their family members used areca nut (Table 3). Only a few users said they were thinking of quitting use of areca nut or had tried to quit in the last 6 months, and 77% of participants indicated they would quit if they were informed that areca nut causes cancer. The majority of participants were willing to undergo oral cancer screening. BQDS analysis Among the study population, 10% reported a total score of 0 points (no dependence symptoms), 20% scored 1-3 points and 70% scored 4 points or above (Table 4). The results reveal relatively high levels of endorsement for 'physical and psychological urgent need' (mean percentage = 43%) and 'increasing dose' (mean percentage = 50%), whereas the endorsement level for 'maladaptive use' was low (mean percentage = 16%). Most participants reported having strong cravings and felt they could not do without tamul. Discussion The study highlights the use of areca nut among the study participants, the pattern of use, its perceived effects on overall health, and dependence. The study revealed that areca nut was consumed in four different variations. An early age of initiation was reported among the study participants, with a high frequency and long duration of use. Most of the participants agreed that they derive pleasure from areca nut and use it to relieve stress. They also found it to be highly addictive and difficult to quit. Use of areca nut by family members influenced participants' own use, but users showed a willingness to quit if services were available. The BQDS showed high levels of endorsement for 'physical and psychological urgent need' and 'increasing dose', whereas the endorsement level for 'maladaptive use' was low. The current study revealed that, overall, tamul with betel quid was the most commonly consumed form, which is similar to findings reported by Sein et al [12], that there were different patterns of chewing areca and the most common form was chewing betel quid, which usually consists of a leaf of betel-vine, areca nut, slaked lime and some aroma [10]. In the present study, chewing areca nut in any form was almost 2.3 times more prevalent in men than women.
Most of the studies report this difference to be lower, while some studies found that the use of areca nut was more common in women. In a study conducted on 99,598 permanent residents of Mumbai who were 35 years or older [5], areca nut in all forms was used by 29.7% of women and 37.8% of men [14], while in a study conducted in Karachi, Pakistan [13], 27.9% of men and 37.8% of women chewed areca nut in the form of betel quid. In the present study, the average age of initiation of tamul chewing was 15 years, with a mean duration of use of 18 years. In a survey conducted by Schonland and Bradshaw [15] amongst Natal Indians with special reference to betel quid chewing habits, the age at which chewing was initiated was between 20 and 24 years. Two-fifths of chewers began the habit before the age of 20 years and a negligible number after the age of 40 years [15]. Self-employed groups contributed 32% of the study population, while only 3% were homemakers. In this study, we observed a significant difference in areca use by occupation and ethnicity, whereas a study conducted among 589 Taiwanese prison inmates on betel nut chewing among males showed no correlation between occupation, ethnicity and betel nut chewing behaviour [16]. The knowledge questionnaire revealed that more than half of the study population were aware that tamul can have harmful effects on health and almost 80% believed it to have addictive properties. Tamul was perceived to have beneficial health effects by 22.5% of participants. This result is in contrast to a study conducted in the Dakshina Kannada district of Karnataka on 90 areca nut chewers, wherein the majority of respondents (69%) thought that chewing had beneficial effects and only a third of the sample knew about the harmful effects of chewing [17]. In a further study in 2005, which surveyed 59 daily areca-only chewers from Karnataka State, India, areca nut chewing appeared to be perceived as beneficial when consumed in moderate amounts and as having addictive properties equal to caffeine [18]. Chewing was largely viewed as a healthy practice if tobacco was not introduced into the quid. Approximately one fourth of the participants had noticed sores, white patches or gum problems at the site of tamul placement at some point of time. There is enough literature evidence to demonstrate that betel nut has deleterious effects on oral soft tissues [19]. These deleterious effects range from benign to precancerous and malignant changes in the oral mucosa. A review of betel quid chewing in mainland China reported that the prevalence of oral submucous fibrosis among betel quid chewers ranged from 0.9% to 4.7% [20]. More than 60% of participants believed that tamul could cause oral cancer, while almost 50% of them felt oral cancer is preventable. There is sufficient evidence in humans for the carcinogenicity of betel quid with or without tobacco [14]. A hospital-based case-control study conducted by Mahapatra et al [21] showed that, out of 134 cases of oral cancer and 268 controls, the adjusted odds ratio for oral cancer among supari users was 11.4, 6.4 for betel quid users, 6.0 for chewing tobacco users and 5.1 for gutka users. Dikshit and Kanhere [22] conducted a case-control study in Bhopal and showed that, with chewing reported by 32 (13%) cases and 152 (8%) controls, the odds ratio for oral cancer was 1.7. In the current study, more than 90% of participants used areca nut for pleasure and 70% believed it relieved their stress.
In a study conducted among the population in North India that included 1,500 young college students aged between 15 and 22 years, the most common reason put forth by users of areca nut was peer pressure, followed by advertisements, general stress and academic pressure [23]. The reported effects of areca nut chewing included relaxation, improved concentration, mild lifting of mood and enhanced satisfaction after eating [10]. Almost 82% of users revealed that areca nut chewing was practised by their family members. In a cross-sectional school-based survey of 2,200 adolescents from 26 schools in Karachi, Pakistan, participants considered it to be rude not to chew if their friends or family members were chewing betel quid [3]. This supports the fact that areca nut chewing is often influenced by someone in the family or in the peer group. Despite the awareness that areca nut can cause oral cancer, the attempted quit rate among participants was relatively low. This may be an indication, well supported by the literature, that users find it difficult to quit these behaviours [5], as betel quid shares many features with smokeless tobacco and both products are addictive [4]. The BQDS is the first instrument designed specifically to measure betel quid dependence [11]. In the present study, the results reveal relatively high levels of endorsement for 'physical and psychological urgent need' and 'increasing dose', whereas the endorsement level for 'maladaptive use' was low. Conclusion In conclusion, the current study demonstrates that preventive efforts need to focus upon the attitude of users alongside increasing their knowledge of the harmful effects of areca use. Areca nut is the fourth most commonly used psychoactive substance, and chewing is a socially acceptable and widely practised habit amongst youth. Many participants were not aware of the harmful effects of areca nut use. Long-term use has the potential to progress from a precancerous condition to malignancy. Recommendation Behaviour change communication about tamul and its effects on health can be delivered through the National Service Scheme, religious leaders, women's groups and community-based organisations. Dependence is evident among the participants, with difficulty in quitting and increasing the dose to achieve the desired effect. There is an urgent need to integrate areca nut use cessation services into the guidelines for cessation, develop policy measures to reduce demand through communication and behaviour change strategies, and change the norms of its use in the community. Strength of the study There are very few studies that describe areca nut users' consumption patterns, knowledge, attitudes and dependence levels among the general population. The current study included a broad range of users in terms of gender, age, education and occupation and explored the pattern, KAP and dependence in the same population. Limitations of the study The sample of chewers of areca nut was selected purposively, so it may not be generalisable to the rest of India. Some of the respondents had difficulty recalling their ages and the exact year when they started chewing areca nut, which may lead to recall bias. The cross-sectional design of the study precludes conclusions regarding causality of areca nut use and dependence. Future longitudinal research is needed to assess the predictive validity of the BQDS. Study implication for policy The study results have some implications for policy and practice regarding the use of tamul.
Misconceptions regarding the use of tamul need to be tackled through culturally sensitive messaging via multiple channels of communication. There is a need to develop a policy that assists tamul chewers to quit, as well as to bring attention to the often-neglected issue of oral cancer, which is the commonest cancer among males and the fourth most common among females in India. Furthermore, members of the community should be encouraged to get themselves screened for oral cancer under the national cancer screening programme.
Quantum rate distortion, reverse Shannon theorems, and source-channel separation

Nilanjana Datta, Min-Hsiu Hsieh, and Mark M. Wilde

Abstract-We derive quantum counterparts of two key theorems of classical information theory, namely, the rate distortion theorem and the source-channel separation theorem. The rate-distortion theorem gives the ultimate limits on lossy data compression, and the source-channel separation theorem implies that a two-stage protocol consisting of compression and channel coding is optimal for transmitting a memoryless source over a memoryless channel. In spite of their importance in the classical domain, there has been surprisingly little work in these areas for quantum information theory. In the present paper, we prove that the quantum rate distortion function is given in terms of the regularized entanglement of purification. We also determine a single-letter expression for the entanglement-assisted quantum rate distortion function, and we prove that it serves as a lower bound on the unassisted quantum rate distortion function. This implies that the unassisted quantum rate distortion function is non-negative and generally not equal to the coherent information between the source and distorted output (in spite of Barnum's conjecture that the coherent information would be relevant here). Moreover, we prove several quantum source-channel separation theorems. The strongest of these are in the entanglement-assisted setting, in which we establish a necessary and sufficient condition for transmitting a memoryless source over a memoryless quantum channel up to a given distortion.

Index Terms-quantum rate distortion, reverse Shannon theorem, quantum Shannon theory, quantum data compression, source-channel separation

I. INTRODUCTION

Two pillars of classical information theory are Shannon's data compression theorem and his channel capacity theorem [49], [21].
The former gives a fundamental limit to the compressibility of classical information, while the latter determines the ultimate limit on classical communication rates over a noisy classical channel. Modern communication systems exploit these ideas in order to make the best possible use of communication resources.

Data compression is possible due to statistical redundancy in the information emitted by sources, with some signals being emitted more frequently than others. Exploiting this redundancy suitably allows one to compress data without losing essential information. If the data which is recovered after the compression-decompression process is an exact replica of the original data, then the compression is said to be lossless. The simplest example of an information source is a memoryless one. Such a source can be characterized by a random variable U with probability distribution {p_U(u)}, and each use of the source results in a letter u being emitted with probability p_U(u). Shannon's noiseless coding theorem states that the entropy H(U) ≡ −∑_u p_U(u) log₂ p_U(u) of such an information source is the minimum rate at which we can compress signals emitted by it [49], [21].

The requirement that a data compression scheme be lossless is often too stringent a condition, in particular for multimedia data, i.e., audio, video and still images, or in scenarios where insufficient storage space is available. Typically a substantial amount of data can be discarded before the information is sufficiently degraded to be noticeable. A data compression scheme is said to be lossy when the decompressed data is not required to be identical to the original, but recovering a reasonably good approximation of the original data is considered good enough. The theory of lossy data compression, which is also referred to as rate distortion theory, was developed by Shannon [50], [11], [21]. This theory deals with the tradeoff between the rate of data compression and the allowed distortion. Shannon proved that, for a given memoryless information source and a distortion measure, there is a function R(D), called the rate-distortion function, such that, if the maximum allowed distortion is D, then the best possible compression rate is given by R(D). He established that this rate-distortion function is equal to the minimum of the mutual information I(U;Û), taken over all stochastic maps that meet the distortion requirement:

R(D) = \min_{p_{\hat{U}|U}(\hat{u}|u) \,:\, \mathbb{E}\{d(U,\hat{U})\} \leq D} I(U;\hat{U}).  (1)

In the above, d(U,Û) denotes a suitably chosen distortion measure between the random variable U characterizing the source and the random variable Û characterizing the output of the stochastic map. Whenever the distortion D = 0, the above rate-distortion function is equal to the entropy of the source. If D > 0, then the rate-distortion function is less than the entropy, implying that fewer bits are needed to transmit the source if we allow for some distortion in its reconstruction.

Alongside these developments, Shannon also contributed the theory of reliable communication of classical data over classical channels [49], [21]. His noisy channel coding theorem gives an explicit expression for the capacity of a memoryless classical channel, i.e., the maximum rate of reliable communication through it. A memoryless channel N is one for which there is no correlation in the noise acting on successive inputs, and it can be modelled by a stochastic map N ≡ p_{Y|X}(y|x).
Shannon proved that the capacity of such a channel is given by

C(N) = \max_{p_X(x)} I(X;Y).

Any scheme for error correction typically requires the use of redundancy in the transmitted data, so that the receiver can perfectly distinguish the received signals from one another in the limit of many uses of the channel.

Given all of the above results, we might wonder whether it is possible to transmit an information source U reliably over a noisy channel N, such that the output of the information source is recoverable with an error probability that is asymptotically small in the limit of a large number of outputs of the information source and uses of the noisy channel. An immediate corollary of Shannon's noiseless and noisy channel coding theorems is that reliable transmission of the source is possible if the entropy of the source is smaller than the capacity of the channel:

H(U) \leq C(N).  (2)

The scheme to demonstrate sufficiency of (2) is for the sender to take the length n output of the information source, compress it down to nH(U) bits, and encode these nH(U) bits into a length n sequence for transmission over the channel. As long as H(U) ≤ C(N), Shannon's noisy channel coding theorem guarantees that it is possible to transmit the nH(U) bits over the channel reliably such that the receiver can decode them, and Shannon's noiseless coding theorem guarantees that the decoded nH(U) bits can be decompressed reliably as well, in order to recover the original length n output of the information source (all of this is in the limit as n → ∞). Given that the condition in (2) is sufficient for reliable communication of the information source, is it also necessary? Shannon's source-channel separation theorem answers this question in the affirmative [49], [21].

The most important implication of the source-channel separation theorem is that we can consider the design of compression codes and channel codes separately: a two-stage encoding method is just as good as any other method, whenever the source and channel are memoryless. Thus we should consider data compression and error correction as independent problems, and try to design the best compression scheme and the best error correction scheme. The source-channel separation theorem guarantees that this two-stage encoding and decoding with the best data compression and error correction codes will be optimal.

Now what if the entropy of the source is greater than the capacity of the channel? Our best hope in this scenario is to allow for some distortion in the output of the source such that the rate of compression is smaller than the entropy of the source. Recall that whenever D > 0, the rate-distortion function R(D) is less than the entropy H(U) of the source. In this case, we have a variation of the source-channel separation theorem which states that the condition R(D) ≤ C(N) is both necessary and sufficient for the reliable transmission of an information source over a noisy channel, up to some amount of distortion D [21]. Thus, we can consider the problems of lossy data compression and channel coding separately, and the two-stage concatenation of the best lossy compression code with the best channel code is optimal.

Considering the importance of all of the above theorems for classical information theory, it is clear that theorems in this spirit would be just as important for quantum information theory.
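Before moving to the quantum setting, it is worth noting that the classical minimization in (1) is computable in practice via the Blahut-Arimoto algorithm. The following is a minimal sketch, not part of the original development: the function name is our own, the Bernoulli source with Hamming distortion is an arbitrary illustrative choice, and sweeping the Lagrange multiplier beta traces out points on the R(D) curve.

```python
import numpy as np

def rate_distortion_blahut_arimoto(p_u, d, beta, n_iter=500):
    """Numerically trace one point of Shannon's R(D) curve.

    p_u  : source distribution, shape (U,)
    d    : distortion matrix d[u, v] between source letter u and
           reproduction letter v, shape (U, V)
    beta : Lagrange multiplier; sweeping beta >= 0 traces the curve
    Returns (D, R) with R in bits.
    """
    U, V = d.shape
    q = np.full(V, 1.0 / V)                    # marginal on reproduction alphabet
    for _ in range(n_iter):
        # Optimal test channel p(v|u) for the current output marginal q
        w = q[None, :] * np.exp(-beta * d)
        w /= w.sum(axis=1, keepdims=True)
        q = p_u @ w                            # update output marginal
    D = float(np.sum(p_u[:, None] * w * d))
    # R = I(U;V) for the optimized test channel
    ratio = np.divide(w, q[None, :], out=np.ones_like(w), where=w > 0)
    R = float(np.sum(p_u[:, None] * w * np.log2(ratio)))
    return D, R

# Example: Bernoulli(0.2) source with Hamming distortion.
# Theory predicts R(D) = h2(0.2) - h2(D) for 0 <= D <= 0.2.
p = np.array([0.8, 0.2])
dist = 1.0 - np.eye(2)
for beta in [0.5, 1.0, 2.0, 4.0, 8.0]:
    D, R = rate_distortion_blahut_arimoto(p, dist, beta)
    print(f"beta={beta:4.1f}  D={D:.3f}  R={R:.3f} bits")
```

Each value of beta yields one feasible (D, R) pair; the printed points should track the closed-form rate-distortion function of the binary source.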
Note, however, that in the quantum domain, there are many different information processing tasks, depending on which type of information we are trying to transmit and which resources are available to assist the transmission. For example, we could transmit classical or quantum data over a quantum channel, and such a transmission might be assisted by entanglement shared between sender and receiver before communication begins.

There have been many important advances in the above directions (some of which are summarized in the recent text [57]). Schumacher proved the noiseless quantum coding theorem, demonstrating that the von Neumann entropy of a quantum information source is the ultimate limit to the compressibility of information emitted by it [45]. Hayashi et al. have also considered many ways to compress quantum information, a summary of which is available in Ref. [30].

Quantum rate distortion theory, that is, the theory of lossy quantum data compression, was introduced by Barnum in 1998. He considered a symbol-wise entanglement fidelity as a distortion measure [4] and, with respect to it, defined the quantum rate distortion function as the minimum rate of data compression, for any given distortion. He derived a lower bound on the quantum rate distortion function in terms of a well-known entropic quantity, namely the coherent information. The latter can be viewed as one quantum analogue of mutual information, since it is known to characterize the quantum capacity of a channel [38], [52], [23], just as the mutual information characterizes the capacity of a classical channel. It is this analogy, and the fact that the classical rate distortion function is given in terms of the mutual information, that led Barnum to consider the coherent information as a candidate for the rate distortion function in the quantum realm. He also conjectured that this lower bound would be achievable. Since Barnum's paper, there have been a few papers in which the problem of quantum rate distortion has either been addressed [25], [20] or mentioned in other contexts [60], [31], [40], [39]. However, not much progress has been made in proving or disproving his conjecture. In fact, in the absence of a matching upper bound, it is even unclear how good Barnum's bound is, given that the coherent information can be negative, as was pointed out in [25], [20].

There is also a plethora of results on information transmission over quantum channels. Holevo [32], Schumacher, and Westmoreland [48] provided a characterization of the classical capacity of a quantum channel. Lloyd [38], Shor [52], and Devetak [23] proved that the coherent information of a quantum channel is an achievable rate for quantum communication over that channel, building on prior work of Nielsen and coworkers [47], [46], [6], [5], who showed that its regularization is an upper bound on the quantum capacity (note that the coherent information of a quantum channel is always non-negative because it involves a maximization over all inputs to the channel). Bennett et al. proved that the mutual information of a quantum channel is equal to its entanglement-assisted classical capacity [10] (the capacity whenever the sender and receiver are given a large amount of shared entanglement before communication begins). In Ref.
[10], the authors also introduced the idea of a reverse Shannon theorem, in which a sender and receiver simulate a noisy channel with as few noiseless resources as possible (later papers rigorously proved several quantum reverse Shannon theorems [1], [12], [8]). Although such a task might initially seem unmotivated, they used a particular reverse Shannon theorem to establish a strong converse for the entanglement-assisted classical capacity. (A strong converse demonstrates that the error probability asymptotically approaches one if the rate of communication is larger than capacity. This is in contrast to a weak converse, which only demonstrates that the error probability is bounded away from zero under the same conditions.) Interestingly, the reverse Shannon theorems can also find application in rate distortion theory [60], [31], [40], [39], and as such, they are relevant for our purposes here.

In this paper, we prove several important quantum rate distortion theorems and quantum source-channel separation theorems. Our first result in quantum rate distortion is a complete characterization of the rate distortion function in an entanglement-assisted setting. (One might consider these entanglement-assisted rate distortion results to be part of the "quantum reverse Shannon theorem folklore," but Ref. [8] does not specifically discuss this topic.) This result really only makes sense in the communication paradigm (and not in a storage setting), where we give the sender and receiver shared entanglement before communication begins, in addition to the uses of the noiseless qubit channel. The idea here is for a sender to exploit the shared entanglement and a minimal amount of classical or quantum communication in order for the receiver to recover the output of the quantum information source up to some distortion. Our main result is a single-letter formula for the entanglement-assisted rate distortion function, expressed in terms of a minimization of the input-output mutual information over all quantum operations that meet the distortion constraint. This result implies that the computation of the entanglement-assisted rate distortion function for any quantum information source is a tractable convex optimization program. It is often the case in quantum Shannon theory that the entanglement-assisted formulas end up being formally analogous to Shannon's classical formulas [10], [28], and our result here is no exception to this trend.

We next consider perhaps the most natural setting for quantum rate distortion, in which a compressor tries to compress a quantum information source so that a decompressor can recover it up to some distortion D (this setting is the same as Barnum's in Ref. [4]). This setting is most natural whenever sufficient quantum storage is not available, but we can equivalently phrase it in a communication paradigm, where a sender has access to many uses of a noiseless qubit channel and would like to minimize the use of this resource while transmitting a quantum information source up to some distortion. We prove that the quantum rate distortion function is given in terms of a regularized entanglement of purification [55] in this case. In spite of our characterization being an intractable, regularized formula, our result at the very least shows that the quantum rate distortion function is always non-negative, demonstrating that Barnum's conjecture from Ref. [4] does not hold, since his proposed rate-distortion function can become negative.
Furthermore, we prove that the entanglement-assisted quantum rate distortion function is a single-letter lower bound on the unassisted quantum rate distortion function (one might suspect that this should hold because additional resources such as shared entanglement should only be able to improve compression rates). This bound implies that the coherent information between the source and distorted output is not relevant for unassisted quantum rate distortion, in spite of Barnum's conjecture that it would be.

We finally prove three source-channel separation theorems that apply to the transmission of a classical source over a quantum channel, the transmission of a quantum source over a quantum channel, and the transmission of a quantum source over an entanglement-assisted quantum channel, respectively. The first two source-channel separation theorems are single-letter, in the sense that they do not involve any regularized quantities, whenever the Holevo capacity or the coherent information of the channel is additive, respectively. The third theorem is single-letter in all cases because the entanglement-assisted quantum capacity is given by a single-letter expression for all quantum channels [2], [10]. We also prove a related set of source-channel separation theorems that allow for some distortion in the reconstruction of the output of the information source. From these theorems we infer that it is best to search for the best quantum data compression protocols [16], [13], [9], [3], [42], [43], the best quantum error-correcting codes [51], [19], [18], [41], [44], [37], and the best entanglement-assisted quantum error-correcting codes [17], [33], [36], [58] independently of each other whenever the source and channel are memoryless. The theorems then guarantee that combining these protocols in a two-stage encoding and decoding is optimal.

We structure this paper as follows. We first overview relevant notation and definitions in the next section. Section III introduces the information processing task relevant for quantum rate distortion and then presents all of our quantum rate distortion results in detail. Section IV presents our various quantum source-channel separation theorems for memoryless sources and channels. Finally, we conclude in Section V and discuss important open questions.

II. NOTATION AND DEFINITIONS

Let H denote a finite-dimensional Hilbert space and let D(H) denote the set of density matrices or states (i.e., positive operators of unit trace) acting on H. Let ρ_A ∈ D(H_A) denote the state characterizing a memoryless quantum information source, the subscript A being used to denote the underlying quantum system. We refer to it as the source state. Let |ψ^ρ⟩_RA ∈ H_R ⊗ H_A denote a purification of the source state, i.e., ψ^ρ_RA := |ψ^ρ⟩⟨ψ^ρ|_RA is a pure state density matrix of a larger composite system RA, such that its restriction on the system A is given by ρ_A, i.e., ρ_A := Tr_R{ψ^ρ_RA}, with Tr_R denoting the partial trace over the Hilbert space H_R of a purifying reference system R. The pure state |ψ^ρ⟩_RA is entangled if ρ is a mixed state. The von Neumann entropy of ρ_A, and hence of the source, is defined as

H(A)_\rho := -\mathrm{Tr}\{\rho_A \log_2 \rho_A\}.  (3)

The quantum mutual information of a bipartite state ω_AB is defined as

I(A;B)_\omega := H(A)_\omega + H(B)_\omega - H(AB)_\omega.

The coherent information I(A⟩B)_σ of a bipartite state σ_AB is defined as follows:

I(A\rangle B)_\sigma := H(B)_\sigma - H(AB)_\sigma.

In quantum information theory, the most general mathematical description of any allowed physical operation is given by a completely positive trace-preserving (CPTP) map, which is a map between states.
We let id_A denote the trivial (or identity) CPTP map which keeps the state of a quantum system A unchanged, and we let N ≡ N^{A→B} denote a CPTP map taking states of a system A to states of a system B.

The entanglement of purification of a bipartite state ω_AB is a measure of correlations [55], having an operational interpretation as the entanglement cost of creating ω_AB asymptotically from ebits, while consuming a negligible amount of classical communication. It is equivalent to the following expression:

E_p(\omega_{AB}) = \min_{\mathcal{N}_E} H\bigl((\mathrm{id}_B \otimes \mathcal{N}_E)(\mu_{BE}(\omega))\bigr),

where µ_BE(ω) = Tr_A{φ^ω_ABE}, φ^ω_ABE is some purification of ω_AB, and the minimization is over all CPTP maps N_E acting on the system E. (The original definition in Ref. [55] is different from the above, but one can check that the definition given here is equivalent to the one given there.)

In this paper we make use of resource inequalities (see, e.g., [26]) to express information-processing tasks as interconversions between resources. Let [c → c] denote one forward use of a noiseless classical bit channel, [q → q] one forward use of a noiseless qubit channel, and [qq] one ebit of shared entanglement (a Bell state). A simple example of a resource inequality is entanglement distribution:

[q \to q] \geq [qq],

meaning that Alice can consume one noiseless qubit channel in order to generate one ebit between her and Bob. Teleportation is a more interesting way in which all three resources interact [7]:

2[c \to c] + [qq] \geq [q \to q].

The above resource inequalities are finite and exact, but we can also express quantum Shannon theoretic protocols as resource inequalities. For example, the resource inequality for the protocol achieving the entanglement-assisted classical capacity of a quantum channel is as follows:

\langle \mathcal{N} \rangle + H(A)[qq] \geq I(A;B)[c \to c].

The meaning of the above resource inequality is that there exists a protocol exploiting n uses of a memoryless quantum channel N and nH(A) ebits in order to transmit nI(A;B) classical bits from sender to receiver. The resource inequality becomes exact in the asymptotic limit n → ∞ because it is possible to show that the error probability of decoding these classical bits correctly approaches zero as n → ∞ [10].

III. QUANTUM RATE-DISTORTION

A. The Information Processing Task

The objective of any quantum rate distortion protocol is to compress a quantum information source such that the decompressor can reconstruct the original state up to some distortion. Like Barnum [4], we consider the following distortion measure d(ρ, N) for a state ρ_A ∈ D(H_A) with purification |ψ^ρ⟩_RA and a quantum operation N ≡ N^{A→B}:

d(\rho, \mathcal{N}) := 1 - F_e(\rho, \mathcal{N}),

where F_e is the entanglement fidelity of the map N:

F_e(\rho, \mathcal{N}) := \langle\psi^\rho|(\mathrm{id}_R \otimes \mathcal{N})(\psi^\rho_{RA})|\psi^\rho\rangle.

The entanglement fidelity is not only a natural distortion measure, but it also possesses several analytical properties which prove useful in our analysis.

The state ρ^n := (ρ_A)^{⊗n} ∈ D(H_A^{⊗n}) characterizes n successive outputs of a memoryless quantum information source. A source coding (or compression-decompression) scheme of rate R is defined by a block code, which consists of two quantum operations: the encoding and decoding maps. The encoding E_n is a map from n copies of the source space to a compressed Hilbert space H_{c_n} of dimension 2^{nR}, and the decoding D_n is a map from the compressed space to an output Hilbert space H_A^{⊗n}:

\mathcal{E}_n : D(H_A^{\otimes n}) \to D(H_{c_n}), \quad \mathcal{D}_n : D(H_{c_n}) \to D(H_A^{\otimes n}).

The average distortion resulting from this compression-decompression scheme is defined as [4]:

\bar{d}(\rho, \mathcal{F}_n) := \frac{1}{n}\sum_{i=1}^{n} d(\rho, \mathcal{F}_n^{(i)}),

where F_n^{(i)} is the "marginal operation" on the i-th copy of the source space induced by the overall operation F_n ≡ D_n ∘ E_n, and is defined as

\mathcal{F}_n^{(i)}(\sigma) := \mathrm{Tr}_{\neq i}\bigl\{\mathcal{F}_n\bigl(\rho^{\otimes(i-1)} \otimes \sigma \otimes \rho^{\otimes(n-i)}\bigr)\bigr\}.  (7)

The quantum operations D_n and E_n define an (n, R) quantum rate distortion code.
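The entropic quantities and the distortion measure just defined are straightforward to evaluate numerically for small systems. The sketch below is our own illustration (none of the function names come from the paper): it computes the von Neumann entropy, the quantum mutual information I(A;B), the coherent information I(A⟩B), and the entanglement fidelity via the standard Kraus-operator formula F_e(ρ, N) = Σ_k |Tr(ρ A_k)|², from which d(ρ, N) = 1 − F_e(ρ, N) is immediate.

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr[rho log2 rho], computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho_ab, dim_a, dim_b, keep):
    """Trace out one factor of a bipartite state on C^dA (x) C^dB."""
    r = rho_ab.reshape(dim_a, dim_b, dim_a, dim_b)
    if keep == 'A':
        return np.einsum('ijkj->ik', r)
    return np.einsum('ijil->jl', r)

def mutual_information(rho_ab, dim_a, dim_b):
    """I(A;B) = H(A) + H(B) - H(AB)."""
    ha = von_neumann_entropy(partial_trace(rho_ab, dim_a, dim_b, 'A'))
    hb = von_neumann_entropy(partial_trace(rho_ab, dim_a, dim_b, 'B'))
    return ha + hb - von_neumann_entropy(rho_ab)

def coherent_information(rho_ab, dim_a, dim_b):
    """I(A>B) = H(B) - H(AB); unlike I(A;B), this can be negative."""
    hb = von_neumann_entropy(partial_trace(rho_ab, dim_a, dim_b, 'B'))
    return hb - von_neumann_entropy(rho_ab)

def entanglement_fidelity(rho, kraus_ops):
    """F_e(rho, N) = sum_k |Tr(rho A_k)|^2 for Kraus operators {A_k}."""
    return float(sum(abs(np.trace(rho @ A))**2 for A in kraus_ops))
```

These helpers suffice to evaluate the distortion d(ρ, N) of any channel given in Kraus form against any source state, and they reappear implicitly in the numerical sketches later in this section.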
For any R, D ≥ 0, the pair (R, D) is said to be an achievable rate distortion pair if there exists a sequence of (n, R) quantum rate distortion codes (E_n, D_n) such that

\lim_{n\to\infty} \bar{d}(\rho, \mathcal{D}_n \circ \mathcal{E}_n) \leq D.  (8)

The quantum rate distortion function is then defined as

R^q(D) := \inf\{R : (R, D) \text{ is achievable}\}.

Fig. 1. The most general protocols for (a) unassisted and (b) assisted quantum rate distortion coding. In (a), Alice acts on the tensor power output of the quantum information source with a compression encoding E. She sends the compressed qubits over noiseless quantum channels (labeled by "id") to Bob, who then performs a decompression map D to recover the quantum data that Alice sent. In (b), the task is similar, though this time we assume that Alice and Bob share entanglement before communication begins.

In the communication model, if the sender and receiver have unlimited prior shared entanglement at their disposal, then the corresponding quantum rate distortion function is denoted as R^q_eac(D) or R^q_eaq(D), depending on whether the noiseless channel between the sender and the receiver is classical or quantum. Figure 1 depicts the most general protocols for unassisted and assisted quantum rate distortion coding.

B. Reverse Shannon Theorems and Quantum Rate-Distortion Coding

Before we begin with our main results, we first prove Lemma 1 below. This lemma is similar in spirit to Lemma 26 of Ref. [39] and Theorem 19 of Ref. [60], and like them, it shows that to generate a rate-distortion code, it suffices to simulate the action of a noisy channel on a source state such that the resulting output state meets the desired distortion criterion. Unlike them, however, it is specifically tailored to the entanglement fidelity distortion measure.

Lemma 1: Let ρ_A be a source state with purification |ψ^ρ⟩_RA, and let N ≡ N^{A→B} be a quantum operation such that d(ρ, N) ≤ D, with ω_RB := (id_R ⊗ N)(ψ^ρ_RA). Furthermore, let {F_n}_n denote a sequence of quantum operations such that, for n large enough,

\frac{1}{2}\bigl\Vert \omega_{RB}^{\otimes n} - \sigma_{R^n B^n} \bigr\Vert_1 \leq \varepsilon,  (9)

where σ_{R^nB^n} := (id_{R^n} ⊗ F_n)((ψ^ρ_RA)^{⊗n}). Then for n large enough, the average distortion under the quantum operation F_n satisfies the bound

\bar{d}(\rho, \mathcal{F}_n) \leq D + \varepsilon.

Proof: By monotonicity of the trace distance under partial trace, the condition (9) implies that, for each i ∈ {1, ..., n},

\frac{1}{2}\bigl\Vert \omega_{RB} - (\mathrm{id}_R \otimes \mathcal{F}_n^{(i)})(\psi^\rho_{RA}) \bigr\Vert_1 \leq \varepsilon.

Recall the following inequality from Ref. [15], valid for any operator P with 0 ≤ P ≤ I:

\mathrm{Tr}\{P A\} \geq \mathrm{Tr}\{P B\} - \Vert (A - B)_- \Vert_1,

where (A − B)_− denotes the negative spectral part of the operator (A − B). Applying this with P = ψ^ρ_RA, A = (id_R ⊗ F_n^{(i)})(ψ^ρ_RA), and B = ω_RB, noting that ‖(A − B)_−‖₁ = (1/2)‖A − B‖₁ for states A and B, and using the definition of entanglement fidelity, we have

F_e(\rho, \mathcal{F}_n^{(i)}) \geq F_e(\rho, \mathcal{N}) - \varepsilon,

so that d(ρ, F_n^{(i)}) ≤ d(ρ, N) + ε ≤ D + ε for every i. Averaging over i concludes the proof of the lemma.

The above lemma illustrates a fundamental connection between quantum reverse Shannon theorems and quantum rate-distortion protocols. In particular, if a reverse Shannon theorem is available in a given context, then it immediately leads to a rate-distortion protocol. This is done simply by choosing the simulated channel to be the one which, when acting on the source state, yields an output state which meets the distortion criterion for the desired rate-distortion task. This is our approach in all of the quantum rate-distortion theorems that follow, and it was also the approach in Refs. [25], [60], [39].

There is, however, one caveat with the above approach. The reverse Shannon theorems often require extra correlated resources such as shared randomness or shared entanglement [10], [1], [8], [12], and the demands of a reverse Shannon theorem are much more stringent than those of a rate-distortion protocol.
A reverse Shannon theorem requires the simulation of a channel to be asymptotically exact, whereas a rate-distortion protocol only demands that a source be reconstructed up to some average distortion constraint. The differences in these goals can impact the resulting rates if sufficient correlated resources are not available [22]. In the entanglement-assisted setting considered in the next subsection, the assumption is that an unlimited supply of entanglement is available, and thus the entanglement-assisted quantum reverse Shannon theorem suffices for producing a good entanglement-assisted rate-distortion protocol. In the unassisted setting, no correlation is available, and exploiting the unassisted reverse Shannon theorem leads to rates that are possibly larger than necessary for the task of quantum rate distortion. Nevertheless, we still employ this approach and discuss the ramifications further in the forthcoming subsections.

C. Entanglement-Assisted Rate-Distortion Coding

1) Rate-Distortion with noiseless classical communication: The quantum rate distortion function, R^q_eac(D), for entanglement-assisted lossy source coding with noiseless classical communication, is given by the following theorem.

Theorem 2: For a memoryless quantum information source defined by the density matrix ρ_A, with a purification |ψ^ρ⟩_{AA'}, and any given distortion 0 ≤ D < 1, the quantum rate distortion function for entanglement-assisted lossy source coding with noiseless classical communication is given by

R^q_{eac}(D) = \min_{\mathcal{N} : d(\rho,\mathcal{N}) \leq D} I(A;B)_\omega,  (16)

where N ≡ N^{A'→B} denotes a CPTP map, ω_AB := (id_A ⊗ N^{A'→B})(ψ^ρ_{AA'}), and I(A;B)_ω denotes the mutual information.

Proof: We first prove the converse (optimality). Consider the most general protocol for entanglement-assisted lossy source coding that acts on many copies (ρ^{⊗n}) of the state ρ ∈ D(H_A) (depicted in Figure 1(b)). We take a purification of ρ as |ψ^ρ⟩_RA. Let Φ_{T_A T_B} denote an entangled state, with the system T_A being with Alice and the system T_B being with Bob. Alice then acts on the state ρ^{⊗n} and her share T_A of the entangled state with a compression map E_n ≡ E^{A^n T_A → W}, where W is a classical system of size ≈ 2^{nr}, with r being the rate of compression (in Figure 1(b), W corresponds to the outputs of the noiseless quantum channels). Then Bob acts on both the classical system W that he receives and his share T_B of the entangled state with the decoding map D_n ≡ D^{W T_B → B^n}. The final state should be such that it is distorted by at most D according to the average distortion criterion in the limit n → ∞ (8). With these steps in mind, one establishes a chain of inequalities bounding nr from below, whose steps can be justified as follows. The first inequality follows because the entropy nr of the uniform distribution is the largest that the entropy H(W) can be. The second inequality follows because conditioning cannot increase entropy. The third inequality follows because H(W|R^n T_B) ≥ 0, from the assumption that W is classical. The first equality follows from the definition of mutual information, and the second equality follows from the fact that R^n and T_B are in a product state. The third equality is the chain rule for quantum mutual information. The final inequality is from quantum data processing. Continuing the chain of inequalities, one bounds the resulting mutual information in terms of the marginal operations F_n^{(i)} on the i-th copy of the source space induced by the overall operation F_n ≡ D_n ∘ E_n, as given in (7). The first inequality in this continuation follows from superadditivity of quantum mutual information (see Lemma 15 in the appendix).
The second inequality follows from the fact that the map D_i ∘ E_i has distortion d(ρ, D_i ∘ E_i), and the information rate-distortion function is the minimum of the mutual information over all maps with this distortion. The last two inequalities follow from convexity of the quantum rate-distortion function R^q_eac(D) (see Lemma 14 in the appendix), from the assumption that the average distortion of the protocol is no larger than the amount allowed,

\frac{1}{n}\sum_{i=1}^{n} d(\rho, \mathcal{D}_i \circ \mathcal{E}_i) \leq D,

and from the fact that R^q_eac(D) is non-increasing as a function of D (see Lemma 14 in the appendix).

The direct part of Theorem 2 follows from the quantum reverse Shannon theorem, which states that it is possible to simulate (asymptotically perfectly) the action of a quantum channel N on an arbitrary state ρ, by exploiting noiseless classical communication and prior shared entanglement between a sender and receiver [10], [1], [8], [12]. The protocol consumes classical communication at a rate equal to the quantum mutual information I(A;B)_ω, along with sufficient shared entanglement, where the entropies are with respect to a state of the following form:

\vert\omega\rangle_{ABE} := U_{\mathcal{N}}^{A'\to BE} \vert\psi^\rho\rangle_{AA'},

where |ψ^ρ⟩_{AA'} is a purification of ρ and U_N^{A'→BE} is an isometric extension of the channel N^{A'→B}. Our protocol simply exploits this theorem. More specifically, for a given distortion D, we take N to be the CPTP map which achieves the minimum in the expression (16) of R^q_eac(D). Then we exploit classical communication at the rate given above to simulate the action of the channel N on the source state ρ. For any arbitrarily small ε > 0 and n large enough, the protocol for the quantum reverse Shannon theorem simulates the action of the channel up to the constant ε (in the sense of (9)). This allows us to invoke Lemma 1 to show that the resulting average distortion is no larger than D + ε.

The main reason that we can use the quantum reverse Shannon theorem as a "black box" for the purpose of quantum rate distortion is our assumption of unlimited shared entanglement. It is likely that this protocol uses much more entanglement than necessary for the purpose of entanglement-assisted quantum rate distortion coding with classical channels, and it should be worthwhile to study the trade-off between classical communication and entanglement consumption in more detail, as previous authors have done in the context of channel coding [53], [34], [35], [59]. Such a study might lead to a better protocol for entanglement-assisted rate distortion coding and might further illuminate better protocols for other quantum rate distortion tasks.

We think that our protocol exploits more entanglement than necessary, considering what is known in the classical case regarding reverse Shannon theorems and rate-distortion coding [21], [10], [22]. First, as reviewed in (1), the classical mutual information minimized over all stochastic maps that meet the distortion criterion is equal to Shannon's classical rate-distortion function [21]. Bennett et al. have shown that the classical mutual information is also equal to the minimum rate needed to simulate a classical channel whenever free common randomness is available [10]. Thus, a simple strategy for achieving the task of rate distortion is for the parties to choose the stochastic map that minimizes the rate distortion function and simulate it with the classical reverse Shannon theorem. But this strategy uses far more classical bits than necessary whenever sufficient common randomness is not available [22]. Meanwhile, we already know that the mutual information is achievable without any common randomness if the goal is rate distortion [21].
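Theorem 2 reduces the computation of R^q_eac(D) to a convex program. While a full optimization over all CPTP maps is beyond a short example, any fixed family of channels gives feasible points of the minimization in (16), and hence upper bounds on R^q_eac(D). The sketch below is our own illustration, not from the paper: it evaluates the qubit depolarizing family acting on a maximally mixed source, whose purification is a Bell state; for this family the distortion works out to 3p/4 and the output ω is an isotropic state.

```python
import numpy as np

def entropy_bits(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

# Maximally mixed qubit source, purified to the Bell state |Phi+>.
vec = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
phi = np.outer(vec, vec)
I2 = np.eye(2)

for p in np.linspace(0.0, 1.0, 6):
    # Depolarizing channel N_p(rho) = (1-p) rho + p I/2 on the channel input:
    # omega = (id (x) N_p)(Phi) = (1-p) Phi + p (I/2 (x) I/2)
    omega = (1 - p) * phi + p * np.kron(I2 / 2, I2 / 2)
    # Distortion d(rho, N_p) = 1 - F_e = 1 - <Phi|omega|Phi> = 3p/4
    D = 1 - float(vec @ omega @ vec)
    # I(A;B) of omega; both marginals are maximally mixed, so
    # I(A;B) = H(A) + H(B) - H(AB) = 2 - H(omega)
    upper_bound = 2.0 - entropy_bits(omega)
    print(f"p={p:.2f}  D={D:.3f}  R_eac upper bound={upper_bound:.3f} bits")
```

At p = 0 the feasible point is (D, R) = (0, 2), reflecting that the minimization in (16) is then restricted to channels with perfect entanglement fidelity; as p grows, the allowed distortion increases and the mutual information bound falls.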
2) Rate-Distortion with noiseless quantum communication: The quantum rate distortion function, R^q_eaq(D), for entanglement-assisted lossy source coding with noiseless quantum communication, is given by the following theorem.

Theorem 3: For a memoryless quantum information source defined by the density matrix ρ_A, with a purification |ψ^ρ⟩_{AA'}, and any given distortion 0 ≤ D < 1, the quantum rate distortion function for entanglement-assisted lossy source coding with noiseless quantum communication is given by

R^q_{eaq}(D) = \frac{1}{2}\min_{\mathcal{N} : d(\rho,\mathcal{N}) \leq D} I(A;B)_\omega,  (19)

where N ≡ N^{A'→B} denotes a CPTP map and I(A;B)_ω denotes its mutual information, evaluated on the state

\omega_{ABE} := U_{\mathcal{N}}^{A'\to BE}\,\psi^\rho_{AA'}\,\bigl(U_{\mathcal{N}}^{A'\to BE}\bigr)^\dagger,  (20)

with U_N^{A'→BE} an isometric extension of the channel N^{A'→B}.

Proof: We first prove the converse (optimality). The setup is similar to that in the converse proof of Theorem 2, with the exception that W is now a quantum system and we let E denote the environment of the compressor. One establishes a chain of inequalities (21) bounding the rate, whose steps can be justified as follows. The first inequality is because the entropy nr of the uniform distribution is the largest that the entropy H(W) can be. The first equality follows from the fact that the state on systems W R^n T_B E is pure. The second inequality follows by subtracting the positive quantity H(W R^n T_B E). The second equality is from the definition of quantum mutual information. The third inequality is from quantum data processing (tracing over system E). The third equality is a useful identity for quantum mutual information. The fourth equality follows from I(R^n; T_B) = 0, since R^n and T_B are in a product state. The second-to-last inequality is from I(W; T_B) ≥ 0, and the final inequality is from the quantum data processing inequality. The rest of the proof proceeds as in the corresponding steps of the proof of Theorem 2.

The direct part follows from a variant of the quantum reverse Shannon theorem known as the fully quantum reverse Shannon theorem (FQRS) [1], [24]. This theorem states that it is possible to simulate (asymptotically perfectly) the action of a channel N on an arbitrary state ρ, by exploiting noiseless quantum communication and prior shared entanglement between a sender and receiver. It has the following resource inequality:

\frac{1}{2} I(A;B)_\omega\,[q \to q] + \frac{1}{2} I(B;E)_\omega\,[qq] \geq \langle \mathcal{N} : \rho \rangle,  (22)

where the entropies are with respect to the state in (20), |ψ^ρ⟩_{AA'} is a purification of ρ, and U_N^{A'→BE} is an isometric extension of the channel N^{A'→B}. Our protocol exploits this theorem as follows. For a given distortion D, take N to be the map which realizes the minimum in the expression (19) of R^q_eaq(D). Then we exploit quantum communication at the rate given in the resource inequality (22) to simulate the action of the channel N on the source state ρ. For any arbitrarily small ε > 0 and n large enough, the protocol for the fully quantum reverse Shannon theorem simulates the action of the channel up to the constant ε (in the sense of (9)). This allows us to invoke Lemma 1 to show that the resulting average distortion is no larger than D + ε.

We could have determined the form of the entanglement-assisted quantum rate distortion function R^q_eaq(D) in Theorem 3 directly from Theorem 2, by combining with teleportation. The above proof, though, serves an important alternate purpose. A careful inspection of it reveals that the steps detailed in (21) for bounding the quantum communication rate still hold even if the system T_B is trivial (in the case where there is no shared entanglement between the sender and receiver before communication begins). Thus, we obtain as a corollary that the entanglement-assisted quantum rate distortion function is a single-letter lower bound on the unassisted quantum rate distortion function.
This makes sense operationally as well, because the additional resource of shared entanglement should only be able to improve a rate distortion protocol.

Corollary 4: The entanglement-assisted quantum rate distortion function R^q_eaq(D) in Theorem 3 bounds the unassisted quantum rate distortion function R^q(D) from below:

R^q(D) \geq R^q_{eaq}(D).

The above corollary firmly asserts that the coherent information I(A⟩B) of the state in (20) is not relevant for quantum rate distortion, in spite of Barnum's conjecture that it would play a role [4]. That is, one might think that there should be some simple fix of Barnum's conjecture, say, by conjecturing that the quantum rate distortion function would instead be max{0, I(A⟩B)}. The above lower bound asserts that this cannot be the case, because half the mutual information is never smaller than the coherent information:

\frac{1}{2} I(A;B)_\omega \geq I(A\rangle B)_\omega.

D. Unassisted Quantum Rate-Distortion Coding

The quantum rate distortion function R^q(D) for unassisted lossy source coding is given by the following theorem.

Theorem 5: For a memoryless quantum information source defined by the density matrix ρ_A, and any given distortion 0 ≤ D < 1, the quantum rate distortion function is given by

R^q(D) = \lim_{k\to\infty} \frac{1}{k} \min_{\mathcal{N}^{(k)} : \bar{d}(\rho,\, \mathcal{N}^{(k)}) \leq D} E_p\bigl(\omega_{R^k B^{(k)}}\bigr),  (24)

where E_p denotes the entanglement of purification, with

\omega_{R^k B^{(k)}} := (\mathrm{id}_{R^k} \otimes \mathcal{N}^{A^k \to B^{(k)}})\bigl((\psi^\rho_{RA})^{\otimes k}\bigr).

Like its classical counterpart, lossy data compression includes lossless compression as a special case. If the distortion D is set equal to zero in (24), then the state ω_RB becomes identical to the state ψ^ρ_RA. Equivalently, the quantum operation N is given by the identity map id_A. Since the entanglement of purification is additive for tensor power states [55], E_p(ω_RB^{⊗n}) = n E_p(ω_RB), we infer that, for D = 0, R^q(D) reduces to the von Neumann entropy of the source, which is known to be the optimal rate for lossless quantum data compression [45].

To prove the achievability part of Theorem 5, we can simply exploit Schumacher compression [45] (which is a special type of reverse Shannon theorem). Alice feeds each output A of the source into a CPTP map N that saturates the bound in (24) (for now, we do not consider the limit and set k = 1). This leads to a state of the form

\omega_{RB} := (\mathrm{id}_R \otimes \mathcal{N}^{A\to B})(\psi^\rho_{RA}),  (26)

to which Alice can then apply Schumacher compression. This protocol is equivalent to the following resource inequality:

H(B)_\omega\,[q \to q] \geq \langle \mathcal{N} : \rho \rangle.  (27)

We note that this is a simple form of an unassisted quantum reverse Shannon theorem. Now, a subtle detail of the simulation idea is that we are interested in simulating the channel N^{A→B} from Alice to Bob, and Alice can actually simulate an isometric extension U_N^{A→BE} of the channel, where Alice receives the system E and just traces over it. Though, instead of simulating U_N^{A→BE}, we could consider Alice to simulate the isometry U_N^{A→B E_B E_A} locally, Schumacher compressing the subsystems B and E_B so that Bob can recover them, while the subsystem E_A remains with Alice. This leads to the following protocol for unassisted simulation:

\min_{V : E \to E_A E_B} H(B E_B)\,[q \to q] \geq \langle \mathcal{N} : \rho \rangle.

The best protocol for unassisted channel simulation is therefore the one with the minimum rate of quantum communication, the minimum being taken over all possible isometries V : E → E_A E_B. This rate can only be less than the rate of quantum communication required for the original naive protocol in (27), since the latter is a special case in the minimization. This is the form of the unassisted quantum reverse Shannon theorem given in Ref. [8] and is related to a protocol considered by Hayashi [29].
One could then execute the above protocol by blocking k of the states together and by having the distortion channel be of the form N^{(k)} : A^k → B^{(k)}, acting on each block of k states. By letting k become large, such a protocol leads to the following rate for unassisted communication:

\frac{1}{k} \min_{V : E^{(k)} \to E_A E_B} H\bigl(B^{(k)} E_B\bigr).

The above quantity is equal to the entanglement of purification of the state (id_{R^k} ⊗ N^{A^k→B^{(k)}})((ψ^ρ_RA)^{⊗k}) [29], [8]:

\frac{1}{k} \min_{V : E^{(k)} \to E_A E_B} H\bigl(B^{(k)} E_B\bigr) = \frac{1}{k}\, E_p\bigl(\omega_{R^k B^{(k)}}\bigr).

We are now in a position to prove Theorem 5.

Proof of Theorem 5: Fix the map N such that the minimization on the RHS of (24) is achieved. The quantum reverse Shannon theorem (in this case, Schumacher compression) states that it is possible to simulate such a channel N acting on ρ with an amount of quantum communication equal to E_p(ω_RB). Since the protocol simulates the channel up to some arbitrarily small positive ε, the distortion is no larger than D + ε, by invoking Lemma 1. This establishes that R^q(D) ≤ E_p(ω_RB) for k = 1; blocking k source outputs together and regularizing as above yields achievability of the expression in the statement of the theorem.

The converse part of the theorem can be proved as follows. Figure 1(a) depicts the most general protocol for unassisted quantum rate-distortion coding. Let E_1 denote the environment of the encoder, and let E_2 denote the environment of the decoder, while W again denotes the outputs of the noiseless quantum channels labeled by "id." For any rate distortion code, one establishes a chain of inequalities bounding nR from below, whose steps can be justified as follows. The first inequality follows because the entropy of the maximally mixed state is larger than the entropy of any state on system W. The first equality follows because the isometric extension of the decoder maps W isometrically to the systems E_2 and B^n. The second inequality follows because the entropy minimized over all CPTP maps on systems E_1 and E_2 can only be smaller than the entropy on E_2 B^n (the identity map on E_2 and partial trace of E_1 is a CPTP map included in the minimization). The second equality follows from the definition of entanglement of purification. The third inequality follows by minimizing the entanglement of purification over all maps that satisfy the distortion criterion (recall that we assume our protocol satisfies this distortion criterion).

Our characterization of the unassisted quantum rate distortion task is unfortunately up to a regularization. It is likely that this regularized formula is blurring a better quantum rate-distortion formula, as has sometimes been the case in quantum Shannon theory [61]. This is due in part to our exploitation of the unassisted reverse Shannon theorem for the task of quantum rate distortion, and the fact that the goal of a reverse Shannon theorem is stronger than that of a rate distortion protocol, while no correlated resources are available in this particular setting (see the previous discussion after Theorem 2). It would be ideal to demonstrate that the regularization is not necessary, but it is not clear yet how to do so without a better way to realize unassisted quantum rate distortion. Nevertheless, the above theorem at the very least disproves Barnum's conjecture, because we have demonstrated that the quantum rate distortion function is always non-negative (due to the fact that the entanglement of purification is non-negative [55]), whereas Barnum's rate distortion function can become negative. Furthermore, Corollary 4 provides a good single-letter, non-negative lower bound on the unassisted quantum rate distortion function, which is never smaller than Barnum's bound in terms of the coherent information.
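The inequality underlying Corollary 4, (1/2)I(A;B) ≥ I(A⟩B), is equivalent to H(A) + H(AB) ≥ H(B), a consequence of the Araki-Lieb triangle inequality. The short sketch below is our own illustration (names and the random-state test are our choices): it checks the inequality on random two-qubit density matrices, where the coherent information can indeed go negative while the gap stays non-negative.

```python
import numpy as np

rng = np.random.default_rng(7)

def entropy_bits(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def random_density_matrix(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

dA = dB = 2
for _ in range(5):
    rho_ab = random_density_matrix(dA * dB)
    r = rho_ab.reshape(dA, dB, dA, dB)
    rho_a = np.einsum('ijkj->ik', r)
    rho_b = np.einsum('ijil->jl', r)
    hab, ha, hb = entropy_bits(rho_ab), entropy_bits(rho_a), entropy_bits(rho_b)
    half_I = 0.5 * (ha + hb - hab)     # (1/2) I(A;B)
    coh = hb - hab                     # I(A>B), can be negative
    print(f"(1/2)I(A;B) = {half_I:+.4f}   I(A>B) = {coh:+.4f}   "
          f"gap = {half_I - coh:+.4f}")
```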
IV. SOURCE-CHANNEL SEPARATION THEOREMS

This last section of our paper consists of five important quantum source-channel separation theorems. The first two theorems apply whenever a sender wishes to transmit a memoryless classical source over a memoryless quantum channel, whereas the third applies when the information source to be transmitted is a quantum source. The second theorem deals with the situation in which some distortion is allowed in the transmission. All three of these theorems are expressed in terms of single-letter formulas whenever the corresponding capacity formulas are single-letter. The last two theorems correspond to the cases in which a quantum source is sent over an entanglement-assisted quantum channel, with and without distortion. The formulas in these are always single-letter, demonstrating that it is again the entanglement-assisted formulas which are in formal analogy with Shannon's classical formulas.

A. Shannon's source-channel separation theorem for quantum channels

Shannon's original source-channel separation theorem applies to the transmission of a classical information source over a classical channel. Despite the importance of this theorem, it does not take into account that the carriers of information are essentially quantum-mechanical. So our first theorem is a restatement of Shannon's source-channel separation theorem for the case in which a classical information source is to be reliably transmitted over a quantum channel. Figure 2 depicts the scenario to which this first source-channel separation theorem applies.

Fig. 2. The most general protocol for transmitting a classical information source over a memoryless quantum channel.

The most general protocol for sending the output of a classical information source over a quantum channel consists of three steps: encoding, transmission, and decoding. The sender first takes the outputs U^n of the classical information source and encodes them with some CPTP encoding map E^{U^n → A^n}, where the systems A^n are the inputs to many uses of a noisy quantum channel N^{A→B}. The sender then transmits the systems A^n over the quantum channels, and the receiver obtains the outputs B^n. The receiver finally performs some CPTP decoding map D^{B^n → Û^n} to recover the random variables Û^n (note that this decoding is effectively a POVM because the output systems are classical). If the scheme is any good for transmitting the source, then the following condition holds for any given ε > 0, for sufficiently large n:

\Pr\{\hat{U}^n \neq U^n\} \leq \varepsilon.  (30)

Theorem 6: The following condition is necessary and sufficient for transmitting the output of a memoryless classical information source, characterized by a random variable U, over a memoryless quantum channel N ≡ N^{A'→B} with additive Holevo capacity:

H(U) \leq \chi(\mathcal{N}),  (31)

where the Holevo capacity χ(N) is

\chi(\mathcal{N}) := \max_{\{p_X(x),\, \rho_x\}} I(X;B)_\sigma, \qquad \sigma_{XB} := \sum_x p_X(x)\, |x\rangle\langle x|_X \otimes \mathcal{N}(\rho_x).

Proof: Sufficiency of (31) is a direct consequence of Shannon compression and Holevo-Schumacher-Westmoreland (HSW) coding. The sender first compresses the information source down to a set of size ≈ 2^{nH(U)}. The sender then employs an HSW code to transmit any message in the compressed set over n uses of the quantum channel. Reliability of the scheme follows from the assumption that H(U) ≤ χ(N), the HSW coding theorem, and Shannon compression.

Necessity of (31) follows from a chain of inequalities whose steps can be justified as follows. The first equality follows from the assumption that the classical information source is memoryless. The second equality is a simple identity. The first inequality follows from applying Fano's inequality. The second inequality follows from the quantum data processing inequality and the assumption that (30) holds.
The third inequality follows because I(U^n; B^n) must be smaller than the maximum of this quantity over all classical-quantum states that can serve as an input to the tensor power channel N^{⊗n}. The final equality follows from the assumption that the Holevo capacity is additive for the particular channel N. Thus, any protocol that reliably transmits the information source U should satisfy an inequality that converges to (31) as n → ∞ and ε → 0.

Remark 7: If the Holevo capacity is not additive for the channel, then the best statement of the source-channel separation theorem is in terms of the regularized quantity:

H(U) \leq \lim_{n\to\infty} \frac{1}{n}\, \chi(\mathcal{N}^{\otimes n}),

but it is unclear how useful such a statement is because we cannot compute such a regularized quantity. (The above statement follows by applying all of the inequalities in the proof of Theorem 6 except the last one.)

What if the condition H(U) > χ(N) holds instead? We can prove a variant of the above source-channel separation theorem that allows for the information source to be reconstructed at the receiving end up to some distortion D. We obtain the following theorem:

Theorem 8: The following condition is necessary and sufficient for transmitting the output of a memoryless classical information source over a quantum channel with additive Holevo capacity (up to some distortion D):

R(D) \leq \chi(\mathcal{N}),  (33)

where R(D) is defined in (1).

Proof: Sufficiency of (33) follows from the rate distortion protocol and the HSW coding theorem. Specifically, the sender compresses the information source down to a set of size 2^{nR(D)} and then uses an HSW code to transmit any element of this set. The reconstructed sequence Û^n at the receiving end obeys the distortion constraint E{d(U, Û)} ≤ D, with d(U, Û) denoting a suitably defined distortion measure. Necessity of (33) follows from the fact that

nR(D) \leq I(U^n; \hat{U}^n),  (34)

and by applying the last four steps in the chain of inequalities in the necessity proof of Theorem 6. A proof of (34) is available in (10.61-10.71) of Ref. [21].

B. Quantum source-channel separation theorem

We now prove a source-channel separation theorem which is perhaps more interesting for quantum computing and communication applications. Suppose that a sender would like to transmit a quantum information source faithfully over a quantum channel, such that the receiver perfectly recovers the transmitted quantum source in the limit of many copies of the source and uses of the channel. Figure 3 depicts the scenario to which our second source-channel separation theorem applies.

As before, we characterize a memoryless quantum information source by a density matrix ρ_A ∈ D(H_A), and we let |ψ^ρ⟩_RA ∈ H_R ⊗ H_A denote its purification. The entropy of the source H(A)_ρ is given by (3). Let N^{A'→B} denote a memoryless quantum channel. Suppose Alice has access to multiple uses of the source, and she and Bob are allowed multiple uses of the quantum channel. Since Alice needs to act on many copies of the state ρ, we instead suppose that she is acting on the A systems of the tensor power state |ψ^ρ⟩_RA^{⊗n}. The most general protocol is one in which Alice performs some CPTP encoding map E_n ≡ E^{A^n → A'^n} on the A systems of the state |ψ^ρ⟩_RA^{⊗n}, producing some output systems A'^n which can serve as input to many uses of the quantum channel N^{A'→B}. Alice then transmits the A'^n systems over the channels, leading to some output systems B^n for Bob. Bob then acts on these systems with some decoding map D_n ≡ D^{B^n → Â^n}.
If the protocol is any good for transmitting the quantum information source, then the following condition should hold for any ε > 0 and sufficiently large n:

\frac{1}{2}\bigl\Vert (\psi^\rho_{RA})^{\otimes n} - (\mathrm{id}_{R^n} \otimes \Lambda_n)\bigl((\psi^\rho_{RA})^{\otimes n}\bigr) \bigr\Vert_1 \leq \varepsilon,  (35)

where Λ_n is the composite map Λ_n ≡ D_n ∘ N^{⊗n} ∘ E_n. The relation between trace distance and entanglement fidelity [57] implies that

F_e\bigl((\rho_A)^{\otimes n}, \Lambda_n\bigr) \geq 1 - \varepsilon.  (36)

We can now state our first variant of a quantum source-channel separation theorem.

Theorem 9: The following condition is necessary and sufficient for transmitting the output of a memoryless quantum information source, characterized by a density matrix ρ_A, over a quantum channel N ≡ N^{A'→B} with additive coherent information:

H(A)_\rho \leq Q(\mathcal{N}),  (37)

where H(A)_ρ is the entropy of the quantum information source, and Q(N) is the coherent information of the channel N:

Q(\mathcal{N}) := \max_{\varphi_{AA'}} I(A\rangle B)_\tau, \qquad \tau_{AB} := \mathcal{N}^{A'\to B}(\varphi_{AA'}).

Fig. 3. The most general protocol for transmitting a quantum information source over a memoryless quantum channel.

Proof: Sufficiency of (37) follows from Schumacher compression and the direct part of the quantum capacity theorem [38], [52], [23]. Specifically, the sender compresses the source down to a space of dimension ≈ 2^{nH(A)_ρ} with the Schumacher compression protocol. She then encodes this subspace with a quantum error correction code for the channel N. The condition in (37) guarantees that we can apply the direct part of the quantum capacity theorem, and combined with achievability of Schumacher compression, the receiver can recover the quantum information source with asymptotically small error in the limit of many copies of the source and many uses of the quantum channel.

Fix ε > 0 and note that H(A)_ρ = H(R)_ψ since ψ^ρ_RA is a pure state. Then the necessity of (37) follows from a chain of inequalities whose steps can be justified as follows (the subscripts denoting the states are omitted for simplicity). The first equality follows from the assumption that the initial state |ψ^ρ⟩_RA^{⊗n} is a tensor power state. The first inequality follows from (7.34) of Ref. [6], a fundamental relation between the input entropy, the coherent information of a channel, and the entanglement fidelity of any quantum error correction code. Now, the encoding that Alice employs may in general be some CPTP encoding map (and not an isometry). However, Alice can simulate any such CPTP map by first performing an isometry and then a von Neumann measurement on the system not fed into the channel (the environment of the simulated CPTP map). Let M denote the classical system resulting from measuring the environment of the simulated CPTP map. We can write the state after the channel acts as a classical-quantum state of the following form:

\omega_{M R^n B^n} := \sum_m p(m)\, |m\rangle\langle m|_M \otimes \rho_m^{R^n B^n},

where ρ_m denotes the state of the systems R^n B^n conditioned on the measurement outcome m. Then the second inequality follows from the quantum data processing inequality and (36). The second equality follows because the coherent information I(R^n⟩B^n M)_ω equals the average ∑_m p(m) I(R^n⟩B^n)_{ρ_m} whenever the conditioning system is classical [57]. The third inequality follows because the channel's coherent information is never smaller than any individual I(R^n⟩B^n)_{ρ_m} (and thus never smaller than the average). The final inequality follows from the assumption that the channel has additive coherent information (this holds for degradable quantum channels [27] and is suspected to hold for two-Pauli channels [54]). Thus, any protocol that reliably transmits the quantum information source should satisfy an inequality that converges to (37) as n → ∞ and ε → 0.

Remark 10: A similar comment as in Remark 7 holds whenever it is not known that the channel has additive coherent information.
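To see the condition of Theorem 9 in action, one can estimate the coherent information Q(N) of a concrete degradable channel, for which the single-letter formula applies, and compare it with the entropy of a candidate source. The sketch below is our own illustration, not from the paper: the damping parameter, the grid search, and the example source are arbitrary choices, and the scan is restricted to inputs diagonal in the damping basis, which suffices for this particular channel.

```python
import numpy as np

def entropy_bits(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def coherent_info_of_channel(kraus, n_grid=201):
    """Grid estimate of the channel coherent information, scanning pure
    inputs diagonal in the damping basis (adequate for amplitude damping)."""
    best = -np.inf
    for lam in np.linspace(0.0, 1.0, n_grid):
        # Purified input sqrt(1-lam)|00> + sqrt(lam)|11> on R (x) A'
        v = np.zeros(4)
        v[0], v[3] = np.sqrt(1 - lam), np.sqrt(lam)
        psi = np.outer(v, v)
        # Apply the channel to the second (channel-input) qubit
        omega = sum(np.kron(np.eye(2), A) @ psi @ np.kron(np.eye(2), A).conj().T
                    for A in kraus)
        rho_b = np.einsum('ijil->jl', omega.reshape(2, 2, 2, 2))
        # I(R>B) = H(B) - H(RB)
        best = max(best, entropy_bits(rho_b) - entropy_bits(omega))
    return best

gamma = 0.2   # damping parameter; degradable for gamma <= 1/2
kraus_ad = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
            np.array([[0, np.sqrt(gamma)], [0, 0]])]
Q = coherent_info_of_channel(kraus_ad)

h2 = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)
H_source = h2(0.1)   # a qubit source with eigenvalues (0.9, 0.1)
print(f"Q(N) ~ {Q:.4f} qubits per channel use;  H(A) = {H_source:.4f}")
print("Theorem 9 condition H(A) <= Q(N):", H_source <= Q)
```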
C. Entanglement-assisted quantum source-channel separation theorem

Our final source-channel separation theorem applies to the scenario where Alice and Bob have unlimited prior shared entanglement. The statement of this theorem is that the entropy of the quantum information source being less than the entanglement-assisted quantum capacity of the channel [10], [26], [57] is both a necessary and sufficient condition for the faithful transmission of the source over an entanglement-assisted quantum channel. This theorem is the most powerful of any of the above, because the formulas involved are all single-letter, for any memoryless source and channel.

Figure 4 depicts the scenario to which this last theorem applies. The situation is nearly identical to that of the previous section, with the exception that Alice and Bob have unlimited prior shared entanglement. Alice begins by performing some CPTP encoding map E_n ≡ E^{A^n T_A → A'^n} on the systems A^n from the quantum information source and on her share T_A of the entanglement, producing some output systems A'^n which can serve as input to many uses of a quantum channel N^{A'→B}. Alice then transmits the A'^n systems over the channels, leading to some output systems B^n for Bob. Bob then acts on these systems and his share T_B of the entanglement with some decoding map D_n ≡ D^{B^n T_B → Â^n}. If the protocol is any good for transmitting the quantum information source, then the following condition should hold for any ε > 0 and sufficiently large n:

\frac{1}{2}\bigl\Vert (\psi^\rho_{RA})^{\otimes n} - \bigl(\mathrm{id}_{R^n} \otimes (\mathcal{D}_n \circ \mathcal{N}^{\otimes n} \circ \mathcal{E}_n)\bigr)\bigl((\psi^\rho_{RA})^{\otimes n} \otimes \Phi_{T_A T_B}\bigr) \bigr\Vert_1 \leq \varepsilon,  (39)

where Φ_{T_A T_B} is the entangled state that they share before communication begins (it does not necessarily need to be maximally entangled). This leads to our final source-channel separation theorem:

Theorem 11: The following condition is necessary and sufficient for transmitting the output of a memoryless quantum information source, characterized by a density matrix ρ_A, over any entanglement-assisted quantum channel N ≡ N^{A'→B}:

H(A)_\rho \leq \frac{1}{2}\, I(\mathcal{N}),  (40)

where H(A)_ρ is the entropy of the quantum information source, and I(N) is the mutual information of the channel:

I(\mathcal{N}) := \max_{\varphi_{AA'}} I(A;B)_\tau, \qquad \tau_{AB} := \mathcal{N}^{A'\to B}(\varphi_{AA'}).

Proof: Sufficiency of (40) follows from reasoning similar to that in the proof of Theorem 9. We just exploit Schumacher compression and the entanglement-assisted quantum capacity theorem [10], [26], [57].

Fix ε > 0 and note that H(A)_ρ = H(R)_ψ since ψ^ρ_RA is a pure state. Then necessity of (40) follows from a chain of inequalities whose steps can be justified as follows (once again, the subscripts denoting the states are omitted for simplicity). The first inequality follows by applying the same reasoning as for the first inequality in the necessity proof of Theorem 9. The second inequality follows by applying the identity I(R^n; B^n T_B) = H(R^n) + I(R^n⟩B^n T_B) and the fact that 1 − F_e ≤ ε for a protocol satisfying (39). The second equality follows from a useful identity for quantum mutual information. The third equality follows from the assumption that the systems R^n and T_B begin in a product state. The third inequality follows because I(T_B; B^n) ≥ 0. The fourth inequality follows from the reasoning, similar to that used in the proof of Theorem 9, that Alice simulates an isometry and measures the environment (also exploiting the quantum data processing inequality). The next inequality follows because the state on R^n T_B M B^n is a classical-quantum state of the form arising in the maximization defining the entanglement-assisted capacity, where we identify R^n T_B with A and M with X. Thus, the information quantity I(R^n T_B M; B^n) can never be larger than the maximum of this quantity over all such states of that form. The second-to-last equality was proved in Refs. [59], [57].
The final equality follows from the additivity of the channel's quantum mutual information [2], [10], [57]. Thus, any entanglement-assisted protocol that reliably transmits the quantum information source should satisfy the inequality

$$H(A)_\rho \leq \tfrac{1}{2}\, I(\mathcal{N}) + \left(\tfrac{1}{n} + 2\varepsilon \log|R|\right),$$

which converges to (40) as n → ∞ and ε → 0.

What if the condition H(A)_ρ > ½ I(N) holds instead? We can prove a variant of the above source-channel separation theorem that allows the information source to be reconstructed at the receiving end up to some distortion D. We obtain the following theorem:

Theorem 12: The following condition is necessary and sufficient for transmitting the output of a memoryless quantum information source over an entanglement-assisted quantum channel (up to some distortion D):

$$R^q_{eaq}(D) \leq \tfrac{1}{2}\, I(\mathcal{N}), \tag{42}$$

where R^q_{eaq}(D) is defined in (19).

Proof: Sufficiency of (42) follows from the entanglement-assisted rate-distortion protocol from Theorem 3 and the entanglement-assisted quantum capacity theorem [10], [26]. That is, the sender compresses the information source down to a space of size 2^{n R^q_{eaq}(D)} and then uses an entanglement-assisted quantum code to transmit any state in this subspace. The reconstructed state at the receiving end obeys the distortion constraint. Necessity of (42) follows from the fact that n R^q_{eaq}(D) ≤ ½ I(R^n; Â^n), by applying the quantum data processing inequality to get I(R^n; Â^n) ≤ I(R^n; B^n T_B), and finally by applying the last seven steps in the chain of inequalities in (41). A proof of (43) is available in (17) of the proof of Theorem 2.

V. CONCLUSION

We have proved several quantum rate-distortion theorems and quantum source-channel separation theorems. All of our quantum rate-distortion protocols employ the quantum reverse Shannon theorems [10], [1], [24], [8], [12]. This strategy works out well whenever unlimited entanglement is available, but it clearly leads to undesirable regularized formulas in the unassisted setting. Our quantum source-channel separation theorems demonstrate in many cases that a two-stage compression/channel-coding strategy works best for memoryless sources and for quantum channels with additive capacity measures. Again, our most satisfying result is in the entanglement-assisted setting: the entanglement-assisted rate-distortion function being less than the entanglement-assisted quantum capacity is both necessary and sufficient for transmission of a source over a channel up to some distortion.

The most important open question going forward from here is to determine better protocols for quantum rate distortion that do not rely on the reverse Shannon theorems. The differing goals of a reverse Shannon theorem and a rate-distortion protocol are what lead to the complications with regularization in Theorem 5. Another productive avenue could be to explore scenarios where the unassisted quantum source-channel separation theorem does not apply. In the classical case, it is known that certain sources and channels without a memoryless structure can violate the source-channel separation theorem [56], and similar ideas might demonstrate a violation in the quantum case. In the quantum case it could well be that even certain memoryless sources and channels violate source-channel separation, but we would need a better understanding of quantum capacity in the general case in order to determine definitively whether this is so.
Other interesting questions are as follows: Does the entanglement-assisted quantum source-channel separation theorem apply if the sender and receiver are given unlimited access to a quantum feedback channel, given what we already know about quantum feedback [14]? Can anything learned from source-channel separation for classical broadcast or wiretap channels be applied to obtain a more general characterization for quantum channels that are not degradable?

ACKNOWLEDGMENT

The authors thank Jonathan Oppenheim and Andreas Winter for useful discussions, Patrick Hayden for the suggestion to pursue a quantum source-channel separation theorem, and the anonymous referees for helpful suggestions. ND and MHH received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 213681. MMW acknowledges financial support from the MDEIE (Québec) PSR-SIIRI international collaboration grant and thanks the Centre for Mathematical Sciences at the University of Cambridge for hosting him for a visit.

APPENDIX

Lemma 13: For a fixed state ρ, the quantum mutual information is convex in the channel operation:

$$I\Big(\rho,\ \sum_x p_X(x)\,\mathcal{N}_x\Big) \leq \sum_x p_X(x)\, I(\rho, \mathcal{N}_x),$$

where I(ρ, N) denotes the quantum mutual information I(R;B) of the state (id_R ⊗ N)(ψ_ρ^{RA}) for a purification ψ_ρ^{RA} of ρ.
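A quick numerical check of Lemma 13 (an illustration, not from the paper): mix the identity channel with an amplitude-damping channel and verify the convexity inequality for the fixed input ρ = I/2. The channel parameters below are arbitrary choices.

```python
import numpy as np

def entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def mutual_info(kraus):
    """I(rho, N) = I(R;B) of (id x N)(psi_rho) for the fixed input
    rho = I/2, purified by the maximally entangled state."""
    phi = np.zeros((4, 1), dtype=complex); phi[0, 0] = phi[3, 0] = 1/np.sqrt(2)
    rho_RA = phi @ phi.conj().T
    rho_RB = sum(np.kron(np.eye(2), K) @ rho_RA @ np.kron(np.eye(2), K).conj().T
                 for K in kraus)
    r = rho_RB.reshape(2, 2, 2, 2)
    rho_R = np.einsum('ijkj->ik', r)     # trace out B
    rho_B = np.einsum('ijil->jl', r)     # trace out R
    return entropy(rho_R) + entropy(rho_B) - entropy(rho_RB)

gamma, lam = 0.4, 0.3
identity = [np.eye(2, dtype=complex)]
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
            np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]
# Kraus set realizing the convex mixture lam * id + (1 - lam) * amp_damp
mixture = ([np.sqrt(lam) * K for K in identity] +
           [np.sqrt(1 - lam) * K for K in amp_damp])

lhs = mutual_info(mixture)
rhs = lam * mutual_info(identity) + (1 - lam) * mutual_info(amp_damp)
print(f"I(rho, mixture) = {lhs:.4f} <= {rhs:.4f}: {lhs <= rhs + 1e-9}")
```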
2012-08-19T13:42:06.000Z
2011-08-24T00:00:00.000
{ "year": 2011, "sha1": "51d6d1f656324f9362afc4f5dfce5508aff08832", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1108.4940", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c6c2b1c395e6cef0a9df7c1a66e74a6b93747931", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science", "Mathematics" ] }
237325252
pes2o/s2orc
v3-fos-license
Plausibility of Early Life in a Relatively Wide Temperature Range: Clues from Simulated Metabolic Network Expansion

The debate on the temperature of the environment where life originated is still inconclusive. Metabolic reactions constitute the basis of life and may be a window onto the world where early life was born. Temperature is an important parameter of reaction thermodynamics, which determines whether metabolic reactions can proceed. In this study, the scale of the prebiotic metabolic network at different temperatures was examined by a thermodynamically constrained network expansion simulation. It was found that temperature has a limited influence on the scale of the simulated metabolic networks, implying that early life may have occurred in a relatively wide temperature range.

Introduction

The temperature of the environment where life originated has elicited a long-term debate. Previous genome sequence-based studies on this issue reached inconsistent results (detailed in the Discussion) [1][2][3][4][5][6][7]. New perspectives are needed to address this issue. The metabolism-first theory suggests that the origin of metabolism preceded the appearance of the genetic system and of organic catalysts such as enzymes [8][9][10][11]. The feasibility (thermodynamics) and rates (kinetics) of the reactions that make up metabolic networks are associated with temperature. Therefore, studying the effects of temperature on metabolic reactions and networks may shed more light on the environmental conditions for the origin of life.

The network expansion algorithm has been used to trace the evolutionary history hidden in modern metabolic networks [12][13][14][15]. Using this method, Goldford et al. extracted a phosphorus-independent subset from the full KEGG metabolism, which represented a biosphere-level primitive metabolic network [13]. This network exhibited ancient features, such as enzymes and protein folds belonging to the putative last universal common ancestor (LUCA) proteomes, and was thus considered a "metabolic fossil". They also showed that thermodynamics is an important limiting factor for the scale of the metabolic network. Interestingly, Goldford et al. found that under standard conditions, a thioester, pantetheine, instead of phosphates (pyrophosphate or acetyl-phosphate), can significantly alleviate the thermodynamic bottleneck of the expansion of the metabolic network, even though the latter are a more common energy source in modern metabolism. Goldford et al. suggested that the phosphorus-independent network provides a solution to the "phosphorus problem", which was caused by the low solubility and reactivity of Earth's main phosphate source, apatite [16]. This network also supports the previously proposed thioester world theory, which posits that in the very early stage of life's origin, thioesters played multiple important roles in prebiotic metabolism [17]. Recently, based on their phosphorus-independent network, Goldford et al. investigated the impact of several environmental factors on metabolism and found that temperature has a relatively limited influence on the scale of the metabolic network [14]. Nevertheless, phosphorus is an essential element of life, and several possible prebiotic phosphorus sources have been proposed [18]. Phosphite, which is more soluble and more reactive than apatite [16], can be generated by volcanic activity [19] or from extraterrestrial schreibersite [20].
Another effective phosphorylating agent, polyphosphate, can also be generated in volcanic regions [21]. Phosphite and polyphosphate can be directly used in the synthesis of biomolecules [20,21], or be first converted to orthophosphate in plausible prebiotic environments [22]. Based on these findings, in a recent study [15], we constructed a phosphate-dependent network using Goldford et al.'s method. The obtained phosphate-dependent network could be as ancient as its phosphorus-independent counterpart and could synthesize ribose, which implies a connection between the metabolism world and the RNA world. Moreover, several phosphorylated intermediates such as glucose 6-phosphate can play the same role as the thioester, significantly alleviating the thermodynamic bottleneck of network expansion. It is thus intriguing to explore how temperature affects the scale of this simulated phosphate-dependent network.

Data Sources

The metabolic reactions and compounds were downloaded from the KEGG reaction database (release 84.0) [23]. The Gibbs free energies of the reactions were calculated with eQuilibrator [24,25].

Thermodynamically Constrained Network Expansion Simulation

The thermodynamically constrained network expansion simulation method has been described in detail in previous studies [13,15]. In brief, firstly, all KEGG metabolic reactions were downloaded, and vague and unbalanced reactions were filtered out. The remaining reactions and their compounds constituted the background metabolism pool, which included 7376 reactions and 6460 compounds (Table S1). Secondly, the network expansion was started with a seed set of compounds that are considered to have existed on the primitive Earth. The seeds used in the phosphate-dependent network expansion were the same as in our previous study [15]: water, dinitrogen, carbon dioxide, hydrogen sulfide, ammonia, acetate, formate, and glucose 6-phosphate. The first seven compounds are considered to have been abundant on the prebiotic Earth and should have participated in primitive biochemical reactions [13][14][15]. Glucose 6-phosphate is a phosphorylated intermediate of glycolysis that was speculated to be prebiotically synthesized [26,27]. Our previous study showed that glucose 6-phosphate can significantly alleviate the thermodynamic bottleneck; without this compound, the network obtained by thermodynamically constrained expansion contained only dozens of metabolites [15]. In the thioester-dependent network expansion, glucose 6-phosphate was replaced by pantetheine. Note that Goldford et al. proposed that pantetheine can function as CoA thioesters do [13]; therefore, CoA and acetyl-CoA were also added to the seed set. Thirdly, the products of thermodynamically reachable reactions from the background metabolism pool, enabled by available substrates, were iteratively added to the compound set until no new reactions or compounds could be produced. Previously, the thermodynamic accessibility criterion for a reaction at 25 °C was a change in Gibbs free energy below 30 kJ/mol [13,15]. However, this standard is not appropriate at other temperatures. In this work, we calculated the lowest reaction free energy at the boundary of the physiological range of metabolite concentrations, referring to Goldford et al.'s recent study [14].
Briefly, the free energy of every reaction at a given temperature (ΔG′) was calculated using the following equation:

$$\Delta G' = \Delta G'^{\circ} + RT \sum_i s_i \ln a_i ,$$

where ΔG′° is the Gibbs free energy under standard molar conditions, estimated by the eQuilibrator program using a group contribution method [24,25]. ΔG′° is affected by pH, ionic strength, and, for some reactions such as ATP hydrolysis, by the concentration of free Mg2+ (pMg) [25]. These conditions were set as follows: pH = 7.0, ionic strength = 0.1 M, and pMg = 0.0, the same as in our previous study [15]. R is the ideal gas constant, T is the temperature, a_i is the activity of metabolite i (represented by the metabolite's concentration), and s_i is the stoichiometric coefficient of metabolite i in the reaction (negative for substrates, positive for products). For all reactions, the reactant and product concentrations were set to 0.1 M and 10^-6 M, respectively; these two values are considered the upper and lower bounds of metabolite concentrations under physiological conditions. The water activity was set to 1, which is an essential assumption of eQuilibrator [24,25]. Reactions with positive free energy were removed from the background metabolism pool. About 40% of the biosphere-level metabolic network reactions had no accurate free energy estimation (3122/7376); background metabolism pools including and excluding these reactions were both analyzed in this study. The free energy data of the reactions can be found in Table S2. It should be noted that the change in Gibbs free energy (ΔG) contains two components: the enthalpy change (ΔH) and the entropy change (ΔS). However, ΔS data are not available for most reactions, so the entropy contribution was not taken into account when using eQuilibrator to calculate ΔG at different temperatures [24,25]. This simplification may cause some reactions to be incorrectly classified as thermodynamically favored.

Results

All chemical reactions are constrained by thermodynamic rules: a reaction with a positive change in Gibbs free energy (ΔG > 0) cannot proceed spontaneously. In modern organisms, these uphill reactions are usually coupled with exergonic reactions, such as ATP hydrolysis, to become energetically favorable. However, such stably coupled reaction systems may not have been available on the primitive Earth [28]. Therefore, thermodynamic constraints could be an important limiting factor for primitive metabolism [13][14][15].

Existing organisms have been found to survive from −25 °C to over 120 °C: Planococcus halocryophilus Or1, a bacterium isolated from the salty water veins of high Arctic permafrost, can remain metabolically active at −25 °C [29], and the hyperthermophilic archaeon Methanopyrus kandleri strain 116 can proliferate at 122 °C [30]. Therefore, we simulated the expansion of the phosphate-dependent and thioester-dependent networks at temperatures from −25 °C to 150 °C. Using Goldford et al.'s method [14], we calculated the ΔG′ of the metabolic reactions at −25 °C, 0 °C, 25 °C, 50 °C, 75 °C, 100 °C, 125 °C, and 150 °C. As shown in Figure 1, from −25 °C to 150 °C, the scales of both the phosphate- and thioester-dependent networks exhibit only slight increases, regardless of whether the reactions without accurate free energy estimation are included. When including the reactions without accurate free energy estimation, the scale of the phosphate-dependent network obtained at −25 °C is close to that of the thermodynamically constrained network constructed in our previous study (360 vs. 338 metabolites) [15].
At this temperature, this network is mainly composed of reactions that participate in glycolysis, the tricarboxylic acid (TCA) cycle, and carbon fixation, which supply the basic carbohydrates and energy to living systems. These reactions can also produce eight proteinogenic amino acids (Figure 2A, Table S3). The reactions that are only thermodynamically feasible at temperatures higher than −25 °C provide 17 new metabolites (Table S3). Seven of these metabolites are involved in amino acid metabolism, but none of them are proteinogenic amino acids; the other metabolites generated at higher temperatures are scattered across different pathways. When excluding the reactions without accurate free energy estimation, the network keeps the functions mentioned above, and the temperature increase has no significant influence on these functions (Figure S1A, Table S3). The thioester-dependent networks can produce two more proteinogenic amino acids but lack the glycolysis-related carbohydrates; these functions are also not greatly influenced by the change in temperature (Figure 2B, Figure S1B, and Table S3).

Nucleotides are the basic components of RNA and constitute the cofactors of many proteins, so they are of great significance in the origin of life [31]. As there is no phosphorus in the thioester-dependent networks, it is obviously impossible for them to synthesize nucleotides (Figure 2B and Figure S1B); in fact, they cannot even synthesize ribose (Table S3). The phosphate-dependent networks can generate ribose 5-phosphate (KEGG ID C00117) at all tested temperatures, which is a precursor required for the synthesis of nucleotides (Table S3). Moreover, ribose 5-phosphate can be further phosphorylated to 5-phosphoribosyl diphosphate (PRPP) through KEGG reaction R01049, which is necessary for both purine and pyrimidine synthesis (Figure S2). Under physiological conditions, ATP provides the phosphate group and the energy for this reaction. ATP is located at the end of one branch of the KEGG nucleotide synthesis pathway.
Therefore, ATP cannot be used in a reaction that is located at the upstream end of the pathway, which prevented the phosphate-dependent networks from expanding along this path. However, there may be other ways to achieve the phosphorylation of ribose 5-phosphate. In fact, glucose 6-phosphate hydrolysis can provide the phosphate group and energy. We constructed such a reaction: ribose 5-phosphate + 2 glucose 6-phosphate => PRPP + 2 glucose. Under the substance concentration conditions used in the network expansion simulation, this reaction is thermodynamically favorable even at −25 °C (with a free energy of −33.56 kJ/mol), implying that glucose 6-phosphate may play a role like that of ATP and enable the phosphate-dependent network to generate the nucleotides.

In modern life, ATP plays multiple roles in metabolism. In the background metabolism pool used in this study, ATP is the reactant or product of 471 reactions. These reactions cannot be reached by the network expansion because ATP is not available in the simulation. In primitive metabolism, other phosphorus-containing compounds with simpler structures may have been more prevalent than ATP and played similar roles in these reactions [32]. As shown in the fictional reaction to synthesize PRPP, their products may be obtained through reactions that do not rely on ATP. However, the possibility that ATP existed in the prebiotic world cannot be completely ruled out, because the abiotic synthesis of ATP from simple inorganic substances may have been achievable in the prebiotic environment [33][34][35]. Therefore, we replaced the glucose 6-phosphate in the seed set with ATP and performed network expansions. The obtained metabolic networks have about 200 more metabolites than the phosphate-dependent networks at the same temperatures. For these ATP-dependent networks, from −25 °C to 150 °C, only 24 metabolites were added (including the reactions without accurate free energy estimation; Table S3).
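To make the thermodynamic screen concrete, and as a back-of-the-envelope check of the constructed PRPP reaction above, here is a minimal Python sketch (an illustration under the stated assumptions, not the authors' code). delta_g_prime implements the ΔG′ equation from the Methods with reactants at 0.1 M, products at 10^-6 M, and water activity 1; expand implements the iterative, thermodynamically constrained expansion. The ΔG′° for the PRPP reaction is back-calculated from the reported −33.56 kJ/mol at −25 °C, not a value given in the paper.

```python
import math

R_GAS = 8.314e-3  # ideal gas constant, kJ/(mol*K)

def delta_g_prime(dg0, stoich, T, c_reactant=0.1, c_product=1e-6):
    """DG' = DG'0 + R*T*sum_i s_i*ln(a_i); s_i < 0 for substrates,
    s_i > 0 for products; water activity is fixed at 1."""
    term = 0.0
    for met, s in stoich.items():
        if met == "H2O":
            continue  # activity 1, so ln(a) = 0
        a = c_product if s > 0 else c_reactant
        term += s * math.log(a)
    return dg0 + R_GAS * T * term

def expand(seeds, reactions, T):
    """Iteratively fire reactions whose substrates are all available and
    whose DG'(T) is non-positive, until no new compound can be produced.
    `reactions` maps an id to (DG'0, {metabolite: stoichiometric coeff})."""
    scope, used, changed = set(seeds), set(), True
    while changed:
        changed = False
        for rid, (dg0, stoich) in reactions.items():
            if rid in used:
                continue
            substrates = {m for m, s in stoich.items() if s < 0}
            if substrates <= scope and delta_g_prime(dg0, stoich, T) <= 0:
                scope |= {m for m, s in stoich.items() if s > 0}
                used.add(rid)
                changed = True
    return scope, used

# Check of the constructed reaction at -25 C (248.15 K):
#   ribose 5-phosphate + 2 glucose 6-phosphate => PRPP + 2 glucose
stoich = {"R5P": -1, "G6P": -2, "PRPP": +1, "glucose": +2}
rt_term = R_GAS * 248.15 * sum(
    s * math.log(0.1 if s < 0 else 1e-6) for s in stoich.values())
print(f"concentration term: {rt_term:.1f} kJ/mol")   # about -71.2 kJ/mol
dg0_implied = -33.56 - rt_term                       # about +37.7 kJ/mol
print(f"DG' at -25 C: {delta_g_prime(dg0_implied, stoich, 248.15):.2f} kJ/mol")
```

The concentration term alone contributes roughly −71 kJ/mol at −25 °C, which is what pulls this otherwise uphill phosphoryl transfer (implied ΔG′° of about +38 kJ/mol) down to the reported −33.56 kJ/mol.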
These findings showed that, at least in terms of thermodynamics, the scale and main functions of the simulated metabolic networks are only slightly affected by temperature. The free energy characteristics (positive or negative) of most reactions do not change within the examined temperature range (Table S2), so the feasibility of these reactions is not greatly influenced by temperature. These results suggest that, whether dependent on phosphate or on thioester, metabolism may have originated in a relatively wide temperature range.

Discussion

There are many different opinions on the temperature at the origin of life. Darwin speculated that the first living organisms evolved in "warm little ponds" [36]. The Miller-Urey experiment showed that a warm, lightning-filled atmosphere that may have existed on the primitive Earth can produce important biomolecules [37]. Hydrothermal environments have also been "hot" in the field of the origin of life. Terrestrial hydrothermal fields and submarine hydrothermal vents can provide materials, energy, reducing power, and pH gradients, which may facilitate the synthesis of basic biomolecules [38][39][40]. Moreover, terrestrial hydrothermal fields can evaporate to dryness and be rewetted by rain, forming a natural dry-wet cycle [41]. Dry-wet cycles are very important for the synthesis of nucleotides under metal catalysis [42]. These effects are also significant for the polymerization of biological macromolecules such as peptides [43,44] and polynucleotides [41,45]. A criticism of the hydrothermal origin of life is that although the structures of most small-molecule metabolites are stable, high temperature can accelerate their reactions with other substances, such as water, and thus cause thermal instability [46]. Miller and colleagues argued that important metabolites such as ribose and ATP are thermally unstable, and thus life was not likely to originate at high temperatures [47,48]. However, the meaning of "stable" is relative. Some metabolites in metabolic pools have very short turnover times; metabolites that can persist longer than their turnover time in organisms should have a chance to participate in metabolic reactions as reactants. Based on this principle, Bains et al. predicted the degradation rates of 63 metabolites at different temperatures and compared them with the intracellular half-lives of these metabolites [46]. As a result, most of the metabolites were found to be unable to persist longer than their intracellular half-life at temperatures above 150-180 °C. Therefore, Bains et al. suggested that this temperature range is the upper limit for biochemistry.

Some basic materials for life can also be generated at cold temperatures. HCN polymerization may be an important step in biomolecule synthesis [49]. However, this process is not likely to occur in warm environments due to the fast hydrolysis of HCN [50]. It has been found that the eutectic freezing of HCN and water at −21 °C can concentrate the former and promote its polymerization [50]. Moreover, eutectic freezing of NH4CN solutions can generate nucleic acid bases and amino acids [51,52].
In addition to the conditions required for the synthesis of biomolecules, the living environments of modern organisms can also be used as a clue to the temperature at the origin of life. Previously, thermophilic bacteria and archaea were considered to be located near the root of the phylogenetic tree, suggesting that life originated at hot temperatures [1][2][3]. However, in a newer rRNA-based phylogenetic tree of bacteria, mesophilic species rather than thermophiles are the branches closest to the root, indicating that the thermophiles are latecomers in evolution [4]. Several studies inferred the temperature of the habitat of the LUCA based on deduced rRNA GC contents or protein amino acid compositions, but no agreement has been reached [5][6][7]. The inconsistent results from gene sequence-based studies can be attributed to the uncertain trajectory of early life evolution, which is difficult to properly characterize or experimentally test [53]. Compared with the genetic system, metabolic reactions may have an earlier origin and are more widely shared among different species. Therefore, metabolism could be a window for detecting the properties of the earliest life. In this study, the influence of temperature on metabolism was analyzed in terms of thermodynamics. We found that whether glucose 6-phosphate or thioester was used to alleviate the thermodynamic bottleneck of network expansion, there is no significant difference in the function and scale of the networks generated at different temperatures. These results imply that the origin of metabolism could have occurred in a relatively wide temperature range.

Although thermodynamic data at different temperatures are very scarce for most biochemical reactions, such information may exist for the few reactions that are crucial for network expansion. To identify these network scale-limiting reactions, the network expansion was re-performed several times, with the reactions from the background metabolism pool removed one by one. We found that the deletion of most reactions did not cause a significant reduction in the generated network, except for a few (Table S4). When excluding the reactions without accurate free energy estimation, reactions R00874, R01519, R01538, and R08570 are necessary for maintaining the scale of the phosphate-dependent network; without any one of these reactions, the network expansion stalled at no more than eighty-nine metabolites. These four reactions are related to glucose metabolism. In R00874, glucose reacts with fructose to form gluconolactone and glucitol. The other three reactions are components of the pentose phosphate pathway, which starts with glucose and generates multiple important metabolites. For the thioester-dependent networks, reaction R00212 is necessary for network expansion; without this reaction, the produced networks contained only dozens of metabolites. In this reaction, acetyl-CoA reacts with formate and generates CoA and pyruvate. These results showed that certain reactions can indeed deeply influence the expansion of metabolic networks. We searched for more information on the thermodynamic feasibility of these reactions at different temperatures, but none was found. Further study of the thermodynamics of these key reactions may provide a better understanding of the temperature of the environment where life originated.

Thermodynamics is a very fundamental property of chemical reactions. Therefore, this study may also be meaningful for the search for exoplanet life.
Based on planetary physical parameters such as temperature, Lingam and Loeb formulated likelihood functions estimating the possibility of the existence of life on a given planet [54]. In the calculation, they referred to the temperature range of living creatures on Earth (262 to 395 K); temperatures beyond this range lead to a rapid decline in the estimated likelihood of life. The temperature range tested in this study is a bit wider (−25 to 150 °C, i.e., 248.15 to 423.15 K), and the present analysis suggests that metabolic networks could have originated anywhere in this range. In our simulation, the expansion of metabolic networks is not precluded at low temperatures, which increases the possibility of life on cold celestial bodies. Primitive metabolism might occur in the saltwater lakes of Mars, the liquid methane of Titan, or a liquid ammonia ocean elsewhere.

In addition to the factors discussed above, the interplay between temperature and other factors, such as the phase changes of water (e.g., the dry-wet cycle and eutectic freezing discussed above) and redox potential, can promote metabolic reactions. In Goldford et al.'s study, redox potential had a decisive influence on the scale of the metabolic networks [14]. Temperature gradients can cause electrochemical potentials, which can be used as a thermodynamic driving force. For example, the high temperature, pressure, and pH of deep-sea alkaline hydrothermal vents elevate the redox potential of H2 oxidation, which can be used as an electrochemical driving force for the abiotic reduction of CO2 [55,56]. Intriguingly, temperature may also shape the structure of peptides [57], which could serve as catalysts in prebiotic metabolism. To date, most studies on the origin of metabolism have not considered the interactions between temperature and other environmental factors; Goldford et al.'s study investigated several environmental factors, but treated them independently of one another. The interplay between temperature and other factors, and how it affects metabolism, deserves more in-depth study. Moreover, the feasibility of chemical reactions also depends on their kinetics: thermodynamically favored reactions may have high energy barriers between the reactants and products, which make them unfeasible in kinetic terms. In future studies, we will explore the influence of temperature on the kinetics of key metabolic reactions by calculating their activation energies.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/life11080738/s1. Figure S1: Metabolic networks at different temperatures. Figure S2: The link between phosphate-dependent networks and nucleotide metabolic pathways. Table S1: Background metabolism pool. Table S2: Gibbs free energy of the reactions at different temperatures. Table S3: Metabolites and reactions at different temperatures.
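The single-reaction knockout screen described in the Discussion can be sketched by reusing the delta_g_prime and expand helpers from the earlier sketch (again an illustration; the 89-metabolite cutoff is taken from the figure quoted in the text):

```python
def find_scale_limiting_reactions(seeds, reactions, T, cutoff=89):
    """Re-run the expansion with each reaction deleted in turn; flag the
    reactions whose removal collapses the network to `cutoff` metabolites
    or fewer (cf. R00874, R01519, R01538, and R08570 in the text).
    Reuses expand() from the network-expansion sketch above."""
    limiting = []
    for rid in list(reactions):
        pool = {r: v for r, v in reactions.items() if r != rid}
        scope, _ = expand(seeds, pool, T)
        if len(scope) <= cutoff:
            limiting.append((rid, len(scope)))
    return limiting
```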
2021-08-28T06:17:17.947Z
2021-07-24T00:00:00.000
{ "year": 2021, "sha1": "681a13a77ae76c9ed33849eedead283675d53481", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-1729/11/8/738/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3782c1d1923356efa2d36d49cfea8bcb2ce5fdf0", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
248285787
pes2o/s2orc
v3-fos-license
Two Novel Lipophilic Antioxidants Derivatized from Curcumin

Tert-butyl curcumin (TBC), demethylated tert-butylated curcumin (1E,6E-1,7-bis(3-tert-butyl-4,5-dihydroxyphenyl)hepta-1,6-diene-3,5-dione, DMTC), demethylated curcumin (DMC), and curcumin (Cur) were synthesized from the starting compound 2-methoxy-4-methylphenol. TBC and DMTC are two novel lipophilic compounds, whereas Cur and DMC are polar and hydrophilic. The antioxidant activities of Cur, TBC, DMC, and DMTC were evaluated using the 2,2-diphenyl-1-(2,4,6-trinitrophenyl)hydrazinyl (DPPH), deep-frying, and Rancimat methods, with tert-butylhydroquinone (TBHQ) and butylated hydroxytoluene (BHT) as comparison compounds. Both the Rancimat and deep-frying tests demonstrated that DMTC was the strongest antioxidant, and TBC also had stronger antioxidant activity than Cur. In the DPPH assay, DMC showed the highest scavenging activity, followed by DMTC, TBHQ, Cur, and TBC. DMTC and TBC can potentially be used as strong antioxidants in the food industry, especially for frying, baking, and other high-temperature food processing. DMTC is the strongest antioxidant in oil to our knowledge.

Introduction

Oxidation of lipids involves the generation of free radicals and induces food deterioration [1]. Usually, three common methods are used to prevent lipid oxidation: air isolation, temperature lowering, and the addition of antioxidants during processing, transport, and storage [2]. However, the addition of antioxidants is the most convenient, economical, and operable method in the oil and food industry [3]. Free radicals, peroxides, and decomposition products produced by autoxidation are related to a variety of human diseases, such as cancers, cataracts, and cardiovascular and cerebrovascular diseases [4][5][6]. Antioxidants are a class of substances that can effectively retard the deterioration caused by autoxidation of fats, oils, and fatty foods at minute concentrations (≤0.02%) [7,8]. Although commercial synthetic antioxidants have been shown to have negligible toxicity at prescribed doses, natural antioxidants are widely considered safer and non-toxic [9,10]. The application of natural antioxidants is limited because their mostly hydrophilic polyphenol structures make them poorly soluble in lipids [11]. Over the years, a few researchers have modified the structures of natural antioxidants to improve their solubility in lipids [12].

Cur, a polyphenol, is isolated from the rhizome of turmeric (Curcuma longa) [13,14] and has numerous biological activities, such as anti-tumor, anti-oxidation, anti-inflammation, and anti-HIV activities [15][16][17]. However, its application in the food industry is limited because of its poor fat solubility [18,19]. Therefore, structural modification of Cur as a lead compound has become a hot research topic, aiming to overcome these structural disadvantages and expand its scope of application. In recent years, some researchers have modified the structures of natural antioxidants to improve their solubility in lipids. Shi et al. [20] added a tert-butyl group to the ortho-position of the phenolic hydroxyl group of caffeic acid, which greatly improved its antioxidant activity in lipids. Olajide et al. [12] synthesized a lipophilic derivative of hydroxytyrosol with better steric synergy and a stable structure, enabling it to meet the industrial requirements of food processing and of bioactive ingredients under high-temperature conditions.
Some studies [21] have found that the hydrogen atoms on an α-CH3 adjacent to a phenolic hydroxyl group can act as functional hydrogen, because they can be transferred intramolecularly to the phenolic hydroxyl group, thereby improving the antioxidant activity of the compound. Our lab has analyzed the soybean oil/water partition coefficients of TBHQ, caffeic acid, methyl caffeate, and butylated caffeic acid, and found that the introduction of a tert-butyl group at the ortho-position of the phenolic hydroxyl group can indeed improve lipid solubility [22]. In order to improve the lipophilicity and antioxidant activity of Cur, two bulky apolar tert-butyl groups were substituted at the ortho-positions of the phenolic hydroxyl groups (giving TBC), and the two methyl groups were further removed from TBC to give DMTC. Their antioxidant activities were evaluated by the DPPH, deep-frying, and Rancimat methods. There are few research reports on Cur as an antioxidant among food additives. The purpose of this study is to find an antioxidant with better fat solubility and strong antioxidant capacity in high-temperature fried foods.

Lard used in this experiment was rendered in the laboratory and stored at −18 °C. Commercial soybean oil and potatoes were purchased from Wilmar International Limited (Shanghai, China). Soybean oil was purified using column chromatography (PE as eluent) to remove antioxidants and other polar components, especially tocopherols and other phenolic compounds in the oil.

General Synthesis of Compounds

2-Methoxy-4-methyl-6-tert-butylphenol: 2-methoxy-4-methylphenol (13.8 g; 0.1 mol) and 30 mL phosphoric acid (86%) were dissolved in 100 mL tert-butyl alcohol under stirring at 90 °C for 10 h; deionized water was then added to quench the reaction, and the mixed solution was extracted with EtOAc, followed by washing with water (50 mL × 3). The organic phase was collected, dried over anhydrous Na2SO4, and evaporated to obtain an oily mixture, which was purified using column chromatography (PE/EtOAc, 20:1 v/v) to give the pure compound (75% yield) as a pale yellowish oily liquid (1H NMR: Figure S1).

Tert-butyl vanillin: 2-methoxy-4-methyl-6-tert-butylphenol (4 g; 0.02 mol) and copper acetate (2 g; 0.01 mol) were dissolved in 80 mL ethylene glycol and heated to 100 °C for 36 h; the reacted solution was then extracted with EtOAc and washed three times with water. The organic phase was collected, dried over anhydrous Na2SO4, and evaporated in vacuum. The compound (70% yield) was obtained as light yellowish crystals after purification using flash chromatography (PE/EtOAc, 10:1 v/v) (1H NMR: Figure S2).

3-Tert-butyl-4,5-dihydroxybenzaldehyde: 2-methoxy-4-methyl-6-tert-butylphenol (1.94 g, 0.01 mol) was dissolved in chloroform (60 mL), followed by the addition of the reagent (1.86 g, 0.014 mol) under nitrogen at 0 °C. After stirring for 5 min, pyridine solution (3.48 g) was slowly added. The mixture was refluxed at 80 °C for 24 h, and 10% HCl (60 mL × 3) was added at room temperature. The organic phase was collected, dried over anhydrous Na2SO4, and concentrated on a rotary evaporator, and the resulting product mixture was purified using column chromatography (PE/EtOAc/methanol, 4:1:0.02) to afford the compound (50% yield) as white acicular crystals.

The compounds Cur, TBC, DMC, and DMTC were synthesized by condensation of vanillin, tert-butyl vanillin, demethylvanillin, and 3-tert-butyl-4,5-dihydroxybenzaldehyde, respectively, with acetylacetone, based on the available methods [23] with some improvement. The condensation reaction was performed as expressed in Scheme 1: the benzaldehyde (0.04 mol) was dissolved in anhydrous EtOAc (20 mL), heated to complete dissolution, and cooled to 30 °C, and then tributyl borate (0.08 mol), boric anhydride (0.15 mol), and acetylacetone (0.02 mol) were added to the mixture. After stirring for 5 min, 1 mL portions of n-butylamine (4 mL in total) were slowly added to the mixed solution every 10 min. The mixture was then stirred for 4 h and left to stand overnight at 20 °C in the dark.
In total, 30 mL of 0.4 N hydrochloric acid solution (60 °C) was added and stirred for 60 min. The mixed solution was extracted with EtOAc and then washed three times with water. The organic phase was collected, dried over anhydrous Na2SO4, and evaporated to obtain the reaction mixture, which was separated using column chromatography, and the individual crude products were recrystallized from methanol at 4 °C.

DPPH Test

The DPPH radical scavenging activities of BHT, TBHQ, Cur, TBC, DMC, and DMTC were evaluated according to a previous method reported by Shi et al. [20] with some modification. The concentrations of the compounds in the reaction mixtures in ethanol were 1, 2.5, 5, 10, 20, 40, 50, 80, 100, 200, and 500 mg/L. A total of 0.5 mL of each concentration of compound solution was mixed with 2.5 mL of an ethanolic solution of DPPH (0.1 × 10^-3 mol/L). The mixtures were shaken vigorously and stored in a dark chamber to react for 30 min, and then the decreased absorbance (Ai) of DPPH was read at 517 nm on a UV-2450 spectrophotometer (Shimadzu Corp., Kyoto, Japan). The radical scavenging activities, expressed as EC50, are the effective concentrations of the compounds required to obtain 50% antioxidant capacity. The DPPH free radical scavenging activities of the compounds were calculated as below:

Scavenging activity (%) = [1 − (Ai − A0)/Aj] × 100,

where Ai represents the absorbance of the mixture of 0.5 mL of compound solution at a given concentration and 2.5 mL of DPPH free radical ethanol solution; Aj represents the absorbance of the mixture of 0.5 mL of ethanol and 2.5 mL of DPPH free radical ethanol solution; and A0 represents the absorbance of 0.5 mL of the compound solution at the given concentration mixed with 2.5 mL of ethanol.
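A small Python helper (an illustration with hypothetical absorbance readings, not the authors' analysis script) shows how the scavenging percentages and the EC50 follow from the formula above, with the EC50 obtained by linear interpolation of the dose-response points:

```python
import numpy as np

def scavenging_pct(Ai, Aj, A0):
    """Scavenging activity (%) = [1 - (Ai - A0)/Aj] * 100 (formula above)."""
    return (1.0 - (Ai - A0) / Aj) * 100.0

def ec50(concs_mg_per_L, activities_pct):
    """Concentration giving 50% activity, by linear interpolation."""
    c = np.asarray(concs_mg_per_L, dtype=float)
    a = np.asarray(activities_pct, dtype=float)
    order = np.argsort(a)                 # np.interp needs increasing x
    return float(np.interp(50.0, a[order], c[order]))

# hypothetical absorbance readings for one compound across the dose series
concs = [1, 2.5, 5, 10, 20, 40]
acts = [scavenging_pct(Ai, Aj=0.90, A0=0.02)
        for Ai in [0.88, 0.83, 0.71, 0.50, 0.30, 0.12]]
print(f"EC50 ~ {ec50(concs, acts):.1f} mg/L")
```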
Rancimat Test

The antioxidant activities of BHT, TBHQ, Cur, TBC, DMC, and DMTC in lard were examined based on the method previously described by Shi et al. [20] on a Rancimat 743 apparatus (Metrohm, Herisau, Switzerland). Three-gram lard samples containing 0.01% or 0.02% of the compounds mentioned above were added into the Rancimat sample tubes. The Rancimat accelerated oxidation conditions were: air flow rate fixed at 20 L/h and temperatures fixed at 100, 110, 120, 130, and 140 °C. The results of the Rancimat tests are expressed as the induction period (IP) and the protection factor (Pf), which was determined based on the method reported by Olajide et al. [12]:

Pf = IPsample/IPcontrol,

where IPsample and IPcontrol represent the induction period of oxidation in lard with and without added antioxidant, respectively. Pf < 1 means that the compound has pro-oxidant activity, and Pf = 1 means that the compound has neither antioxidant nor pro-oxidant activity [24]. 1 < Pf < 3 indicates that the compound has antioxidant activity, and Pf > 3 indicates strong antioxidant activity. For Pf > 6, the compound is defined here as having very strong antioxidant activity.

Deep Frying Test

Each compound (0.02%) was added to 800 g of purified soybean oil, giving six experimental samples (BHT, TBHQ, Cur, TBC, DMC, and DMTC) and a control sample. All seven samples were heated to 180 ± 5 °C, and then 30 ± 3 g of potatoes was fried for 10 min every hour (10 h/day for 3 days) under atmospheric pressure. No fresh oil was added during the experiment. Oil samples (30 g) were taken every 3 h and stored in a −18 °C refrigerator. Finally, the conjugated dienes (CD) and acid values (AV) of the oil samples were determined following the IUPAC methods [25].

CD: An oil sample (1 mg) was dissolved in n-hexane (100 mL), and the absorbance was measured at 233 nm using a V-1600PC UV-Vis spectrophotometer. The CD was calculated from the absorbance value as below:

CD (%) = A/(C × P),

where A represents the absorbance value, C represents the concentration of the oil sample (g/100 mL), and P represents the path length of the quartz cell used for the measurement (1 cm).

AV: An oil sample (10 g), a diethyl ether-isopropanol mixture (v/v 1:1; 80 mL), and 3 drops of phenolphthalein indicator (1 g phenolphthalein dissolved in 100 mL of 95% ethanol) were added to a conical flask and shaken thoroughly to dissolve. The sample solution was then manually titrated with 0.01 mol/L standard KOH aqueous solution until it first turned slightly red; when there was no obvious fading within 15 s, the end point of the titration was reached. The titration was stopped immediately, and the volume of standard titration solution consumed was recorded. The control was treated as above but without the oil sample. The AV was calculated as below:

AV (mg KOH/g) = (V − V0) × c × 56.1/m,

where V represents the volume of standard titration solution consumed by the sample (mL); V0 represents the volume consumed by the corresponding control (mL); c represents the molarity of the standard titration solution (0.01 mol/L); 56.1 is the molar mass of potassium hydroxide (g/mol); and m represents the oil sample weight (g).

Statistical Analysis

The statistical analysis was conducted using IBM SPSS 22.0 and Excel. All experiments were performed in duplicate, and values are expressed as mean ± standard deviation (SD).
One-way analysis of variance (ANOVA) with Tukey's HSD post hoc test was applied to the BHT, TBHQ, Cur, TBC, DMC, and DMTC evaluations to determine significant differences (p < 0.05) between samples.

DPPH Test

Antioxidants can provide hydrogen atoms to DPPH free radicals, converting them into non-radicals [26]. This is observed in real time as the violet DPPH alcoholic solution fades to light yellow or colorless. EC50 represents the antioxidant strength of a compound: the stronger the antioxidant activity, the lower the EC50 value. The EC50 values of Cur, TBC, DMC, and DMTC, compared with those of BHT and TBHQ, were determined and calculated (Table 1). The DPPH free radical scavenging ability of antioxidants mainly depends on their number of functional hydroxyl groups and the speed at which they provide hydrogen atoms to DPPH free radicals. The EC50 of DMC (10.0) was the lowest, indicating the highest scavenging activity, because of its highest functional hydroxyl group weight per 100 g of compound. DMTC (12.3) exhibits higher activity than TBHQ (19.5) but weaker activity than DMC (10.0), and BHT (81.0) is the least active. Thus, the scavenging activities decrease in the order DMC (10.0) > DMTC (12.3) > TBHQ (19.5), whereas TBC (73.1) was much less active than Cur (40.5). TBC differs from Cur in that a tert-butyl substituent is inserted at the o-position of the phenolic hydroxyl group, which increases the steric hindrance of the compound and greatly delays the release of hydrogen, resulting in a slow rate of free radical capture and also a reduced functional hydroxyl group ratio. Although the phenolic hydroxyl group weight per 100 g of antioxidant (PHW/100 g A) of TBHQ (20.5) is very close to that of DMC (20.0), the diketone structure of DMC can easily tautomerize to the enol form. Here, Cur and all of its derivatives have enolic hydroxyl groups, but BHT and TBHQ do not; therefore, Cur and its derivatives scavenge one more DPPH free radical, as seen in Scheme 2 and Table 1. The order of the DPPH free radical scavenging powers of the antioxidants agrees completely with that of their FHW/100 g A (Table 1).

Rancimat Test

The antioxidant effects of Cur, TBC, DMC, and DMTC were compared with those of BHT and TBHQ at various temperatures (100, 110, 120, 130, and 140 °C) and at concentrations of 0.01 and 0.02% by the Rancimat method. The results are expressed as Pfs of the samples relative to the stability of lard without added antioxidants [24]. The Pfs of Cur and its derivatives rose with the increase in concentration from 0.01 to 0.02%.
Statistical analysis of these results reveals a significant effect (p < 0.05) of concentration (0.01% vs. 0.02%) for Cur, TBC, DMC, and DMTC. Pfs and IPs were determined to evaluate the antioxidant activities of the different compounds directly and accurately. All Pfs of the lard samples containing the compounds (Table 2) were larger than 1, indicating that all the compounds have antioxidant activity, with higher values corresponding to stronger antioxidant activity. Generally, the amount of antioxidant added to food should be less than or equal to 0.02% in the edible oil industry according to regulation. The Pf of DMTC was 22.0 (≫6), which shows that DMTC is a very strong antioxidant, much stronger than TBHQ. To our knowledge, DMTC added at 0.02% is the strongest antioxidant in lard.

DMTC is tert-butylated DMC, so its fat solubility is greatly improved. DMTC also has a perfect steric synergic effect: the hydroxyl group at the 5-position can easily and quickly provide hydrogen to active free radicals, and the resulting antioxidant radical then changes to the more stable 4-phenoxy radical by tautomerization, as seen in Scheme 3, which illustrates this steric synergy well. The antioxidant activities of the compounds increased in the following order: Control < BHT < Cur < TBC < TBHQ << DMC << DMTC.

Table 3 shows that the Pfs of 0.02% BHT, TBHQ, Cur, TBC, DMC, and DMTC decreased as the heating temperature increased from 100 to 140 °C. One-way ANOVA followed by Tukey's HSD post hoc test indicated significant differences (p < 0.05) for TBHQ, DMC, and DMTC under the different temperatures (100, 110, 120, 130, and 140 °C).
The Pfs of BHT, Cur, and TBC showed no significant difference (p > 0.05) with temperature. The Pfs of 0.02% BHT and TBHQ, which are widely used commercial lipid antioxidants, decreased from 2.2 and 9.2 to 1.4 and 4.6, respectively, when the temperature increased from 100 to 140 °C. The Pfs of Cur, TBC, DMC, and DMTC decreased from 2.7, 3.4, 11.4, and 22.0 to 1.5, 1.9, 6.0, and 12.0, respectively. DMTC still has a high Pf (12.0) even at 140 °C, much higher than the others. Thus, the antioxidant capacity of each compound decreased as the temperature increased from 100 to 140 °C, but the antioxidant activities of the compounds at the same temperature were always ranked in the following order: Control < BHT < Cur < TBC < TBHQ << DMC << DMTC. The antioxidant activity of DMTC is the strongest in oil to our knowledge.

Deep Frying Test

Fried foods are quickly and conveniently prepared by deep frying, and people love them for their crispy texture and unique flavor. However, high temperature leads to oxidation, hydrolysis, and other reactions of oils, resulting in substances harmful to human health. Antioxidants can effectively slow down the oxidation of oils. In this study, the antioxidant activities of Cur, TBC, DMC, and DMTC were compared with that of TBHQ, which is very often used in commercial frying oils. Autoxidation of unsaturated fatty acids is a free radical chain reaction, and multiple double bonds become conjugated upon oxidation, forming conjugated dienes (CD) [27]. When frying at high temperature (>180 °C), peroxides decompose quickly, and free fatty acids are also formed. Therefore, the peroxide value (PV) cannot properly indicate the stability of frying oil at high temperatures (>180 °C); the CD value and acid value (AV) are usually used to determine the degree of oxidation of frying oil at high temperature. As can be seen from Figure 1a, the content of CD increased continuously as the frying time lengthened. The CD values of the control group were clearly higher than those of the antioxidant-added samples, indicating that the addition of antioxidants could effectively inhibit lipid oxidation. During the first 9 h of frying, the CD values of the samples changed slowly because the antioxidants played their protective role. After more than 12 h of frying, however, differences in antioxidant activity began to appear as the antioxidants were consumed.
DMTC (CD = 15.6%) exhibited the best antioxidant activity when the frying time reached 30 h, and Cur and its derivatives were obviously better than TBHQ (33.5%) at high temperature due to their higher molecular weights; TBHQ is unstable and volatile at high temperatures, which reduces its antioxidant activity during frying [28,29]. The antioxidant activities of DMC and DMTC in the Rancimat tests were much higher than those of Cur and TBC, but in the high-temperature frying experiment DMC did not suppress CD generation efficiently; this result may be related to its low solubility in oil and to more DMC being absorbed by the potato chips, which are mainly carbohydrates, moisture, and proteins. TBC has better solubility in oils than DMC and therefore shows better antioxidant activity.

Acid value (AV), an important index of oil quality, reflects the content of free fatty acids in oil [30,31] and is generally used as one of the quality parameters. A large AV means that the oil has been hydrolyzed and/or oxidized significantly, releasing fatty acids. The standards for commercial edible oils limit the AV; therefore, a smaller AV is required for edible oils.
Discussion

Cur has long attracted wide attention due to its rare, colored, symmetric β-diketone molecular structure, containing one central methylene, one β-diketone, two unsaturated double bonds, and two phenolic hydroxyl groups [14]. As the Cur structure has been modified, studies on the structure-activity relationship between Cur structure and antioxidant activity have multiplied [32]. Studies have found that the phenolic hydroxyl group of Cur plays an important role in antioxidant activity, and that the strength of the activity is related to the number of phenolic hydroxyl groups [33]. Moreover, the position of the phenolic hydroxyl group on the benzene ring also affects the antioxidant activity of Cur to a certain extent [34]. Rukkumain found that phenolic hydroxyl groups at the 2-position had higher antioxidant activity than hydroxyl groups at the 4-position, and that the catechol structure had higher antioxidant activity than hydroquinone [35]. Other studies have found that introducing a methyl group at the o-position of the phenolic hydroxyl group can also improve the antioxidant activity of Cur [36]. These findings are consistent with the results for Cur, TBC, DMC, and DMTC in the DPPH, Rancimat, and deep-frying tests. In the DPPH experiment, it was obvious that more phenolic hydroxyl groups per unit molecular weight of a compound led to stronger DPPH radical scavenging, and DMC and DMTC, with their catechol structures, showed more pronounced scavenging effects. DMTC is the tert-butylated derivative of DMC, so its fat solubility is greatly improved. DMTC also benefits from a steric synergic effect: the hydroxyl group at the 5-position can easily and quickly donate hydrogen to active free radicals, and the resulting antioxidant radical then tautomerizes to a more stable 4-phenoxy radical, as seen in Scheme 3, which explains the steric synergy very well. That is the main reason why DMTC and TBC have stronger antioxidant activity than the others in oils.

Conclusions

In this work, two novel lipophilic derivatives of Cur, TBC and DMTC, were synthesized and their antioxidant activity was evaluated using DPPH, Rancimat, and deep-frying methods. The results revealed that the antioxidant activities of the two novel Cur derivatives were better than that of Cur. DMTC showed the strongest antioxidant activity in oil due to the presence of more phenolic hydroxyls and a bulky tert-butyl group at the o-position of the hydroxyl group, both of which play important roles when it acts as an antioxidant in oil. Moreover, DMTC is by far the strongest antioxidant in lard tested by the Rancimat method to date. TBC and DMTC could potentially be used as powerful antioxidants in the food industry. So far we have only studied the antioxidant activity of TBC and DMTC in oil; their toxicity and effects on oil quality need further study.
The Extrapulmonary Manifestations of SARS-CoV-2

SARS-Coronavirus 2 (SARS-CoV-2) is the latest strain of coronavirus that causes the viral infection Severe Acute Respiratory Syndrome (SARS). The initial studies on Coronavirus Disease 2019 (COVID-19) focused on the respiratory outcomes of this viral infection. More recent research on the mechanism of action of SARS-CoV-2 shows that the virus enters cells through the Angiotensin-Converting Enzyme-2 (ACE-2) receptor. This receptor is present not just in the cell membranes of respiratory cells but also in the cell membranes of cells in other organs of the body, enabling the virus to have severe outcomes beyond the respiratory system. Providing a possible immunizing agent against coronavirus is a major challenge given that the ongoing pandemic has already taken millions of lives. This paper discusses the extrapulmonary effects of COVID-19, with an emphasis on clinical manifestations and mechanisms of action and with special focus on management considerations in each case. The essential therapeutics and treatments proposed for dealing with the COVID-19 infection are also discussed. While it is still unclear whether these therapies successfully control the immunoinflammatory response, ongoing trials of multiple drugs for this purpose are an excellent way to ultimately reach a product that works.

INTRODUCTION

Coronavirus allegedly started from a wet market in Wuhan, located in Hubei province, China, and spread to more than 188 countries; the resulting viral infection with SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) has caused a global pandemic termed COVID-19 (Coronavirus Disease 2019), taking more than 1.02 million lives as of October 2, 2020, while the number of active cases has surpassed 34.3 million [1-3]. This global catastrophe started with the proposed zoonotic transmission of coronavirus in the large seafood market in China [4]. This event led to human-to-human transmission of the virus, infecting more people associated with the said wet market, to the point that by January 20, 2020, when the first paper documenting this novel coronavirus was published, the virus had reached more than 800 patients in China and international cases were increasing rapidly [5]. The World Health Organization declared COVID-19 a pandemic on March 11, 2020, by which time global cases of the novel coronavirus had surpassed 500,000 [6]. SARS-CoV-2 is a member of the Coronaviridae family, part of the β-coronavirus genus. As clinicians and researchers struggle to find a viable treatment, more articles must be published that detail the effects of COVID-19 on the multiple organ systems of the body so that researchers can easily access this otherwise widely dispersed information. This review provides an assortment of the extrapulmonary manifestations of COVID-19, presented after analyzing the current literature, collecting information about the effect of COVID-19 on different body systems, and presenting the information in one place, making it easily accessible for scientific and educational use.

IMMUNE SYSTEM

The immune system is the first to face, and subsequently deal with, the negative effects of any infection or disease, and COVID-19 is no exception; the disturbance in the immune system caused by the infection has been found responsible for dire consequences for the lungs.
SARS-CoV-2 can activate both the innate and the adaptive immune system, and both can cause additional harm when they produce hyperinflammation and an impaired immune response, respectively [12]. It is true that a weak immune system can decrease the chances of successfully fighting the viral infection, but an exaggerated immune response can also lead to terrible consequences for patients by causing hyperinflammation, which in turn is responsible for multiple organ damage and further complications [13].

Clinical Symptoms

The first and foremost result of the hyperactivated immune system is hyperinflammation, whereby an increase in inflammatory factors like IL-2, IL-7, monocyte chemoattractant protein 1, and tumor necrosis factor alpha, among others, results in a cytokine storm or hypercytokinaemia [14,15]. This hypercytokinaemia can eventually result in hyperferritinaemia, cytopenia and, more importantly, Acute Respiratory Distress Syndrome (ARDS) and acute coronary syndrome. In addition to being credible predictors of morbidity and mortality, this hyperinflammation is considered highly harmful to effective control of the infection [14]. More serious cases and non-survivors of COVID-19 showed higher levels of ferritin and IL-6 [16], a greater neutrophil-lymphocyte ratio, larger numbers of inflammatory cytokines and biomarkers, and lower counts of basophils, eosinophils, suppressor T cells, B cells, and NK cells [17].

Pathophysiology

SARS-CoV-2 is a cytopathic virus, and for such viruses the viral replication cycle itself often involves pyroptosis of the virus-infected cells [18], leading to vascular leakage [19] while triggering elevation of IL-1β [9], an important cytokine in the pyroptosis process, and ultimately resulting in strong local inflammation due to increased levels of multiple cytokines and chemokines like IFN-γ [20,21]. These high levels of chemokines and cytokines attract monocytes and T lymphocytes from the blood [22,23] while leaving neutrophils behind, and this movement of immune cells is a plausible reason for the lymphopenia and increased neutrophil-lymphocyte ratio seen in most COVID-19 patients [17,24]. This influx of immune cells, cytokines, and chemokines results in a dysfunctional immune response whereby a cytokine storm is triggered, leading to expansive lung inflammation and the subsequent difficulty in breathing associated with the infection [25]. In this situation, in addition to the damage rendered to the lungs by the virus, the unbalanced infiltration of immune cells further damages the lungs by triggering increased secretion of proteases and reactive oxygen species, resulting in alveolar damage that impairs gas exchange in the lung, lowers oxygen levels, makes breathing difficult, and leaves the lungs even more prone to secondary infections. The disastrous effect of the infection does not stop here: the increase in cytokines like TNF results in septic shock and multi-organ damage, directly causing myocardial damage and circulatory failure and decreasing the chances of survival for older patients (aged over 60 years) [16]. However, given that more than fifty percent of child patients and patients under 18 years experienced only mild symptoms, this pattern may not be similar in all patients [26].
That is why, despite the feat of tracing the whole pathway by which COVID-19 produces immune-system pathophysiology, much remains to be investigated about how the severity of the inflammatory response is affected by different host immune factors [25].

CARDIOVASCULAR IMPLICATIONS

The idea of viral infections contributing to damage to vital organs persisted in the scientific community long before SARS-CoV-2. SARS-CoV-2 affects the cardiovascular system in a way similar to the other members of the β-coronavirus genus [27]. The clinical implications for COVID-19 patients include moderate to severe damage to the cardiovascular system through particular mechanisms [28,29].

Clinical Symptoms

The most commonly observed comorbidity of COVID-19 in hospitals is cardiovascular disease (11.9%). Elevated rates of myocardial injury, cardiac arrest, dysrhythmia, biventricular cardiomyopathy, sinus tachycardia and bradycardia, thrombotic complications, other cardiac dysfunction, and complete heart failure in COVID-19 patients indicate that COVID-19 inflicts substantial damage on cardiovascular health. A study analyzing causes of death among 150 coronavirus-infected patients in Wuhan, China revealed that myocardial damage and heart failure were prominent causes of demise, contributing to 53% of deaths among the 68 non-surviving confirmed COVID-19 patients [16]. A study of 138 patients hospitalized in Wuhan, China showed that cardiac arrhythmias, heart block, and ventricular arrhythmias were more prominent, occurring in 17% of mild cases and 44% of severe cases in the ICU [16,30]. Prolonged QTc (a lengthened heart-rate-corrected QT interval, indicating increased risk of ventricular arrhythmias) was recorded in 6% of patients from a cohort of 4,250 COVID-19 patients in New York. Biventricular cardiomyopathy has been reported in 7-33% of coronavirus-infected patients with acute symptoms [16]. Research on 46,248 confirmed COVID-19 patients disclosed that coronary heart disease was present in 1% of surviving and 24% of non-surviving individuals [8].

Pathophysiology

The mechanism causing cardiovascular problems in COVID-19 patients is not yet clear and well elaborated, but multiple factors are proposed to be responsible [31-39]. SARS-CoV-2 causes severe acute respiratory syndrome by directly affecting the lungs, and complete or partial failure of the respiratory system strains the entire body. The resulting hypoxemia produces cardiac stress, which can contribute to indirect myocardial injury. Immune over-response can also cause cardiovascular disease, since autopsies of COVID-19 patients show traces of inflammatory infiltrate, mainly macrophages [40] and, less commonly, CD4+ T cells. These traces indicate cardiomyocyte necrosis, which could ultimately lead to cardiac arrest [41]. SARS-CoV-2 enters the host cell by binding its S protein to the ACE-2 receptor [42]. This receptor is expressed in many cell types, but its expression is exceptionally high in cardiac cells, which can lead to ACE-2-dependent myocardial infection.
Another possible mechanism is the hypoxic state which, in combination with systemic inflammation, increases the production of pro-inflammatory cytokines and, along with elevated extracellular Ca2+ levels, can result in myocyte apoptosis [13]. To what extent the heart is affected by direct viral infection is still unknown. However, the presence and rise of cardiac injury biomarkers strongly suggest that cardiovascular disorder is a direct and probable consequence of SARS-CoV-2 infection in COVID-19 patients.

NEUROLOGIC EFFECTS

The nervous system is another vital organ system affected by SARS-CoV-2, with 36.4% of one patient cohort showing neurologic symptoms [43]. Confirmation of the neurologic involvement of the infection came from Beijing Ditan Hospital, which reported a case of viral encephalitis in which researchers found SARS-CoV-2 in the cerebrospinal fluid, indicating viral invasion of the CNS [44].

Clinical Symptoms

The extent of the neurologic manifestations of COVID-19 depends heavily on the mildness or severity of the infection itself [24,45,46], with mild patients showing symptoms like headache, dizziness, epilepsy, and disturbed consciousness [20,47,48] and more severe patients showing acute arterial or venous stroke [49,50], confusion, and troubled consciousness [43,51]. Numerous instances of COVID-19 patients suffering from anosmia and dysgeusia have also been recorded, as numerous reports describe loss of smell and taste in patients infected with SARS-CoV-2. In addition to viral encephalitis, infectious toxic encephalopathy has been reported in COVID-19 patients, mainly due to the hypoxia, viremia, and edema that result from the infection itself, thereby increasing the likelihood of the disease [53]. Acute cerebrovascular disease is another neurologic manifestation of COVID-19, whereby the cytokine storm [14,54], reduced platelet levels, and high D-dimer levels caused by the infection relate directly to the factors causing acute cerebrovascular disease [55]. Some reports have also described acute inflammatory demyelinating polyneuropathy (Guillain-Barré syndrome) in COVID-19 patients [56,57].

Pathophysiology

Several mechanisms have been proposed for entry of the virus into the nervous system, based on previous studies of other coronaviruses like SARS-CoV and some studies of SARS-CoV-2. These mechanisms include access to the CNS through the nasal mucosa and lamina cribrosa [36], the blood circulation pathway [58,59], the olfactory bulb [59,60], immune injury from the systemic inflammatory response (SIR) caused by the infection and cytokine storm [54], and attachment of SARS-CoV-2 to the ACE-2 receptor of the capillary endothelium, damaging the blood-brain barrier and entering the CNS by attacking the vascular system [61-67].

GASTROINTESTINAL IMPLICATIONS

Fever, dry cough, and tiredness have been the most common symptoms associated with COVID-19. As the disease spread and systematic investigations increased, it became evident that gastrointestinal symptoms, including diarrhea, nausea, vomiting, and abdominal pain, were also present. This indicated the involvement of gastrointestinal disorders and revealed that the digestive system can also be affected by SARS-CoV-2 [68-74].

Clinical Symptoms

Three percent of COVID-19 patients admitted to the ICU in Wuhan developed diarrhea [9].
The onset of gastric dysfunction was more notable in patients with intense and more severe coronavirus symptoms than in those who were asymptomatic or showed relatively mild symptoms. Data collected over approximately 40 days from three hospitals in Hubei, China revealed that 47% of COVID-19 patients developed digestive system disorders along with respiratory problems, while 3% developed only digestive dysfunction. Loose bowel movements were recorded in 17% of confirmed coronavirus patients [75]. Rare cases reported mesenteric ischemia along with gastrointestinal bleeding. Severe clinical outcomes are persistent in patients with coronavirus and gastrointestinal disorders. It is also common for patients with gastrointestinal disorders to develop acute respiratory syndrome; however, the exact mechanism of this phenomenon remains a missing link [36]. As the severity of other symptoms increases, the gastrointestinal manifestations also become more pronounced in COVID-19 patients.

Pathophysiology

How SARS-CoV-2 enters the gastrointestinal tract is still under research. Endoscopy reports of six patients showed the presence of the SARS-CoV-2 strain throughout the digestive tract, most prominently in the esophagus, stomach, duodenum, and rectum. ACE-2 receptors are present in the epithelial cells lining the upper digestive tract. Besides the ACE-2 receptors, cleavage of the S protein by cellular transmembrane protease serine 2 (TMPRSS2) is a crucial factor for viral attachment and fusion into the host cell. This serine protease is found abundantly in the colon and ileum, suggesting invasion of the virus into the enterocytes [76,77]. Another mechanism proposed for gastrointestinal involvement is reduced uptake of tryptophan, which leads to decreased antimicrobial peptide levels in the gut, ultimately causing inflammation. The cytokine storm and systemic inflammation also affect the gastrointestinal pathway as part of a multi-organ response [78].

ENDOCRINOLOGICAL MANIFESTATION

Endocrinological disorders commonly coexist with COVID-19. It is not established in the literature that COVID-19 causes endocrinopathies; however, more severe coronavirus symptoms are widely reported in such patients, and some patients are directly or indirectly susceptible to endocrinopathies.

Clinical Symptoms

In the United States of America, 49.7% of COVID-19 patients were hypertensive; obesity was recorded in 48.3%, diabetes mellitus was prevalent in 28.3%, and cardiovascular disease was common in 27.8% [79-81]. The Centers for Disease Control and Prevention in the United States reported that diabetes, hypertension, and cardiovascular diseases were present in 78% of COVID-19 patients in the ICU. Multiple endocrine glands and organs could be targets of SARS-CoV-2, including the pancreas, adrenal glands, bones, testicles, pituitary gland, and thyroid gland. The worst outcomes of COVID-19 are increasingly evident in people with obesity. Adrenal dysfunction can cause Cushing syndrome. Testicular disease is associated with greater mortality in men [82]. Volume depletion in COVID-19 patients suffering from diabetes insipidus can even lead to hospitalization, as higher mortality has been reported [83].

Pathophysiology

The effect of coronavirus on the endocrine system is still under debate. Proposed mechanisms by which the virus affects endocrine glands and organs include dysregulated glucose levels and ketosis.
Multiple factors, including the cytokine storm, together with binding of the virus to ACE-2 receptors on the β cells, can affect the pancreas, resulting in abnormal insulin levels [36]. Metabolic pathways associated with the endocrine system can be targeted directly by the virus, while multi-organ involvement can further disrupt metabolic pathways; these altered mechanisms can induce hormonal imbalances.

RENAL EFFECTS

The primary target of SARS-CoV-2 is the alveoli of the lungs; however, some evidence suggests that coronavirus can also attack and impair renal function. In extremely adverse cases with severe coronavirus symptoms, several kidney-associated diseases have been reported. Coronavirus was found to enter the bloodstream and invade other organs, including the kidneys. Plasma samples collected from COVID-19 patients contain traces of viral RNA, indicating the multi-organ reach of SARS-CoV-2, the kidney being one such organ.

Clinical Symptoms

Clinical diagnostics show an abnormally high rate of kidney disorders. Acute kidney injury (AKI) is persistent in patients with chronic respiratory system failure. Health data from 5,449 COVID-19 patients hospitalized in New York City showed that 1,993 cases (36.6%) developed AKI [84-87]. Exceptional irregularities in albumin, protein, creatinine, and blood urea nitrogen levels were observed, indicating kidney damage in confirmed COVID-19 patients. Urine analysis and kidney data retrieved from clinics, laboratories, and hospitals showed that 44% and 26.9% of people with COVID-19 had proteinuria and hematuria, respectively; a blood urea nitrogen anomaly was recorded in 14.1% of patients. Similarly, 15.5% of patients were observed to have elevated creatinine levels [88]. Renal complications were present in 75.4% of Chinese patients with COVID-19. Coronavirus is also found to be involved in chronic kidney disease, electrolyte imbalances, and metabolic acidosis, which can further contribute to renal failure.

Pathophysiology

More recently, in Guangzhou, Zhong's lab has reportedly succeeded in isolating coronavirus from urine samples collected from corona-positive cases, placing the kidney among the potential organs affected by SARS-CoV-2 [90]. However, no elaborated mechanism dictates direct viral invasion into the kidneys that could lead to renal complications. Indirect pathways, including excessive production of cytokines, multi-organ damage in response to viral attack, and systemic effects, could be interrelated, resulting in kidney damage. Extracorporeal membrane oxygenation increases the production of cytokines, which may characterize glomerulonephritis, acute kidney injury, and end-stage kidney disease in COVID-19. Renal medullary hypoxia, renal compartment syndrome, and tubular toxicity could be outcomes of multi-organ damage, as high peak airway pressure, intra-abdominal hypertension, and cytokine-induced rhabdomyolysis in response to SARS-CoV-2 can stress the whole body [91-93]. Kidneys have nearly 100 times more ACE-2 receptors than the lungs, increasing the chance of viral entry into renal tissue via an ACE-2-dependent mechanism [88].

HEPATOBILIARY MANIFESTATION

Severe COVID-19 is also reported to cause hepatobiliary manifestations, though usually only mild liver abnormalities are present. Abnormal production of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) has also been recorded.
The incidence of chronic liver injury or liver damage has rarely been reported, but the presence of an underlying or pre-existing hepatobiliary dysfunction has been linked to increased fatality in SARS-CoV-2 patients.

Clinical Symptoms

Case studies of COVID-19 patients logged at the Fifth Medical Center of PLA General Hospital in Beijing, China showed liver comorbidities in 2-11%, while 14-53% of patients had abnormal aminotransferase levels. Sixty-two percent of ICU patients in Wuhan, China manifested abnormal AST levels [94]. During hospitalization in Shenzhen, China, 76.3% of confirmed COVID-19 patients had abnormal liver test results and 21.5% presented liver injury. Ninety percent of patients with reported liver damage had mild COVID-19 symptoms; of these, an abnormal increase in ALT and GGT values was recorded for 24% of patients [95]. The hepatocellular pattern was shared among all the patients with liver damage. Liver damage was commonly reported; however, the consequences were not fatal.

Pathophysiology

A well-defined mechanism by which SARS-CoV-2 is involved in liver injury is not available. The impaired hepatic function may be due to circulation of the coronavirus through the reticuloendothelial system. Macrophages, which are abundantly present in the hepatobiliary system, are involved in the production of the cytokines that drive systemic inflammation [96]. The virus can also bind directly to the ACE-2 receptors present on the epithelium; since these receptors are densely present in the liver, it can be subject to direct damage [97]. Some biological drugs like baricitinib, which are believed to produce promising results against SARS-CoV-2 owing to their antiviral properties, can cause reactivation of any previously present hepatobiliary disorder. Severe oxygen deficiency and respiratory failure can lead to anoxia, which can cause hypoxic hepatitis [98,99].

HEMATOLOGY EFFECTS

The hematopoietic system and hemostasis are essential systems concerned with regulating blood flow in the body [8,24]. Hematology studies report that COVID-19 has considerable effects on the hematopoietic system and hemostasis [20,100]. Lymphopenia has been reported in multiple cases, with some studies suggesting that 67 to 90 percent of patients exhibit some degree of lymphopenia [9, 66, 101-103].

Clinical Symptoms

In addition to decreases in CD4+ T cells [11,70], CD8+ T cells [20], and leukocytes [18,104], a marked decrease in thrombocytes has also been reported which, although mild (present in 5 to 36 percent of patients), is associated with severe patient outcomes [9,24,103,105]. Another important effect of COVID-19 in this context is coagulopathy, which is directly related to increased levels of fibrinogen, a procoagulant factor, and of D-dimers, the latter being associated with more severe outcomes and worse mortality in hospitalized patients, 46 percent of whom showed high D-dimer levels upon admission [8, 20, 24, 47, 106-108]. D-dimer levels higher than 1000 ng/mL have been reported to be an important risk factor for in-hospital death [8], with non-survivors having D-dimer levels above 500 ng/mL coupled with a prolonged Prothrombin Time (PT).

Pathophysiology

Multiple mechanisms have been proposed to explain the pathophysiology of the hematologic manifestations of COVID-19 infection.
The cytotoxic activity of SARS-CoV-2 due to ACE-2-dependent entry of the virus into lymphocytes [115-117], apoptosis-mediated depletion of lymphocytes [118-120], and inhibition of lymphocyte proliferation by lactic acid [121] are suggested mechanisms for the lymphopenia associated with COVID-19. On the other hand, leukocytosis, especially neutrophilia, and high levels of D-dimers have been related to the hyperinflammatory response to the infection [111]. As for thrombotic complications, uncontrolled inflammation, hypoxia, and viral-mediated effects are related to the higher incidence of thrombotic complications in COVID-19 patients [36]. Endothelialitis, associated with irregularly high expression of ACE2 in endothelial cells, also promotes thromboinflammation, resulting in more severe outcomes [115].

DERMATOLOGIC EFFECTS

While the dermatologic effects of COVID-19 have only occasionally been reported, there have nonetheless been reports of cutaneous findings in COVID-19 patients. Some patients experienced dermatologic effects before the onset of the disease, while others saw dermatologic manifestations before respiratory symptoms appeared or during the infection [36].

Clinical Symptoms

The most common dermatologic effects reported include erythematous rash, urticaria, chickenpox-like vesicles, acrocutaneous lesions, maculopapular rash, and vesicular lesions [122,123]. Studies have also reported chilblain-like lesions in less severe cases of COVID-19, while patients with more severe infections showed livedoid or necrotic lesions [76].

Pathophysiology

While the exact mechanism of these dermatologic effects in COVID-19 patients is still unclear, possible mechanisms include SARS-CoV-2 RNA, cytokine-release syndrome, deposition of microthrombi, and vasculitis triggering a hypersensitive immune response in these patients, leading to cutaneous and dermatologic effects [41].

OCULAR INFECTIONS

More than a decade ago, the world saw a Severe Acute Respiratory Syndrome (SARS) outbreak. The tear samples of a patient from Singapore were found to contain the SARS coronavirus, indicating ocular manifestation of the disease [124]. In 2020, the same situation was reported with Li Wenliang, the ophthalmologist who first reported the coronavirus outbreak and who contracted the virus from an asymptomatic glaucoma patient [125].

Clinical Symptoms

The ocular effects of COVID-19 are apparent: according to one case series, 31.6% of COVID-19 patients hospitalized in Hubei province, China reported ocular symptoms. Of these patients with ocular symptoms, the Reverse-Transcriptase Polymerase Chain Reaction (RT-PCR) result of a conjunctival swab indicated the presence of SARS-CoV-2 in 16.7% [126]. Common ocular symptoms prevalent in COVID-19 patients include conjunctival hyperemia, chemosis, epiphora, and increased secretions [126].

Pathophysiology

The scarcity of published research on the ocular manifestations of COVID-19 has prevented researchers from determining the real mechanism of ocular involvement. Possible mechanisms include (i) direct inoculation of ocular tissues by respiratory droplets or aerosolized viral particles, (ii) migration from the nasopharynx via the nasolacrimal duct, or (iii) lacrimal gland involvement in hematogenous spread [127].

MANAGEMENT CONSIDERATIONS FOR MULTIPLE ORGAN DAMAGE

Immune-mediated damage in COVID-19 is a leading cause of increasing infection severity for many patients.
Several different immunosuppressive therapies have been initiated for controlling this damage. Trials of corticosteroids [27]; the IL-6 antagonists tocilizumab [28] and sarilumab; and the Granulocyte-Macrophage Colony-Stimulating Factor (GM-CSF)-targeting drugs gimsilumab [29], lenzilumab [30], and namilumab [31] have been initiated. Novel therapies like CytoSorb, which absorbs large amounts of cytokines [32], and thalidomide, an immunomodulatory agent, are also being employed [33]. Since the fatality rate in patients with underlying cardiovascular problems is significantly higher, and the death rate becomes even more critical if myocardial dysfunction occurs after SARS-CoV-2 infection, it is indispensable to treat such patients with the utmost medical attention. Even though specific considerations should be made for the management of such patients, the use of Angiotensin-Converting Enzyme inhibitors (ACEi) and Angiotensin Receptor Blockers (ARBs) remains debated; there is no proof that these therapeutics should be stopped once viral infection occurs. The interaction of cardiac drugs with the drugs administered for COVID-19 can have potentially adverse effects [72]. Dosage should be managed carefully, proper diagnostics encouraged, and indiscriminate testing of cardiac injury biomarkers avoided. Among drugs recently used for COVID-19, azithromycin, when combined with antiparasitic drugs, can result in a prolonged QT interval, so precautions must be taken to avoid this problem [73].

While the existing literature lacks a detailed course of action for minimizing the neurologic manifestations of COVID-19, several guidelines can be followed for effective results. These include adhering to the guidelines for ischemic stroke and ensuring the availability of thrombolysis and thrombectomy [62], evaluating patients with remote video monitoring [36], and using immunomodulatory therapies [63].

Corticosteroids have recently been used as antiviral drugs that are easy to administer and economically viable. Their use should be precisely monitored, and pre-testing for the virus must be ensured to prevent fatal consequences; an elderly COVID-19 patient with severe ulcerative colitis who was treated with corticosteroids died [79]. The transmission of SARS-CoV-2 is still poorly understood. Since about 50% of fecal material and stool samples of COVID-19 patients show viral presence, these can be a source of transmission [80], and separate washrooms should be used to avoid it. Biopsies, endoscopies, and autopsies could be performed to determine precisely whether intestinal lesions are produced after systemic inflammation or are the consequence of direct primary infection of the gastrointestinal tract. Early detection and diagnosis lead to early treatment, and continuous monitoring of pro-inflammatory cytokines and limiting their production with monoclonal antibodies could support safe treatment [78]. The use of glucocorticosteroids is linked with immunosuppressive activity, so their use must be regulated, as a compromised immune system can increase susceptibility to the virus and the risk of death if viral infection has already occurred [84]. Adrenal insufficiency and crisis should be managed, and preliminary medical help should be provided without delay [85].
Endocrinologists must cooperate effectively with other medical specialists to perform unavoidable thyroid nodule surgeries for critical patients. Routine care, precise attention to desmopressin dosing, and fluids can help prevent hypernatraemic dehydration in diabetes insipidus [83]. Since no specific medication is available, special consideration should be given if kidney failure, acute kidney injury, or other renal disorders develop, as these could increase the mortality rate among COVID-19 patients. One controversial proposal is to delay Renal Replacement Therapy (RRT), using loop diuretics instead to prevent volume overload. If RRT is unavoidable, the patient's age and the severity of the COVID-19 symptoms should be monitored to ensure timely and safe organ replacement. Hospital staff, radiologists, and nephrologists should collaborate to safely reduce hemodialysis sessions for kidney failure [91]. Developing COVID-19 from organ donation is very rarely reported; still, the kidney has the highest vulnerability to SARS-CoV-2, and special clinical screening and management should be practiced to ensure risk-free renal donation [93].

Due to the scarcity of data concerning the impact of drugs affecting the hepatobiliary system, biochemical monitoring of the liver is mandatory [98]. Special attention should be given to elderly COVID-19 patients with a long-term history of liver injury, since it can contribute to the severity of the disease, although no direct evidence has proved liver dysfunction to be lethal in SARS-CoV-2 infection [97]. The effect of drugs on liver damage and hepatotoxicity should be considered in drug design. Fewer chronic liver disease diagnoses during the pandemic have hindered clarification of the association between liver injury and COVID-19.

While there are no fully approved strategies to alleviate the hematologic effects of COVID-19, some proposed actions have shown a positive impact. These include continuous evaluation of blood count, D-dimers, lymphocytes, and fibrinogen in hospitalized patients [35]; risk assessment for thromboembolism at repeated intervals; therapeutic anticoagulation [36]; and the use of mesenchymal cells for their immunomodulatory, anti-inflammatory, and anti-fibrotic properties [122], in addition to their ability to mediate inflammation around endothelial cells, hence not only reducing inflammation but also checking endothelial dysfunction [42].

Even though the dermatologic effects are not as prevalent, it is important to consider patients' dermatologic conditions before administering any COVID-19 drug like remdesivir or tocilizumab or starting any possible treatment for COVID-19 [38]. Such patients should also be assessed on a case-by-case basis to decide whether to continue any biologic therapy during the infection. Lack of research has made it challenging to manage the ocular manifestations of COVID-19. While health care providers are advised to take strict care to keep their eyes covered and use protective gear, ophthalmologists have been directed to limit their practice to emergencies and urgent cases, as the possibility and mechanisms of transmission of the disease through ocular routes are still unclear [125].

COVID-19 THERAPEUTICS, CYTOKINE MANAGEMENT AND DRUG REPURPOSING

It has been established that SARS-CoV-2 enters cells through the ACE-2 receptor [128].
It has also been reported that the ACE-2 receptor is highly expressed in the lungs [129], showing expression in multiple types of airway epithelial cells, for example the alveolar epithelial type II cells of the lung parenchyma [130,131]. This over-expression of the ACE-2 receptor is directly related to the severity of COVID-19 infection [132]. One reason that overexpression of ACE-2 increases the severity of the viral infection is its ability to increase pro-inflammatory cytokines (PICs) in the lungs of infected patients, which has been associated with severe pulmonary outcomes [133,134]. A continuous increase in the concentration of PICs in the lungs gives rise to a cytokine storm, or Cytokine Storm Syndrome (CSS), which includes severe pneumonia and drastically increases the severity of the infection as well as mortality [14]. CSS is defined as a cytokine-mediated hyperinflammatory response in which the host's immune system is uncontrollably activated and amplified, resulting in excessive release of cytokines such as Tumor Necrosis Factor (TNF)-α, interleukin (IL)-1β, IL-2, IL-6, IL-7, IL-9, IL-10, IL-18, interferons (IFNs) such as IFN-γ, Granulocyte Colony-Stimulating Factor (G-CSF), granulocyte-macrophage colony-stimulating factor, fibroblast growth factor, and macrophage inflammatory protein 1 [9, 135-137]. This excessive release of cytokines results in respiratory failure by increasing vascular permeability, allowing large amounts of blood cells and fluid to enter the alveoli and critically damaging the host cells [138]. These cytokine reactions further result in direct or indirect activation of the coagulation pathway [139]. Anaphylatoxins such as C3a and C5a are generated by the complement cascade, and their binding to complement receptors results in further release of histamine, leukotrienes, and prostaglandins [140]. C5a in particular induces mast cells, neutrophils, and monocytes to release pro-inflammatory cytokines such as IL-12, TNF-α, and macrophage inflammatory protein-1α, while also stimulating T and B cells to release cytokines such as TNF-α, IL-1β, IL-6, and IL-8, resulting in the CSS [139].

With studies reporting the role of the CSS in causing the hyperinflammation that often underlies the severity of the infection and the increased mortality rate [141], more focus is being put on finding drugs that target these cytokines, reducing the hyperinflammation associated with increased severity and mortality. In this regard, many different antiviral, anti-rheumatic, anti-inflammatory, antineoplastic, and antiparasitic drugs are being repurposed and investigated for their positive, immune-modulating effects in severe COVID-19 cases. The antiviral drugs that have been used for COVID-19 patients include atazanavir [142], favipiravir [143], IFN-α2b (interferon) [144], lopinavir-ritonavir [145], remdesivir [146], ribavirin [147], and umifenovir [148]. A study reported significant improvement of clinical symptoms in more than 68 percent of patients subjected to a 10-day course of remdesivir [146]. Remdesivir was granted an Emergency Use Authorization (EUA) by the FDA on May 1, 2020, for patients with severe COVID-19 symptoms. Anti-rheumatic drugs have also been used in this regard, the primary mechanism behind their use being inhibition of the inflammatory cascade and blockade of TNF-α [149], IL-6 receptors, or IL-1 receptors [150].
The drugs that have been used include the recombinant IL-1 receptor antagonist (IL-1Ra) anakinra [151] and anti-arthritic agents including baricitinib [152], etanercept [153], and infliximab [154]. Another important drug is tocilizumab [155], a monoclonal antibody (mAb) that inhibits the IL-6 signaling pathway. Numerous reports have suggested that the IL-6 cytokine plays the most important role in the pathogenesis of COVID-19, significantly affecting the severity of the viral infection in the lungs [156]. With the investigation of the reach of the deadly virus beyond the respiratory tract and its fatal effects on other parts of the body, more focus is being put on finding more and better therapeutics that could help manage the viral infection.

Table 1. The potential therapeutics that could be used for managing and combatting SARS-CoV-2.

Approach: Stem cell-based therapeutics. Mechanism: Mesenchymal cells are delivered intravenously, where they protect the alveolar epithelial cells of the lungs and recruit cells that help prevent pulmonary infection and manage lung dysfunction [74].

Approach: Use of opioids and cannabinoids. Mechanism: These drugs are immunomodulatory and can hence enhance the body's immune function by increasing immune cell influx and the production of cytokines and chemokines, and by affecting viral replication and pathogenesis [71].

Approach: Prophylactic and immune-mediated approaches. Mechanism: Understanding the viral mechanism of action is unavoidable for such approaches. Two theories prevail in this regard: one is the use of immune suppressors to overcome the hyperinflammation caused by the viral load, and the other accounts for the use of immune potentiators to activate adaptive immune responses in the host [68].

Approach: Antiviral agents. Mechanism: Antiviral drugs and vaccines can stimulate immune responses. These drugs can bind to the receptor and block viral entry, act as protease inhibitors, or prevent viral replication by inhibiting viral RNA polymerases [65].

Some anti-inflammatory drugs have also been repurposed and are being used to treat COVID-19, including the non-steroidal anti-inflammatory drug indomethacin [157]; the immunomodulatory, anti-angiogenic agent thalidomide [158]; and steroidal anti-inflammatory drugs like corticosteroids, one of which, dexamethasone, has now been licensed [159]. Some antineoplastic drugs previously used in leukemia are also being tested and repurposed, making them interesting choices for COVID-19 treatment [128]. These drugs include ruxolitinib, a Janus Kinase (JAK) inhibitor [160], and ibrutinib, a Bruton's tyrosine kinase (BTK) inhibitor [34], both of which have shown promise in treating COVID-19 patients. Several ongoing trials are searching for a viable solution to the hyperinflammation that results from COVID-19 infection and causes extensive multi-organ damage, and drug repurposing is a significant part of this research [128]. Hopefully, with more research and investigation, scientists will be able to find more drugs that can better counter the adverse effects of COVID-19 on the body. Table 1 summarizes the most common therapeutics being used and tested for COVID-19 to give a better picture of the wide range of treatments under evaluation for this viral infection.
CONCLUSION

While the extrapulmonary effects of COVID-19 are being appreciated and more research is being undertaken on the effects of the viral infection on different body organs and organ systems, there is still a need for intensive research into the exact mechanisms the viral particles use to enter extra-respiratory tissues, the viral features that facilitate this process, and the long-term implications of these extrapulmonary manifestations of COVID-19. This is important because older adults are the most affected by the viral infection, and studying and understanding the multi-organ response to COVID-19 is critical to ensuring adequate medical care for COVID-19 patients who survive the infection. In such multi-organ infections, ensuring patient survival goes beyond fighting the infection and is more concerned with dealing with its long-term implications for the body. One way to do this more effectively is to place more emphasis on regional and international collaborations, working collectively to understand this poorly understood disease and putting an end to this dreadful pandemic.

CONSENT FOR PUBLICATION

Not applicable.

FUNDING

None.

CONFLICT OF INTEREST

1. None of the authors of this paper have a financial or personal relationship with other people or organizations that could inappropriately influence or bias the content of the paper.
2. It is to specifically state that "No competing interests are at stake and there is No Conflict of Interest" with other people or organizations that could inappropriately influence or bias the content of the paper.
The role of environmental stress and DNA methylation in the longitudinal course of bipolar disorder

Background: Stressful life events influence the course of affective disorders; however, the mechanisms by which they bring about phenotypic change are not entirely known. Methods: We explored the role of DNA methylation in response to recent stressful life events in a cohort of bipolar patients from the longitudinal PsyCourse study (n = 96). Peripheral blood DNA methylomes were profiled at two time points for over 850,000 methylation sites. The association between impact ratings of stressful life events and DNA methylation was assessed, first by interrogating methylation sites in the vicinity of candidate genes previously implicated in the stress response and, second, by conducting an exploratory epigenome-wide association analysis. Third, the association between epigenetic aging and change in stress and symptom measures over time was investigated. Results: Investigation of methylation signatures over time revealed that just over half of the CpG sites tested had an absolute difference in methylation of at least 1% over a 1-year period. Although not a single CpG site withstood correction for multiple testing, methylation at one site (cg15212455) was suggestively associated with stressful life events (p < 1.0 × 10^-5). Epigenetic aging over a 1-year period was not associated with changes in stress or symptom measures. Conclusions: To the best of our knowledge, our study is the first to investigate epigenome-wide methylation across time in bipolar patients and in relation to recent, non-traumatic stressful life events. Limited and inconclusive evidence warrants future longitudinal investigations in larger samples of well-characterized bipolar patients to give a complete picture of the role of DNA methylation in the course of bipolar disorder.

Although genome-wide association studies (GWAS) in BD have identified dozens of associated variants, they have explained only a small fraction of overall disease liability (Stahl et al. 2019). Therefore, the last decade has seen a shift towards investigating the complex interplay between genetic and environmental risk factors (Sharma et al. 2016). Advances in technologies have supported high-throughput investigations of biological markers representative of environmental modulation of the genome. These biomarkers hold promise for stratifying symptom-based phenotypes and assessing the prognosis of individual patients (Kobeissy et al. 2012). Moreover, these biomarkers could contribute to a more accurate multi-level diagnostic framework that relies on biological measures to supplement clinical ratings of symptoms (Meana and Mollinedo-Gajate 2017). BD is a chronic, disabling, and severe mental illness characterized by recurrent depressive and manic episodes, somatic and psychiatric comorbidities, and functional impairments (Goodwin and Jamison 2007). Considering the high global burden and lifetime prevalence of bipolar spectrum disorders, estimated at approximately 2.4% (Rowland and Marwaha 2018), there is a need to better understand the factors affecting its onset and course. The significance of environment, especially childhood trauma and stressful life events, for the trajectories of affective disorders, including vulnerability, onset, relapse, and recurrence, has been well established (Aldinger and Schulze 2017; Lex et al. 2017; Johnson 2005; Alloy et al. 2005; Paykel 2003). However, little is known about the mechanisms underlying the consequences of such life events.
Recently, emphasis has been placed on the potential role of epigenetic variation in the etiopathogenesis of BD (Li et al. 2015). Epigenetics is an adaptive mechanism that can modulate the stress response through subtle modifications of gene expression (Aas et al. 2016). In particular, DNA methylation (DNAm), the addition of a methyl group to DNA, primarily at cytosine-guanine dinucleotides (CpG), may pose a "mechanism by which life-experiences become 'embedded' in the genome" (Marzi et al. 2018). Increasing evidence from both animal and human data supports the epigenetic programming of genes in response to trauma and chronic stress. Consistent findings have linked prenatal (Monk et al. 2012; Weaver et al. 2004) and early-life adversities to epigenetic modifications of genes, especially those involved in the hypothalamic-pituitary-adrenal (HPA) axis (Kular and Kular 2018; McGowan et al. 2009; Vinkers et al. 2015; Jaworska-Andryszewska and Rybakowski 2019). While several studies have shown methylation changes associated with trauma during adulthood, few studies have investigated non-traumatic chronic stress (Matosin et al. 2017) or acute stressful life events. Candidate gene approaches in the general population have reported differential methylation of CpGs in the vicinity of SLC6A4 (Alasaari et al. 2012), TH (Myaki et al. 2015), and BDNF (Song et al. 2014) in association with sustained work-related stress. One study, which examined LINE-1 as a proxy for global methylation, found no significant associations with chronic lifestyle stress (Duman and Canli 2015). To the best of our knowledge, not a single study has explored epigenome-wide signatures of DNAm in relation to acute, non-traumatic stress in humans. With regard to BD, studies have investigated methylation differences as both trait and state markers of the disorder in several promoter regions, including SLC6A4, PPIEL, BDNF, HCG9, KCNQ3, 5HTR1A, and GPR24 (Ludwig and Dwivedi 2016; Fries et al. 2016; Pishva et al. 2014). Interestingly, evidence supports altered DNAm profiles for high-risk affected, and even unaffected, offspring of individuals with BD in comparison to low-risk controls; moreover, there seems to be a unique rate of change in DNAm over time for high-risk individuals (Duffy et al. 2019). However, despite findings of differential epigenetic profiles, results have been inconsistent and there remains a need for genome-wide methylation studies, especially ones longitudinal in design.

This study aims to gain a better understanding of the role of epigenetic modifications, specifically DNAm, in relation to stress during the course of BD. Using repeated measures over a 1-year period, we explored the relationship between DNAm and stressful life events in chronic BD patients. We took a two-pronged approach, first interrogating CpGs in the vicinity of candidate genes previously implicated in the stress response and, second, conducting an exploratory epigenome-wide analysis. Furthermore, we determined whether changes in symptom and stress measures over time were associated with a DNAm-based age estimate and epigenetic aging.

Study sample

The study was conducted using data from the longitudinal PsyCourse cohort, which has been described in detail (Budde et al. 2019). Briefly, PsyCourse is a multisite, naturalistic study based in the German and Austrian population.
Psychopathology, pharmacological treatment, childhood trauma, and current stressful life events were among the variables assessed at each of four visits (6-month intervals). Likewise, peripheral blood samples were collected at each visit, paving the way for a detailed analysis of the longitudinal correlation between disease status and peripheral biomarkers. For the purpose of this study, a subset of PsyCourse participants (n = 96) was selected according to a DSM-IV diagnosis (American Psychiatric Association 2002) of type I or II BD, availability of genotype data and biomaterial, and completed childhood trauma and stressful life events questionnaires. Demographic and clinical characteristics of these patients are reported in Table 1. The study was approved by the local ethics committee of each study center and was carried out following the rules of the Declaration of Helsinki. All individuals provided written informed consent.

Stressful life events

Current stressful life events were assessed with the Life Events Questionnaire (LEQ), a 79-item self-report instrument that has been described in detail (Norbeck 1984; Sarason et al. 1978). The LEQ covers a wide range of stressor exposures related to health, work, school, residence, love and marriage, family and friends, parenting, the personal sphere or social environment, finances, and crime and legal matters. At each visit, participants reported whether they had experienced any of the listed events in the last 6 months. When a patient had experienced a specific event, they rated (1) the nature of the event (good/bad) and (2) the impact of the event on his/her life (0-3). At each time point, the impact ratings of all "bad" events were summed to yield a stress score, and the same was done for the impact ratings of "good" events. A total score summing the impact ratings of both "bad" and "good" events was also computed. These three LEQ scores were used as outcome measures in our association analyses (see the sketch below).

Childhood trauma

The Childhood Trauma Screener (CTS) is a German short version of the Childhood Trauma Questionnaire (Bernstein et al. 1997, 2003; Grabe et al. 2012). The screener includes five questions assessing sexual, physical, and emotional abuse, as well as emotional and physical neglect. Validated threshold values (Glaesmer et al. 2013) were used to transform the ratings for each item into a dichotomous scale in order to identify individuals with reported childhood trauma (yes/no). Details on reported childhood trauma and the thresholds used can be found in Additional file 1: Table S1.

Symptom ratings

The Positive and Negative Syndrome Scale (PANSS) was used as a measure of psychopathology at the time of testing (Kay et al. 1987). A continuous total score of the three subscales, i.e., positive, negative, and general symptoms, was used. The Global Assessment of Functioning (GAF) score was used as a measure of psychosocial functioning (Luborsky 1962; Endicott et al. 1976). The Young Mania Rating Scale (YMRS) was used as a measure of manic symptoms in the last 48 h (Young et al. 1978). Lastly, the Inventory of Depressive Symptomatology (IDS-C30), a 30-item rating scale, was used to assess the severity of depressive symptoms (Trivedi et al. 2004).
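As a concrete illustration of the LEQ scoring described above, the following R sketch derives the three scores from a long-format item table; the data frame and column names (leq_items, impact, valence) are hypothetical illustrations, not taken from the PsyCourse data dictionary.

library(dplyr)

# leq_items: one row per patient, visit, and endorsed LEQ item,
# carrying an impact rating (0-3) and a valence flag ("good"/"bad")
leq_scores <- leq_items %>%
  group_by(patient_id, visit) %>%
  summarise(
    leq_bad   = sum(impact[valence == "bad"],  na.rm = TRUE),  # stress score ("bad" events)
    leq_good  = sum(impact[valence == "good"], na.rm = TRUE),  # "good"-event score
    leq_total = sum(impact, na.rm = TRUE),                     # combined total score
    .groups   = "drop"
  )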
Childhood trauma

The Childhood Trauma Screener (CTS) is a German short version of the Childhood Trauma Questionnaire (Bernstein et al. 1997, 2003; Grabe et al. 2012). The screener includes five questions to assess sexual, physical and emotional abuse, as well as emotional and physical neglect. Validated threshold values (Glaesmer et al. 2013) were used to transform the ratings for each item into a dichotomous scale in order to identify individuals with reported childhood trauma (yes/no). Details on reported childhood trauma and the thresholds used can be found in Additional file 1: Table S1.

Symptom ratings

The Positive and Negative Syndrome Scale (PANSS) was used as a measure of psychopathology at the time of testing (Kay et al. 1987). A continuous total score of the three subscales, i.e. positive, negative, and general symptoms, was used. The Global Assessment of Functioning (GAF) score was used as a measure of psychosocial functioning (Luborsky 1962; Endicott et al. 1976). The Young Mania Rating Scale (YMRS) was used as a measure of manic symptoms in the last 48 h (Young et al. 1978). Lastly, the Inventory of Depressive Symptomatology (IDS-C30), a 30-item rating scale, was used to assess the severity of depressive symptoms (Trivedi et al. 2004).

Analysis of DNA methylation

DNA samples

Genomic DNA was extracted from whole blood using the PerkinElmer Chemagen Kit (chemagic DNA Blood10k prefilling VD120419.che), and all samples were subsequently stored in a Hamilton BiOS M system at −80 °C. DNA quality was assessed using the QIAxcel® system. DNA samples from the baseline and 1-year follow-up visits were used to obtain methylation data. Prior to downstream analyses, potential population stratification was evaluated, and an initial step to remove European population outliers was taken (Budde et al. 2019). Thus, our sample consists of an ethnically homogeneous population of Caucasians of European descent.

Illumina EPIC chip processing

Bisulfite conversion of DNA and processing of methylation arrays were accomplished in collaboration with the Institute of Human Genetics, University of Bonn, Germany. Whole-blood genomic DNA diluted with water (50 ng/μl) was treated with sodium bisulfite using the EpiTect® Bisulfite Kit from QIAGEN® following the manufacturer's protocol. DNAm was assessed using the Illumina Infinium HumanMethylationEPIC BeadChip array (Illumina Inc., San Diego, CA, USA) according to the manufacturer's instructions. To minimize batch effects during DNAm measurement, an algorithm for sample randomization was used for positioning samples onto 96-well plates according to exposures of interest and confounding variables (see Additional file 1).

Quality control and normalization

Quality control

The Bioconductor R package minfi was used to read the raw intensity data files (.idat files) into R and for the subsequent quality control and normalization of the methylation data (Aryee et al. 2014). Concordance between methylation-predicted and reported sex was confirmed. Filtering of poor-performing samples and probes was performed (see Additional file 1: Table S2). Probes with poor detection (detection p-value > 0.05 in > 10% of samples) were excluded. Using the function dropLociWithSnps(), probes with SNPs inside the probe body or at the single-nucleotide extension were removed according to a minor allele frequency ≥ 5% based on dbSNP. To prevent a possible sex effect, probes on the X and Y chromosomes were removed. According to a previously published list (Chen et al. 2013), non-specific probes, i.e. probes on the EPIC array that co-hybridize to alternate genomic sequences, were removed. Lastly, probes with a bead count < 3 were removed.

Normalization

Data were normalized using functional normalization (FunNorm), an extension of quantile normalization. FunNorm uses internal control probes present on the array to infer between-array technical variation, by default using the first two principal components of the control probes (Fortin et al. 2014). Density plots were used to evaluate the distribution of M-values before and after functional normalization (see Additional file 1: Fig. S1). Technical batch effects were then identified using linear regressions to inspect the association of the principal components of the methylation values with possible technical batches. Additionally, the R package shinyMethyl was used for visual inspection of principal component analysis (PCA) plots. Identified batch effects (i.e., array and slide) were removed using the empirical Bayes method ComBat (Johnson et al. 2007). Batch-corrected M-values after ComBat were used for downstream analyses (see Additional file 1: Fig. S2). According to inspection of the PCA plots, a single sample remained an outlier after batch correction and was excluded.
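A minimal sketch of this QC and normalization workflow in R is shown below. The input directory `idat_dir` and the `slide` batch vector are placeholders, and the thresholds simply restate those given above (detection p > 0.05 in > 10% of samples, MAF ≥ 5%, autosomes only); this is an illustration, not the authors' exact script.

```r
library(minfi)
library(sva)

rgset <- read.metharray.exp(base = idat_dir)     # read raw .idat files
detp  <- detectionP(rgset)                       # per-probe detection p-values

grset <- preprocessFunnorm(rgset)                # functional normalization
grset <- dropLociWithSnps(grset, snps = c("SBE", "CpG"), maf = 0.05)
grset <- grset[!(as.character(seqnames(grset)) %in% c("chrX", "chrY")), ]

keep  <- rowMeans(detp[rownames(grset), ] > 0.05) <= 0.10   # detection filter
mvals <- getM(grset[keep, ])                     # M-values for analysis

mvals_bc <- ComBat(dat = mvals, batch = slide)   # remove slide batch effect
```

Bead-count and cross-reactive-probe filters (not shown) would be applied analogously, e.g. via a published exclusion list.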
Confounders

Because cell-type composition is a confounding factor in epigenome-wide association studies (EWAS), the minfi function estimateCellCounts() was used to estimate the cell-type composition of our samples. This function uses a modified version of the Houseman algorithm to obtain a cell-counts vector for six cell types (i.e., CD4T, CD8T, NK, B cells, monocytes, and granulocytes) for each sample (Houseman et al. 2012). Active smoking is another established modifier of DNA methylation (Lee and Pausova 2013). Methylation-based smoking scores were calculated based on the methylation profile of the 187 CpG sites identified in Zeilinger et al. (2013). First, raw beta values were normalized using the Teschendorff et al. beta-mixture quantile dilation (BMIQ) strategy (Teschendorff et al. 2013). The adjusted beta values were then used for the calculation of methylation-based smoking scores using methods previously described (Elliott et al. 2014). The correlation between the self-reported number of cigarettes smoked yearly and the methylation-based smoking scores was assessed (Spearman's ρ = 0.64; p < 0.001). To rule out possible confounding effects of medication, 5 samples were excluded in sensitivity analyses. These samples came from participants who were not taking psychotropic drugs at the time of testing. All other participants were taking at least one (monotherapy) or a combination (combination therapy) of the following: (1) antidepressants, (2) antipsychotics, (3) mood stabilizers, (4) tranquilizers, or (5) other psychiatric medications.

Change in methylation over time

The general "stability" of methylation over time was investigated. First, the absolute change in methylation β-values between the baseline and 1-year follow-up visits was calculated across all CpG sites. To determine whether differential methylation between visits remained significant after adjusting for known confounders, the package lme4 (Bates et al. 2015) was used to fit a linear mixed-effects model (LMM) with the dependent variable "M-value" and the independent variable "time", adjusting for age, sex, DNAm smoking scores, and cell composition estimates. Patient ID was included as the random-effect term.

Candidate gene analysis

The associations of DNAm with LEQ scores, and with the interaction between childhood trauma (CT) and total LEQ scores, were assessed via LMMs, adjusting for covariates as described above. We interrogated DNAm in the vicinity of genes previously implicated in the HPA axis (i.e. BDNF, FKBP5, IL6, SLC6A4, and OXTR). All probes on the EPIC array annotated to each of these five genes were identified. The number of probes per gene ranged from 22 to 124. We corrected for multiple testing at the gene level by applying the false discovery rate (FDR) correction (Benjamini and Hochberg 1995) per gene, with FDR-corrected p-values ≤ 0.05 deemed significant. Afterwards, Bonferroni correction was used to correct overall for the number of candidate genes tested.

Exploratory EWAS

An exploratory EWAS was conducted. As a means of noise reduction, the top 10% most variable CpGs of the normalized, batch-corrected M-values were extracted according to median absolute deviation (MAD) scores, i.e. the median of the absolute deviations from the data's median. Associations between the most variable sites and LEQ scores, and the interaction between childhood trauma and total LEQ scores, were then tested using LMMs, adjusting for covariates as described above; a sketch of these models follows.
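The per-CpG models just described can be sketched as follows. This is a minimal illustration, not the authors' script: `pheno` (hypothetical columns leq_total, age, sex, smoke_score, the six cell estimates, and subject `id`) and `mvals_bc` (the batch-corrected M-value matrix with samples as columns) are assumed inputs, and p-values here come from lmerTest's Satterthwaite approximation, an assumption on our part, whereas the paper reports using lme4 itself.

```r
library(lmerTest)   # wraps lme4 and adds approximate p-values

# One LMM per CpG: M-value ~ stress score + confounders, random intercept per subject.
fit_cpg <- function(cpg) {
  d <- cbind(pheno, M = mvals_bc[cpg, rownames(pheno)])  # samples must match
  m <- lmer(M ~ leq_total + age + sex + smoke_score +
              CD4T + CD8T + NK + Bcell + Mono + Gran + (1 | id), data = d)
  coef(summary(m))["leq_total", ]
}

# Candidate-gene analysis: test all probes annotated to one gene, then FDR per gene.
res_gene <- t(sapply(gene_probes, fit_cpg))
p_fdr    <- p.adjust(res_gene[, "Pr(>|t|)"], method = "BH")

# Exploratory EWAS: restrict to the top 10% most variable CpGs by MAD first.
mad_scores <- apply(mvals_bc, 1, mad)
top_cpgs   <- names(mad_scores)[mad_scores >= quantile(mad_scores, 0.90)]
res_ewas   <- t(sapply(top_cpgs, fit_cpg))
```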
Epigenetic aging

DNAm-based age prediction was performed using the Horvath age estimation algorithm (Horvath 2013) with a freely available online tool (https://dnamage.genetics.ucla.edu/home), which predicts DNAm age based on the methylation of 353 CpGs using an elastic net penalized regression model. The difference between the estimated epigenetic age and chronological age (Δage) and a measure of epigenetic age acceleration (AA), i.e., the residual from regressing DNAm age on chronological age, were calculated (see the sketch at the end of this section). LMMs were used to determine the effect of LEQ scores on Δage, adjusting for chronological age, sex, DNAm smoking scores, cell composition estimates, and technical batch effects (sample slide and array). Additionally, the differences in symptom ratings and stress scores between visits were calculated. The association of the change in symptoms and LEQ scores between baseline and 1-year follow-up with AA at the 1-year follow-up was determined via linear regression models, again controlling for chronological age, sex, DNAm smoking scores, cell composition estimates and technical batch effects.

Additional analyses

Nominally significant CpGs (unadjusted p < 0.05) associated with total LEQ scores were used for gene-based enrichment analysis using the GOmeth function from the Bioconductor package missMethyl. GOmeth maps a vector of CpG sites to Entrez Gene IDs and tests for gene ontology (GO) term pathway enrichment using a hypergeometric test (Geeleher et al. 2013). Additionally, the correlation between DNAm in blood and in four brain regions was explored for the most suggestive CpGs associated with total LEQ scores (see Additional file 1).
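The two aging measures defined above reduce to a few lines of R. This sketch assumes matched vectors `dnam_age` (the Horvath estimate) and `age` (chronological age), plus β-value matrices `beta_v1` and `beta_v3` for the per-CpG stability summary reported in the Results; all object names are placeholders.

```r
# Δage: simple difference between epigenetic and chronological age.
delta_age <- dnam_age - age

# Age acceleration (AA): residual from regressing DNAm age on chronological age.
aa <- residuals(lm(dnam_age ~ age))

# Per-CpG stability between visits: mean absolute change in beta values.
abs_dbeta <- rowMeans(abs(beta_v3 - beta_v1))
summary(abs_dbeta)        # distribution of |Δβ| across CpG sites
sum(abs_dbeta >= 0.10)    # count of the least stable sites
```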
Results

Change in methylation over time

The mean absolute difference in methylation (β) between visits 1 and 3 (|Δβ|) was calculated across all samples for all CpG sites (Fig. 1). Over the 1-year period, |Δβ| ranged from < 0.001 to 0.299, with an average change of 0.014. Of 753,251 CpG sites, only 68 had a |Δβ| of 0.10 or more, while 8454 sites differed by at least 0.05 between visits. Just over half of the sites (428,610) showed an absolute difference in methylation of at least 1%. Investigation of the functional genomic distribution of the least stable CpGs over time (|Δβ| ≥ 0.10) revealed that the majority fell within open seas, while 12 fell within CpG islands and the remainder in CpG shores and shelves (Fig. 2). In summary, 34,776 CpG sites showed a nominally significant difference over time (unadjusted p-value < 0.05) after correcting for age, sex, smoking and cell composition estimates. However, not a single locus withstood correction for multiple testing (FDR-corrected p-value < 0.05).

Methylation association analysis

We performed an exploratory analysis looking for associations between LEQ scores and DNAm at individual CpG probes in the vicinity of candidate genes previously implicated in the stress response, and in the most variable CpG sites across the epigenome. Methylation at a single CpG site (cg15212455; POU6F2, "POU class 6 homeobox 2"; chromosome 7) was associated with the impact ratings of total LEQ scores at a suggestive significance of p < 1.0 × 10−5, although not a single locus withstood correction for multiple testing (FDR-corrected p > 0.05 for all comparisons). Figure 3 shows the Manhattan plot depicting all analyzed CpG sites with their calculated p-values for the association between DNAm and total LEQ scores; in that figure, the horizontal red line represents the epigenome-wide significance threshold for this study (p < 6.6 × 10−7) and the blue line a suggestive significance threshold (p < 1.0 × 10−5). Table 2 lists the top 20 loci associated at nominal significance with total LEQ scores. Inspection of quantile-quantile (QQ) plots did not show evidence of inflation or bias (Fig. 4; lambda factor = 0.98). Manhattan plots and associated QQ plots for the additional association analyses can be found in Additional file 1: Figs. S3-S8. The sensitivity analysis, excluding subjects who did not take psychotropic drugs at the time of testing, did not yield significant associations. These results, specific to modeling the association between DNAm and total LEQ scores, are presented in Additional file 1: Figs. S9 and S10.

Epigenetic aging

As expected, there was a strong positive correlation between individuals' DNAm age and chronological age (r = 0.941, p < 0.001; see Additional file 1: Fig. S11). According to Horvath's estimate, the mean (SD, range) AA was −0.23 years (3.71, range −9.94 to 9.86 years) at baseline and 0.25 years (3.95, range −8.12 to 9.43 years) at the 1-year follow-up. Between visits, the mean (SD, range) change in AA was 0.50 years (4.97, range −10.72 to 13.85 years). Overall, no statistically significant associations between epigenetic aging and symptom or stress measures were detected.

Additional analyses

We included genes mapped by the top CpG sites (unadjusted p < 0.05) associated with total LEQ scores in an enrichment analysis. No biological processes survived FDR correction (see Additional file 1: Table S3). Blood-brain correlation coefficients for methylation of the top 20 loci associated with total LEQ scores (overlapping with the 450K BeadChip array) are presented in Table 3. Eight of the top 20 most differentially methylated loci associated with total LEQ scores showed a significant correlation between methylation in blood and methylation in at least one brain region. Methylation of the CpG site that was most strongly associated with total LEQ scores was significantly correlated with methylation in all four brain regions (p < 0.001; see Additional file 1: Fig. S12).

Discussion

To the best of our knowledge, our study is the first to investigate epigenome-wide methylation changes over time in BD patients. Moreover, it is the first to explore methylation changes related to non-traumatic stressful life events on an epigenome-wide scale. Although no locus withstood correction for multiple testing, our suggestive findings and secondary analyses provide limited evidence supporting a role of DNAm in association with non-traumatic life events in chronic BD patients. We identified a single, suggestively significant CpG site associated with total LEQ scores, mapping to POU6F2, which has been associated with several psychiatric traits as well as intelligence and educational attainment. More specifically, genome-wide association studies have identified POU6F2 risk variants associated with psychological distress (Koshimizu et al. 2019), feeling emotionally hurt (Nagel et al. 2018), schizophrenia (Goes et al. 2015), autism (Anney et al. 2010), educational attainment (Lee et al. 2018; Okbay et al. 2016) and intelligence (Davies et al. 2018). Additionally, in a longitudinal investigation of DNAm changes preceding adolescent psychotic experiences, DNAm of the CpG site cg11604728 (POU6F2) measured at age 15-17 was among the top 20 CpG sites indicative of psychotic experiences at age 18 (Roberts et al. 2019).
Furthermore, POU6F2 is highly expressed in the brain, with the highest expression found in the frontal cortex (Additional file 1: Fig. S13), and methylation of our suggestive CpG site in blood is correlated with methylation in brain tissue across multiple brain regions. Interestingly, another of our top 20 CpG sites (cg26822318) falls in proximity to the FER1L6 gene, of which a variant (rs4870888) has been associated with suicide attempts in a meta-analysis of major depressive disorder, schizophrenia and BD (Mullins et al. 2019). Furthermore, another GWAS reported a FER1L6 variant (rs10481151) suggestively associated with cognitive performance (Need et al. 2009). At the current sample size, our study provides only minimal evidence supporting an association between methylation of individual CpGs and non-traumatic, recent stressful life events in BD. These findings, however, corroborate other reports of a limited role of DNAm in non-traumatic stress (Marzi et al. 2018). Of note, a recent study reported hypermethylation of KITLG associated with childhood trauma in healthy controls (n = 91) but not in bipolar patients (n = 50) (He et al. 2018). Although the mechanistic role of DNAm in the phenotypic expression of early-life adversities is well established in the literature, other mechanisms may be responsible in adulthood and in association with subsequent events. This notion aligns with theories such as Post's kindling hypothesis and the decay model, which suggest a higher impact of life events on the first episode than on subsequent episodes in BD (Aldinger and Schulze 2017; Kemner et al. 2015; Hillegers et al. 2004). Furthermore, it must be considered whether positive epigenetic associations with life events could be disorder-specific, genotype-dependent, or associated with specific trauma exposures, age groups, sex and/or the tissues measured (Marzi et al. 2018; Vinkers et al. 2015; Uddin et al. 2010; Boks et al. 2015; Smith et al. 2011; Mehta et al. 2017). While there is no gold standard for life-stress measurement, differences in how stress is quantified may also have a major effect on findings (Johnson 2005; Bender and Alloy 2011; Monroe 2008; Dohrenwend 2010; Brown and Harris 2012). The main strength of our study is its longitudinal design, which allowed for repeated measures within individuals and for the investigation of methylation changes over time and in relation to symptomatology and stressful life events. To the best of our knowledge, this is the first study to collect repeated epigenome-wide methylation measures in bipolar patients. Furthermore, our study paid attention to critical confounding factors which often lead to spurious findings. For example, the use of methylation-based smoking scores controls for the extent of smoking throughout the lifetime better than self-reported smoking measures do (Elliott et al. 2014; Shenker et al. 2013). Finally, in contrast to most other studies, we have included an exploratory epigenome-wide approach. Despite the strengths of our study, several limitations need to be addressed. First, our study was limited by our small sample size, which makes identifying subtle differences in methylation difficult. Taking power into consideration, and as an attempt to address the inherent multiple-testing problem associated with EWAS, we limited our EWAS to only the most variable CpG sites according to MAD scores.
While the fact that not a single site-specific association in DNAm survived correction for multiple testing could reflect the limited statistical power of our small sample, it may also be related to an overly conservative multiple-testing correction, considering the lack of variability in methylation at many CpGs and the spatial correlation of methylation with nearby sites (Walker et al. 2016; Lunnon et al. 2015). A recent study estimated that there are approximately 530,000 independent tests in a whole-blood EPIC array DNAm study. Accordingly, the authors proposed a corrected significance threshold of 9.42 × 10−8 to be used as a standard threshold for future EWAS based on the EPIC array (Mansell et al. 2019). Furthermore, the study introduced a freely available online tool which allows users to perform power calculations to guide sample sizes, accounting for the individual properties of each DNAm site and using their empirically derived significance threshold. According to their tool, an effect size of just a 1% difference between cases and controls would require a sample of 1000 participants for even a third of methylation sites to have > 80% power. We observed an effect size below 5% in our study (based on a median split) for our most significantly associated site, indicating that our study is nevertheless underpowered. Future studies should take advantage of this tool to assess, a priori, the required sample sizes according to their expected effect sizes. Furthermore, complementary systems-biology approaches such as weighted gene co-methylation network analysis (WGCNA) could be beneficial for studies with limited sample sizes, providing more insight into the functional role of altered DNAm (Langfelder and Horvath 2008). Another limitation relates to the fact that our sample represents a cohort of chronic BD patients, which likely influenced our investigation of epigenetic aging in relation to symptom ratings over time. The chronicity of patients may also confound our findings with regard to the heterogeneous treatments patients have received over the years. To acknowledge this critical factor, we conducted a sensitivity analysis excluding those subjects not taking psychotropic drugs at the time of testing; however, this also did not lead to significant results. One must also consider the possible recall and desirability biases associated with self-rating questionnaires like the LEQ and CTS. Lastly, little is known about the temporal stability of epigenetic markers (Byun et al. 2012; Talens et al. 2010). We cannot be sure whether the time interval of 1 year was too long or too short to observe dramatic methylation changes, or in what time window following exposure to stressful life events one might observe changed methylation profiles.

Conclusions

BD is a multifactorial psychiatric illness, and for many patients full interepisodic remission never occurs (Sam et al. 2019). Stressful life events have been associated with a worse course of BD (Aldinger and Schulze 2017), and there remains a need to better understand the mechanisms that allow these stressors to bring about phenotypic change. Our study provides limited evidence supporting an association between DNAm and recent, non-traumatic stressful life events in BD patients. As findings in clinical populations have been inconsistent, there is still much to be understood, especially with regard to the temporal nature of environmentally induced DNA modifications. Future larger studies of well-characterized patients, longitudinal in design, are warranted.
Genomic profiling of primary histiocytic sarcoma reveals two molecular subgroups

Histiocytic sarcoma is a rare malignant neoplasm that may occur de novo or in the context of a previous hematologic malignancy or mediastinal germ cell tumor. Here, we performed whole exome sequencing and RNA-sequencing (RNA-Seq) on 21 archival cases of primary histiocytic sarcoma. We identified a high number of genetic alterations within the RAS/RAF/MAPK pathway in 21 of 21 cases, with alterations in NF1 (6 of 21), MAP2K1 (5 of 21), PTPN11 (4 of 21), BRAF (4 of 21), KRAS (4 of 21), NRAS (1 of 21), and LZTR1 (1 of 21), including single cases with homozygous deletion of NF1, high-level amplification of PTPN11, and a novel TTYH3-BRAF fusion. Concurrent NF1 and PTPN11 mutations were present in 3 of 21 cases, and 5 of 7 cases with alterations in NF1 and/or PTPN11 had disease involving the gastrointestinal tract. Following unsupervised clustering of gene expression data, cases with NF1 and/or PTPN11 abnormalities formed a distinct tumor subgroup. A subset of NF1/PTPN11 wild-type cases had frequent mutations in B-cell lymphoma-associated genes and/or clonal IG gene rearrangements. Our findings expand the current understanding of the molecular pathogenesis of this rare tumor and suggest the existence of a distinct subtype of primary histiocytic sarcoma characterized by NF1/PTPN11 alterations with a predilection for the gastrointestinal tract.

Introduction

Histiocytic sarcoma (HS) is a rare and aggressive malignant neoplasm that has the morphological and immunophenotypic features of mature tissue histiocytes. 1 It predominantly occurs in adulthood, although any age may be affected. Sites of involvement may be nodal or extranodal, and include the gastrointestinal (GI) tract, skin, and liver. 2 Histologically, the tumor cells are usually pleomorphic with cytologic atypia and can be multinucleated or have a spindled or xanthomatous morphology. The diagnosis of HS requires the demonstration of histiocytic markers (CD68, CD163, CD4 or lysozyme) and the exclusion of tumors of other lineages by negativity of immunohistochemical stains for Langerhans cells (CD1a, langerin), follicular dendritic cells (CD21, CD23, clusterin), B and T cells, cells of myeloid or epithelial lineage (MPO, CK), and melanocytic markers. 3,4 Histiocytic sarcoma may arise as a primary neoplasm (pHS), but is also well described in the context of an existing or concurrently diagnosed hematologic malignancy, most frequently a follicular lymphoma, but also chronic lymphocytic leukemia (CLL) and B- or T-lymphoblastic leukemia (B-ALL/T-ALL). 5-9 Rare cases have also been associated with mediastinal germ cell tumor. 3 Cases arising in the context of a lymphoid neoplasm are often referred to as "secondary" HS (sHS), and frequently possess identical clonal antigen receptor gene rearrangements or occasionally identical structural events (e.g. identical IGH/BCL2 rearrangements) as the associated lymphoma. However, on histological and immunophenotypic examination they have no other evidence of lymphoid origin. 5,6,10 Rather, these cases express markers of histiocytic/monocytic differentiation, but are nonetheless thought to be related to the associated B-cell neoplasm through a poorly understood process sometimes referred to as transdifferentiation 5 or origin from a common neoplastic progenitor. 11
Interestingly, the presence of clonal IG gene rearrangements or a BCL2 translocation is not restricted to secondary cases associated with a B-cell malignancy, as both abnormalities have also been observed in sporadic or "primary" cases of HS. 12,13 In contrast to the more comprehensive studies performed in other histiocytic tumors, especially Langerhans cell histiocytosis and Erdheim-Chester disease, 14-17 molecular analysis of HS has, until recently, remained relatively underexplored. 18 BRAF p.V600E mutations have been reported in approximately 12% of 108 published cases with molecular or immunohistochemical data, and additional alterations in members of the RAS/MAPK and PI3K/AKT pathways, including other BRAF variants, KRAS, HRAS, NRAS, MAP2K1, PIK3CA, PTPN11 and PTEN, are also described (see Online Supplementary Tables S1 and S2 for a complete list of references). The distinction between pHS and sHS is often not clearly defined in these studies. To better understand the genetic landscape of alterations in a well-characterized series of pHS, we performed an integrated genomic analysis of 21 cases utilizing whole exome sequencing, whole transcriptome sequencing, and copy number analysis. Cases of sHS were intentionally excluded from this study.

Case selection, IGH/BCL2 and clonality studies

Twenty-one cases of pHS were identified from the files of the Hematopathology Section of the National Cancer Institute under an Institutional Review Board-approved protocol (Online Supplementary Methods). The histological and immunophenotypic features and clonality characteristics of the cases are detailed in Figure 1 and the Online Supplementary Material.

Variant analysis

Germline variants were excluded in the three cases with available matched normal samples. Exonic variants with a depth of coverage ≥ 20 and a read count ≥ 6 were retained. As matched germline samples were unavailable for most cases, we generated a targeted gene list to reduce the number of variants for review. Genes were compiled from the COSMIC Cancer Gene Census (http://cancer.sanger.ac.uk) 19 and a literature review to select disease-relevant genes with a potential oncogenic role. The list was supplemented with additional genes identified by filtering the exome sequencing data to include recurrently mutated genes (≥ 3 samples) after removing variants based on CADD phred-like scores 20 and population allele frequencies (Online Supplementary Methods; a sketch of this retention rule is given at the end of this section). All variants involving genes in the targeted gene list were evaluated and categorized as significant based on set criteria (Online Supplementary Methods and Online Supplementary Figure S1). Variants not meeting the set criteria were excluded. Mutations were reviewed in the Integrative Genomics Viewer (IGV). 21

RNA sequencing

Details of RNA library preparation, sequencing and fusion detection are described in the Online Supplementary Methods. RNA-Seq analysis was conducted using the CCBR RNA-Seq pipeline (https://github.com/CCBR/Pipeliner). Gene set enrichment analysis was performed using Ensemble of Gene Set Enrichment Analyses (EGSEA data version: 1.6.0) 22 and sorted by average rank.
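As a minimal sketch of the variant retention rule described under Variant analysis, the filter below is expressed in R over a hypothetical annotated variant table. The depth and read-count cutoffs are those stated above, whereas the population-frequency and CADD cutoffs are illustrative assumptions only; the exact values are in the Online Supplementary Methods.

```r
# `vars` is a hypothetical data frame of annotated exonic variants with columns:
# depth (total coverage), alt_reads (variant-supporting reads),
# pop_af (population allele frequency, NA if unobserved), cadd_phred.
keep <- with(vars,
             depth >= 20 & alt_reads >= 6 &         # cutoffs stated in the text
             (is.na(pop_af) | pop_af < 0.001) &     # assumed frequency cutoff
             cadd_phred >= 20)                      # assumed CADD cutoff
vars_retained <- vars[keep, ]
```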
Copy number analysis

Nine samples were successfully assessed using the OncoScan CNV FFPE Assay (Affymetrix, Santa Clara, CA, USA) according to the manufacturer's protocol. Copy number was estimated from the exome sequencing data in the remaining cases using default settings for CNVkit v0.8.5 23 and PureCN v1.8.1. 24 Calls from CNVkit were exported in nexus.ogt format for review and annotation in Nexus 9.0 Software (BioDiscovery, Hawthorne, CA, USA). Alterations called by both algorithms were further analyzed as described in the Online Supplementary Methods.

Data sharing

All genomic data from this study will be deposited in the dbGaP database (www.ncbi.nlm.nih.gov/gap) with the accession number phs001748.v1.p1.

Results

Primary histiocytic sarcoma is characterized by frequent alterations involving the RAS/MAPK pathway

Whole exome sequencing was performed on 21 cases of pHS, as defined in the Online Supplementary Methods, and on three matched normal controls (His01, His08, His16), in two groups. The median coverage in the first 15 cases ranged from 106-165x, and in the second six cases from 205-305x. Sequencing depth for the three matched controls ranged from 72-143x. Variants were filtered as described in the Online Supplementary Methods using stringent criteria, and all candidates were individually reviewed in IGV. Multiple and occasionally concurrent mutations involving genes of the RAS/MAPK pathway were identified (Figure 2). Five cases harbored NF1 mutations, most of which were truncating and therefore predicted to be inactivating. The single missense mutation (p.V1182D) was predicted to be deleterious or probably damaging by the functional impact algorithms SIFT 25 and PolyPhen-2. 26 In addition to NF1 mutations, 3 of the 5 cases showed concurrent mutations in PTPN11 (p.F71V [His01]; p.E76G [His02]; p.A72V [His12]). PTPN11 mutations were present within the autoinhibitory N-SH2 domain at amino acid residues known to be associated with a gain-of-function consequence and described in Noonan syndrome and juvenile myelomonocytic leukemia (JMML). 27,28 One case without a PTPN11 mutation [His17] had an additional mutation in GNAI2 at p.R179H, a codon previously shown to be targeted by activating mutations, 29 and 1 of the 5 NF1-mutated cases also had a JAK2 p.V617F mutation [His12]. A fourth PTPN11-mutated case, with p.E76K [His18], did not have another RAS pathway mutation; however, it had high-level amplification of the mutated PTPN11 allele (see below). Additional mutations involving the RAS/MAPK pathway were detected in another 13 cases, none of which had NF1 or PTPN11 mutations. In addition to mutations in the signaling pathways described above, mutations in epigenetic modifiers and/or transcription factors were detected in eight cases, including five with SETD2 mutations.

Copy number analysis shows additional alterations in NF1, PTPN11 and CDKN2A

A homozygous deletion in the NF1 gene was identified in an additional case [His03] from a lymph node and confirmed using a fluorescence in situ hybridization (FISH) probe targeting the deleted area. RNA-Seq data showed markedly lower counts of NF1 transcript in this case in comparison to the other samples, consistent with loss of NF1 (Figure 3A-C). The three cases with a single NF1 mutation [His01, His02 and His16] showed loss of heterozygosity (LOH) or copy number loss involving chromosome 17, including the NF1 gene. Interestingly, a focal high-level amplification on chromosome 12 targeting PTPN11 was discovered in a further case [His18] involving the GI tract. This case harbored a known variant in the N-SH2 domain (p.E76K) of the amplified PTPN11 allele. In contrast to the other PTPN11-mutated cases, no mutation of NF1 was detected by exome sequencing.
This high-level amplification was confirmed by FISH, which showed multiple copies of the PTPN11 gene in the tumor cells in a double-minute pattern. The amplified segment involved the entire PTPN11 gene, with the breakpoints identified by OncoScan in an adjacent gene, RPH3A, and 5' to PTPN11 involving HECTD4 on the complementary strand. This event was associated with a dramatic increase in PTPN11 transcripts in comparison to the other cases (Figure 3D-F).

Identification of a novel TTYH3-BRAF fusion

Fusion calling of RNA-Seq data identified a novel intrachromosomal fusion transcript joining exon 12 of TTYH3 to BRAF, preserving the BRAF kinase domain; the fusion was confirmed by an orthogonal method (Figure 4B), as well as by FISH (Figure 4C). Interestingly, RNA-Seq data showed markedly higher levels of BRAF transcript as compared to all other samples, suggesting that the TTYH3 gene partner contributed an active promoter to the fusion gene (Figure 4D). TTYH3 was found to be highly expressed in all cases with RNA-Seq data (data not shown).

Identification of two primary histiocytic sarcoma subgroups by whole transcriptome sequencing: association with NF1/PTPN11 mutational status

We examined the gene expression profile of pHS through whole transcriptome sequencing of 17 of the tumor samples, using four cases of reactive nodal histiocytic infiltrates as controls. Three of the 17 tumor samples initially sequenced were excluded from the differential expression analysis as they failed quality control metrics and/or were outliers within the tumor group [His10] or within the data as a whole [His06, His09] (Online Supplementary Figure S4). Re-clustering segregated the remaining samples into three groups: normal controls (4 samples), cases with NF1 or PTPN11 abnormalities (5 samples), and a third heterogeneous group comprising NF1/PTPN11 wild-type cases (9 samples) (Figure 5).

Gene set enrichment analysis shows enrichment of cell cycle processes in cases without NF1/PTPN11 abnormalities

To better understand the potential biological significance of the two pHS subgroups, we performed gene set enrichment analysis using EGSEA. This analysis showed significant enrichment of cell cycle pathway and cell proliferation gene sets in the NF1/PTPN11 wild-type tumor samples relative to samples with NF1/PTPN11 alterations. Ki67 immunohistochemistry was performed on a subset of cases and confirmed the lower proliferation rate in the NF1/PTPN11 subgroup (Figure 6 and Online Supplementary Table S3). The two tumor subgroups were also evident in the differential expression analysis comparing the normal and tumor samples when the genes with significant differential expression (FDR < 0.05, absolute log fold change > 1) were visualized across samples (Online Supplementary Table S5 and Online Supplementary Figure S5). We took advantage of this to explore the possibility that disease site might be influencing the clustering of the tumor subgroups, as four of the five NF1/PTPN11 samples subjected to unsupervised clustering were GI excisions. When we excluded genes associated with GI site from this tumor-versus-normal comparison, the two tumor subclusters were unaffected; however, this comparison revealed a set of genes that were more clearly differentially expressed between the tumor subgroups and whose removal resulted in the elimination of the tumor subgroups.
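A minimal sketch of the selection-and-clustering step described above is given below, assuming a limma-style results table `de` (columns adj.P.Val and logFC) and a normalized expression matrix `expr` with genes in rows and samples in columns; these object names are illustrative, not the authors' code.

```r
# Keep genes passing the stated thresholds: FDR < 0.05 and |log fold change| > 1.
sig <- rownames(de)[de$adj.P.Val < 0.05 & abs(de$logFC) > 1]

# Unsupervised hierarchical clustering of samples on the significant genes.
hc <- hclust(dist(t(expr[sig, ])), method = "ward.D2")
plot(hc, main = "Samples clustered on differentially expressed genes")
```

Dropping a candidate gene set (e.g. GI-site-associated or cell-cycle genes) from `sig` before re-clustering reproduces the kind of sensitivity check the authors describe.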
Functional enrichment of this set of genes using ToppFun (https://toppgene.cchmc.org) showed enrichment for cell cycle processes, supporting the EGSEA result and the interpretation that the observed clustering was related to the difference in cell cycle processes between tumor groups rather than to the tumor site (Online Supplementary Tables S6-S8 and Online Supplementary Figures S6 and S7).

Correlation of clonal IG rearrangement status and B-cell-associated mutations with NF1/PTPN11 wild-type status

Clonal analysis to detect rearrangements of IG and TRG genes was performed on 19 of 21 cases, including 6 of 7 NF1/PTPN11 cases and 13 of 14 NF1/PTPN11 wild-type cases. Five cases had clonal rearrangement of the IGH and/or the IGK locus [His04, His10, His14, His20, His21], while two cases (including one with an IG rearrangement) showed rearrangements of the TRG locus [His13, His20]. One case was indeterminate for a significant clonal IG rearrangement [His12]. All five cases clonally rearranged for IG were NF1/PTPN11 wild-type, and all had additional mutations in transcriptional regulators and/or signaling pathway genes previously reported to be altered in B-cell lymphoma (see above). Three additional cases in the NF1/PTPN11 wild-type group had at least one B-cell-associated gene mutation [His07, His08 and His19]. IGH/BCL2 (MBR) translocation analysis was positive in 1 of 17 cases [His10] (Online Supplementary Table S3). In total, 8 of the 14 cases in the NF1/PTPN11 wild-type subgroup had clonal IG gene rearrangements or mutations in genes reported to be mutated in B-cell lymphomas (Figure 2).

Discussion

Our study furthers the current understanding of the genomic landscape of primary HS through the integration of whole exome sequencing and gene expression analysis. It confirms the central role of the RAS/MAPK pathway in the pathogenesis of pHS, with RAS pathway abnormalities identified in all cases in this study. Moreover, it identifies two molecular subgroups, based on the presence or absence of NF1/PTPN11 alterations and the prevalence of SETD2 mutations, and independently through unsupervised clustering of RNA-Seq data. In addition, our study identifies novel mechanisms of RAS/MAPK pathway activation, including a previously unreported intrachromosomal fusion between TTYH3 and BRAF that preserves the BRAF kinase domain, and high-level PTPN11 amplification. Perhaps the most surprising finding of our study was the discovery of the NF1/PTPN11 subgroup with its distinct molecular characteristics and tissue site of involvement. In contrast to the NF1/PTPN11 wild-type subgroup, none of these cases harbored abnormalities in genes associated with B-cell lymphomas beyond SETD2, or had clonal IG rearrangements. GSEA revealed that this subgroup was characterized by a relative loss of gene sets related to cellular proliferation and the cell cycle compared to those harboring other RAS/MAPK alterations, a finding supported by Ki67 immunohistochemistry. The majority of the NF1/PTPN11-mutant cases had more than one MAPK pathway-activating mutation. Three of the seven cases had co-occurring NF1 and PTPN11 mutations, while a fourth case had a co-occurring mutation in GNAI2 involving a codon previously shown to activate the MAPK pathway. 29 Additionally, while the remaining PTPN11-mutated case did not have a co-occurring RAS mutation, it did have high-level amplification of the mutated PTPN11 allele.
In NF1-mutant melanoma, the frequent presence of a second gene mutation, often involving PTPN11 (or another RASopathy gene), has led to the suggestion that NF1 inactivation is insufficient to cause full activation of the downstream MAPK pathway and tumorigenesis. 43 This hypothesis has been given further credence by recent data showing that NF1 loss-of-function mutant cell lines are dependent on SHP2 (encoded by PTPN11)-mediated signaling for oncogenic RAS/MAPK pathway activation, 44 raising the possibility that activating mutations of PTPN11 may synergize with NF1 loss-of-function mutations to further potentiate the oncogenic activity of the pathway. In contrast to the NF1/PTPN11-positive subgroup, the NF1/PTPN11 wild-type cluster was comprised primarily of cases with prototypic RAS/MAPK pathway-activating mutations involving KRAS, NRAS, BRAF and MAP2K1. Interestingly, eight of the 14 cases in this subgroup contained IG gene rearrangements and/or additional mutations in genes commonly associated with B-cell lymphoproliferative disorders. These included one or more mutations in epigenetic regulators, transcription factors or signaling pathway genes, including CREBBP, KMT2D, DDX3X, ARID1A, MEF2B, SGK1, TNFRSF14, DTX1, GNA13, STAT6 and CARD11. 36-40 Clonal IG rearrangements were identified in five cases and a BCL2 gene rearrangement was identified in one case, while neither was definitively detected in the NF1/PTPN11 subgroup. In our series, it is worth noting that none of our cases had evidence of a concurrent or previous lymphoma, although we cannot exclude the possibility of an occult or unreported B-cell lymphoma being present. The finding of additional mutations associated with B-cell lymphomas and clonal IG gene rearrangements suggests that some cases of NF1/PTPN11 wild-type pHS may be similar in origin to the sHS that are associated with B-cell malignancies, which often share IG gene rearrangements with the associated B-cell lymphoma. 5,6 This overlap has also been recently reported by Shanmugam et al. 18 In their series, they showed enrichment for a mutational signature resembling aberrant somatic hypermutation in cases that had a history of B-cell lymphoma or that had mutations in genes that are frequently mutated in B-cell lymphoma. Interestingly, they also found recurrent CDKN2A alterations that were more frequent in cases with a history of B-cell lymphoma or the aberrant somatic hypermutation signature. Similarly, our study identified a high frequency of focal CDKN2A losses/alterations in the NF1/PTPN11 wild-type subgroup which, in our cohort, frequently had molecular alterations associated with B-cell lymphoma. Concurrent mutations in RAS/MAPK pathway genes were less common in the NF1/PTPN11 wild-type group, occurring in 4 of 14 cases with alterations. Two cases had concurrent mutations in MAP2K1 [His11 and His14]. The other two had co-occurring BRAF (p.G469V) and MAP2K1 (p.F53L) mutations [His10] or KRAS (p.G12D) and RAF1 (p.D486G) mutations [His21]. These data are consistent with the limited published data on HS, in which reported occurrences of multiple RAS/MAPK pathway mutations tend to manifest as co-occurring MAP2K1 mutations 18 or involve atypical BRAF mutations, with co-occurring BRAF (p.G464V) and KRAS (p.Q61H), 45 BRAF (p.D594N) and KRAS (p.A146T), 18 BRAF (p.G469R) and NF1 (p.W2229*), 18 and BRAF (p.F595L) and HRAS (p.Q61R) 46 mutations described.
Interestingly, in the latter case the unusual BRAF mutation was shown to have weak oncogenic activity, requiring the cooperation of the HRAS mutation for full activity. Two of the analytical challenges in our study were the lack of available matched germline samples in all but three cases, and the possibility that over-representation of the GI site in the NF1/PTPN11 group could bias the RNA-Seq clustering. To exclude as many germline SNPs as possible, we filtered all variants using stringent criteria for their representation in control populations (gnomAD) 47 and took CADD scores, 20 as well as presence in the Catalogue of Somatic Mutations in Cancer (COSMIC), 48 into consideration. In assessing potential site bias in gene expression clustering, we found that the separation of the tumor samples into the subgroups in the differential expression analysis was influenced by the removal of cell cycle-related genes but not by the exclusion of GI site-associated genes. This, in addition to the similar mutational alterations in the cases, suggests that the clustering observed occurs independently of site. In conclusion, our study provides further insight into the molecular pathogenesis of pHS. We show frequent mutations and alterations in genes of the RAS/MAPK pathway, suggesting that patients could potentially benefit from genomic evaluation and targeted therapy, and we report a distinct molecular subtype of pHS that correlates with the NF1/PTPN11 status of the tumor and frequently involves the GI tract. Finally, we also identify a subset of NF1/PTPN11 wild-type cases with mutations in B-cell lymphoma-associated genes and/or clonal IG gene rearrangements. The identification of molecular subtypes of primary histiocytic sarcoma may prove to have clinical relevance in future studies.
Genetic Consequences of Forest Fragmentation in a Widespread Forest Bat (Natalus mexicanus, Chiroptera: Natalidae)

Recent historical and anthropogenic changes in the landscape causing habitat fragmentation can disrupt the connectivity of wild populations and pose a threat to the genetic diversity of multiple species. This study investigated the effect of habitat fragmentation on the structure and genetic diversity of the Mexican greater funnel-eared bat (Natalus mexicanus) throughout its distribution range in Mexico, whose natural habitat has decreased dramatically in recent years. Genetic structure and diversity were measured using the HVII hypervariable domain of the mitochondrial control region and ten nuclear microsatellite loci, to analyze historical and contemporary information, respectively. The mitochondrial and nuclear results pointed to differential genetic structuring, derived mainly from philopatry in females. Our results also showed that genetic diversity was historically high and is currently moderate; additionally, contemporary gene flow between the observed groups was null. These findings confirm that the effects of habitat fragmentation have started to be expressed in populations and that forest loss is already building barriers to contemporary gene flow. The concern is that gene flow is a process essential to ensure that the genetic diversity of N. mexicanus populations (and probably of many other forest species) distributed in Mexico is preserved or increased in the long term by maintaining forest connectivity between locations.

Introduction

The fragmentation of natural habitats is a key issue for biodiversity and poses a threat to the genetic diversity of multiple species [1-4]. Fragmentation is a process of change in spatial structure, from a relatively homogeneous environment to one with a progressively less homogeneous structure that is ultimately transformed into a heterogeneous habitat. This can reduce the total area of a given habitat type, split the remaining habitat, and even increase the isolation of remnants [5-7]. Habitat fragmentation disrupts the connectivity among populations of various taxa, reducing population genetic diversity and increasing population structuring [8-11], due to the genetic drift associated with low gene flow [12]. Bats are among the most abundant and diverse groups of mammals in tropical forests, playing a central role in pollination, regulation of insect populations, and seed dispersal [13,14]. Despite their ability to fly, bats are vulnerable to the loss of genetic variation in response to anthropogenic fragmentation in tropical forests [15-17]. This study therefore examined the genetic structure and diversity of N. mexicanus throughout its distribution range in Mexico, where the natural landscapes have been fragmented as a result of human activities, and tested whether population genetic isolation is occurring due to the lack of dispersal. To this end, we have integrated information from molecular markers with different inheritance modes (mtDNA and microsatellites) that allows the evaluation of the historical and contemporary genetic structure and diversity of natalid bats in Mexico.

Sampling

Between 2004 and 2014, tissue samples were collected from Natalus mexicanus specimens inhabiting 21 locations throughout their geographic range in Mexico (Table S1 and Figure 1). Bats were captured using harp traps and mist nets, and wing membrane biopsies were collected with a 3 mm biopsy punch (Fray Products Corp., Buffalo, NY, USA). Tissue samples were preserved in 70% ethanol and deposited at −20 °C in the tissue collection of the Laboratorio de Biología y Ecología de Mamíferos de la Universidad Autónoma Metropolitana-Iztapalapa (UAMI), Mexico.
The captured bats were released, except for some individuals that were preserved as vouchers and deposited at the Mammal Collection of the UAMI (catalog numbers: RLW300713Nme3-RLW300713Nme10). Specimen collection protocols and animal handling followed the institutional ethical guidelines set by the American Society of Mammalogists [52] and the ethical guidelines of the División de Ciencias Biológicas y de la Salud, Universidad Autónoma Metropolitana-Iztapalapa [53] (Project "Biology and Ecology of Bats in Mexico", approved by the Consejo Divisional de Ciencias Biológicas y de la Salud, Session 17.18, dated 28 November).

Mitochondrial DNA

Total DNA was extracted from 245 Natalus mexicanus specimens following the protocol of the Wizard SV Genomic DNA Purification System kit (Promega). For these 245 individuals, a 331-bp fragment of the HVII domain of the mtDNA control region was amplified via polymerase chain reaction (PCR), using the primers L16517 and HSC [54] and following the conditions, and the modifications of the primer HSC, of [50]. Sequencing was performed with the Big Dye Terminator Kit (Perkin-Elmer, Norwalk, Connecticut) on an ABI 3130xl automatic sequencer (Applied Biosystems, Foster City, California). The sequences were edited and aligned with Geneious v. 5.6.4 [55] using the ClustalW algorithm and were subsequently adjusted visually.

Microsatellite Loci Amplification

Microsatellite loci were amplified from tissue samples of 171 individuals from 11 localities in Mexico by means of ten dinucleotide microsatellite primer pairs previously developed for the species (Nm1-Nm10; [56]), used with the PCR conditions described above. Fragments were read on an ABI PRISM 3130xl Genetic Analyzer, with the GeneScan™ 500 LIZ® Size Standard as the allele-size standard. Allele size was estimated using GeneMarker v. 2.4.2 (SoftGenetics, LLC, State College, PA, USA).

Genealogical Analysis

Genealogical relationships between haplotypes were determined with a haplotype network built using the median-joining method in the software Network v. 4.6.1.3 [57]. Loops were resolved according to the criteria of [58]. Genetic distances between the genealogical groups (haplogroups) obtained were calculated with MEGA v. 5.0.5 [59], using the Tamura-Nei (TrN) model. Haplotype diversity (h) and nucleotide diversity (π) were estimated for each locality, as well as for the groups obtained, using DnaSP v. 5 [60]; a minimal R illustration of these two estimators follows.
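The two diversity statistics just named have direct R equivalents. The sketch below assumes an aligned sequence set `seqs` held as a DNAbin object and uses the pegas package rather than DnaSP, so it is an illustration of the estimators under stated assumptions, not the authors' workflow; the input file name is hypothetical.

```r
library(ape)    # read.dna() and DNAbin objects
library(pegas)  # population-genetic estimators

seqs <- read.dna("hvii_alignment.fasta", format = "fasta")  # hypothetical file
h  <- hap.div(seqs)   # haplotype (gene) diversity, h
pi <- nuc.div(seqs)   # nucleotide diversity, pi
```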
The genetic structure between and within localities was determined with an analysis of molecular variance (AMOVA) in Arlequin v. 3.5.1.2 [61], run at two levels: ungrouped, and among the four groups obtained through the haplotype network (see Section 3). The program Barrier v. 2.2 was used to highlight likely geographic areas of genetic discontinuity [62]. Population dynamics were evaluated with an extended Bayesian skyline plot (EBSP) analysis of the mtDNA control region in BEAST v. 1.8.4 [67] on the CIPRES web portal (specialized in phylogenetics); the analysis was run twice, each time for 30 million generations, using a coalescent Bayesian skyline model and an uncorrelated lognormal relaxed-clock model. The optimal evolutionary model was estimated with jModelTest v. 2.1.6 (Pacific, Gulf of Mexico and Yucatan Peninsula: Tamura-Nei 93). A substitution rate ranging from 0.01 to 0.025 substitutions per site per million years (s/s/my) was used following [68]; an Excel graph was produced. All analyses were performed for each genetic haplogroup obtained; the exception was San Sebastián (SS), which, given its low sample size (see Results), was included in the Pacific haplogroup.

Historical Gene Flow

The relative mutation-scaled migration rates (M) between the four mitochondrial groups obtained in Network, and the relative effective population sizes (θ), were estimated using Markov chain Monte Carlo simulations in Migrate-n v. 3.7.2 [69] under a Bayesian inference model with a constant mutation rate. A random tree was used as the baseline genealogy. The parameters of the first run were used as baseline values for the subsequent run until a converging result was obtained. The Markov chain length was set at 10,000 steps with 1000-step increments. An adaptive four-chain heating scheme was set at temperatures of 1.0, 1.5, 3.0, and 1,000,000. A total of 10,000 trees per chain were discarded.

Microsatellite Data Analysis

The presence and frequency of null alleles were checked by locus and locality with MICROCHECKER v. 2.2.3 [70]. To confirm that the presence of null alleles had no effect on the results, we calculated FST and genetic distance values with and without ENA correction (estimation of null alleles) using the software FREENA [71] and performed a Student's t-test with NCSS v. 11 [72]. Deviation from Hardy-Weinberg equilibrium (HWE) and linkage disequilibrium between pairs of loci were calculated using GENEPOP v. 4.0 [73], applying the sequential Bonferroni correction to the significance level of p < 0.05 [74].

Population Structure and Genetic Diversity

The genetic structure was evaluated through a Bayesian clustering analysis with STRUCTURE v. 2.2 [75], with a burn-in of 1,000,000 and 500,000 Markov chain Monte Carlo iterations, testing clusters from K = 2 to 11 with 20 replicates per K. The most likely number of genetic clusters (K) was determined by estimating Delta K (∆K) and the logarithmic probability of K, ln P(K) = L(K) [76], using the Structure Harvester website [77]; a sketch of the ∆K computation follows.
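The ∆K criterion referenced above can be computed in a few lines of R. This sketch assumes `L` is a matrix of ln P(K) values with rows as replicate STRUCTURE runs and columns as successive K values (here 2-11), and it applies the Evanno second-difference statistic to the run means, a slight simplification of the original formulation.

```r
ks <- 2:11                 # K values tested (columns of L)
mL <- colMeans(L)          # mean ln P(K) over the 20 replicates per K
sK <- apply(L, 2, sd)      # sd of ln P(K) over replicates
n  <- length(mL)

# Delta K for interior K values: |L(K+1) - 2 L(K) + L(K-1)| / sd(L(K)).
deltaK <- abs(mL[3:n] - 2 * mL[2:(n - 1)] + mL[1:(n - 2)]) / sK[2:(n - 1)]
names(deltaK) <- ks[2:(n - 1)]
names(deltaK)[which.max(deltaK)]   # most likely number of clusters
```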
On the other hand, the distribution of genetic variation between and within populations was analyzed using an analysis of molecular variance (AMOVA) based on FST and RST, with 30,000 permutations, in Arlequin v. 3.5.1.2 [61]. Genetic diversity by locality was obtained using GenAlEx v. 6.3 [78], estimating the number of alleles (Na), exclusive alleles (Np), observed heterozygosity (HO), and expected heterozygosity (HE).

Contemporary Migration Rates

Gene flow among the four groups (see Results) identified by STRUCTURE was estimated in BAYESASS v. 3.0.4 [79]. Unlike Migrate-n, BAYESASS uses an assignment method and does not incorporate genealogy; moreover, it reflects gene flow that occurred only in the past 1-3 generations. The BAYESASS analysis was first run with microsatellite data using the default delta values for allelic frequency, migration rate, and inbreeding. Subsequent analyses incorporated different delta values to ensure that the proposed changes between chains at the end of the run were between 40% and 60% of the total chain length [79]. Once the delta values (∆A = 0.40, ∆m = 0.45, and ∆F = 0.60) yielded proportions within the accepted range (∆A = 0.15, ∆m = 0.15, and ∆F = 0.14) for the four genetic groups, analyses were run three additional times (10 million iterations, one million burn-in, and a sampling frequency of 5000) with different random seeds. All parameter estimates converged.

Mitochondrial DNA Data Analysis

The 245 sequences of the Natalus mexicanus mtDNA control region spanned 331 bp with no tandem repeats. They showed a base composition of T: 23.8%, C: 26.6%, A: 32.5%, and G: 17.1%, with 271 conserved sites and 60 variable sites, 44 of which were parsimony-informative.

Population Structure and Genetic Diversity

When groups were not defined, the AMOVA showed a high percentage of genetic variation (71%) between localities and high differentiation (FST = 0.71; p < 0.05); the differentiation values per group were also high and significant (FCT = 0.446, p < 0.05) (Table 1). The analysis using pairwise FST distances in the Barrier software detected three geographic barriers separating localities into three groups (GM, PM, and SS). These barriers are located in the Sierra Madre Oriental and in the central valleys of Oaxaca, separating GM and PM, and PM and SS, respectively.

Demographic Analysis

The mismatch distribution analysis (sum of squared deviations, SSD) and Harpending's raggedness index (HRI) for the groups PM-SS (SSD = 0.0011, p = 0.705; r = 0.0042, p = 0.809) and PYUC (SSD = 0.0042, p = 0.727; r = 0.0799, p = 0.472) indicated a unimodal distribution; for the GM group, the curve was not strictly unimodal (SSD = 0.0095, p = 0.572; r = 0.0186, p = 0.755) (Figure 3), although it was consistent with recent population growth. In all cases, Fu's F tests were negative and significant, while Tajima's D tests were negative but nonsignificant (Table 2). The extended Bayesian skyline plot analyses indicated that groups PM and GM increased their effective population size from 125,000 years ago, while PYUC remained constant through time (Figure 3).

Microsatellite Data Analysis

A total of 198 alleles across the ten loci were recorded in the 171 individuals. Fifty-five unique alleles were found; the locality Valle de Bravo (VB) showed the highest number of alleles, but at low frequencies (0.031-0.094), while the locality Pe showed the lowest number of alleles, but at higher frequencies (0.031-0.188) (Table 3). (Table 3 caption: Genetic diversity statistics from microsatellites for each locality: sum and average alleles per locus (Na); unique alleles (Np); observed heterozygosity (HO); expected heterozygosity (HE). Locality abbreviations as in Figure 1.)
Null alleles occurred in most loci in individuals from at least two localities; however, as FST and genetic distance values (with and without the ENA correction) did not differ significantly, no loci were excluded. Eight loci (Nme1-Nme4, Nme6, Nme8-Nme10) deviated from Hardy-Weinberg equilibrium in three or more localities. No linkage disequilibrium between loci was detected.

Population Structure and Genetic Diversity

The STRUCTURE analysis identified four genetic groups (K = 4), which differed in composition from those obtained with mtDNA: (1) the locality Pe (Baja California Peninsula); (2) localities of the mitochondrial groups GM and PM; (3) the localities VB and SS; and (4) localities of the mitochondrial groups GM, PM, and PYUC, including Co, TG, Bo, and Cva (Table 4). HO values ranged from 0.404 to 0.681 and HE from 0.585 to 0.787.

Contemporary Migration Rates

BAYESASS runs yielded low levels of contemporary gene flow among groups (Table 5). The highest migration rates were observed from Group 1 (site Pe: Baja California Peninsula) to Group 4 (sites Co, TG, Bo, and Cva: Pacific and Gulf of Mexico). The migration rates from the other groups were very low (<5%) and not significantly different from zero. All estimates of migration (m) between groups had 95% confidence intervals that approached zero, indicating little to no recent migration between genetic groups (Table 5).

Table 5. Estimates of contemporary migration rates (95% confidence intervals) based on microsatellite data among groups of Natalus mexicanus.

Discussion

Our results show that contemporary levels of genetic diversity in N. mexicanus are moderate and that gene flow values between groups are either low or nil, in parallel with high values of population genetic differentiation. These data suggest a reduction in effective population size, with isolated populations. A study based on microsatellites showed that, similar to N. mexicanus, the papillose woolly bat (Kerivoula papillosa), which currently thrives in a fragmented landscape, showed parallel reductions in population density and genetic diversity [16]. Small-sized bats with relatively low mobility, like N. mexicanus and Kerivoula papillosa, may be more severely affected by landscape alterations regardless of their wide geographic distribution [80].

Our findings also show a differential genetic structure for the mitochondrial control region and nuclear microsatellites, suggesting female philopatry (e.g., [81]). Mating behavior and philopatry affect population structure in bats [82,83]; moreover, bat species with limited long-distance flight capacity show greater population structuring relative to species with greater mobility [15]. Although poorly documented, N. mexicanus may display sexual segregation, with females remaining in the cave during the gestation and lactation stages, while most males leave the cave at this time [38,84]. They do not seem to show massive migrations but rather migrate locally in search of the most favorable daytime shelters [85].
The patterns detected using mtDNA and microsatellites showed no genetic differentiation between Natalus mexicanus populations living in northern and southern Mexico [48], nor a split into two reciprocally monophyletic, deeply divergent groups as proposed by [86,87]; rather, this finding is consistent with the observations we previously reported [49].

The mitochondrial genealogical analyses identified four lineages (GM, PM, PYUC, and SS), consistent with the geographic structure based on the cytochrome b gene [49]. These lineages reflect historical processes and probably evolved due to the effect of barriers restraining dispersal during the Pleistocene, including mountain ranges, depressions, and lowlands in the Isthmus of Tehuantepec, which were partially revealed by the Barrier software (although it failed to detect the PYUC group). The intraspecific divergence between groups GM and PM was fostered by mountain ranges such as the Sierra Madre Oriental and the Sierra Madre Occidental, followed by a subsequent expansion, as evidenced by the mismatch analyses; however, moderate gene flow between the two groups was recorded. The locality TG (Chiapas, group PM) showed haplotypes shared with group GM (H73, H75, and H77), probably explained by incomplete lineage sorting or retention of ancestral polymorphisms, similar to the pattern shown by other bat species [88-90]. The genetic diversity statistics and demographic tests indicated population expansion in groups PM and GM, i.e., these groups experienced an increasing effective population size from 125,000 years ago, similar to reports for other mammal species [91,92], despite the significant climate changes recorded in this period [93,94].

The SS group is located in the central valleys of Oaxaca, a region with lower altitudinal ranges but surrounded by mountains with altitudes above two thousand meters [95], which would explain the isolation of this population from the group PM; however, high levels of PM-to-SS historical gene flow were observed. In addition, the highest levels of historical gene flow were recorded from PYUC to SS, which could reflect a historical dispersal route through the Isthmus of Tehuantepec, as has been documented in birds [96,97]. The separation between GM/PM and PYUC probably results from the influence of the Isthmus of Tehuantepec, which has functioned as a geographic barrier for flying organisms such as bats [50,51,98] and birds [99,100]; this hypothesis was supported by the moderate gene flow values obtained here. For the group PYUC, signatures of demographic stability over time were observed, a finding also supported by paleontological information; these observations suggest that the general climate of the region did not change drastically from the end of the Pleistocene to the present [101,102].

Our analyses based on microsatellite data revealed a pattern inconsistent with the distribution of groups based on mtDNA. Discrepancies in population structure derived from markers with different inheritance patterns have been observed in several organisms, including bats [103-105], birds [106,107], reptiles [108], and amphibians [109,110], among others. This work detected a marked contemporary genetic structure with four genetic groups, none of which are consistent with the groups observed based on mtDNA (Figures 2 and 5).
Group 1 consists of Pe (located on the Baja California Peninsula), which, unlike the result based on mtDNA, reflects a vicariant process that separated it from the continental genetic signature. Similar results have been observed in different vertebrate species [111,112]. Microsatellite group 2 includes localities of mitochondrial groups GM and PM, while group 4 clusters localities of mitochondrial groups GM, PM, and PYUC. In these localities, geographic barriers (e.g., Sierra Madre Oriental, Sierra Madre Occidental, Isthmus of Tehuantepec) do not seem to hamper connections between the Gulf of Mexico and Pacific slopes, or with the Yucatan Peninsula; nonetheless, gene flow is low (Table 5). Natalus mexicanus thrives in the interior of forests, although it can also prosper in remnants of tropical forest by using resources in the coastal corridors that stretch across landscapes [7,40]. Thus, we can assume that this species may be migrating locally through the Balsas Depression or the lowlands of the Isthmus of Tehuantepec, both suggested as biological corridors for other bat species [50,113-115].

Group 3 includes the localities VB and SS, located in the Trans-Mexican Volcanic Belt and the Sierra Madre del Sur physiographic provinces, respectively. It is surprising that these two localities cluster in the same group despite being more than 400 km apart; both are located in conifer and oak forests within two valleys, one in the State of Mexico and the other in Oaxaca. Isolated populations located within valleys have also been recorded for other mammal species [116,117]. In both localities, mitochondrial and nuclear genetic diversity is relatively low, likely due to genetic drift and inbreeding [15] related to isolation.

The genetic structure based on microsatellites appears to match the vegetation types according to INEGI [118] (Table S1), as reported for birds [119,120]. Accordingly, the group 1 locality on the Baja California Peninsula is characterized by sarcocaulescent shrubland; the localities of groups 2 and 4 have secondary shrub vegetation of low deciduous forest, and group 3 has pine and oak forests. These results suggest that the genetic differentiation of N. mexicanus in Mexico could be related to the great diversity of habitats where it thrives, in agreement with a previous report [48]. In this context, the high deforestation rate in the habitats of this species is cause for concern; the low deciduous forest alone, the main habitat of the species, loses 650,000 ha annually [33]. Only approximately 27% of the original cover of seasonally dry forest in Mexico remains intact; if current deforestation trends continue, the remaining forest will be heavily reduced and degraded in the near future [32].

Although the genetic diversity in areas inhabited by the funnel-eared bat is currently moderate, contemporary gene flow is virtually zero among most groups and low between group 1 (Baja California) and group 4 (GM, PM, and PYUC individuals). This may be a consequence of habitat fragmentation and should be interpreted as a warning signal, given that the loss of genetic variation and gene flow can reduce the ability of individuals to adapt to a changing environment, resulting in inbreeding depression [121], lower reproduction [122,123], and a higher probability of extinction [124,125].
The information obtained for Natalus mexicanus in this study is also alarming because the current status of most of its populations is unknown [40], and during our field work we were able to verify that some populations have either declined or completely disappeared due to human disturbance [126].

Conclusions

This work reports the first population genetics analysis of the Mexican greater funnel-eared bat (Natalus mexicanus) using mitochondrial and nuclear markers, with contrasting genetic structure results between the two types of molecular markers. The analysis advances our understanding of the underlying evolutionary processes, revealing historical isolation events resulting from geographic barriers, although with some degree of gene flow, as well as an almost null contemporary gene flow and local effects of habitat affinity. We propose that this is possibly due to local dispersal by males through biological corridors of great conservation value for the species. As the populations studied are located in the main habitats in which the species currently thrives and show low levels of genetic diversity, our results also support the hypothesis that the increasing fragmentation and exploitation of Mexican tropical forests is affecting current levels of diversity and contemporary gene flow between populations. Tropical forest remnants are used intensively by many insectivorous bats, so our findings also support the thesis that forest remnants have considerable conservation value, probably for many forest species; therefore, their conservation should have high priority to keep isolation levels low and thus maintain or restore the genetic diversity of the many species linked to this particular habitat.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

List of localities, acronyms, haplotypes of mtDNA control region sequences (number of individuals), and GenBank accession numbers of the samples used in this study.
2021-05-05T00:08:10.408Z
2021-03-25T00:00:00.000
{ "year": 2021, "sha1": "58aff3126623ce579595c454ebdca447adb5aa65", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-2818/13/4/140/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "dc16dece055b2c0e148b96919d3c45a7232c2577", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
233401863
pes2o/s2orc
v3-fos-license
Polyvalent Bacterial Lysate Protects Against Pneumonia Independently of Neutrophils, IL-17A or Caspase-1 Activation

Polyvalent bacterial lysates have been in use for decades for the prevention and treatment of respiratory infections, with reported clinical benefits. However, beyond claims of broad immune activation, their mode of action is still a matter of debate. The lysates, formulated with the main bacterial species involved in respiratory infections, are commonly prepared by chemical or mechanical disruption of bacterial cells, which is believed to influence the biological activity of the product. Here, we prepared two polyvalent lysates with the same composition but different methods of bacterial cell disruption and evaluated their biological activity in a comparative fashion. We found that both bacterial lysates induce NF-κB activation in a MyD88-dependent manner, suggesting that they work as TLR agonists. Further, we found that a single intranasal dose of either of the two lysates is sufficient to protect against pneumococcal pneumonia, suggesting that they exert similar biological activity. We have previously shown that protection against pneumococcal pneumonia can also be induced by a prior sublethal S. pneumoniae infection or by therapeutic treatment with a TLR5 agonist. Protection in those cases depends on neutrophil recruitment to the lungs and can be associated with increased local expression of IL-17A. Here, we show that bacterial lysates exert protection against pneumococcal pneumonia independently of neutrophils, IL-17A or Caspase-1/11 activation, suggesting the existence of redundant mechanisms of protection. Trypsin-treated lysates afford protection to the same extent, suggesting that small peptides suffice to exert the protective effect or that the molecules responsible for it are not proteins. Understanding the mechanism of action of bacterial lysates and deciphering their active components will allow redesigning them with more precisely defined formulations and expanding their range of action.

INTRODUCTION

Respiratory tract infections (RTIs) are the most frequent of all human infections; they are associated with high morbidity and mortality and affect mainly children and the elderly (1). Streptococcus pneumoniae is the leading cause, responsible for almost 50% of cases, followed in prevalence by Haemophilus influenzae with an incidence of 20%. Staphylococcus aureus and Klebsiella pneumoniae are also involved, as well as several viral pathogens such as respiratory syncytial virus and influenza (2,3). Although knowledge about the major respiratory pathogens and their interactions with the host has increased significantly in recent years, the prevention and treatment of RTIs remain a major public health challenge. There is a clear need for new strategies and therapeutic alternatives to control the burden of RTIs, particularly in light of emerging pathogens such as SARS-CoV-2. Given the broad range of agents involved, nonspecific immunotherapies that work independently of infectious agents, strains or serotypes seem to be central (4).

During the 1970s, an immunotherapy based on polyvalent bacterial lysates (PBLs) formulated with the main bacterial species implicated in RTIs emerged as a new strategy. These strains are grown independently, harvested, and then lysed and mixed to obtain the PBLs used in the clinic.
The cell lysis is a key stage in obtaining bacterial lysates that work effectively, and different methods have been studied, the most widely used being alkaline and mechanical lysis (5-8). PBLs can be administered orally, nasally or sublingually and are capable of activating mucosa-associated lymphoid tissues, both locally and distally (7,9). The epithelial cells, macrophages and dendritic cells underlying these tissues interact directly with the bacterial lysates, mainly through TLRs, activating the NF-κB and/or MAPK pathways (10-13). This activation results in the production of pro-inflammatory intermediates such as cytokines and chemokines, which then recruit effector cells and generate an acute inflammatory response in the lung (10,13,14).

Several clinical trials have tested bacterial lysates in both adult and child populations, with encouraging results. They were effective in preventing recurrent infections in children (15-17) as well as in adults, with a reduction in the incidence and duration of disease (18,19). Also, in patients with chronic obstructive pulmonary disease (COPD), PBL treatment has been shown to reduce the duration and severity of exacerbations (20-22). However, their use remains a matter of debate, as the robustness of the clinical trials and their experimental design has been questioned (7). The focus has long been on understanding the mechanism of action in detail and on finding strategies for a better characterization of the individual components. Although significant progress has been made in recent years, the specific immune mechanisms underlying a common mode of action for such a complex mixture of antigens still need to be elucidated. This knowledge will help in the implementation of bacterial lysates as immunotherapy for the prevention and treatment of RTIs.

In this work, we produced polyvalent bacterial lysates by two methods, alkaline and mechanical, and compared them. We found that a single dose of either bacterial lysate suffices to induce protection against a lethal pneumococcal challenge, suggesting that both have similar biological activity. We then analyzed, for one of them, whether this protection relies on the same immune-mediated mechanisms shown to be essential for the protection against pneumococcal pneumonia exerted by other treatments. We found that intranasal administration of PBL rapidly recruits neutrophils to the lungs, but these are not necessary for protection. Treatment and challenge experiments in Il17a- or Caspase-1-deficient mice also showed that protection could be fully achieved in these animals. Finally, we showed that trypsin-treated lysates afford protection to the same extent, suggesting that small peptides suffice to exert the protective effect or that the molecules responsible for it are not proteins.

MATERIALS AND METHODS

Growth and Culture of Bacterial Strains

Staphylococcus aureus (ATCC 25923), Klebsiella pneumoniae (ATCC 10031) and Haemophilus influenzae (ATCC 19418) were grown using a vegetable medium containing vegetal soy peptone (40 g/L), NaCl (2 g/L), Na2HPO4 (2 g/L), sodium (0.5 g/L) and glucose (6 g/L). Hemin and NADH (25 g/L of each) were added for the H. influenzae culture. Streptococcus pneumoniae (ATCC BAA-334) was grown using a synthetic medium as described (23). The strains were grown individually at 37°C, in an atmosphere with 5% CO2 for H. influenzae and S. pneumoniae.
Overnight cultures were diluted 1/100 and incubated for 4 h or until reaching an OD at 600 nm of 0.8-1.0. The biomass was harvested by centrifugation, washed and finally resuspended in saline solution. For the in vivo infection model with S. pneumoniae, a serotype 1 clinical isolate (E1586, Sp1) obtained from the National Reference Laboratory, Ministry of Health, Montevideo, Uruguay, was used. Working stocks of Sp1 were prepared in Todd Hewitt broth (Sigma) supplemented with 0.5% yeast extract (THYB) and stored at −80°C in THYB plus 12% (vol/vol) glycerol for no longer than six months, as previously described (24-27). The consistency of batch production of working stocks was assessed by checking that the minimal lethal dose (MLD) was maintained and that protection against an MLD challenge was obtained by treatment with flagellin and/or a sublethal infection, as before (24,25).

Bacterial Lysates

Alkaline and mechanical lysates of each bacterial strain were prepared as follows. For alkaline lysis, bacterial cultures were first inactivated (100°C for 10 minutes). Then, several NaOH concentrations were assessed (at controlled temperature and pH) in a kinetic analysis in which aliquots were taken at different time points and their protein profiles analyzed by SDS-PAGE. Once the optimal conditions (NaOH concentration and lysis time) were selected for each strain, the alkaline lysis was performed and neutralized with HCl. After centrifugation, the supernatant was washed and filtered on a QuixStand benchtop system (GE Healthcare) with a 30 kDa membrane. For mechanical cell disruption, an EmulsiFlex-C3 disruptor (Avestin, Inc.) was used. The homogenization pressure was set between 500 and 30,000 psi. The protein profiles of the fractions obtained during the passages were analyzed by SDS-PAGE. Once the optimal number of passages was selected, the bacterial biomass was passed through; the lysate was then collected and centrifuged, and the supernatant preserved. Proteins were quantified by the BCA method. Protein profiles were determined by SDS-PAGE on 12% gels. Carbohydrates were quantified by the phenol-sulfuric acid method, using a glucose standard curve, as previously reported (28).

The polyvalent alkaline bacterial lysate (PABL) and the polyvalent mechanical bacterial lysate (PMBL) were formulated by mixing equal amounts (250 mg of total protein) of each monovalent bacterial lysate obtained by alkaline or mechanical lysis. For in vivo and in vitro assays, the same total protein amount of PABL or PMBL was used. For trypsinization of the bacterial lysates, two volumes of PABL or PMBL were mixed with one volume of 0.017% trypsin-EDTA and incubated at 37°C with shaking for 1 h. The mixture was then incubated at 70°C for 30 minutes. Complete trypsinization was confirmed by SDS-PAGE, observing the absence of intact proteins.

In Vitro Stimulation of Alveolar Epithelial Cells

A549 cells (ATCC CCL-185) were grown in Ham's F-12 medium (Capricorn) supplemented with 5% v/v FBS. Cells were plated in 24-well plates (2x10^5 cells/well) and, 24 h later, stimulated with 10 µg/ml of PABL or PMBL for periods ranging from 1 to 48 h. At selected time points, the medium was removed, and cells were washed with phosphate-buffered saline (PBS) and lysed in TRIzol (Invitrogen) for total RNA extraction as described below.
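Both quantification steps above, the BCA assay for protein and the phenol-sulfuric acid assay for carbohydrate, reduce to interpolating sample readings on a linear standard curve (BSA or glucose standards, respectively). The sketch below shows that arithmetic in Python; it is not from the original study, all concentrations and absorbances are hypothetical, and real readings must fall within the linear range of the assay:

```python
import numpy as np

def quantify(std_conc, std_abs, sample_abs):
    """Interpolate sample concentrations from a linear standard curve.

    std_conc: standard concentrations (e.g., BSA in ug/ml for BCA, or
    glucose for phenol-sulfuric acid); std_abs: their blank-corrected
    absorbances (A562 for BCA, A490 for phenol-sulfuric acid).
    """
    slope, intercept = np.polyfit(std_conc, std_abs, 1)  # least-squares fit
    return [(a - intercept) / slope for a in sample_abs]

# Hypothetical BSA standards and two lysate dilutions
bsa = [0, 125, 250, 500, 1000]            # ug/ml
a562 = [0.00, 0.11, 0.21, 0.43, 0.85]     # blank-corrected absorbance
print(quantify(bsa, a562, [0.32, 0.58]))  # ug/ml; multiply by the dilution factor
```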
Animal Studies

Female mice of the C57BL/6, Il17a−/−, Casp1−/− and Casp1+/+ strains (6 to 8 weeks old), supplied by the National Division of Veterinary Laboratories or the Transgenic and Experimental Animals Unit of the Pasteur Institute of Montevideo (Uruguay), were used for the experiments. Animals were maintained in individually ventilated cages and handled in a vertical laminar flow cabinet (Class II A2; Esco, Hatboro, PA). All experiments complied with current national and institutional regulations and ethical guidelines (CHEA, Uruguay). Mice were anesthetized by intraperitoneal (i.p.) injection of 2.2 mg ketamine plus 0.11 mg xylazine in a total volume of 200 µl. PABL or PMBL was diluted in sterile saline solution and administered into the nostrils at the desired dose (6 or 18 µg) in a final volume of 30 µl. For mouse infection protocols, working stocks of Sp1 were thawed, washed with sterile saline solution and diluted to the appropriate dose. One MLD of Sp1 was administered intranasally in a final volume of 50 µl to previously anesthetized mice, and in each experiment the bacterial inoculum was confirmed after infection by plating serial dilutions onto blood agar plates. Mouse survival was followed for at least 2 weeks after the Sp1 challenge.

For the depletion of granulocytes, 100 µg of anti-Gr-1 (RB6-8C5) or an isotype control (HB152) was administered i.p. 12 h before the i.n. administration of PMBL. The Sp1 challenge was performed 24 h after PMBL administration. The anti-Gr-1 injection completely depletes PMNs in peripheral blood, as assessed by flow cytometry and reported before (25). We confirmed that the administration of RB6-8C5 induced a 99% reduction in PMNs in peripheral blood 24 h after PMBL treatment, compared with mice receiving the isotype control antibody (Supplemental Figure 1).

Flow Cytometry

For bronchoalveolar lavage (BAL) sampling, the trachea was cannulated, and 1 ml of PBS plus 1 mM EDTA was instilled six times and recovered by gentle aspiration. BAL cells were washed in PBS containing 2 mM EDTA and 1% fetal bovine serum (FACS-EDTA). For immunophenotyping, 1x10^6 cells were seeded per tube, and Fc receptors were blocked by incubation with 1 µl of Fc Block (BD) for 20 minutes at 4°C. Then, cells were labeled with specific monoclonal antibodies (BD) (Ly6G-PE, CD11b-APC-Cy7) for 30 minutes at 4°C, and finally washed, fixed with 4% formaldehyde and stored at 4°C in the dark. Cells were acquired on a FACSCanto II cytometer using BD FACSDiva software (BD) for acquisition and analysis.

qRT-PCR

Total RNA was extracted from cultured cells or lung portions preserved in TRIzol (Invitrogen), following the manufacturer's instructions. Lungs were homogenized with a TissueRuptor (Qiagen). Nucleic acids were quantified on a NanoDrop (Thermo Fisher Scientific). One µg of total RNA was treated with DNase I (Invitrogen), and first-strand cDNA synthesis was carried out using random primers (Invitrogen) and Moloney murine leukemia virus (M-MLV) reverse transcriptase (Invitrogen) in a Corbett CG1-96 thermocycler (10 min at 25°C, 50 min at 37°C, 15 min at 70°C). qPCR was performed using specific primers at a final concentration of 1 µM and a QuantiTect SYBR Green PCR kit (Qiagen) in an Applied Biosystems 7900HT Fast Real-Time PCR system (15 min at 95°C, followed by 40 cycles of 95°C for 15 sec and 60°C for 1 min).
Differences in gene expression levels were obtained using the comparative Ct (2^-ΔΔCt) method for relative mRNA quantitation, with β-actin or β2-microglobulin as housekeeping genes. To ensure the reliability of the qRT-PCR results, we first evaluated three different housekeeping genes to check that they did not change between experimental conditions and, based on the results, chose the single best one for all further experiments. In the experiments presented here, we tested GAPDH, β-actin and β2-microglobulin for the A549 human cell line, and GAPDH, 18S and β-actin for the experiments in mice, and then selected β2-microglobulin for the former and β-actin for the latter. Further, for every experiment conducted, results were first validated by double-checking that the expression of the selected housekeeping gene showed no variation between samples, i.e., Ct values for the housekeeping gene had to show less than one cycle of variation among samples.

Statistical Analysis

GraphPad Prism 7 software (GraphPad Software, San Diego, CA) was used. One-way ANOVA was used to compare means between groups, and two-way ANOVA with Bonferroni correction was used to compare means between groups at different times. A log-rank (Mantel-Cox) test was performed for the analysis of Kaplan-Meier survival curves.

RESULTS

Preparation and Characterization of Polyvalent Bacterial Lysates

It has been argued that the means by which the lysates are prepared (mechanical vs. chemical disruption of bacterial cells) directly influences the immunostimulant activity of the product (5). Thus, for a side-by-side comparison, we prepared two different PBLs with the same bacterial composition, using either mechanical disruption (PMBL) or alkaline treatment (PABL). Higher amounts of proteins and carbohydrates were obtained in the mechanical version of the lysates, in the monovalent as well as in the polyvalent preparations (Table 1). In addition, SDS-PAGE analysis of the individual bacterial lysates showed that those prepared by mechanical disruption displayed a larger number of better-defined protein bands, whereas the alkaline lysates showed a smear (Supplemental Figure 2). In line with this, we have recently shown elsewhere that MALDI-TOF spectra revealed more abundant proteins, with greater intensity, in PMBL than in PABL (29). For use in the comparative analyses, PABL was further concentrated so as to have the same amount of total protein as PMBL.

Polyvalent Bacterial Lysates Activate NF-κB in a MyD88-Dependent Manner

In vitro stimulation of the THP-1 XBlue reporter cell line with either PMBL or PABL induced a similar increase in NF-κB/AP-1 activity (Figure 1A). In contrast, neither lysate induced production of the reporter protein when used to stimulate the MyD88-deficient THP-1 cell line, confirming that the activation of NF-κB and/or AP-1 induced by the polyvalent bacterial lysates is MyD88-dependent (Figure 1B). A similar result was obtained when a murine reporter cell line (RAW-Blue) was used (Figure 1A, inset).

Mechanical Lysate Induces Stronger Pro-Inflammatory Activity in Human Alveolar Cells

We have previously shown that the human alveolar epithelial cell line A549 stimulated with an alkaline PBL engages in a pro-inflammatory gene expression program within 4 h of stimulation (26). Thus, we stimulated A549 cells side by side with either polyvalent bacterial lysate and followed the magnitude and kinetics of that transcriptional profile for up to 48 hours.
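The relative mRNA levels reported in the following sections were derived with the comparative Ct method described in the qRT-PCR section above. As a concrete illustration, a minimal Python sketch of the 2^-ΔΔCt calculation follows; the Ct values are hypothetical and the method assumes roughly 100% amplification efficiency for both the target and the housekeeping assays:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the comparative Ct (2^-ddCt) method.

    ct_target / ct_ref: Cts of the gene of interest and the housekeeping
    gene (beta2-microglobulin for A549 cells, beta-actin for mouse lungs
    in the experiments above) in the treated sample; *_ctrl: the same
    Cts in the untreated control.
    """
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Hypothetical Cts for a chemokine in lysate-stimulated vs. unstimulated cells
print(relative_expression(22.1, 17.0, 26.3, 17.2))  # -> ~16-fold induction
```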
We found that both lysates rapidly and transiently upregulated the expression of several pro-inflammatory chemokines (Ccl20, Cxcl8, Cxcl1) and cytokines (Il6, Tnfa) as well as antimicrobial peptides (Lcn2, S100a9). However, in most cases PMBL induced significantly higher mRNA levels than PABL (Figure 2). Upregulation of chemokines and cytokines peaked 1 or 3 h after stimulation and returned to basal levels by 6 h. In contrast, lipocalin 2 (Lcn2) and S100a9 showed different kinetics: whereas the expression level of Lcn2 was already increased by 3 h and kept rising up to 48 h, differential expression of S100a9 was only apparent after 24 h of stimulation (Figure 2). DEFB1 (human beta-defensin 1) and CAMP (cathelicidin antimicrobial peptide) were also assessed, but no significant increases in mRNA levels were detected at any time point (Supplemental Figure 3).

Both Polyvalent Bacterial Lysates Induce Similar Pro-Inflammatory Activity in Lungs

Mice received either polyvalent lysate intranasally, and 24 h later they were sacrificed to evaluate the inflammatory response induced in vivo. Both lysates induced a significant influx of neutrophils into the lungs, with an approximately 20-fold increase compared with the saline-treated group (Figure 3). No difference in PMN recruitment was observed between the PABL- and PMBL-treated groups. Further, a rapid inflammatory gene expression program was induced in both treated groups (Figure 4). Expression of pro-inflammatory mediators was upregulated by 4 h after treatment; whereas for some of them expression levels decreased at 24 h (Il6, Cxcl1, Ccl20, S100a9), others maintained high expression levels (Il10, Tnfa, Il17a, Ifng, Lcn2). Both polyvalent lysates generated a similar pro-inflammatory response, albeit with differences at particular time points.

Polyvalent Bacterial Lysates Protect Against Pneumococcal Pneumonia

Mice were treated with one dose of either PABL or PMBL, and 24 h later they were challenged with a lethal dose of S. pneumoniae serotype 1 (Sp1). Treatment with either lysate conferred protection against lethal pneumococcal pneumonia, with survival rates of 80% and 100% for PMBL and PABL, respectively (Figure 5A). We then evaluated whether PMBL treatment afforded protection when the challenge was administered later. Lysate administered either 48 or 72 h before the pneumococcal challenge still afforded significant protection (Figure 5B).

Protection Does Not Depend on Neutrophil Recruitment to the Lungs

We have previously reported that intranasal administration of a sublethal dose of pneumococcus, or intranasal stimulation with flagellin, both induce high levels of resistance to lethal S. pneumoniae (Sp1) challenge (24,25). Protection depended on neutrophils in both cases. Thus, we tested whether neutrophils are also required for PMBL-mediated protection. To this end, 12 h before PMBL treatment, one group of mice received anti-Gr-1 antibody, which depletes up to 99% of the neutrophil population, and another group received an isotype control antibody (Supplemental Figure 1). Both groups of animals receiving PMBL survived a subsequent Sp1 challenge, whether or not neutrophils were depleted (Figure 6A). Thus, the protection from lethal pneumonia induced by PMBL does not depend on neutrophils.

Protection Requires Neither IL-17A nor Caspase-1 Activation

IL-17A has been reported as a relevant cytokine for protection against S. pneumoniae (30).
We have previously shown that IL-17A expression was strongly upregulated in sublethally infected animals that were protected against S. pneumoniae challenge (24,26), and here we found that Il17a is also upregulated after bacterial lysate treatment (Figure 4). Thus, we evaluated whether it is required for PMBL-mediated protection. Il17a−/− and WT mice were treated with PMBL and then challenged with Sp1. Again, both groups showed the same survival rates, suggesting that IL-17A is not required for the resistance against pneumonia induced by PMBL (Figure 6C). To further investigate the mechanisms associated with PMBL-mediated protection against lethal pneumococcal pneumonia, we also evaluated the role of inflammasome activation. Casp1−/− or syngeneic WT mice were treated with PMBL and then challenged with Sp1. Both groups showed the same survival rates (Figure 6B).

Whole Protein Fraction Is Not Necessary for Induction of Protection by Polyvalent Lysates

To evaluate the contribution of different bacterial components to the immunostimulant activity of PBLs, we evaluated the biological activity of a trypsin-hydrolyzed PMBL (trypsinized PMBL). First, we assessed NF-κB/AP-1 activation after stimulation of RAW-Blue reporter cells. Both intact and hydrolyzed PMBL elicited a similar response (Figure 7A). Then, we compared the ability of intact and hydrolyzed PMBL to induce protection. Treatment with trypsinized or intact PMBL conferred the same level of protection against lethal pneumococcal pneumonia (Figure 7B), confirming that the protection induced by the lysates does not require whole proteins in the formulation.

DISCUSSION

In this work, we investigated the immunostimulant activity of PBLs in relation to the method of preparation and, from a mechanistic point of view, their capacity to protect against pneumococcal pneumonia. It has been postulated that the method used for bacterial cell disruption influences the biological activity of the lysate (6,31). However, we found that two lysates prepared with the same bacterial composition but different methods of cell disruption induced MyD88-dependent NF-κB activation to a similar extent, and both protected against a lethal challenge with S. pneumoniae. It has been postulated that lysate-induced resistance is at least partially dependent upon TLR signaling, with the activity lost in the absence of MyD88 (32), which would be in line with both preparations showing the same protective activity. The only differences observed were that the mechanical lysate induced higher levels of pro-inflammatory mediators in an alveolar cell line, and that the alkaline lysate required a further concentration step during preparation to reach the same protein concentration as the mechanical one. It has been suggested that PBLs prepared by mechanical disruption of the bacterial cell are more immunogenic than those prepared by chemical lysis (31,33). Still, here we showed that when both preparations have a similar composition, they exert similar biological effects. However, it is noteworthy that the carbohydrate-to-protein ratio was higher in PABL than in PMBL (Table 1), so when PABL was concentrated to equal protein concentration, its carbohydrate content would be higher, and this could underlie its improved efficacy. Altogether, these results point out the importance of better characterization of the products.
Improved methods for the characterization of bacterial lysate components would result in more reliable and comparable products. Most descriptions of commercial lysates, whether in the literature or in patent filings, just quantify proteins and carbohydrates (5,34-36). We have recently proposed that the use of MALDI-TOF for characterization and for the identification of biomarkers of biological activity, helping to evaluate the consistency and reproducibility of batch production, would represent an advance in this sense (29).

Vaccine-induced protection against pneumonia relies on neutrophils that kill bacteria opsonized by anti-capsular antibodies (37-39). Using two other models of protection against lethal pneumococcal challenge, we previously showed that protection is also dependent on the recruitment of neutrophils to the lungs, even in the absence of antibodies (24,25). Here, we found that bacterial lysates upregulate the expression of the neutrophil-recruiting chemokines CXCL1 and IL-8 in the human alveolar epithelial cell line A549, as well as that of the mouse functional homologue Cxcl1 in vivo, resulting in the rapid recruitment of neutrophils into the lungs. However, neutrophils were not required for PBL-induced protection against S. pneumoniae challenge, suggesting redundancy among protective mechanisms, since other effectors are certainly involved.

Lung epithelial cells have been proposed as key actors in antimicrobial defense through the secretion of antimicrobial peptides. Using an aerosolized lysate of nontypeable Haemophilus influenzae (NTHi), protection against S. pneumoniae has been reported that correlated in magnitude and time with rapid bacterial killing associated with the overexpression of multiple antimicrobial polypeptides in the lung (40,41). We found that PBLs induced the rapid expression of Lcn2 and S100a9 in the lungs, making it tempting to speculate that they are involved in the lysate-induced protection. However, contradictory results have been reported regarding the involvement of antimicrobial peptides in lung protection. Whereas it has been shown that Lcn2 protects against Klebsiella pneumoniae lung infection in mice (42), and a protective role for Lcn2 in allergic airway disease has been demonstrated (43), another report presented results showing that pulmonary Lcn2 impaired bacterial clearance and survival in pneumococcal pneumonia (44). Further, it has recently been reported that bronchial epithelial cells can display innate immune memory and that initial exposure to PAMPs can modify their subsequent response to infection, a phenomenon that relies on epigenetic regulation (45). Thus, it is tempting to speculate that this phenomenon might underlie the effectiveness of the lysates.

It has been postulated that the high frequency of invasive disease associated with serotype 1 pneumococcus results from its evasion of inflammasome sensing (46). Others have shown that mucosal delivery of cholera toxin subunit B (CTB) reduces the pneumococcal load in the nasopharynx in a caspase-1/11-dependent manner (47). In contrast, we found that the absence of Caspase-1 did not abrogate the protection against S. pneumoniae, suggesting that inflammasome sensing is not essential, and reinforcing the concept of redundancy in the immune protection of the airways. We found that the trypsinized lysates still activate NF-κB and/or AP-1 and also protect against pneumococcal challenge.
Thus, it could be that ligands such as peptidoglycans, lipopeptides, lipoteichoic acid, or even fragments of bacterial DNA or RNA are responsible for the immune activation and protective effect of PBLs, at least against pneumococcal pneumonia. In this regard, it has been shown that aerosolized delivery of a combination of synthetic ligands for TLR2/6 and TLR9 can induce protection against both gram-negative and gram-positive pathogens, as well as against influenza virus (32,48). Beta-glucan of bacterial origin can also protect against systemic infection with S. pneumoniae (49). Alternatively, small peptides resulting from trypsin digestion could still be responsible for the PBL-induced protection. We previously showed that intranasal administration of the TLR5 agonist flagellin induces strong protection against pneumococcal mucosal challenge, but in that case the effect was lost with trypsin-treated flagellin (25).

All in all, our results show that PBLs exert protection against S. pneumoniae through mechanisms distinct from conjugate vaccine-induced protection, suggesting that these immunotherapies can be a valuable, cost-effective complement for controlling the burden of pneumococcal diseases. Further characterization of the active components and of their interaction with the pulmonary microbiota will pave the way for the design of new products that might be a cost-effective tool for the prevention and treatment of RTIs.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The animal study was reviewed and approved by CHEA, Comisión Honoraria de Experimentación Animal.
2021-04-27T13:13:06.126Z
2021-04-26T00:00:00.000
{ "year": 2021, "sha1": "a790b39fa9b84bb53d9c1ba7b719bc9c5ce5e087", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.562244/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "a790b39fa9b84bb53d9c1ba7b719bc9c5ce5e087", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }