Chimeric antigen receptor macrophage therapy for breast tumours mediated by targeting the tumour extracellular matrix
Background The extracellular matrix (ECM) is essential for malignant tumour progression, as it forms a physical barrier to various kinds of anticancer therapy. Matrix metalloproteinases (MMPs) can degrade almost all ECM components, and macrophages are an important source of MMPs. Studies using macrophages to treat tumours have shown that macrophages can enter tumour tissue and play a regulatory role. Methods We modified macrophages with a designed chimeric antigen receptor (CAR), which could be activated after recognition of the tumour antigen HER2 to trigger the internal signalling of CD147 and increase the expression of MMPs. Results CAR-147 macrophage treatment did not affect tumour cell growth in vitro compared with control treatment. However, we found that the infusion of CAR-147 macrophages significantly inhibited HER2-4T1 tumour growth in BALB/c mice. Further investigation showed that CAR-147 macrophages could reduce tumour collagen deposition and promote T-cell infiltration into tumours, consistent with expectations. Interestingly, the levels of the inflammatory cytokines TNF-α and IL-6, which are key factors in cytokine release syndrome, were significantly decreased in the peripheral blood of CAR-147 macrophage-transfused mice. Conclusion Our data suggest that targeting the ECM with engineered macrophages would be an effective treatment strategy for solid tumours.
BACKGROUND
Cancer immunotherapy aims to promote or modify immune cells (especially T cells) to attack cancer cells while leaving normal cells intact. The innate and adaptive immune systems play vital roles in the immune surveillance, identification and destruction of cancer cells. 1,2 Adoptive cellular therapy was ranked first among the top ten scientific and technological advances of 2013. Recently, adoptive cellular therapy based on DCs, T cells, NK cells and other cell types has achieved good anti-tumour effects. Among these approaches, chimeric antigen receptor (CAR)-modified T cells (CAR-T cells) have developed very rapidly in recent years; the concept was first proposed in 1989. 3 In recent years, the success of CAR-T cell immunotherapy targeting the B-cell lineage differentiation antigen CD19 in B-cell malignancies has provided new opportunities for the treatment of cancer. Although the therapeutic effect of CAR-T cells on haematological malignancies is impressive, the results of treating solid tumours with CAR-T cells have been less than ideal. 4 Reaching the tumour site is a prerequisite for the anti-tumour effect of CAR-T cells. When T cells extravasate from blood vessels, they need to pass through dense tumour tissue to reach the target cell location. Physical barriers formed by the stroma characterise many types of cancer, and the resulting high tissue pressure further prevents the extravasation of T cells. The tumour-associated extracellular matrix (ECM) and fibroblasts have immunomodulatory effects. 5,6 Compared with wild-type mice, mice deficient in the ECM protein tenascin show higher effector immune cell infiltration into tumours. 7 Tumour tissue contains abundant and specialised ECM components such as collagen and proteoglycans, and its dense tissue morphology forms a physical barrier that limits the free migration of T cells. 8 Some studies have shown that a high blood vessel density is associated with high T-cell and B-cell abundances in tissue sections from patients with solid tumours. 9 Some successes have been achieved in animal models by utilising fibroblast activation protein (FAP) CAR-T cells to reduce the number of tumour fibroblasts and counteract these physical barriers. 10 Heparanase, an enzyme that degrades the matrix, can promote CAR-T cell infiltration and anti-tumour efficacy in solid tumours. 11 The ECM is generated by the highly organised interactions of fibre molecules, proteoglycans, glycoproteins, glycosaminoglycans and other macromolecules, comprising approximately 300 different proteins. 12 Its synthesis and degradation are mainly regulated by matrix metalloproteinases (MMPs) and tissue inhibitors of metalloproteinases (TIMPs). MMPs are a family of calcium- and zinc-dependent proteolytic enzymes that currently includes at least 26 subtypes, which together degrade almost all ECM and basement membrane components. 13 TIMPs are an important family of enzymes that regulate MMP activity, inhibiting MMPs and thereby reducing degradation of the ECM. Four members of the TIMP family have been identified: TIMP-1, TIMP-2, TIMP-3, and TIMP-4. TIMPs form TIMP−MMP complexes with MMPs at a ratio of 1:1, thereby blocking the binding of MMPs to their substrates and inhibiting MMP activity. 14 The overall proteolytic activity is determined by the MMP−TIMP ratio, which in turn affects the deposition and degradation of the ECM. 15 Macrophages are an important source of MMPs. 16
Kupffer cells (KCs) can express a variety of MMPs, such as MMP-9, MMP-12 and MMP-13, to degrade the matrix, which is beneficial in the repair of liver damage and liver fibrosis. 17,18 Some studies have shown that the infusion of bone marrow-derived macrophages in mice can significantly alleviate liver fibrosis and improve liver function. 19 In a mouse model of acute liver injury induced by acetaminophen, transplantation of KCs can also protect liver cells and reduce liver damage. 20 Clinically, pulmonary macrophage transplantation is an effective cellular therapy in children with pulmonary alveolar proteinosis. 21-23 These clinical data also suggest that macrophage transplantation is safe and well tolerated.
Numerous studies have shown that macrophages from various sources can coexist in tumours. Locally self-maintained macrophages form part of the tumour-associated macrophage (TAM) population, but macrophages recruited from the peripheral blood account for the majority of TAMs, 24 suggesting that injected macrophages can infiltrate tumours. Hence, to prevent the tumour matrix from reducing the effect of anticancer drugs, hindering the entry of T cells, and promoting tumour growth, we designed a chimeric antigen receptor targeting HER2 for macrophages, with the hope of activating MMPs to degrade the matrix and broaden the path for T-cell entry into the tumour. HER2 is a well-established therapeutic target in breast cancer. The 4T1 murine breast tumour model, which shows similarities to the human disease, was used to study the effectiveness of CAR-147 macrophages in vivo.
Cells
4T1 cells and Raw264.7 cells were purchased from the Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences. 4T1 cells were cultured in RPMI 1640, and Raw264.7 cells were cultured in DMEM. All media were supplemented with 10% FBS (Gibco), 2 mM L-glutamine, 100 units/ml penicillin and 100 units/ml streptomycin. The cells were kept in a humidified atmosphere of 5% CO2 at 37°C.
Mice
Nine-week-old female BALB/c mice and nude mice were purchased from the Nanjing Biomedical Research Institute of Nanjing University, Nanjing, China, and bred in our animal facilities under specific pathogen-free (SPF) conditions. All animal experiments were approved by the Institutional Animal Care and Use Committee, Nanjing University. The average weight of the mice at the start of the experiment was 20 g. Vendor health reports indicated that the mice were free of known viral, bacterial and parasitic pathogens. Animals were housed in an SPF facility, five mice per cage, with sterile wood shavings as bedding. Animal welfare was assessed daily by the authors. All animals were either treated (where possible) or humanely euthanized at any sign of illness or stress.
Macrophage/tumour cell coculture
Raw264.7 cells were cocultured with 4T1 cells in a cell−cell contact fashion for 24 h or 48 h. After coculture, the 4T1 cells and Raw264.7 cells were harvested, layered on a 40%/70% Percoll gradient (Sigma-Aldrich, St Louis, MO), and centrifuged at 3000 rpm for 30 min with the brake disengaged. The Raw264.7 cells at the interface were collected. All the coculture experiments in this study were performed with cell−cell contact.
Tumour cell invasion assay
4T1 tumour cells (5 × 10^5 cells/ml) were placed in Matrigel-coated invasion chambers (24-well format, 8 μm pore size); after 24 h, the 4T1 tumour cells that had invaded to the other side of the chamber membrane were fixed and stained with crystal violet.
Phagocytosis
Phagocytosis assays were performed using fluorescent red latex beads (1 μm diameter, L-2778, Sigma-Aldrich). Latex beads were pre-warmed for 1 h at 37°C in complete medium (10% FBS in DMEM) before the phagocytosis assays. The pre-warmed beads were added to macrophages (bead-to-cell ratio approximately 10:1) for 4 h at 37°C. Phagocytosis was terminated by the addition of 1 ml of pre-cooled PBS. Macrophages were harvested and analysed by flow cytometry.
Gene expression analysis
TRIzol reagent (Invitrogen) was used to prepare total RNA from macrophages or tissue samples. Total RNA (1.5 μg) was reverse transcribed using a 5× All-In-One RT MasterMix kit (abm, Cat#G486; Code Q111-02). GAPDH was used as the normalisation gene. Q-PCR assays were carried out on a CFX96 real-time PCR detection system (Bio-Rad) using a Q-PCR kit (Vazyme Biotech). The comparative threshold method was used for relative quantification, and the results are expressed as fold changes. The primers were synthesised by Invitrogen.
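The comparative threshold calculation is cited but not spelled out. As a minimal sketch, assuming the standard Livak 2^-ddCt variant with GAPDH as the reference gene (the Ct values below are purely illustrative, not the study's data):

```python
# Hedged sketch of 2^-ddCt relative quantification (Livak method assumed).
# All Ct values are hypothetical placeholders for illustration only.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene, normalised to a reference gene
    (here GAPDH) and expressed as a fold change versus the control group."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise within group
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare groups
    return 2.0 ** (-dd_ct)

# Example: a hypothetical MMP transcript in CAR-147 vs control macrophages
print(fold_change(24.1, 18.0, 27.3, 18.1))  # ~8.6-fold upregulation
```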
Western blotting
Cells were collected and washed twice with PBS, and protein was extracted by whole-cell lysis with a kit purchased from Beyotime (Haimen, Jiangsu, China) containing protease and phosphatase inhibitors. Cellular debris was removed by centrifugation at 4°C, and the protein concentration was determined by a Pierce BCA assay. Proteins were separated on a 10% SDS-PAGE gel and then transferred to a polyvinylidene fluoride membrane (American Biosciences) for immunoblotting. Antibodies against PARP (9532S, Cell Signaling Technology), PCNA (2586, Cell Signaling Technology), Caspase-9 (9508S, Cell Signaling Technology), and β-actin (KC-5A08, Kangchen Biotech) were used.
Flow cytometry analysis
For apoptosis analysis, cells were stained with PE-Annexin V in the presence of 7-AAD using an Annexin V apoptosis detection kit according to the manufacturer's instructions (BD Pharmingen, San Diego, CA).
For cell cycle analysis, cells were collected and fixed with pre-cooled 70% ethanol at 4°C for 2 h. The fixed cells were washed with PBS and stained with PI working solution (50 μg/ml PI and 50 μg/ml RNase A) at room temperature for 30 min in the dark, followed by flow cytometry detection.
Construction of chimeric antigen receptors (plenti-CAR-HER2-CD147)
CAR-HER2-CD147 consists of an anti-hHER2 scFv derived from the A21 mouse hybridoma, 25,26 the hinge region of mouse IghG1 (Gene ID: 16017, aa 98−110), and the transmembrane and intramembrane regions of the mouse CD147 molecule. The amino acid sequence of the anti-hHER2 scFv was reverse translated, codon optimised, and synthesised as a single construct (Genscript, Jiangsu, China). The exact sequence of the CD147 molecule included in CAR-HER2-CD147 corresponds to the GenBank identifier NM_009768.2; it includes all amino acids from the sequence MAALWP to the carboxy terminus of the protein. The signal peptide was derived from interferon gamma receptor 1 (IFNgR1, Gene ID: 15979). To facilitate measurement of transfection efficiency, a myc-tag was inserted in front of the anti-hHER2 scFv. Homologous recombination was used to insert all fragments into the lentiviral vector pLenti6/V5-D-TOPO®.
Tumour model and infusion of macrophages
All animals were assessed to be healthy and free of disease prior to tumour implantation. Mice were anaesthetised with an intraperitoneal injection of 70 mg/kg sodium pentobarbital, and then 200,000 tumour cells in a volume of 20 μl were injected into the mammary fat pads of BALB/c mice or BALB/c nude mice. On days 8 and 15 after tumour cell inoculation, 1 × 10^6 Raw264.7 cells were injected intravenously (i.v.). Tumour bioluminescence was analysed using the In Vivo Imaging System (IVIS Lumina XR, Caliper Life Sciences), for the first time on day 7. Animals were humanely euthanized on day 30, and tumour tissues were harvested and weighed. For studies of CAR-147 effects in BALB/c mice, three treatment groups (PBS, control Raw264.7 cells, CAR-147 Raw264.7 cells) were used, each with five animals; the experiment was repeated and the data were pooled, giving ten mice per group. For studies in BALB/c nude mice, the same three treatment groups were used, each with eight animals. Each experimental group was confined to a separate cage. Cages were assigned to treatments at random at the start of treatment (all treatments were started at the same time), and all groups were assessed at the same time. Tumours were allowed to establish until they were palpable or detectable by bioluminescence prior to treatment. Animals were euthanized by carbon dioxide asphyxiation followed by cervical dislocation to ensure death.
Preparation of tumour single-cell suspensions
Tumours from treated mice were cut into small pieces using scissors, followed by digestion with pre-warmed 0.1% collagenase I containing 75 µg/ml DNase I (1 h, 37°C). The digested tissue samples were filtered through a 40-μm cell strainer. Then, red blood cell lysis buffer was used to remove red blood cells at room temperature for 2 min. The single-cell tumour suspensions were analysed by flow cytometry.
Cytokine assay by ELISA
IL-1β, IL-6, IL-10, IL-12, TNFα and IFNγ concentrations in cell culture supernatants, blood samples and tumour homogenates were assessed using ELISA kits according to the manufacturers' protocols.
In vivo near-infrared fluorescence imaging
To investigate the localisation of infused macrophages in tumour-bearing mice, 10^6 control Raw264.7 cells or 10^6 CAR-147 Raw264.7 cells stained with the near-infrared fluorescent probe DiR (Yeasen Biotech, China) were injected intravenously. Ten mice were used (n = 5 control Raw264.7 group versus n = 5 CAR-147 Raw264.7 group). At different time intervals, the mice were anaesthetised and imaged using the In Vivo Imaging System (IVIS Lumina XR, Caliper Life Sciences).
Three-dimensional multicellular sphere culture (MCS)
In total, 4000 MDA-MB-453 tumour cells and 1000 PMA-treated THP-1 macrophages were added to a 96-well Clear Round-bottom Ultra-Low-attachment Microplate (Corning, USA) for 72 h at 37°C. The MCSs were monitored under a microscope, and uniform tumour spheroids were selected for subsequent studies. To study MCS penetration by T lymphocytes, Jurkat T cells were stained with 5 μM CFDA-SE (carboxyfluorescein diacetate succinimidyl ester) (Beyotime, Jiangsu, China) for 15 min at 37°C in PBS. Labelling was stopped by adding cold complete medium and washing three times. Then, 50,000 Jurkat T cells were incubated with a spheroid for 20 h. After washing and fixation in 4% paraformaldehyde, the tumour spheroids were imaged under a Nikon confocal microscope (Nikon TE2000U, Tokyo, Japan).
Statistical analysis
Data are expressed as the mean ± SEM. Statistical analysis was performed with Student's t test when only two value sets were compared. One-way ANOVA followed by Dunnett's test was used when the data involved three or more groups. P < 0.05, P < 0.01 and P < 0.001 were considered statistically significant and are indicated by *, ** and ***, respectively.
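A minimal sketch of this analysis plan in Python (fabricated placeholder values, not the study's measurements; scipy.stats.dunnett requires SciPy 1.11 or later):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pbs     = rng.normal(1.00, 0.15, size=10)   # hypothetical tumour weights (g)
control = rng.normal(0.95, 0.15, size=10)   # control Raw264.7 group
car147  = rng.normal(0.60, 0.15, size=10)   # CAR-147 Raw264.7 group

# Two value sets: Student's t test
print(stats.ttest_ind(control, car147))

# Three or more groups: one-way ANOVA, then Dunnett's test against PBS
print(stats.f_oneway(pbs, control, car147))
print(stats.dunnett(control, car147, control=pbs))
```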
Construction of CAR-147 targeting HER2
To break down the "physical barrier" of the tumour matrix, we selected CD147, a membrane molecule that is essential for ECM remodelling via the expression of MMPs. 27 According to the chimeric antigen receptor design principle, we generated a modified CAR-147 construct for macrophages (the detailed structure is shown in Fig. 1a). Briefly, CAR-147 is composed of a single-chain antibody fragment targeting human HER2, the hinge region of mouse IghG1, and the transmembrane and intracellular regions of the mouse CD147 molecule. The HER2-scFv sequence was designed by Sangon Biotech based on reports in the NCBI database (PDB: 3H3B_D). To verify that the constructed chimeric antigen receptor system can activate internal signalling after stimulation by HER2 and affect the expression of downstream signals, we established stable clones of 4T1 cells overexpressing human HER2-EGFP (HER2-4T1) (Fig. 1b) and Raw264.7 macrophages expressing CAR-147 (Fig. 1c). In HER2-EGFP, the HER2 intracellular region is replaced with EGFP to avoid effects of HER2 signalling on the 4T1 cells. Control macrophages and CAR-147 macrophages were directly cocultured with HER2-4T1 cells. After coculture for 24 h, the macrophages from each group were collected to analyse the expression of MMPs and TIMPs by real-time PCR. As shown in Fig. 1d, the expression of multiple MMPs (MMP3, MMP11, MMP13, and MMP14) was significantly upregulated in the CAR-147 macrophages at an E:T ratio of 1:1. When the E:T ratio was 2:1, the expression of MMP9, MMP10 and MMP12 was also upregulated (Fig. 1e), indicating that the increase in MMP expression in the CAR-147 macrophages was related to tumour antigen stimulation. Increased MMP expression was also detected after coculture for 48 h (Fig. 1f). In parallel, the expression levels of MMPs in CAR-147 macrophages cocultured with wild-type (WT) 4T1 cells did not differ from those in control macrophages, indicating that the chimeric antigen receptor could specifically recognise the antigen (Fig. 1g). These data demonstrate that CAR-147 can specifically recognise the antigen HER2 and effectively activate the expression of MMPs in macrophages.
To assess the effect of CAR-147 on macrophage phenotype, flow cytometry was used to analyse the expression of membrane molecules (MHCII, CD206, PDL1, CD40, CD80, and CD86) on macrophages after coculture. We found that CAR-147 had no effect on macrophage membrane molecule expression, except for inducing increased expression of CD80 (Supplementary Fig. 1A). In addition, a phagocytosis assay using fluorescent red latex beads and a reactive oxygen species (ROS) assay using the DCFH-DA probe showed that CAR-147 affected neither phagocytosis (Supplementary Fig. 1B) nor ROS production in macrophages (Supplementary Fig. 1C). Furthermore, we evaluated the release of inflammatory cytokines from control macrophages and CAR-147 macrophages after coculture for 48 h. The results showed no differences in the secretion of IL-1β, IL-6, IL-10, IL-12, TNFα or IFNγ (Supplementary Fig. 1D). These data indicate that CAR-147 specifically activates MMP expression without affecting other macrophage functions, such as phagocytosis, ROS production, and inflammatory cytokine secretion.
CAR-147 macrophages inhibit tumour growth in vivo
We next investigated the effects of CAR-147 macrophages on tumour cell growth in vitro. An apoptosis assay showed no difference in apoptosis in HER2-4T1 cells after coculture with control macrophages or CAR-147 macrophages (Supplementary Fig. 2A), consistent with western blot detection of the apoptosis-related proteins PARP and Caspase-9 (Supplementary Fig. 2B). Cell cycle analysis by PI staining revealed that, compared with control macrophages, CAR-147 macrophages did not affect the cell cycle of HER2-4T1 cells (Supplementary Fig. 2C). The protein level of PCNA, an indicator of HER2-4T1 cell proliferation, was also unchanged (Supplementary Fig. 2B). The effect of CAR-147 macrophages on HER2-4T1 cell invasion was examined with Matrigel, a reconstituted ECM. As shown in Supplementary Fig. 2D, compared with control macrophages, CAR-147 macrophages had no effect on the invasion ability of HER2-4T1 cells. Therefore, CAR-147 macrophages did not inhibit the growth of tumour cells in vitro.
To further examine the effect of the designed chimeric antigen receptor on solid tumours in vivo, we established a mouse model of breast cancer with orthotopically transplanted HER2-4T1 cells.
To analyse the tissue distribution and time course of infused macrophages, 10^6 EGFP+ macrophages or 10^6 DiR-labelled macrophages were injected intravenously for flow cytometric analysis or in vivo imaging, respectively. In normal mice, in vivo imaging showed that the DiR-labelled macrophages mainly accumulated in the liver and were almost completely cleared by 144 h post-infusion (Supplementary Fig. 3A, B). In tumour-bearing mice, the DiR-labelled macrophages were detected at the tumour site on day 1 post-infusion, and the maximum signal was detected on day 3, as visualised by fluorescence imaging (Fig. 2a-c). Infiltration of the infused macrophages into tumour tissue was also detected by flow cytometry; the corresponding gating strategy is presented in Supplementary Fig. 3C. The results showed that CAR-147 had no effect on the infiltration of infused macrophages into tumours (Fig. 2c and Supplementary Fig. 3D). Furthermore, we analysed the phenotypes of infused macrophages in tumours.
No differences were observed between control macrophages and CAR-147 macrophages (Supplementary Fig. 3E).
The tumour microenvironment may contribute to the progressive differentiation of infused macrophages; however, compared with no treatment, macrophage infusion did not promote tumour growth (Fig. 2e). Control macrophages and CAR-147 macrophages were injected intravenously on days 8 and 15 after tumour implantation (Fig. 2d). Beginning on day 7, tumour burden was measured with the In Vivo Imaging System, and we observed that CAR-147 macrophages effectively inhibited tumour growth compared with control macrophages (Fig. 2e, f) and reduced spleen weight (Fig. 2g), while body weight (Fig. 2h) was unaffected in both groups. To further analyse whether CAR-147 macrophage reinfusion causes a cytokine storm, as CAR-T cell infusion does, we measured the cytokines IL-1β, IL-6, IL-12, TNFα, and IFNγ in serum samples and tumour homogenates. Interestingly, the results showed that IFNγ, TNFα, and IL-6 levels in the serum of the CAR-147 macrophage group were decreased compared with the control group (Fig. 2i), while IL-12 and IFNγ levels were increased in tumour tissue samples from CAR-147 macrophage-treated mice (Fig. 2i). Overall, CAR-147 macrophages significantly inhibited tumour growth in the 4T1 breast cancer mouse model and did not appear to cause cytokine release syndrome.
CAR-147 macrophages can promote T-cell infiltration into tumours
The purpose of the CAR-147 macrophages we designed is to remodel the tumour ECM, destroying the physical barrier in solid tumours to promote T-cell infiltration and thus inhibit tumour growth. We have shown remarkable anti-tumour effects of CAR-147 macrophages on 4T1 tumour-bearing mice. For further study, we performed multicolour flow cytometric analysis to investigate immune cell infiltration and phenotypes. As shown in Fig. 3a, no difference was observed in the number of CD45+ tumour-infiltrating leucocytes (TILs). It is noteworthy that CAR-147 macrophage-treated tumours exhibited significantly more CD3+ T-cell content (Fig. 3b, c) and less MDSC content in the TIL population (Fig. 3d) than control macrophage-treated tumours. However, there were no significant differences in DC (MHCII+CD11c+) or NK (NK1.1+) cell infiltration (Supplementary Fig. 4A, B). The number and phenotype of tumour-associated macrophages (TAMs) were unaffected in both groups (Supplementary Fig. 4C, D). To characterise T-cell function in tumours, we first examined the percentage of CD8+ T cells in the CD3+ T-cell population and found that this percentage was not affected (Supplementary Fig. 4E). The expression levels of CD44 and CD62L can be used to identify the activation status of T cells. The percentages of CD44^high CD62L^low effector T cells in the CD3+ T-cell population remained unchanged (Supplementary Fig. 4F). Furthermore, the expression of the degranulation marker CD107a and the T-cell exhaustion marker PD-1 was analysed; no differences were observed between the two groups (Supplementary Fig. 4G, H). We analysed the correlation between the percentage of CD3+ T cells and tumour weight in the two groups. The two factors showed a stronger correlation in the CAR-147 macrophage group (Supplementary Fig. 5A), suggesting that T cells play an important role in the anti-tumour effect of CAR-147 macrophages. We next addressed the extent to which the anti-tumour effect of CAR-147 macrophages depended on host T cells. Thus, we inoculated HER2-4T1 cells into BALB/c nude mice. The course of macrophage infusion treatment that was efficacious in WT BALB/c mice had no effect on tumour growth in the T-cell-deficient nude mice (Fig. 3e-g). These results suggest that CAR-147 macrophages promote enhanced CD3+ T-cell mobilisation in breast cancer, which is clearly required for the therapeutic effect of CAR-147 macrophage infusion.
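A sketch of the kind of correlation analysis mentioned above (illustrative per-mouse numbers, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical values: %CD3+ T cells among TILs vs final tumour weight (g)
cd3_percent   = np.array([18, 22, 15, 25, 20, 28, 17, 23, 26, 21])
tumour_weight = np.array([0.9, 0.7, 1.1, 0.5, 0.8, 0.4, 1.0, 0.6, 0.5, 0.75])

r, p = stats.pearsonr(cd3_percent, tumour_weight)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # negative r: more T cells, smaller tumours
```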
CAR-147 macrophages reduce extracellular matrix deposition
To further assess whether CAR-147 macrophages act on the ECM and disrupt this "physical barrier", we used Masson's trichrome staining to analyse the collagen content in tumour tissue. Consistent with our premise, the collagen content was significantly reduced after CAR-147 macrophage treatment compared with control macrophage treatment (Fig. 4a, b). Next, we examined the expression of various MMPs and TIMPs in tumours. The results showed no differences in the expression levels of most MMPs, except that MMP3, MMP14 and MMP15 levels were upregulated in CAR-147 macrophage-treated mice (Fig. 4c). Extracellular matrix breakdown by MMPs is essential for tumour cell invasion and metastasis; degradation of the matrix by CAR-147 macrophages might therefore promote tumour cell metastasis. We thus assessed the effect of CAR-147 macrophages on tumour metastasis. The results showed that CAR-147 macrophages did not promote tumour metastasis while inhibiting tumour growth (Supplementary Fig. 5B). In summary, the infusion of CAR-147 macrophages can degrade the dense collagen-based matrix that surrounds tumours, which may require the involvement of MMP3, MMP14 and MMP15.
(Fig. 3 caption, for reference: c immunofluorescence analysis of CD3+ T-cell infiltration in tumour tissues from control Raw264.7 cell- and CAR-147 Raw264.7 cell-treated animals; d flow cytometric quantification of MDSCs (Gr-1+CD11b+) in the TIL population (n = 10); e 2 × 10^5 HER2-4T1 cells transplanted into BALB/c nude mice, with BLI to assess tumour burden; f tumour growth in nude mice measured every 2 days; g tumour weights in nude mice, one symbol per mouse (n = 8); mean ± SEM; *P < 0.05, **P < 0.01, ***P < 0.001.)
CAR-147 macrophages facilitate T-cell infiltration in three-dimensional multicellular sphere models of human breast cancer
To determine whether this treatment strategy works with human macrophages, we constructed a chimeric antigen receptor containing the internal signalling domain of the human CD147 molecule (CAR-h147). The human monocytic leukaemia cell line THP-1 acquires many properties of human monocyte-derived macrophages after stimulation with phorbol-12-myristate 13-acetate (PMA) for 24 h. 28 Here, we examined the expression of MMPs in PMA-treated THP-1 macrophages expressing CAR-h147 (Fig. 5a) after coculture with HER2+ human breast cancer MDA-MB-453 tumour cells (Fig. 5b). The results showed that the expression of multiple MMPs (MMP2, MMP3, MMP9, MMP10, MMP11, MMP12, MMP13, MMP14, and MMP15) was upregulated in the CAR-h147 THP-1 macrophages after coculture compared with control THP-1 macrophages (Fig. 5c). Because 2D cultures cannot be used for cell infiltration studies, we performed three-dimensional multicellular sphere culture (MCS). Jurkat T cells were able to penetrate deeper into the MCSs formed by MDA-MB-453 tumour cells and CAR-h147 THP-1 macrophages than into those formed by tumour cells and control THP-1 macrophages (Fig. 5d). These ex vivo studies initially demonstrated that CAR-147 macrophages facilitate T-cell infiltration in a human tumour model.
DISCUSSION
Tumours and stromal cells produce and assemble collagen, proteoglycans, and other molecules to form rigid ECMs and establish physical barriers that reduce the spread of therapeutic agents to tumour cells. A common problem for many novel anticancer strategies is their inability to penetrate the tumour stroma, which is one reason why CAR-T cell therapy has been less effective in solid tumours.
In stroma-rich pancreatic ductal adenocarcinoma, blood vessel density is significantly reduced, and blood vessels are embedded within the matrix, preventing therapeutic agents from reaching the tumour site. 29 Hélène Salmon et al. reported that the density and orientation of the stromal ECM affected the localisation and migration of T cells in human non-small cell lung cancer tumours, with more T cells observed in regions of loose fibronectin and collagen, where they migrated linearly along fibronectin fibres. 8 Lysyl oxidase (LOX) and LOX-like (LOXL) enzymes catalyse the cross-linking and stabilisation of collagen and are highly active in tumours. In a mouse model of pancreatic cancer, LOX inhibition was shown to suppress metastasis and promote the efficacy of gemcitabine. 30 In addition, the ECM provides essential signals that promote tumour cell growth and inhibit apoptosis. Some studies have reported that the signals initiated by collagen IV are important cues for the survival and growth of tumour cells in the liver. Accordingly, finding an effective treatment strategy for targeting the ECM is necessary.
In the tumour microenvironment, macrophages can account for 50% of the leucocytes present, most of which are recruited from the peripheral blood. 31 It has been observed that macrophage infusion can improve liver fibrosis in animal models. 19 These data suggest that using modified macrophages to target the tumour ECM may be a novel and effective strategy. In the 1980s, clinical trials of the adoptive transfer of macrophages generated from blood monocytes were carried out in cancer patients 32 and demonstrated the safety and feasibility of macrophage reinfusion. In our study, infused macrophages accumulated mainly in the liver and the tumour site and disappeared gradually over 7 days after infusion. These data suggest that infused macrophages exert their anti-tumour effects within a short time window, indicating that adoptive macrophage therapy is a controllable and safe therapeutic strategy.
In our study, we modified macrophages to target the tumour ECM based on the chimeric antigen receptor structure used in CAR-T cell therapy. The chimeric antigen receptor designed for macrophages mainly contained two regions: one was the extracellular region, a single-chain variable region moiety (scFv) that recognised HER2, an important biomarker of breast cancer, and the other was the intracellular region of CD147, which is used to activate the expression of MMPs in macrophages. CD147, also known as ECM metalloproteinase inducer (EMMPRIN), is a member of the class of transmembrane glycoproteins that play important roles in regulating the synthesis and expression of cellular MMPs. It has been reported that high expression of CD147 greatly increases the quantity and activity of MMPs, thereby increasing the degradation rate of the basement membrane and destroying this natural mechanical barrier. Our data showed that CAR-147 can upregulate the expression of multiple MMPs in macrophages after coculture with HER2-4T1 cells in vitro. However, the expression of membrane surface molecules was unchanged except for CD80, and phagocytosis, ROS production, and inflammatory cytokine secretion in macrophages were also unaffected. Additionally, CAR-147 macrophages did not affect tumour cell growth in vitro compared with control macrophages.
In the HER2-4T1 tumour model, we observed that the infusion of CAR-147 macrophages significantly inhibited tumour growth. Since CAR-147 macrophages did not affect the growth of tumour cells in vitro, we tested whether, as we predicted, the tumour-inhibiting effect depended on host T cells. The results showed that the proportion of T cells in CAR-147 macrophage-treated tumours was approximately four times higher than that in control macrophage-treated tumours, while the same macrophage infusion treatment had no effect on tumour growth in T-cell-deficient nude mice. These data indicate that CAR-147 macrophages promote T-cell infiltration, which is clearly required for the therapeutic effect. In current tumour immunotherapy research, reconstituting the immune system of immunodeficient mice with human immune cells remains a major challenge; it is therefore difficult to test the utility of CAR-147 human macrophages in immune-compromised mice bearing human HER2+ breast cancer tumours. However, we used a 3D-culture model to preliminarily demonstrate that this treatment strategy can also be effective in human breast cancer tumours. Some studies have reported that adequate T-cell infiltration in tumours is a prerequisite for sensitivity to immune checkpoint blockade (ICB) therapy. 33 This observation may suggest that CAR-147 macrophages can overcome tumour resistance to ICB therapy by increasing T-cell infiltration. After macrophage infusion, the collagen content in tumours was significantly reduced, while the expression of MMP3, MMP14 and MMP15 was increased. CAR-147 macrophages may require multiple MMPs to exert an anti-tumour function; when designing chimeric antigen receptors for different types of tumours, the effects of different MMPs should be considered.
Cytokine release syndrome (CRS) is the most frequent toxic event in clinical trials using CAR-T cells to treat haematological malignancies, and severe CRS can be fatal. IL-6 is the main inflammatory mediator of CRS. It has long been thought that CAR-T cells themselves cause CRS, but some studies have reported that this side effect is largely attributable to macrophages. 34,35 Interestingly, in our study we found that the levels of inflammatory cytokines (IL-6, TNFα, IFNγ) in CAR-147 macrophage-treated mice were lower than those in control mice. However, cytokines are important mediators of the immune system involved in exerting anti-tumour effects, so we further analysed changes in inflammatory cytokines in tumours after macrophage infusion. CAR-147 macrophages significantly increased the levels of IL-12 and IFNγ, both of which can drive potent anti-tumour responses, in the tumour tissue. These findings suggest that the effects of CAR-147 macrophages are local and relatively tumour-specific. In conclusion, we provide a novel approach to targeting the tumour ECM by engineered macrophage infusion. Our data show that, after recognition of the antigen HER2, CAR-147 macrophages can increase the expression of MMPs to degrade the tumour ECM, which promotes T-cell infiltration and inhibits tumour growth. CAR-147 macrophage treatment combined with other anti-tumour therapies, such as CAR-T cells, ICB, and chemotherapeutic drugs, may be effective in eliminating cancer cells with minimal side effects.
Partial-Norm of Entanglement: Entanglement Monotones That are not Monogamous
Quantum entanglement is known to be monogamous, i.e., it obeys strong constraints on how entanglement can be distributed among multipartite systems. Almost all entanglement monotones proposed so far have been shown to be monogamous. We explore here a family of entanglement monotones whose reduced functions are concave but not strictly concave, and show that they are not monogamous. They are defined by four kinds of "partial norm" of the reduced state, and we call them the partial-norm of entanglement, the minimal partial-norm of entanglement, the reinforced minimal partial-norm of entanglement, and the partial negativity, respectively. This indicates that the previous axiomatic definition of an entanglement monotone needs the supplementary requirement that the reduced function be strictly concave, since strict concavity ensures that the corresponding convex-roof extended entanglement monotone is monogamous. Here, the reduced function of an entanglement monotone refers to the corresponding function of the reduced state that gives the measure on bipartite pure states.
Entanglement, as a quintessential manifestation of quantum mechanics [1,2,3], has been shown to be a crucial resource in various quantum information processing tasks [1,6,4,7,5]. The most striking property of entanglement is its restricted distributability, that is, the impossibility of sharing entanglement unconditionally across many subsystems of a composite quantum system [9,8]. Understanding how entanglement can be quantified and distributed over many parties reveals fundamental insights into the nature of quantum correlations [10] and has profound applications both in quantum communication [11,12,13] and in other areas of physics [14,15,16,17,18,11,19]. In particular, the monogamy law of quantum correlations is the predominant feature that guarantees the security of quantum key distribution [8,20].
Quantitatively, the monogamy of entanglement is described by an inequality involving a bipartite entanglement monotone. The term "monotone" refers to the fact that a proper measure of entanglement cannot increase on average under local operations and classical communication (LOCC) [21,23,22]. Recall that the traditional monogamy relation for an entanglement measure $E$ is the inequality
$E(\rho_{A|BC}) \ge E(\rho_{AB}) + E(\rho_{AC})$, (1)
where the vertical bar indicates the bipartite split across which the (bipartite) entanglement is measured. However, Eq. (1) is not valid for many entanglement measures, although $E^\alpha$ satisfies the relation for some $\alpha > 0$ [24,9,25]. Intense research has been undertaken in this direction. It has been proved that the squashed entanglement and the one-way distillable entanglement are monogamous [26], and almost all the bipartite entanglement measures proposed so far are monogamous for multiqubit systems or monogamous on pure states [24,9,14,25,27,28,29]. However, for higher-dimensional systems it is in general difficult to check the monogamy of an entanglement measure according to Eq. (1). Consequently, the definition of monogamy was improved as follows [30]: a measure of entanglement $E$ is monogamous if, for any $\rho_{ABC} \in S_{ABC}$ that satisfies the disentangling condition, i.e.,
$E(\rho_{A|BC}) = E(\rho_{AB})$, (2)
we have $E(\rho_{AC}) = 0$. Such a definition greatly simplifies the justification of the monogamy of an entanglement measure [30,31]. Recall that a function $E: S_{AB} \to \mathbb{R}_+$ is called a measure of entanglement if (1) $E(\sigma_{AB}) = 0$ for any separable density matrix $\sigma_{AB} \in S_{AB}$, and (2) $E$ behaves monotonically under LOCC. Moreover, convex measures of entanglement that do not increase on average under LOCC are called entanglement monotones [21]. Let $E$ be a measure of entanglement on bipartite states. We define $E_F(\rho_{AB}) \equiv \min \sum_{j=1}^n p_j E(|\psi_j\rangle\langle\psi_j|_{AB})$, where the minimum is taken over all pure-state decompositions of $\rho_{AB} = \sum_{j=1}^n p_j |\psi_j\rangle\langle\psi_j|_{AB}$. That is, $E_F$ is the convex-roof extension of $E$. Vidal [21, Theorem 2] showed that, for any entanglement measure $E$ with $E(|\psi\rangle_{AB}) = h(\rho_A)$ on pure states, $E_F$ above is an entanglement monotone if
$h(\lambda\rho_1 + (1-\lambda)\rho_2) \ge \lambda h(\rho_1) + (1-\lambda) h(\rho_2)$ (3)
for any states $\rho_1$, $\rho_2$, and any $0 \le \lambda \le 1$. Hereafter, we call $h$ the reduced function of $E$ and $H_A$ the reduced subsystem for convenience. In Ref. [30], according to definition (2), we showed that $E_F$ is monogamous whenever $E_F$ is defined via Eq. (3) with $h$ additionally strictly concave. Except for the Rényi $\alpha$-entropy of entanglement with $\alpha > 1$, all other measures of entanglement that have been studied intensively in the literature correspond, on pure bipartite states, to strictly concave functions of the reduced density matrix. These include the original entanglement of formation [32], tangle [33], concurrence [34,33], G-concurrence [35], the Tsallis entropy of entanglement [36], and the entanglement measures induced by the fidelity distances [37]. Nevertheless, it has not been clear whether an entanglement monotone is monogamous if its reduced function is concave but not strictly concave. The purpose of this paper is to address this issue. We explore the entanglement monotone suggested in Ref. [38], from which we also obtain another two entanglement monotones. We also investigate the partial negativity, defined as the norm of the negative part of the state after partial transposition. The reduced functions of these quantities are not strictly concave, and they are not equivalent to each other. We then show that they are not monogamous.
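As a numerical illustration of this distinction (our own sanity check, not taken from the paper): the minimal eigenvalue of a density matrix, the kind of spectral function underlying the partial-norm quantities below, is concave but not strictly concave, whereas the von Neumann entropy behind the entanglement of formation is strictly concave.

```python
# Sanity check, not a proof: lambda_min is concave but NOT strictly concave,
# while the von Neumann entropy is strictly concave.
import numpy as np

def lam_min(rho):
    return np.linalg.eigvalsh(rho).min()

def von_neumann(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def rand_density(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(1)
for _ in range(1000):   # concavity: h(mix) >= weighted average of h
    rho, sigma, t = rand_density(3, rng), rand_density(3, rng), rng.uniform()
    mix = t * rho + (1 - t) * sigma
    assert lam_min(mix) >= t * lam_min(rho) + (1 - t) * lam_min(sigma) - 1e-10

# Non-strictness: two *different* states sharing the same minimal eigenvector
rho, sigma = np.diag([0.1, 0.4, 0.5]), np.diag([0.1, 0.2, 0.7])
mix = 0.5 * rho + 0.5 * sigma
print(np.isclose(lam_min(mix), 0.5 * (lam_min(rho) + lam_min(sigma))))       # True: equality
print(von_neumann(mix) > 0.5 * (von_neumann(rho) + von_neumann(sigma)))      # True: strict
```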
To our knowledge, this is the first proof that there exist entanglement monotones that are not monogamous in light of the disentangling condition. Our results establish a closer relation between the monogamy of an entanglement monotone and the strict concavity of its reduced function, and suggest that strict concavity of the reduced function should be required of any "fine" entanglement monotone. Moreover, comparing with other reduced functions for which the corresponding entanglement measures have been shown to be monogamous, we find that, in general, the reduced function is strictly concave if and only if it is defined on all of the eigenvalues of the reduced state.
After tracing over subsystems, we are left with the reduced state $\rho_{AC}$. It is easy to see that $\rho_{AC}^{T_A}$ is not positive whenever $a'_1 a_2 \neq a_1 a'_2$, and thus $E_2(\rho_{AC}) > 0$; here $T_X$ denotes the partial transpose with respect to subsystem $X$.
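The partial transpose used in this argument, and the negativity $N$ defined below, are easy to evaluate numerically; a self-contained sketch (our own illustration, with the two-qubit Bell state as a convenience test case):

```python
# Negativity N(rho) = sum of the absolute values of the negative eigenvalues
# of the partial transpose rho^{T_A}, matching the definition quoted below.
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose subsystem A of a (dA*dB x dA*dB) density matrix."""
    r = rho.reshape(dA, dB, dA, dB)          # indices (a, b, a', b')
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

def negativity(rho, dA, dB):
    ev = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return float(-ev[ev < 0].sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
print(negativity(np.outer(bell, bell), 2, 2))  # 0.5 for this convention
```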
If the reduced subsystem is two-dimensional, we consider the three-qubit case with no loss of generality. Any pure state $|\psi\rangle$ in $\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2$ can be expressed as [39]
$|\psi\rangle = \lambda_0|000\rangle + \lambda_1 e^{i\varphi}|100\rangle + \lambda_2|101\rangle + \lambda_3|110\rangle + \lambda_4|111\rangle$,
with $\lambda_i \ge 0$ and $\sum_i \lambda_i^2 = 1$. If the disentangling condition holds, then it holds for any decomposition $\rho_{AB} = \sum_k p_k |\phi_k\rangle\langle\phi_k|$ according to Corollary 5 in Ref. [30]. This leads to the minimal eigenvalue of each corresponding marginal coinciding with that of $\rho_A$, which yields either $\lambda_2 = \lambda_4 = 0$, or $\lambda_1 = 0$ and $\lambda_0 \le \lambda_3$. That is, $\rho_{AC}$ could be entangled; therefore $E_2$ is still not monogamous when the reduced subsystem is two-dimensional. Let $\lambda_{\min}$ be the minimal positive Schmidt coefficient of $|\psi\rangle$. We define $E_{\min}$ for pure states in terms of $\lambda_{\min}$ and then extend it to mixed states by the convex-roof extension. We call $E_{\min}$ the minimal partial-norm of entanglement, as it reflects the minimal case of the partial norm. It is clear that $E_{\min}(\rho) = 0$ iff $\rho$ is separable. Let $\delta(\rho) = (\delta_1, \delta_2, \ldots, \delta_d)$ denote the eigenvalues of any state $\rho \in S$ with $\dim H = d$; using the majorization relation "$\prec$" between probability distributions, one can verify the required behaviour, and thus $E_{\min}$ is an entanglement monotone. By now, except for the convex-roof extension of the negativity $N$ [41], denoted $N_F$, all the reduced functions of the convex-roof extended entanglement monotones in the previous literature have been shown to be strictly concave. Here $N$ is defined as [40] $N(\rho) = \sum_i \mu_i$, where the $\mu_i$ are the eigenvalues of the negative part of $\rho^{T_A}$. In order to show that the reduced function of $N_F$, denoted $h_N$, is strictly concave, we first give the following statement, which is complementary to Vidal [21, Theorem 2].
Proposition 2. Let $E$ be an entanglement measure with the reduced function $h$ defined as in Eq. (3). If $E$ is an entanglement monotone, then $h$ is concave.
Proof. Let $\rho$ and $\sigma$ be any two given states in $S_A$, and let $0 \le t \le 1$. Taking $|\psi\rangle_{AB}$ and $|\phi\rangle_{AB}$ in $H_{AB}$ such that $\rho = \mathrm{tr}_B|\psi\rangle\langle\psi|_{AB}$ and $\sigma = \mathrm{tr}_B|\phi\rangle\langle\phi|_{AB}$, we construct from them a pure state whose marginal on $A$ is $t\rho + (1-t)\sigma$, where $I_{A,B}$ denotes the identity operator acting on $H_{A,B}$. Since $E$ is an entanglement monotone, it cannot increase on average under the local measurement that collapses this construction back to the ensemble $\{t, |\psi\rangle_{AB}; (1-t), |\phi\rangle_{AB}\}$, which is equivalent to $h(t\rho + (1-t)\sigma) \ge t\,h(\rho) + (1-t)\,h(\sigma)$; that is, $h$ is concave.
By Proposition 2, since $N$ is an entanglement monotone, the reduced function $h_N$ is concave, and thus $N_F$ is an entanglement monotone. Note here that in Ref. [41] there is a gap in the proof of the concavity of $h_N$: the second inequality in the last part of page 2 is wrong, since $\{|\phi_k\rangle\}$ is not necessarily a basis (i.e., it is in general just an orthogonal set, not a complete one). We show that $h_N$ is strictly concave as well. Assume, to obtain a contradiction, that $h_N$ is not strictly concave; then there exist distinct states saturating the concavity inequality (here $\mathrm{spec}(X)$ denotes the spectrum of $X$). From these one can construct a state $|\tilde\Psi\rangle_{ABC}$ such that, for any ensemble $\rho_{AB} = \sum_k q_k |\phi_k\rangle\langle\phi_k|_{AB}$, it turns out that $N(\rho_{AB}) = N(|\tilde\Psi\rangle_{A|BC})$. But $|\tilde\Psi\rangle_{ABC}$ does not admit the form $|\psi\rangle_{AB_1}|\psi\rangle_{B_2C}$ up to local unitary operations, where $B_1B_2$ means that $H_B$ has a subspace isomorphic to $H_{B_1} \otimes H_{B_2}$ (up to a local unitary on system $B_1B_2$), which contradicts Theorem 3 in [29]. Thus $h_N$ is strictly concave. That is, all the reduced functions of the monogamous entanglement monotones so far are strictly concave. We now return to the monogamy of $E_{\min}$. Clearly, if the reduced system is two-dimensional, then $E_{\min} = E_2$, which is not monogamous. For the higher-dimensional case, we consider a pure state as in Eq. (6), replacing the conditions $a_0^2 = a_0'^2 \ge \tfrac{1}{2}$, $a_0 > a_1 \ge a_2$, $a'_0 > a'_1 \ge a'_2$ with $a_0 = a'_0$, $a_1 \ge a_2 > a_0$, $a'_1 \ge a'_2 > a'_0$, from which one can conclude that $E_{\min}$ is not monogamous.
However, $E_{\min}$ does not achieve its maximal value on the maximally entangled state. To remedy this disadvantage, we define $E'_{\min}$ for pure states in terms of $\lambda_{\min}$ and the Schmidt rank $S_r(|\psi\rangle)$, and then extend it to mixed states by the convex-roof extension. We call it the reinforced minimal partial-norm of entanglement. $E'_{\min}$ equals $2E_{\min}$ for any $2 \otimes n$ state; in this case, $E'_{\min}$ reaches its maximal value on the maximally entangled state, though not only on such states. In addition, it is easy to verify that $E'_{\min}$ is also an entanglement monotone and is not monogamous.
The upper bounds of these quantities can be easily derived. Let $\rho$ be a state in $S_{AB}$ with $E_2(\rho) = \sum_i p_i E_2(|\psi_i\rangle)$; upper bounds then follow in terms of $r_{A,B}$, the rank of $\rho_{A,B}$. When $k \ge 3$, $E_k$ is not a faithful entanglement monotone, and it is not monogamous either. Another entanglement measure whose monogamy has not been investigated is the Schmidt number, which is regarded as a universal entanglement measure [42] and is defined by [43] $S_r(\rho) = \min \max_i S_r(|\psi_i\rangle)$, where the minimum is taken over all decompositions $\rho = \sum_i p_i |\psi_i\rangle\langle\psi_i|$. It is also not monogamous, since both the Schmidt number of $|W\rangle = \frac{1}{\sqrt{3}}(|100\rangle + |010\rangle + |001\rangle)$ and that of its two-party reduced states are 2.
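The W-state observation can be checked directly; the following self-contained sketch (our own illustration) verifies that the two-party marginal of $|W\rangle$ is entangled (non-positive partial transpose), consistent with its Schmidt number being 2:

```python
# Check that tracing one qubit out of |W> leaves an entangled (NPT) marginal.
import numpy as np

def negativity(rho, dA, dB):
    pt = rho.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)
    ev = np.linalg.eigvalsh(pt)
    return float(-ev[ev < 0].sum())

w = np.zeros(8)
w[[4, 2, 1]] = 1 / np.sqrt(3)          # |100>, |010>, |001>
rho_abc = np.outer(w, w)

# Trace out qubit C: indices (a, b, c, a', b', c') -> sum over c = c'
rho_ab = rho_abc.reshape(2, 2, 2, 2, 2, 2).trace(axis1=2, axis2=5).reshape(4, 4)
print(negativity(rho_ab, 2, 2))        # ~0.206 > 0, so rho_AB is entangled
```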
In addition, letting $\rho^{T_A}$ be the partial transpose of $\rho$, one may consider the partial norm of the negative part of $\rho^{T_A}$, $N(\rho^-)$. For example, we take $\hat N$ to be such a quantity, and call it the partial negativity hereafter. Take $\rho = |\psi\rangle\langle\psi|$ with $|\psi\rangle = \sum_j \lambda_j |e_j\rangle_A |e_j\rangle_B$ the Schmidt decomposition of $|\psi\rangle$. Then $\hat N(|\psi\rangle) = \lambda_1\lambda_2$, and the corresponding reduced function is $\hat h = \sqrt{\delta_1\delta_2}$, where $\delta_1 = \lambda_1^2$ and $\delta_2 = \lambda_2^2$. $\hat N$ can still be regarded as a kind of partial norm, since $\sqrt{\delta_1\delta_2} \le \delta_1 = \|\rho_A\|$; in other words, $\hat N$ is also a kind of partial norm of entanglement. By definition, $\hat N(|\psi\rangle_{ab}) = 0$ if and only if the state is separable, and $0 < \hat N(\rho) \le N(\rho)$ for any state $\rho$ with non-positive partial transpose. A simple comparison between $\hat N$ and $E_2$, $E_{\min}$, $E'_{\min}$ is given in Fig. 1 and Fig. 2, which indicates that they are not equivalent to each other. For the two-qubit case, $2\hat N_F$ coincides with the G-concurrence [35]. We conjecture that $\hat h$ is concave [44]. $\hat h$ is strictly concave on $S(H)$ with $\dim H = 2$, since it then reduces to an elementary symmetric function [45, p. 116], but this is not true in the higher-dimensional case.
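On pure states, $\hat N$ follows directly from the Schmidt coefficients; a short sketch (our own, with a randomly drawn 3 × 3 state) that also checks the bound $\hat N \le N$ stated above, using the pure-state identity $N = \sum_{i<j} \lambda_i\lambda_j$ for this negativity convention:

```python
# Partial negativity of a pure state from its two largest Schmidt coefficients.
import numpy as np

rng = np.random.default_rng(7)
psi = rng.normal(size=9) + 1j * rng.normal(size=9)
psi /= np.linalg.norm(psi)

lam = np.linalg.svd(psi.reshape(3, 3), compute_uv=False)  # descending Schmidt coefficients
n_hat = lam[0] * lam[1]                                   # partial negativity
n = sum(lam[i] * lam[j] for i in range(3) for j in range(i + 1, 3))  # full negativity
print(n_hat, n, bool(n_hat <= n + 1e-12))                 # the bound holds
```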
We now assume that $\hat N$ is an entanglement monotone; then we can conclude the following.
Theorem 3. $\hat N$ and $\hat N_F$ are not monogamous whenever the reduced subsystem has dimension greater than 2.
We show this statement by a counter-example. Taking a pure state with Schmidt coefficients $\lambda_0 \ge \lambda_1 \ge \lambda_2 > 0$, it turns out that $\hat N$ is not monogamous. Moreover, from this example we can also conclude that $\hat N_F$ is not monogamous either, in light of $\hat N \le \hat N_F$. Analogously to the logarithmic negativity, we define the logarithmic partial negativity by $\hat N_l(\rho) = \log_2(\hat N(\rho) + 1)$. It is straightforward to see that $\hat N_l$ is not convex. For any LOCC acting on $\rho_{ab}$ that leaves the output states $\{p_i, \sigma_i\}$, we have, writing $x_i = \hat N(\sigma_i) + 1$, $\sum_i p_i \hat N_l(\sigma_i) = \sum_i p_i \log_2 x_i \le \log_2\big(\sum_i p_i x_i\big) = \log_2\big(\sum_i p_i \hat N(\sigma_i) + 1\big) \le \log_2\big(\hat N(\rho) + 1\big) = \hat N_l(\rho)$, since $\log_2$ is concave and $\hat N$ is non-increasing on average under LOCC by assumption. Therefore $\hat N_l$ is also an entanglement monotone and is not monogamous (hereafter, we still call it an entanglement monotone even though it is not convex, as in Ref. [22]). In sum, to distinguish these entanglement monotones in the sense of the monogamy law, we suggest the term informationally complete entanglement monotone, meaning that the reduced function depends on all of the eigenvalues of the reduced state. For example, the entanglement of formation is informationally complete since the von Neumann entropy is defined on all of the eigenvalues, which carry all the information about the entanglement; but $E_2$, $E_{\min}$, $E'_{\min}$, $\hat N$, $\hat N_F$, and $\hat N_l$ are not informationally complete (except in the two-dimensional case), since they capture only "partial information" about the entanglement. The worst case is the Schmidt number, which reflects the least information about the entanglement and of course is not informationally complete. Our discussion supports the view that, for an entanglement monotone $E_F$ with reduced function $h$, $E_F$ is monogamous if and only if it is informationally complete, and in turn, iff $h$ is strictly concave (the "if" part is proved in [31]). So the axiomatic definition of an entanglement monotone should be improved as follows. Let $E$ be a nonnegative function on $S_{AB}$ with $E(|\psi\rangle) = h(\rho_A)$ on pure states. We call $E$ a strict entanglement monotone if (i) $E(\sigma_{AB}) = 0$ for any separable density matrix $\sigma_{AB} \in S_{AB}$, (ii) $E$ is non-increasing on average under LOCC, and (iii) the reduced function $h$ is strictly concave. We use henceforth the term strict entanglement monotone to distinguish it from the previous notion of entanglement monotone.
With such a spirit, except for $E_2$, $E_{\min}$, $E'_{\min}$, $\hat N$, $\hat N_F$, $\hat N_l$ and the Schmidt number, all the previous entanglement monotones that have been shown to be monogamous, or monogamous on pure states, are strict entanglement monotones; these include the original entanglement of formation, negativity, the squashed entanglement [46], the convex-roof extension of negativity, tangle, concurrence, the relative entropy of entanglement [23], G-concurrence, the Tsallis entropy of entanglement, the conditional entanglement of mutual information [47], and the entanglement measures induced by the fidelity distances, etc. However, it remains unknown whether the non-convex-roof-extended strict entanglement monotones in the literature, apart from the squashed entanglement, are monogamous. We conjecture that all informationally complete entanglement monotones are monogamous.
As a by-product, we can obtain new coherence measures from the reduced functions $h$ of $E_2$, $E_{\min}$ and $E'_{\min}$, respectively. Let $C_h(|\psi\rangle) = h(x_0, x_1, \ldots, x_{d-1})$ for a pure state $|\psi\rangle = \sum_i x_i |i\rangle$ under the reference basis $\{|i\rangle\}_{i=0}^{d-1}$, and extend to mixed states by the convex-roof extension, i.e., $C_h(\rho) = \min \sum_j p_j C_h(|\psi_j\rangle)$, where the minimum is taken over all decompositions $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$. It turns out that (i) $h(1, 0, \ldots, 0) = 0$, (ii) $h(\pi(x_0, x_1, \ldots, x_{d-1})) = h(x_0, x_1, \ldots, x_{d-1})$ for any permutation $\pi$ and any $(x_0, x_1, \ldots, x_{d-1})$, and (iii) $h$ is concave. This reveals that $C_h$ is a well-defined coherence measure according to Theorem 1 in Ref. [48]. Note also that the associated functions $h$ of all the previous coherence measures defined by means of the convex-roof extension are strictly concave, which differ from those underlying $C_h$.
State-of-the-art review: stress T1 mapping—technical considerations, pitfalls and emerging clinical applications
Abstract In vivo mapping of the myocardial T1 relaxation time has recently attained wide clinical validation of its potential utility. In this review, we address the basic principles of T1 mapping techniques, with particular attention to the emerging application of vasodilatory stress agents to interrogate the myocardial microvascular compartment, and to differences between commonly used T1 mapping methods when applied in clinical practice.

Keywords Cardiovascular magnetic resonance · Vascular reactivity · Stress · Tissue characterization · T1 mapping

Introduction: what is T1 mapping?
T1 relaxation time, spin-lattice relaxation time, or simply T1, is the fundamental magnetic resonance property that describes the exponential recovery of the longitudinal component of magnetization (Mz) back towards its thermal equilibrium. In vivo, the recovery of Mz is complex, but characterizing the underlying processes with a single T1 value has shown promise as a biomarker [1]. The measured T1 is determined by intrinsic tissue properties and the extrinsic environment, including the surrounding structure and milieu, as well as the software and hardware used to measure T1. Modern sequences allow direct generation of spatially resolved T1 relaxation maps. T1 mapping of tissues allows immediate assessment of their T1 values on a voxel-by-voxel basis as a method of direct quantitative tissue characterization. In general, each tissue type is expected to have a normal range of T1 values, deviation from which may indicate disease or a change in physiology.

T1 mapping sequences
Myocardial T1 values measured in vivo depend on the chosen method, and are influenced by technical factors, such as magnetic field strength and pulse sequence design, and physiologic factors, including heart rate, temperature, age, gender, and disease [1]. The general design of T1 mapping sequences includes delivery of a pre-pulse and acquisition of multiple T1-weighted images to allow fitting of these signals to an exponential recovery curve. Common sequences used for cardiac T1 mapping are inversion recovery techniques [2-5], saturation recovery techniques [6], and mixed hybrid approaches [7].

Current cardiac T1 mapping techniques evolved from the original Look-Locker spectroscopic method developed in 1970 [8], and provide a time-efficient approach for T1 mapping. As the living heart is a dynamic organ that contracts and relaxes, modification of the original scheme was needed to assure acquisition of sufficient information without sacrificing accuracy and clinical utility. The modified Look-Locker inversion recovery (MOLLI) was developed in 2004 [2] to address this issue by introducing intermittent image acquisition using electrocardiographic (ECG) gating to target a designated phase of the cardiac cycle, and then repeating the inversion experiments after a carefully optimized delay time to obtain adequate information to fit a single exponential T1 recovery curve (Fig. 1a). This sequence scheme significantly advanced T1 mapping for cardiac applications, allowing acquisition of a cardiac T1 map within a manageable 17-heartbeat-long breath-hold.
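A minimal sketch of the curve-fitting step just described, assuming the common three-parameter inversion-recovery model S(TI) = A - B·exp(-TI/T1*) with the Look-Locker correction T1 = T1*·(B/A - 1); the TI values, parameters and noise are synthetic and illustrative only (real reconstructions must also handle magnitude data and signal polarity):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(ti, A, B, t1star):
    # Apparent (Look-Locker) inversion-recovery signal model
    return A - B * np.exp(-ti / t1star)

ti = np.array([100.0, 180, 260, 1100, 1180, 2100, 2180, 3100])  # ms, illustrative
rng = np.random.default_rng(3)
signal = model(ti, 100.0, 190.0, 600.0) + rng.normal(0, 1.0, ti.size)

(A, B, t1star), _ = curve_fit(model, ti, signal, p0=(signal.max(), 2 * signal.max(), 800))
t1 = t1star * (B / A - 1)          # Look-Locker correction to recover T1
print(round(t1star), round(t1))    # apparent T1* vs corrected T1 (ms)
```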
The shortened modified Look-Locker inversion recovery (ShMOLLI, 2010) [3] (Fig. 1b) further addressed several limitations of MOLLI-based cardiac T1 mapping towards practical clinical applications, and has been extensively validated clinically over the past 7 years. In particular, advantages of ShMOLLI include:
1. Short breath-holds: it significantly shortened the breath-hold time to 9 heartbeats (usually around 10 s) per T1 map, making the imaging time easier for sicker patients to cope with [3].
2. Heart rate independence: it eliminated the heart rate dependency characteristic of other MOLLI-based techniques and variants, and is able to cope with tachyarrhythmias, such as rapid atrial fibrillation, frequent ectopic beats, and sinus tachycardia; this is particularly relevant for performing mapping during dynamic heart rate changes, such as for stress applications [3,14,27-29].
3. Flexibility: ShMOLLI is a one-for-all technique for a wide range of T1. In particular, it estimates long T1s without the progressive heart rate-dependent underestimation typical of MOLLI [3]. This is relevant for tissues such as blood (in the range of up to 2000 ms) and fluids such as pericardial effusions, cysts, and cerebrospinal fluid (in the range of 3000-4000 ms). This feature is also important for assessing edematous tissues and for extracellular volume (ECV) quantification, where blood T1 is required. Other novel applications include characterization of masses (e.g. differentiating cysts from solid tumors) [30,31], and splenic T1 to determine stress adequacy, which requires a T1 mapping sequence that can handle both long T1 values and dynamic stress conditions [10,11,29,32].
4. Practicality: ShMOLLI is able to simultaneously estimate short and long T1 pixels in the same image without requiring separate sequence sampling schemes for pre- and post-contrast T1 applications [33]; this makes it highly convenient in the practical workflow for ECV applications. It also allows post-contrast characterization of masses to determine gadolinium uptake, without misclassifying a cyst as a mass that appears to take up gadolinium contrast agents, which may, for example, suggest a vascular tumour on post-contrast T1 maps (Fig. 2).
There are other short MOLLI variants that have also been developed with the aim of shortening imaging times [4,33]. Currently, MOLLI-based sequences are the most commonly used and validated, although saturation-recovery single-shot acquisition (SASHA, SmarT1Map) sequences have attracted much attention due to acceptably short imaging times, nominal lack of heart rate dependency, and excellent accuracy in estimating myocardial T1 times, shown in simulation and in phantoms [34,35]. Hybrid approaches that combine saturation and inversion pulses are also available as emerging techniques for cardiac applications [7].

[Fig. 1 caption (fragment): ShMOLLI at a heart rate of 60 bpm. SSFP readouts are simplified to a single 35° pulse each, presented at a constant delay time TD from each preceding R wave. The 180° inversion pulses are shifted depending on the inversion recovery (IR) number to achieve the desired first TI of 100, 180 and 260 ms in the consecutive IR experiments. The plots below represent the evolution of longitudinal magnetisation (Mz) for short T1 (400 ms, thin lines) and long T1 (2000 ms, thick lines). Note that long epochs free of signal acquisitions minimise the impact of incomplete Mz recoveries in MOLLI so that all acquired samples can be pooled together for T1 reconstruction. In ShMOLLI, the validity of additional signal samples from the second and third IR epochs is determined by progressive nonlinear estimation. As originally published by BioMed Central in Piechnik [3].]
What do T1 measurements bring to clinical practice?
Myocardial T1 mapping methods can be used for native (pre-contrast) T1 mapping, post-contrast T1 mapping, and ECV mapping (a detailed review may be accessed elsewhere [36]). Briefly, native (pre-contrast) T1 reflects a composite signal from both the intracellular space (predominantly myocytes) and the extracellular space (which includes the interstitial and intravascular compartments). T1 predominantly detects free water, and is therefore sensitive to increased free water content in tissue, such as edema or water collecting in expanded interstitial spaces. T1 does not directly detect collagen fibers, but rather the accumulation of water around fibrotic tissue, which typically prolongs native T1 relaxation times and is responsible for the strong indirect links to areas of fibrosis reported in the literature. Processes known to lower T1 times include significant iron and fat content [26,37,38], as well as contrast agents, particularly gadolinium. Isolated, single time-point post-contrast T1 mapping is currently less preferred than ECV estimation, due to strong dependencies on the timing and dose of contrast administered, and other confounding factors [1]. Instead, ECV may be quantified non-invasively using pre- and post-contrast T1 maps to obtain pre- and post-contrast myocardial and blood T1 values, adjusting for the hematocrit.
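As a concrete illustration of this last step, here is a minimal sketch of the standard ECV relationship, ECV = (1 - hematocrit) * (dR1_myocardium / dR1_blood), where R1 = 1/T1. The numeric values below are hypothetical, chosen only to land in a plausible normal range; they are not taken from the cited studies.

```python
# Minimal sketch of ECV quantification from pre- and post-contrast T1 values.
def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    """Extracellular volume fraction from T1 values (ms) and hematocrit (0-1)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre      # change in R1 = 1/T1
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hematocrit) * d_r1_myo / d_r1_blood

# Hypothetical example with illustrative 1.5 T-like values.
print(f"ECV = {ecv(950, 450, 1550, 300, 0.42):.1%}")     # ~25%, a normal-range result
```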
It is important to emphasize that T1 biomarkers are nonspecific and may deviate from their normal ranges due to a variety of causes. In particular, T1 and ECV may act as a surrogate for interstitial fibrosis only if other confounding factors that increase T1 or ECV (including edema, inflammation, and amyloidosis, which expand the interstitial space, as well as ischemia) have been excluded [1,27,28]. Current evidence demonstrates that native myocardial T1 values can be measured within a tight normal range, with clinically relevant sensitivity to changes in a wide range of cardiac diseases [36,39,40]. T1 maps can be displayed using color scales or threshold-based overlay masks to highlight tissue differences and aid visual interpretation [11,13,21,31,41,42] (Fig. 3). Native T1 maps allow differentiation of an increasing range of tissue types without the need for gadolinium-based contrast agents (GBCA).
Principle of gadolinium-free T1 mapping to assess the coronary vascular compartment
Myocardial blood volume (MBV) constitutes ~10% of the total myocardial volume at rest [43], and may increase twofold during coronary vasodilatory stress [44,45]. In healthy individuals with normal myocardium and coronary arteries, there is significant coronary vasodilatory reserve, which can be interrogated by administration of adenosine vasodilatory stress [46]. Coronary vasodilation augments both coronary blood flow and intramyocardial blood volume [45]. Since native blood T1 is much longer than native myocardial T1, blood T1 is expected to increase the measured myocardial T1 through its partial volume effects [9]. This has been shown in normal volunteers, who exhibit a 6% increase in myocardial T1, with narrow normal ranges, during adenosine vasodilator stress using the heart rate-independent ShMOLLI technique.

[Fig. 2 caption: Characterizing tissues with very long T1 values using different T1 mapping techniques. Shown are T1 maps from a patient with a past history of breast cancer. Liver cysts (black arrows) observed with ShMOLLI retain the characteristic very long T1 both pre- (a) and post-gadolinium-based contrast (b), owing to its consistent performance over a wide range of heart rates and T1 values. In (c), the 5(3)3 MOLLI variant pre-contrast T1 map shows ~30% lower T1 in the liver cysts, consistent with the back-loaded 11-heartbeat MOLLI 3(3)5 variant [4]. (d) The post-contrast T1 map using the 4(1)3(1)2 MOLLI variant dedicated to post-contrast applications suffers substantial underestimation of cystic T1 by >70%. Comparing (c) and (d), cystic lesions may appear to take up gadolinium-based contrast agents (GBCAs), which may suggest a tumour with communication to the vasculature, rather than what would be expected for a cyst. T1 is quoted for manual regions of interest drawn within the cysts. Colour tables are identical for all panels shown, as in Siemens ShMOLLI distributions, for ease of comparison.]
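The partial volume mechanism described above can be illustrated with a toy simulation; this is a minimal sketch assuming ideal inversion recovery, linear signal mixing, and illustrative native T1 values of 950 ms (myocardium) and 1550 ms (blood), none of which are taken from the cited studies.

```python
# Toy partial-volume simulation: mixing more blood signal into a voxel raises
# the apparent single-compartment T1 fitted to the composite recovery curve.
import numpy as np
from scipy.optimize import curve_fit

def ir(ti, t1):
    """Ideal inversion-recovery signal for a single compartment."""
    return 1.0 - 2.0 * np.exp(-ti / t1)

ti = np.linspace(50, 5000, 60)                    # inversion times (ms)
for blood_fraction in (0.10, 0.20):               # rest vs. roughly doubled MBV
    mixed = (1 - blood_fraction) * ir(ti, 950.0) + blood_fraction * ir(ti, 1550.0)
    (t1_app,), _ = curve_fit(ir, ti, mixed, p0=(1000.0,))
    print(f"blood fraction {blood_fraction:.0%}: apparent T1 = {t1_app:.0f} ms")
```

Increasing the blood fraction from 10% to 20% raises the fitted apparent T1, mirroring the direction of the stress response described in the text.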
Stress and rest T1 mapping in coronary artery disease (CAD)
Stress T1 mapping has obvious potential applications in patients with CAD and ischemic heart disease [28,47]. Liu et al. [28] demonstrated that, in normal myocardium, the resting T1 is normal, with a 6% rise during vasodilatory stress. In chronically infarcted myocardium, the resting T1 is typically significantly elevated compared to normal myocardium, with no change in T1 during stress (Fig. 4). In ischaemic myocardium subtended by a significant coronary stenosis, there is compensatory downstream coronary vasodilation even at rest; this is detectable as mildly elevated resting myocardial T1 values, but such territories show no further coronary vasodilatory response during stress and, thus, no change in stress myocardial T1. Adenosine stress and rest T1 mapping may therefore be used to distinguish normal, infarcted, and ischaemic myocardium, without the need for GBCA, due to their distinctive rest and stress T1 profiles [28] (Fig. 4).
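These distinctive rest/stress profiles can be summarized as a simple decision rule. The sketch below is a hedged illustration only: the thresholds are placeholders of my own choosing, not values from the cited study, since real cut-offs depend on field strength, sequence, and site-specific normal ranges.

```python
# Toy classifier based on the rest/stress T1 profiles described in the text.
def classify_myocardium(t1_rest, t1_stress,
                        normal_rest_upper=990.0,    # placeholder threshold (ms)
                        infarct_rest_lower=1100.0,  # placeholder threshold (ms)
                        reactivity_cutoff=0.03):    # placeholder (~half of the 6% rise)
    """Return a qualitative profile label from rest T1 and relative reactivity."""
    delta = (t1_stress - t1_rest) / t1_rest         # relative stress T1 response
    if t1_rest >= infarct_rest_lower and delta < reactivity_cutoff:
        return "chronic infarct-like profile: markedly elevated rest T1, no response"
    if t1_rest > normal_rest_upper and delta < reactivity_cutoff:
        return "ischaemia-like profile: mildly elevated rest T1, abolished response"
    if t1_rest <= normal_rest_upper and delta >= reactivity_cutoff:
        return "normal profile: normal rest T1 with preserved stress response"
    return "indeterminate profile: review"

print(classify_myocardium(950.0, 1008.0))           # normal-like example
```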
Stress and rest T1 mapping in patients without obstructive CAD
Adenosine stress and rest T1 mapping may also be used to assess coronary vasodilatory reserve in patients without obstructive CAD. For instance, in patients with type 2 diabetes without obstructive CAD, early data have shown blunted stress T1 responses compared to controls, which may reflect microvascular dysfunction [48,49]; this is a subject of further investigation. In patients with severe aortic stenosis but no obstructive CAD on invasive angiography, the increased demands of the pressure-overloaded and hypertrophied myocardium are accompanied by increased resting coronary blood flow and vasodilation [50-52]. This is detectable as elevated resting myocardial T1 with a similar maximal adenosine stress T1 compared to normal controls, such that the relative stress T1 response is blunted [27]. This impaired stress T1 response normalizes 7 months after relief of the pressure overload with aortic valve replacement [27] (Fig. 5). This finding supports the notion that, in severe aortic stenosis, increased resting myocardial T1 may mainly reflect changes in the intravascular compartment, rather than solely diffuse myocardial fibrosis in the interstitial compartment as previously believed, although these two processes likely co-exist in this disease model. Other investigators have explored stress T1 mapping as a surrogate marker for myocardial blood volume change in heart transplant recipients [48,49]. We believe that stress T1 mapping holds promise for assessing coronary microvascular function and vasodilatory reserve in a number of cardiomyopathies as emerging clinical applications.

[Fig. 3 caption: T1 maps using incremental thresholds demonstrate the predominantly non-ischaemic pattern of injury across a spectrum of acute myocarditis. Red indicates areas of myocardium with a T1 value above the stated threshold in a contiguous area of at least 40 mm². A T1 threshold of 990 ms was previously validated for the detection of acute myocardial oedema; other thresholds were selected for illustrative purposes. As originally published by BioMed Central in Ferreira et al. [13].]

[Fig. 4 caption: Myocardial T1 at rest and during adenosine stress at 1.5 T. (a) T1 values at rest in normal and remote tissue were similar and significantly lower than in ischemic regions. Infarct T1 was the highest of all myocardial tissue, but lower than the reference left ventricular blood pool of patients. During adenosine stress, normal and remote myocardial T1 increased significantly from baseline, while T1 in ischemic and infarcted regions remained relatively unchanged. (b) Relative T1 reactivity (δT1) in the patients' remote myocardium was significantly blunted compared to normal, and completely abolished in ischemic and infarcted regions. All data indicate mean ± 1 SD. *p < 0.05. As originally published by Elsevier in Liu [28].]

[Fig. 5 caption: Proposed myocardial water compartments in aortic stenosis: proposed changes in myocardial water compartments at rest and stress in patients with aortic stenosis pre- and post-AVR, and controls (left). The T1 response to adenosine was mainly contributed by vascular responses rather than interstitial space expansion, which may be negligible. Note that T1 and volumes from vascular cross-sections are for qualitative comparison only and not to scale. As originally published by BioMed Central in Mahmod et al. [27].]
Splenic T1 mapping: a novel surrogate marker for adequate adenosine stress
Stress adequacy is an integral component of the cardiac stress examination, and may impact diagnostic confidence, especially for ruling out significant obstructive CAD. Recently, stress T1 mapping of the spleen has been shown to be a promising novel approach for assessing adenosine stress adequacy before stress perfusion cardiovascular magnetic resonance (CMR) imaging [29] (Fig. 6). Whilst adenosine stress induces vasodilation in the coronary arteries, it simultaneously induces vasoconstriction in the spleen. This manifests as the "splenic switch-off" sign, which can be seen on nuclear stress perfusion [53] as well as CMR gadolinium-based perfusion imaging, and may serve as a marker of adenosine stress adequacy [54]. On CMR perfusion images, the spleen is typically visible in the field of view; during peak adenosine stress the spleen appears dark ("switched off") compared to rest perfusion images, in which the spleen appears bright (as it takes up GBCA). The lack of "splenic switch-off" has been observed more often in false-negative perfusion CMR scans when compared to invasive coronary angiography for detecting significant CAD [54]. One limitation of the gadolinium-based "splenic switch-off" sign is that, to visualize this phenomenon, GBCA must already have been administered for first-pass perfusion imaging, leaving no opportunity to optimize the adenosine stress protocol on the fly. Splenic T1 mapping, on the other hand, does not require GBCA, and the splenic vasoconstriction associated with adenosine stress significantly decreases splenic T1 values, which can be conveniently detected on stress T1 maps, typically without additional planning [29]. This provides a pre-emptive opportunity to increase and/or prolong adenosine administration to achieve adequate adenosine stress before acquiring stress images, increasing diagnostic confidence. Splenic T1 mapping is undergoing further validation for this indication.
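The idea lends itself to a simple adequacy check; the sketch below is a hedged illustration in which the 5% drop threshold is a placeholder of my own choosing, not a validated cut-off from the cited work.

```python
# Toy stress-adequacy check: adenosine-induced splenic vasoconstriction lowers
# splenic T1, so a sufficient relative drop can flag adequate stress.
def adequate_adenosine_stress(splenic_t1_rest, splenic_t1_stress, min_drop=0.05):
    """True if splenic T1 fell by at least min_drop (relative) during stress."""
    drop = (splenic_t1_rest - splenic_t1_stress) / splenic_t1_rest
    return drop >= min_drop

print(adequate_adenosine_stress(1200.0, 1080.0))   # True: a 10% splenic T1 drop
```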
Pitfalls of stress T1 mapping
The impact of the chosen T1 mapping technique on rest and stress T1
It is widely recognized that T1 mapping techniques, even within a method family like MOLLI-based sequences, have different properties and diverging norms [1,55]. The impact of this issue is particularly apparent for stress T1 applications, as illustrated by two recent studies that used different techniques to perform adenosine stress T1 mapping: using ShMOLLI, Liu et al. [28] reported stress T1 responses of 6.2 ± 0.5% at 1.5 T and 6.3 ± 1.1% at 3 T [28], while Kuijpers et al. obtained a lower stress T1 response with larger standard deviations of 4.3 ± 2.8% in controls at 1.5 T [47]. In CAD patients with remote myocardium, Liu et al. noted a blunted T1 response of 3.9 ± 0.6%, while Kuijpers et al. reported lower averages of 2.6 ± 3.4%. Similarly, another group of investigators reported that MOLLI 5(3)3 achieved a stress T1 response of 3.3% (1.5 T) and 4.4% (3 T) [47,56]. Recently, stress T1 responses using regadenoson showed reactivity similar to those previously reported after adenosine administration [57].
It is encouraging that the stress T1 response can be elicited using more than one T1 mapping technique, by independent groups of investigators, and with different stress agents. At the same time, the fact that the ShMOLLI stress T1 response is larger by >40% than that using MOLLI 5(3)3, based on the published numbers above, deserves attention and discussion. Conventionally, CMR methods that compare images before and after an intervention (such as administration of a stress agent or GBCA, as in perfusion imaging and ECV mapping) within the same subject in a single scan session may improve inter-individual and inter-center consistency of the imaging biomarker. However, the same cannot be said for stress T1 mapping based on current limited evidence. In the two studies cited above [28,47], the resting T1 values (955 ± 17 ms in Liu et al. and 977 ± 40 ms in Kuijpers et al.) differ by only ~2% between the two techniques. In contrast, the inter-methodological differences in stress T1 responses differ by ~40% (approximately 6 vs. 4%, respectively), i.e., a roughly 20-fold worse agreement than for resting T1 values. Potential reasons for discrepancies in the stress T1 response between these two techniques may include factors such as the selection of patients (with normal findings) as controls in the MOLLI study, differences in control ages, adenosine stress duration, adequacy, and maximal heart rate achieved; however, the technical differences between the ShMOLLI and MOLLI 5(3)3 T1 mapping techniques and their impact on the stress T1 response also require further consideration, as discussed below. Early standardization may be even more important for stress applications than for native resting T1.
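The arithmetic behind this comparison is worth making explicit; the values below are the figures quoted in this section, and the script is a check of that arithmetic only.

```python
# Check of the rest vs. stress agreement comparison using the quoted figures.
rest_shmolli_ms, rest_molli_ms = 955.0, 977.0
rest_diff = (rest_molli_ms - rest_shmolli_ms) / rest_shmolli_ms   # ~2.3%
react_shmolli, react_molli = 6.2, 4.3       # % stress T1 response in controls
react_diff = (react_shmolli - react_molli) / react_molli          # ~44% larger
print(f"rest T1 difference:        {rest_diff:.1%}")
print(f"stress response difference: {react_diff:.0%}")
print(f"agreement ratio:           ~{react_diff / rest_diff:.0f}-fold")
# Prints roughly the 20-fold figure discussed above.
```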
The impact of heart rate variation on stress T1 mapping
ShMOLLI, with a sampling scheme of 5(1)1(1)1, is "front-loaded" by acquiring most samples upfront and is heart rate-independent due to its in-built conditional reconstruction algorithm [3,28]. This is in contrast to earlier MOLLI techniques [58], which tend to be "back-loaded", with most samples acquired at the end of the sampling scheme, such as the classic MOLLI 3(3)3(3)5 design [2]. The MOLLI 5(3)3 variant also aimed to reduce heart rate sensitivity by front-loading [59], but does not ultimately eliminate it, as all data are used to fit a single model, regardless of whether the recovery epochs are adequately long (see Fig. 2). Given that stress T1 responses are relatively small, even a slight degree of residual heart rate sensitivity in the myocardial T1 range can impact the observed stress T1 reactivity. Figure 7 illustrates the mechanism of stress T1 underestimation using the 11-heartbeat 3(3)5 MOLLI due to heart rate dependency, based on data published by an independent group of investigators [4]. The most recent MOLLI variant, using a sampling scheme of 5s(3s)3s, has been proposed to further reduce heart rate sensitivity [60]. However, even within the relatively limited range of T1 (0-1200 ms) validated for this technique (further details in Fig. 9 in [60]), the T1 and heart rate dependence are still evident. It remains unclear what proportion of the significant T1 underestimation seen in wider T1 ranges beyond 1200 ms, as reported by other studies for classic MOLLI [3,4], may remain for MOLLI 5s(3s)3s.
SASHA T1 mapping is heart rate-independent under optimal conditions, although there have been no reports of its application during dynamic stress thus far. It remains to be tested in practice, by further comparisons, whether the lower signal-to-noise ratio (SNR) known for this technique and imperfect saturation conditions may introduce confounds, based on the known error dependencies (shown in Figs. 2, 3, 4 in the original paper [6]).
[Fig. 7 caption: Mechanism for the impact of heart rate sensitivity on the measured stress T1 responses using MOLLI variants. MOLLIs generally underestimate T1, hence all coloured lines are under the unity line (grey dotted). ShMOLLI has no heart rate (HR) dependence, and behaves like the HR 40 (dark blue) line across the HR range of 40-100 beats per minute. As a result, when myocardial T1 increases during vasodilatory stress (solid blue arrow, x-axis), this corresponds to simply moving along a single linear relationship (dark blue HR 40), and preserves the relative size of the T1 response (6%). The MOLLI 3(3)5 variant [4] illustrated here is HR dependent. Thus, when myocardial T1 increases during vasodilatory stress, the transition involves simultaneously switching between HR-dependent relationships (red arrow "HR"). This results in a lower ~4% stress T1 response using the MOLLI 3(3)5 variant.]

Factors other than heart rate that impact on stress T1 mapping
There are factors other than heart rate that may impact on the stress T1 response for a T1 mapping technique, which include T1 sensitivities to T2, magnetization transfer (MT) effects, and breath-hold duration and motion during stress conditions.
With regard to T2 sensitivities and MT effects, these properties, which confer on MOLLI-based techniques their recognized sensitivity for detecting disease [35], are likely to enhance their sensitivity to the stress T1 response, as elicited by ShMOLLI. Assuming that the underlying mechanism of the stress T1 response is mainly related to an increase in blood volume, the increased water content will directly affect MT and T2 to synergistically increase the measured T1. Further, the stress ShMOLLI T1 response is likely to be enhanced by residual sensitivity to T2 elevations due to underlying BOLD (blood oxygenation level dependent) effects. The surplus BOLD response is characteristic of normal vascular reactivity [61-63] and will also accentuate the contrast with pathological changes. Conversely, while stress T1 mapping has not been reported using saturation-recovery techniques, their lack of MT and T2 dependencies will likely diminish the stress T1 response, especially given the higher variability (noise) typically seen in T1 estimation using saturation-recovery methods [6,35,64]. Improved inversion pulses in recent MOLLI variants target the accuracy of T1 by reducing T2 and MT sensitivities, which may be paradoxically detrimental to their sensitivity for detecting stress T1 responses [35].
For the more recent MOLLI variants that use sampling schemes measured in seconds (rather than in heartbeats) [60], at increased heart rates there will be more image acquisitions per inversion recovery experiment, and more energy deposited into bound proton pools. For example, MOLLI 5s(3s)3s would deploy as MOLLI 5(3)3 at a heart rate of 60 beats per minute, but at 120 beats per minute MOLLI 10(6)6 would be deployed, with twice as many readouts. Thus, MT sensitivity between the actual variants deployed at rest and during stress is likely to differ significantly. Exactly what happens is largely academic, as clinical application is likely to be more impacted by the breath-hold requirements. Human subjects undergoing dynamic stress using the 5s(3s)3s sampling scheme would need to hold their breath typically for 12-14 s, longer than for classic MOLLI under the same stress conditions. Kuijpers et al. [47] reported substantial motion artefacts using MOLLI 5(3)3 for stress T1 mapping, and these are likely to worsen using MOLLI 5s(3s)3s due to the longer breath-hold requirements. Recent studies agree that motion remains a substantial concern for MOLLI acquisitions in stress applications, and could not be overcome by inline motion correction (MOCO) [47,56].
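The deployment logic for seconds-based schemes is simple to illustrate; the sketch below assumes that each epoch specified in seconds is realized as the nearest whole number of heartbeats, which is my simplification of the scheme described above.

```python
# How a seconds-based MOLLI sampling scheme deploys at different heart rates.
def deploy_scheme(seconds_scheme, heart_rate_bpm):
    """E.g. deploy_scheme([5, 3, 3], 120) -> [10, 6, 6] heartbeats."""
    rr = 60.0 / heart_rate_bpm                     # R-R interval in seconds
    return [round(sec / rr) for sec in seconds_scheme]

for hr in (60, 90, 120):
    print(f"{hr} bpm: MOLLI epochs = {deploy_scheme([5, 3, 3], hr)}")
# 60 bpm reproduces 5(3)3, while 120 bpm yields 10(6)6 with twice the readouts.
```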
It would be difficult to explain the observed dependencies quantitatively by simple partial-volume relaxivity summation. We draw attention to the complex spectrum of blood T1, T2, and volumetric reactivity between various vascular compartments in the brain (details in Figs. 11-14 in [61], supplemental material). Briefly, these demonstrate very significant differences in the baseline values and stress reactivity of blood T1 and T2 in various vascular compartments in the brain, with a disparately small blood volume attributed to the vaso-reactive arterial component when compared to the capillary and venous beds. These effects in the brain have been studied in response to CO2 administration, but not with adenosine or the dynamically changing tissue stress that occurs with each heartbeat. These factors are important, as the dynamics of compartmental volume redistribution depend on the time scales and types of stimuli in the brain [65]. Given the challenges of gathering similar data for the heart, ultimately the diagnostic performance of a method to study the heart during stress conditions will boil down to clinical evidence and independent head-to-head comparisons in clinical practice [64]. Computer simulations and phantom experiments, while helpful as initial guides to assess a new method, may not fully replicate or account for factors encountered in the in vivo environment [64], especially in a complex and dynamic organ like the human heart.
Future directions and implications
Stress and rest T1 mapping is a novel technique with the potential to assess ischaemia, coronary vasodilatory reserve, and the health of the coronary microcirculation, without the need for GBCA. T1 mapping is a nascent field, and the exact biological mechanisms of native and stress T1 signals in various conditions have not been fully elucidated. The effects of other modalities of stress, including exercise and pharmacological agents, as well as other modulators of vascular reactivity on T1, may be explored to fully determine its clinical applicability. Stress T1 mapping is an active area of scientific development, including validation against quantitative perfusion measures, invasive coronary measurements, and diagnostic performance in a variety of cardiac conditions [55,66-70]. Over time, collective evidence will allow better understanding of the mechanisms behind the observed changes for this emerging technique and its clinical utility in a wider patient population, including those with contraindications to GBCA. | 2018-02-16T23:07:29.281Z | 2017-09-15T00:00:00.000 | {
"year": 2017,
"sha1": "fc0a30598353f429ba3377ae2e0c278fd26bfc85",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10334-017-0649-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a699ba5295b76a74dd89ec1ba57d98674b9843c0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25861003 | pes2o/s2orc | v3-fos-license | Comparability of Internet and telephone data in a survey on the respiratory health of children
1 Direction de la Santé Publique de Montréal; 2 Université de Montréal, Département de médecine sociale et préventive; 3 Clinique interuniversitaire de santé au travail et de santé environnementale, Institut thoracique de Montréal, Montréal, Québec. Correspondence: Ms Céline Plante, Direction de la Santé Publique de Montréal, 1301 rue Sherbrooke Est, Montréal, Québec H7L 3L1. Telephone 514-528-2400 ext 3285, fax 514-528-2459, e-mail cplante@santepub-mtl.qc.ca
Background: Mixing survey administration modes has generated concern about the comparability of responses between modes. Objective: To explore the differences in respondent profiles and responses between Internet and telephone questionnaires in a survey on respiratory diseases. Methods: The data were generated from a mixed Internet and telephone survey of respiratory diseases among children in Montreal (Quebec), in 2006. Comparison of 12 selected questions was performed after standardization for respondent education and income. Stratification of the analysis on education and income categories was also performed for the questions with significantly divergent responses. Results: Six questions showed significant differences in responses between modes after standardization. The largest differences among the closed-ended questions were observed for highly prevalent symptoms: dry cough during the night (difference of 9% for positive answers [P<0.01]) and symptoms of allergic rhinitis (difference of 7% for positive answers [P<0.01]). A large discrepancy was also found in the multiple-choice question with an open-ended response item (ie, free answer). For the three potentially sensitive questions, a desirability bias was probably present in one question on smoking habits (difference of 2.6% for positive answers [P<0.05]). Conclusion: The differences observed between Internet and telephone responses to selected questions were not completely explained by socioeconomic disparities among the respondents. In a mixed-mode survey (Internet and telephone), caution should be used when formulating sensitive, complex, open-ended and long questions, and those related to highly prevalent and nonspecific symptoms.
Response rates to surveys, particularly telephone surveys, have been declining in the past 30 years (1,2) and, for general population surveys in North America, have been reported to be approximately 50% to 60% (1,3). This decline is caused by a growing refusal to participate and difficulties in contacting individuals (2) due to the increased use of answering machines, call screening devices (3) and cellular telephones (4). Because low response rates may affect survey validity (3,5-7), supplementary efforts are often necessary to maintain response rates in the acceptable range, which results in increased survey costs (3,6,8). The Internet, alone or in combination with other survey modes, has been proposed as a means to both control costs and improve response rates (9,10). Internet questionnaires are usually less costly to administer than postal questionnaires (11), and can be completed at any time, regardless of where the respondents are located. However, the Internet is seldom the only mode of choice because Internet access is still not universally available (68% in 2006, and 76% in 2009 in North America [12]), and there is usually no available list of all users that would enable researchers to obtain a random sample of subjects. However, concern has been expressed that information from two or more sources of data within a single study may not be comparable. Several authors have reported differences in responses between self-administered questionnaire (mail, computer or Internet) and interview (telephone or face-to-face) modes (3,8,13,14). Two main factors could explain these discrepancies: socioeconomic heterogeneity among respondents between the different modes, and differences related to the modes themselves (8). Regarding the former, individuals with access to the Internet differ from those who do not have access with respect to socioeconomic status, which is an important variable frequently associated with health outcomes. Regarding the latter, reading questions instead of listening to them read aloud on the telephone, answering at one's own pace instead of being pressed to, choosing responses among multiple choices when reading versus listening, or answering intimate or sensitive questions posed by an interviewer versus a computer are among the situations in which answers may not be entirely comparable. Indeed, several studies have highlighted differences in answers to sensitive or other types of questions elicited by Internet, computer or mail versus telephone questionnaires (9,15,16), even after adjustment for selection bias or in the absence of socioeconomic variation (11,13).
The aim of the present survey was to quantify how the prevalence rates of respiratory diseases (asthma, rhinitis and/or infections) among children varied across the Island of Montreal (Quebec) and to identify the factors associated with their distribution. A mixed-mode survey using the Internet and telephone was chosen for the present study given the target group (young families familiar with the Internet) and the desire to reach the various socioeconomic groups.
Given the reported associations between socioeconomic status (SES), asthma rates and access to the Internet, we aimed to assess the comparability of data collected using these two modes. The main objective of the present study was to examine the differences in respondent profiles and the distributions of responses collected from the Internet and the telephone questionnaires. A secondary objective was to study the effect of contact methods (mail only versus both mail and telephone) and SES on mode selection by respondents.
Survey
The present study targeted children six months to 12 years of age living on the Island of Montreal. The survey was conducted by a private firm during the spring and summer of 2006. A probability sample of 17,661 names and addresses of targeted parents was obtained from the administrative database of the Régie de l'Assurance Maladie du Québec (RAMQ, Provincial Health Insurance Board). Through automated and subsequent manual processes, 12,678 addresses (72% of the initial sample) could be matched to a telephone number. Families were contacted using two procedures: a letter was first sent to each family in the sample, and telephone calls, when a number was available, were made subsequent to the letter. The letter invited parents or legal guardians either to answer the questionnaire directly on the Internet or to contact the firm by telephone. Interviewers initially offered the respondent an opportunity to complete the questionnaire immediately or later by telephone, or on the Internet. A personal identification number included in the letter permitted secure access to a dedicated Internet site to complete the online questionnaire. A minimum of 10 calls were made to contact respondents. In an attempt to improve the response rate, a different interviewer made additional callbacks, and a second letter was sent to families for whom no telephone number was identified. The Internet and telephone questionnaires were identical and completed during the same period. On average, the telephone survey lasted 23 min; the duration of the Internet survey is unknown. There were 300 questions that focused on the child's current and past respiratory health, use of medication and health care services, family history of allergies, home environmental exposures, and sociodemographic, household and lifestyle factors.
The survey was completed by 7964 respondents, of whom 4155 (52.2%) answered on the Internet and 3809 (47.8%) by telephone. The response rate for the group for which a telephone number could be paired to an address was 71%, and approximately 30% for the remaining group without an available telephone number. This approximation was made on the assumption that the proportion of valid unreturned mailouts, which is unknown but needed to compute the response rate, is the same as the proportion of valid telephone numbers (which is known).
The study protocol was approved by the Montreal Public Health Department Human Subjects Research Ethics Committee. Consent was sought from all participants before they completed the questionnaire, and all were assured that the collected information would remain confidential and anonymous.
Analysis
Socioeconomic characteristics of the respondents, including sex, country of birth, education and family income, were compared according to survey mode. The socioeconomic characteristics of Internet respondents were also compared according to contact procedure (mail contact only versus mail and telephone). The χ2 test was used to measure the significance of differences in response distributions. Log-binomial regression, suitable for binary responses, was used to study the effect of socioeconomic characteristics and contact procedure on the choice of survey mode.
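To make the modelling step concrete, the following is a minimal sketch of a log-binomial regression for the probability of choosing the Internet mode. The data are synthetic and the variable names (high_educ, mail_only) are hypothetical stand-ins for the study's covariates; note that log-binomial models can fail to converge, in which case a Poisson working model with robust standard errors is a common fallback.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data whose true probabilities stay below 1, which helps the
# log-link binomial fit converge.
rng = np.random.default_rng(0)
n = 500
high_educ = rng.integers(0, 2, n)       # hypothetical covariate
mail_only = rng.integers(0, 2, n)       # hypothetical covariate
p = np.exp(-1.2 + 0.35 * high_educ + 0.40 * mail_only)
internet = rng.binomial(1, p)           # 1 = answered on the Internet

X = sm.add_constant(np.column_stack([high_educ, mail_only]))
fit = sm.GLM(internet, X,
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(np.exp(fit.params))  # exponentiated coefficients are prevalence ratios
```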
A subset of questions representing diverse task complexities and important outcomes, some pertaining to potentially sensitive issues (related to smoking habits during pregnancy, the presence of pest animals, and breast- or formula-feeding of newborns), was selected. One question with multiple-choice responses and one with an open-ended response (ie, free answer) were also included. For the question with an open-ended response, which pertained to changes in behaviour or home modifications made to alleviate the child's asthma symptoms, the seven most frequent answers were retained for analysis. All questions offered a nonresponse option. The differences between survey modes were tested using the χ2 test. Nonresponse frequencies were also compared.
Because differences in responses between modes may be attributable to dissimilarity in the socioeconomic characteristics of respondents, the responses were standardized according to two categories of family income and two categories of respondent education, based on the entire sample's proportions in those categories. The cut-off point for family income was $35,000/year, and the education level was classified as below versus at/above high school. The education level or family income was not available for 861 respondents, resulting in 7103 completed questionnaires.
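A minimal sketch of the direct standardization described above follows; the stratum weights and stratum-specific rates are hypothetical illustrations, not values from the study.

```python
# Direct standardization: re-weight stratum-specific positive-response rates
# by the whole-sample distribution of the education/income strata.
sample_weights = {"low_ed/low_inc": 0.20, "low_ed/high_inc": 0.15,
                  "high_ed/low_inc": 0.25, "high_ed/high_inc": 0.40}

def standardized_rate(stratum_rates, weights=sample_weights):
    """Weighted average of stratum rates using the whole-sample weights."""
    return sum(weights[s] * r for s, r in stratum_rates.items())

internet = {"low_ed/low_inc": 0.58, "low_ed/high_inc": 0.55,
            "high_ed/low_inc": 0.52, "high_ed/high_inc": 0.50}
telephone = {"low_ed/low_inc": 0.47, "low_ed/high_inc": 0.46,
             "high_ed/low_inc": 0.44, "high_ed/high_inc": 0.43}
diff = standardized_rate(internet) - standardized_rate(telephone)
print(f"standardized mode difference = {diff:.1%}")
```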
If initial differences in responses to a question between the Internet and telephone survey modes were statistically nonsignificant after standardization, this suggested that the differences were related only to variations in the socioeconomic characteristics of respondents according to survey mode. If significant differences remained after standardization, the analysis was further stratified according to four education/income strata to investigate possible trends in responses. To identify the most probable factors explaining the remaining differences, we examined the observed trends across socioeconomic strata (eg, a difference observed only in the lowest socioeconomic stratum) and the proportions of positive answers between the two modes, taking into account known factors from the literature (eg, a factor known to produce more positive answers among Internet respondents) and details of the survey methods (eg, possible influence of the interviewer).
Results
Respondents' socioeconomic profile according to survey mode
Table 1 summarizes the distribution of family income, respondents' education, sex and country of birth according to response mode. A clear trend can be observed: Internet respondents had higher incomes and education levels compared with telephone respondents. There was a slight statistical difference in respondent sex, with men responding more frequently by telephone. The distribution of respondents' origin indicated that immigrants, except those born in Europe, were less prone to answer on the Internet. This was especially true for respondents born in Africa, the Caribbean and the Bermudas.
Profile of Internet respondents according to contact procedure
Among the 1516 respondents who were contacted by mail only, 435 called the survey firm back and were interviewed, and 1081 completed the Internet questionnaire. This group of Internet respondents had different socioeconomic characteristics from the group of Internet respondents for whom a telephone number was available (n=3074) (Table 2). In fact, the former group had a lower family income and education level, and comprised more immigrants (Table 2).
Factors influencing the choice of survey mode
Table 3 summarizes the results of a regression model reporting the probability of choosing the Internet instead of the telephone, according to selected respondent characteristics and contact mode. The regression analysis showed that a high education level and being contacted by mail only were the strongest factors among those significantly associated with a preference for the Internet. Higher family income, female sex, and coming from Canada or the United States were less strongly associated.
Response to questions according to survey mode
The frequency of item nonresponse (ie, refusal or don't know) for the selected questions was low regardless of mode, varying from 0% to 4.8% (Table 4). Nevertheless, for 11 of the 12 questions, the nonresponse rate was higher in the questionnaires completed on the Internet, the greatest difference being 2.9% (question [Q] 7).
Six of the 12 questions showed significant differences in responses between modes after standardization (Table 4). Among the eight nonsensitive simple 'yes/no' questions (Q1 to Q8 of Table 4), three displayed significant differences in response frequencies. Q2, on dry nocturnal cough, and Q6, on allergic rhinitis symptoms, showed relatively large discrepancies between modes, with 8.9% and 7.0% more positive answers, respectively, in the Internet questionnaires, while Q1 showed only a slight difference (2.2%).
For the potentially sensitive questions (Q9, Q10 and Q11), Q9 (concerning mother smoking during pregnancy) demonstrated a higher positive response rate among Internet respondents (15.9% for Internet versus 13.3%), and Q10, concerning the presence of pest animals or insects in the house, showed the opposite trend (18.0% for Internet versus 20.4%). These differences were slight but significant. For Q11 (on breastfeeding), the differences were not significant.
The answers to question Q12, concerning home or behaviour changes made because of the child's asthma, showed important differences between modes. The respondent could choose more than one answer among the first four items given, and other answers of his or her own (open-ended response), up to a maximum of five items. The first four items (listed choices) were more frequently chosen by Internet respondents, while the open-ended response was more frequently chosen by the telephone respondents, with the total number of different changes made in the home higher in this group.
The results of the stratified analysis according to education and income categories for the questions showing significant differences are presented in Table 5, except for Q12, for which the number of responses per item was too small for stratification. For three questions (Q1, Q9 and Q10), differences between survey modes were not significant for three or all four categories of stratification. For the first question, on wheezing, differences within strata seemed more important in respondent groups with low education, but not significantly so. The same was observed for Q9 (smoking during pregnancy), with the difference being significant for the lowest education-income category. There was no clear trend observed for Q10 (presence of pest animals or insects). For Q2 (dry cough during the night) and Q6 (symptoms of allergic rhinitis), differences between survey modes were significant in three or all four categories. The percentage of positive answers was systematically higher among Internet respondents for these two questions, and the difference was more important among the low education-income categories.
Discussion
When tasked with analyzing data obtained by telephone and Internet in the same survey, one should verify whether they are comparable. This is not an easy task when respondents to the two modes have socioeconomic disparities, which add to potential errors due to the modes themselves, all possibly affecting the comparability of the data and validity. This was especially true in the present survey because asthma and the other respiratory diseases are known to be associated with SES. The main objective of the present analysis was to study the differences in respondent profiles and the distributions of responses between the Internet and telephone questionnaires. A secondary objective was to investigate the effect of contact procedure (mail only versus both mail and telephone) and SES on mode selection by respondents.
It came as no surprise that the socioeconomic characteristics of respondents in the present survey differed between those who opted for the Internet and those who opted for the telephone. Other studies have shown an association between administration mode and ethnic origin, age, education and income (13,17). In the present study, Internet respondents were typically more educated, had a higher income and were generally less likely to be immigrants. Among the Internet respondents, however, there were markedly more immigrants in the group contacted by mail only, compared with the group also contacted by telephone, especially those from Central and South America, Europe and Africa. This may indicate that these immigrants tried to avoid the use of a language that they did not speak fluently. It may also have been due to a greater number of addresses not paired with a telephone number among immigrants, who are more frequently tenants and move residences more frequently (data not shown).
Regression analysis showed that a high level of education had the strongest influence on the choice of the Internet, even more than family income. However, the contact procedure had a stronger effect: those contacted by mail only were 53% more likely to respond by Internet than those receiving an interviewer's call(s) following the letter(s). In other words, if reached by telephone, respondents tended to complete the questionnaire immediately over the telephone, while those contacted by mail only tended to go online instead of calling the survey firm. This is probably due to the pressure exerted by the interviewers to obtain a completed questionnaire as soon as possible. In interpreting this result, one should keep in mind that some respondents had no choice because they did not have access to the Internet (unfortunately, we do not know who lacked such access). The rate of nonresponse (ie, don't know/refusal) was higher in the questionnaires completed on the Internet. Fricker and Schonlo (18) reported similar results in a literature review on Internet surveys. Greene et al (9) reported more nonresponse in Internet surveys for complex questions. It is quite possible that the telephone interviewers inadvertently pressed the respondents to answer or provided explanations that helped the respondents to answer.
Five of the 11 closed questions chosen for comparison showed differences according to survey mode after standardization on income and education. Two of the three potentially sensitive questions showed significant differences between modes (smoking during pregnancy and pest animals or insects), these probably being the most sensitive. For smoking during pregnancy, the analysis showed that positive answers were more frequent among Internet respondents, and the stratification revealed that this trend was more pronounced in the two lowest education-income categories, thus suggesting a desirability bias. Regarding the question on pest animals, the differences observed do not suggest the presence of a desirability bias. Other authors have found that self-administered questionnaires are more efficient for sensitive questions than interviews (8,13,15).
As reported by Fricker et al (13), differences between survey modes increase with complexity, such as in open-ended questions. This was the case with the question on behaviour or home modifications made to alleviate the child's asthma symptoms. The open item was more frequently used by telephone respondents, while the four listed choices were more frequently chosen by Internet respondents, especially those related to smoking habits. By systematically asking for an additional open item, the telephone interviewers may have increased the responses to such an item.
The largest differences among the closed-ended questions were observed in the ones pertaining to dry cough during the night (8.9%) and symptoms of allergic rhinitis (7.0%). Positive answers were systematically more frequent among Internet respondents in all education and income strata. These symptoms are very frequent, the prevalence of dry cough in the present study being approximately 50%, and that of rhinitis symptoms approximately 30%. In the subquestion asking whether the coughing or wheezing woke the child during the night, the difference between the two modes was no longer significant. This may indicate that this more specific question was less influenced by various factors, interpretations, and time to respond. Brøgger et al (19) also found greater discrepancies for questions on coughing when comparing respiratory symptoms and risk factors between telephone interviews and postal questionnaires in the same respondents. Regarding the questions on symptoms of allergic rhinitis, it is also possible that reading the question, instead of listening to it, modified or facilitated the understanding of this long question. Finally, the question on wheezing or whistling also showed a small but significant difference between the two modes.
There was no difference between the two survey modes for the questions related to the diagnosis of asthma, use of bronchodilator medication and mold in the house. This has important validity implications because these were key variables in our study (20).
[Table 4: Comparison of positive responses according to administration mode, on data standardized for income and education. Footnotes: *Data standardized on two categories of family income and two categories of respondent education, based on the entire sample proportions. The cut-off point for family income was $35,000/year and the education level was classified below or at/above high school. †Q1, Q2, Q3, Q4, Q5, Q7, Q8, Q9 and Q10 had three response categories: 'yes', 'no' and 'don't know/refused to answer'. Two response categories of smokers (everyday, sometimes) were aggregated. ‡P<0.05 (χ2 test). §The first four choices were given in the order presented; separate χ2 tests were performed on the frequency of each item among respondents between the two modes.] | 2018-04-03T02:21:07.857Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "e94b0c89523bc2ce6e3f27e7fe1f8e00f9eac8b6",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crj/2012/318941.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e94b0c89523bc2ce6e3f27e7fe1f8e00f9eac8b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17776534 | pes2o/s2orc | v3-fos-license | Forensic reporting of a case of sexual abuse broadcast on periscope
Children who are investigated or prosecuted for an action that is considered a crime by law, or children who are placed in security facilities due to their actions, are defined as "children forced into crime". The period between ages 12-18 years is adolescence, during which crime rates are relatively high. The incidence of sexual behavior by adolescents on social media that may be considered a crime has increased in recent years due to technological improvements and the increased use of social media. Also, crime rates involving adolescents have increased due to environmental influences, familial factors, and mental disorders. Mental disorders such as conduct disorder, attention deficit hyperactivity disorder, and mood disorders have been found to be associated with sexual abuse by young persons in the previous literature. In this study, we present the case of a boy who sexually abused his younger brother at the age of 14 years 2 months and broadcast this abuse on "Periscope". In this case study, we aimed to discuss the relationships between sexual abuse, social media, and psychiatric disorders.
Introduction
Sexual abuse of a child is defined as the involvement of an underage child in an act that sexually satisfies a sexually mature adult. 1 Children who are investigated or prosecuted for an action that is considered a crime by law, or children who are placed in security facilities due to their actions, are defined as children forced into crime. 2 The period between ages 12-18 years is adolescence, during which crime rates are the highest, as children have an ambiguous image of their self-identity during adolescence. 3 Adolescents who cannot cope with stressors such as psychiatric problems and congenital factors, in addition to negative environmental and familial factors, cannot show positive and acceptable behaviors, increasing their tendency toward committing crimes. 4 The use of social media and social networking sites such as Twitter and Facebook is on the rise, and a body of research that has emerged over the past few years has identified these platforms as reflecting individual and population psychological states and milieu. 5 Similarly, recent work has shown the utility of social media data for studying depression, but limited studies have investigated other mental health conditions using social media data. 5 When national and international scientific investigations are reviewed, crimes involving children on the Internet have been observed to be progressively increasing. It is important to study the regional and city-wise distribution of this increase to understand the causes of crimes involving children, which is why we believe that discussion of this case is important.
The case
Written informed consent was obtained from the patient and parents to publish this paper. A boy aged 14 years and 2 months was the first of two brothers in his family. He was referred to the Sakarya University Department of Pediatric Psychiatry in 2015 under police custody for a forensic report to determine whether he was aware of the legal meaning and consequences of his act and whether he had developed the ability to guide his behavior. He was studying in grade 9 with a normal level of school success. He reported having had sexual intercourse with his 9-year-old brother a total of three times; on the last occasion, he reported that two of their cousins were also present in the room and that they had sexual intercourse with the victim live on the social media platform Periscope. He added that, apart from him, the two cousins also had anal intercourse with his brother. He reported that he deeply regretted his actions and that he was presently in shock. He reported that he started experiencing sleep disturbances after these incidents were uncovered, started losing his temper quickly, and was continually yelling at home. The forensic investigation of this case started after Interpol informed the Turkish authorities that the Periscope recordings had been found on a pornographic website in the US. After the interview, it was concluded that he had a normal mental capacity. According to the Diagnostic and Statistical Manual of Mental Disorders-IV, working diagnoses of attention deficit hyperactivity disorder (ADHD) and comorbid conduct disorder were considered. An interview with the parents was carried out and, together with the inquest file, allowed definite conclusions to be reached in the forensic report. Also, an order for a social investigation of this child was given and health precautions were advised.
Discussion
A rich body of work in the social sciences, especially urban sociology and criminology, has examined the relationship between crime rates in urban environments and the general well-being of the residents. 6 During the last 5 years, the number of preadolescents and adolescents using social media sites has increased dramatically. According to a recent poll, 22% of teenagers log on to their favorite social media site more than ten times a day, and more than half of all the polled adolescents log on to a social media site more than once a day. 7 The relation between ADHD and violent offences is attributed to factors such as a high prevalence of externalizing behavior, fewer close friendships, and problems with educational achievement. A study showed that youth with ADHD more often committed crimes against people than against property. 8 The boy in the present case reported using his Facebook account daily but said that he did not accept friendship requests from people he did not know personally. Sexual intercourse between two children when the difference in age is 4 years or more, and exposure of a young child, by force or by persuasion, to actions aiming at sexual pleasure, are considered sexual abuse. 9 In our case, the age difference between the brothers was 5 years. "Criminal responsibility" means deciding whether the child may be held responsible for his/her criminal behavior on the basis of his/her individual judgment and understanding, that is, by understanding the legal value of the crime; understanding this value is closely related to the ability of "judgment and decision". Although reaching a judgment and conclusion (decision making) may be considered solely in terms of cognitive functions, decision making is in fact a mixture of cognitive and psychosocial factors; it is not one-dimensional. Although reasoning and judgment are both involved in decision making, they differ from each other. While reasoning is the capacity required for data processing, judgment means evaluating and weighing the possible outcomes of decisions with different levels of importance. 10 In a study of children aged 7-14 years, while all said that stealing and harming other persons prevented justice and peace, younger children said that stealing and harming other persons were wrong not because "they were against the laws or they brought punishment in the end" but because "they prevented the justice and prosperity of others." 11 Intervention in and prevention of crimes involving children is not a problem that can be solved by security forces alone. There has been a recent increase in crimes against children, especially via social media, and crimes committed via social media are also increasing. We frequently observe cases of adolescents sharing nude photographs, taking and sharing images of sexual intercourse, and creating false accounts to make severe accusations against other adolescents. Most of the children pushed into crime were found to be male adolescents. In the medical literature, hostile behavior and being pushed into crime have been reported more frequently in male children and adolescents. 12 The individual in our case was also male, which is in line with the reports of these studies. Additionally, 60%-70% of sexual abusers are relatives, teachers, neighbors, and authority figures that the child knows and trusts. 13 In our case, the child that had committed sexual abuse had repeatedly abused his brother.
Galli et al diagnosed conduct disorder in 94% of 22 adolescent sexual abusers, ADHD in 71%, major depressive disorder in 23%, and bipolar disorder in 27%. 14 A history of abuse in children is a risk factor for future criminal behavior. A history of sexual abuse was found in 10%-80% of adolescents showing criminal behavior. 15 However, in the first interview of this child, no history of sexual abuse was detected; it is known that in some cases of trauma, children and adolescents relate their traumatic events only after a relationship of trust with the therapist has developed.
In studies, the frequency of mental disorders was found to be increased in adolescents involved in crime, and conduct disorder and ADHD were the most frequently found mental disorders among these adolescents. Rates of developing conduct disorder and of being pushed into crime were higher in children with ADHD than in children without ADHD and conduct disorder, and such children were reported to be more inclined to commit crimes. 16 Similarly, in our case, the patient showed symptoms of ADHD but had not received any psychiatric treatment until then. This condition may have prevented this adolescent from calculating the risks and may have contributed to his impulsivity. Social media has good potential as a tool for detecting and predicting affective disorders in individuals. People are increasingly using social media platforms such as Twitter and Facebook to share their thoughts and opinions with their friends and acquaintances. 17 We did not consider a diagnosis of mood disorder or depressive disorder in this case.
Conclusion
It is believed that early mental evaluation and treatment may contribute to decreasing the number of children moving toward committing crimes. These prevention attempts should target both social media and children. Children and adolescents require rehabilitation, follow-up, and mental treatment at each step of the legal process. Specialized education and a specialized unit on pediatric and adolescent psychiatry are necessary to meet these requirements. | 2018-04-03T01:05:21.193Z | 2016-08-19T00:00:00.000 | {
"year": 2016,
"sha1": "282a0a68442b2ef6d85d3300a2a87c5c18529df1",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=31980",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f88c408b75c47f8d358d38d1d7647ecae942deb",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26170591 | pes2o/s2orc | v3-fos-license | Association between the DNA Repair Gene XRCC3 rs861539 Polymorphism and Risk of Osteosarcoma: a Systematic Review and Meta-Analysis
Objective: Although a few studies have investigated the relation between the X-Ray Repair Cross Complementing 3 (XRCC3) gene rs861539 polymorphism and osteosarcoma (OSA), the results are inconsistent. Therefore, we performed this systematic review and meta-analysis to clarify the association between the XRCC3 rs861539 polymorphism and OSA risk. Methods: We retrieved published literature from PubMed, Google Scholar, and ISI Web of Knowledge up to 25 January 2017. Odds ratios were pooled using either fixed-effects or random-effects models. Overall and subgroup analyses were performed. Statistical analysis was performed using Comprehensive Meta-Analysis (CMA) 2.0 software. Results: A total of four studies with 515 cases and 1,109 controls were identified to investigate the association between the XRCC3 rs861539 polymorphism and OSA risk. The results showed that the XRCC3 rs861539 polymorphism was associated with OSA in the allelic (T vs. C: OR = 1.563, 95% CI: 1.244-1.963, p < 0.001), homozygote (TT vs. CC: OR = 2.574, 95% CI: 1.573-4.212, p < 0.001), dominant (TT+TC vs. CC: OR = 1.255, 95% CI: 1.011-1.558, p = 0.039), and recessive (TT vs. TC+CC: OR = 2.224, 95% CI: 1.393-3.552, p = 0.001) models, but not in the heterozygote model (TC vs. CC: OR = 1.361, 95% CI: 0.982-1.885, p = 0.064). The XRCC3 rs861539 polymorphism conferred susceptibility to OSA in Asians, but not in Caucasians. Additionally, we observed no evidence of publication bias. Conclusion: To the best of our knowledge, this is the first meta-analysis investigating the association between the XRCC3 rs861539 polymorphism and OSA risk. Our results revealed a significant association between the XRCC3 rs861539 polymorphism and risk of OSA, especially in Asian populations. More comprehensive and well-designed case-control studies with larger sample sizes are needed to confirm these findings.
Introduction
Osteosarcoma (OSA) is the most common primary bone malignancy in children and adolescents (Lauvrak et al., 2013). OSA is one of the three most common genuine primary bone malignancies (osteosarcoma, chondrosarcoma, and Ewing's sarcoma), which together account for more than 75% of malignant bone tumors (Davies et al., 2009). OSA is the most frequent malignant bone tumor, comprising approximately 47% of all bone neoplasms among adolescents and young adults aged 15 to 29 years (Bleyer et al., 2006). Approximately 750-900 new cases are diagnosed each year in the USA, of which 400 arise in children and adolescents younger than 20 years of age (Mirabello et al., 2009). According to available studies, the peak incidence of OSA occurs in the second decade of life, which may be due to the rapid bone growth and turnover associated with adolescence (Messerschmitt et al., 2009). According to the statistics, the incidence of OSA in males is higher than in females; however, it occurs at an earlier age in females than in males (Bleyer et al., 2006). OSA is characterized by the production of immature bone or osteoid by the malignant cells, and the diagnosis of OSA is made based on these characteristics (Alpantaki et al., 2013;Sarkar 2014). OSA variants are classified based on morphology as telangiectatic OSA, low-grade intraosseous OSA, and small cell OSA (Yarmish et al., 2010). In addition to humans, OSA is reported in many other mammals, in particular domestic dogs (Mueller et al., 2007). It is held that the majority of pediatric OSA is sporadic, while inheritance accounts for a minority of cases (Calvert et al., 2012). In older adults, nearly one-third of cases arise in the setting of Paget disease of bone or as a second or later cancer (Geller et al., 2010). Exposure to ionizing radiation is the most well-documented environmental risk factor for OSA, implicated in 3% of OSA cases (Kalra et al., 2007).
The XRCC3 polymorphism is associated with the risk of numerous types of cancer, such as lung, ovarian, and gastric cancer; however, there is limited information regarding this gene polymorphism in osteosarcoma (Talar-Wojnarowska). Although a few studies have investigated the relation between XRCC3 DNA repair gene variants and OSA (Goričar et al., 2015;Jin et al., 2015;Yang et al., 2015), the results are conflicting rather than conclusive. Notably, a single study might not be sufficiently powered to detect a small effect of the polymorphism on a condition, particularly with relatively small sample sizes. In addition, differences in study populations and study designs might also have contributed to the disparate results. It is clear that meta-analysis can be used to pool data from individual studies to obtain sufficient statistical power to detect potential effects of small to moderate size associated with the polymorphism. To clarify the effect of the XRCC3 rs861539 gene polymorphism on OSA risk, we performed a systematic review and meta-analysis of all eligible case-control studies.
Inclusion and exclusion criteria
The inclusion criteria were as follows: (1) independent case-control or cohort studies evaluating the association between the XRCC3 rs861539 polymorphism and OSA; (2) studies providing sufficient published data for estimating an odds ratio (OR) with a 95% confidence interval (95% CI). Major reasons for exclusion were as follows: (1) case reports or reviews, (2) cell line studies, and (3) irrelevant data. When there was more than one eligible article with overlapping data conducted by the same author, we included the most recent or most comprehensive one.
Data Extraction
In the current meta-analysis, two authors (MM and HN) independently searched and identified the eligible articles based on the inclusion criteria. The authors independently extracted the following data: first author's name, year of publication, ethnicity or country, numbers and genotypes of cases and controls, and Hardy-Weinberg equilibrium (HWE) of controls.
Statistical methods
All analyses were performed with Comprehensive Meta-Analysis (CMA) V2.0 software (Biostat, USA). The statistical significance of the pooled OR was determined using the Z-test, and two-sided P-values < 0.05 were considered statistically significant. The pooled ORs with 95% CIs were calculated for five genetic models: allelic (T vs. C), heterozygote (TC vs. CC), homozygote (TT vs. CC), dominant (TT+TC vs. CC), and recessive (TT vs. TC+CC). Because heterozygote and minor-allele homozygote genotype frequencies were not reported in the studies of Jin et al. (2015) and Goričar et al. (2015), subgroup analysis by ethnicity was available only for the dominant genetic model. Both Cochran's Q statistic test for heterogeneity and the I2 statistic, which quantifies the proportion of total variation attributable to heterogeneity, were used to measure heterogeneity between studies. I2 values of 25%, 50%, and 75% represent low, moderate, and high heterogeneity, respectively (Higgins et al., 2003). A random-effects model using the DerSimonian-Laird method was utilized to calculate the OR and 95% CI for comparisons with moderate to high heterogeneity (P-value < 0.1 or I2 > 25%) (DerSimonian et al., 1986); otherwise, a fixed-effects model using the Mantel-Haenszel method was used (Mantel et al., 1959). To assess the reliability of the outcomes of the current meta-analysis, a sensitivity analysis was performed by sequential omission of individual studies for the various genetic models in the overall population and in the subgroup analysis by ethnicity. Publication bias was estimated graphically by Begg's funnel plot and statistically by Egger's linear regression test, with P < 0.05 considered statistically significant (Egger et al., 1997).
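For illustration, the pooling computations described above can be sketched in Python. This is a minimal sketch rather than the CMA implementation: it uses inverse-variance (Woolf) weighting as a stand-in for the Mantel-Haenszel method named above, and the allele counts in the usage example are hypothetical.

import numpy as np
from scipy import stats

def pool_odds_ratios(a, b, c, d):
    """Pool per-study 2x2 allele tables (a/b: T and C counts in cases,
    c/d: T and C counts in controls). Returns fixed- and random-effects
    pooled ORs, Cochran's Q, I^2, and the random-effects Z-test p-value."""
    a, b, c, d = (np.asarray(x, float) for x in (a, b, c, d))
    log_or = np.log(a * d / (b * c))              # per-study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d           # Woolf variance of log OR
    w = 1 / var                                   # inverse-variance weights
    fixed = np.sum(w * log_or) / np.sum(w)        # fixed-effects pooled log OR
    k = len(log_or)
    q = np.sum(w * (log_or - fixed) ** 2)         # Cochran's Q statistic
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_r = 1 / (var + tau2)                        # random-effects weights
    random_ = np.sum(w_r * log_or) / np.sum(w_r)
    se_r = 1 / np.sqrt(np.sum(w_r))
    p = 2 * stats.norm.sf(abs(random_ / se_r))    # two-sided Z-test
    return {"OR_fixed": np.exp(fixed), "OR_random": np.exp(random_),
            "Q": q, "I2_percent": i2, "p_random": p}

# Hypothetical T/C allele counts for four case-control studies:
print(pool_odds_ratios(a=[60, 80, 45, 30], b=[120, 150, 90, 70],
                       c=[90, 140, 70, 60], d=[310, 500, 260, 240]))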
Study characteristics
Out of the 8 identified potentially relevant studies, only four case-control studies met all inclusion criteria. Finally, four studies comprising 515 cases with OSA and 1,109 controls were included in the current meta-analysis (Goričar et al., 2015;Jin et al., 2015;Yang et al., 2015). All the eligible studies were written in English and all were conducted during 2015. Among these studies, three were performed in China (Jin et al., 2015;Yang et al., 2015) and one was conducted in Slovenia (Goričar et al., 2015). All genotype frequencies in the control groups fitted the Hardy-Weinberg equilibrium well (P > 0.05).
Sensitivity Analysis
We conducted a sensitivity analysis to evaluate the stability of the current meta-analysis results by removing each study sequentially. No obvious changes were found in the results, confirming the stability of our results under the five genetic contrasts for the XRCC3 rs861539 polymorphism.
Heterogeneity
There was moderate but not significant heterogeneity among these studies for the dominant model; however, the heterogeneity disappeared after stratified analysis by ethnicity. Therefore, it can be concluded that ethnicity contributes substantially to the heterogeneity in this meta-analysis.
Publication Bias
Egger's test and Begg's funnel plot were used to evaluate publication bias quantitatively and qualitatively, respectively. For the pooled and ethnicity-stratified analyses, the examination of publication bias was conducted only for the dominant genetic model, because the other models included only two studies. The Begg's and Egger's tests did not show any obvious publication bias under the dominant genetic model for the pooled analysis (PBeggs = 0.308, PEggers = 0.529; Figure 2A) or for Asians (PBeggs = 1.000, PEggers = 0.612; Figure 2B).
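Egger's regression test itself takes only a few lines; the sketch below uses hypothetical per-study effects and assumes scipy >= 1.6 for the intercept standard error. With as few studies as are available here, the test has very limited power.

import numpy as np
from scipy import stats

def eggers_test(log_or, se):
    """Egger's asymmetry test: regress the standardized effect
    (log OR / SE) on precision (1 / SE); a two-sided t-test on the
    intercept indicates funnel-plot asymmetry."""
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    res = stats.linregress(1.0 / se, log_or / se)
    t_stat = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t_stat), df=len(log_or) - 2)
    return res.intercept, p

# Hypothetical per-study log ORs and standard errors (dominant model):
intercept, p = eggers_test([0.23, 0.37, 0.15, -0.34], [0.21, 0.18, 0.25, 0.25])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")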
Quantitative synthesis
The main characteristics of the studies included in the current meta-analysis are presented in Table 1. Table 2 depicts the main results of the meta-analysis regarding the XRCC3 rs861539 polymorphism and OSA risk. When all the eligible studies were pooled into the meta-analysis, an obvious association between the XRCC3 rs861539 polymorphism and increased risk of OSA was observed in the allelic (T vs. C: OR = 1.563, 95% CI: 1.244-1.963, p < 0.001; Figure 1A), homozygote (TT vs. CC: OR = 2.574, 95% CI: 1.573-4.212, p < 0.001), dominant (TT+TC vs. CC: OR = 1.255, 95% CI: 1.011-1.558, p = 0.039), and recessive (TT vs. TC+CC: OR = 2.224, 95% CI: 1.393-3.552, p = 0.001) models, but not in the heterozygote model (TC vs. CC: OR = 1.361, 95% CI: 0.982-1.885, p = 0.064). In the stratified analysis by ethnicity, only the dominant genetic model was available for Caucasians. The present meta-analysis showed that the XRCC3 rs861539 polymorphism was not associated with OSA risk in Caucasians (TT+TC vs. CC: OR = 0.713, 95% CI: 0.438-1.161, p = 0.174). However, there was a significant association between the XRCC3 rs861539 polymorphism and risk of OSA in Asians in the dominant genetic model (TT+TC vs. CC: OR = 1.442, 95% CI: 1.133-1.835, p = 0.003; Figure 1B).
Discussion
Three excision repair (ER) pathways are involved in single-stranded DNA (ssDNA) damage responses: nucleotide excision repair (NER), base excision repair (BER), and DNA mismatch repair (MMR) (Shaheen et al., 2011;Iyama et al., 2013). In addition, organisms have evolved two main DNA double-strand break (DSB) repair mechanisms to preserve genome integrity: nonhomologous end-joining (NHEJ) and homologous recombination (HR) (Mao et al., 2008;Lieber 2010).
To date, several polymorphisms in NER genes (e.g., XPD, XPF, ERCC1, XRCC1, XRCC3, XPA, XPB, XPC and hOGG1) have been identified (Improta et al., 2008), of which the genetic polymorphisms of XRCC3 have been studied most commonly (Yeh et al., 2005;Forat-Yazdi et al., 2015). The XRCC3 gene was originally identified through its ability to complement the DNA repair defect in a Chinese hamster cell line (Tebbs et al., 1995). It was localized to chromosome 14q32.3 by fluorescence in situ hybridization (FISH) and Southern blot hybridization of genomic DNA from panels of two independent hybrid clones (Tebbs et al., 1995). It consists of 7 exons spanning a region of 13.5 kb, and its product is a small protein of 346 amino acids (Huang et al., 2015). The XRCC3 gene plays a critical role in maintaining genomic integrity by repairing ionizing radiation-induced DSBs through the homologous recombination (HR) pathway (Chistiakov et al., 2008;Borrego-Soto et al., 2015).
The SNPs of the XRCC3 gene have been implicated in susceptibility to different malignancies, such as breast cancer, lung cancer, and head and neck cancer (Namazi et al., 2015;Ali et al., 2016). To date, several polymorphisms have been identified in the XRCC3 gene, such as Thr241Met (C18067T, rs861539), 5'-UTR (A4541G, rs1799794), and IVS5-14 (A17893G, rs1799796), of which XRCC3 Thr241Met (C18067T, rs861539) in exon 7 is one of the most extensively investigated SNPs in the literature (Chen et al., 2014). The XRCC3 Thr241Met polymorphism is characterized by impaired repair function, possibly because the substitution removes a phosphorylation site and thereby influences the function of the enzyme (Talar-Wojnarowska). The XRCC3 Met/Met genotype has been associated with higher DNA adduct levels in lymphocytes of healthy subjects (Matullo et al., 2001), although individuals with the 241Met or 241Thr allele repaired DSBs to the same extent (Araujo et al., 2002). In addition, XRCC3 Thr241Met is associated with an increased number of micronuclei in lymphocytes of humans exposed to ionizing radiation (Zhao et al., 2013).
A few studies have reported associations between DNA repair gene variants and risk of osteosarcoma.
However, several studies have investigated the influence of genetic variability in DNA repair genes on OSA treatment outcome (Jin et al., 2015). For example, Wang et al. and Jin et al. reported that some variants of NER and HRR pathway genes, such as ERCC1 rs11615, ERCC2 rs1799793, and NBN rs1805794, modulate the risk of developing OSA or may be useful genetic prognostic markers for OSA in a Chinese population (Jin et al., 2015;Wang et al., 2015). In 2015, Yang et al., in a case-control study of 152 OSA cases and 304 healthy controls, found that the XRCC3 rs861539 polymorphism was significantly associated with increased risk of OSA in a Chinese population. However, a few months later, Goričar et al. (2015) did not observe any association in a Slovenian population. In the current meta-analysis, we found an association between the XRCC3 rs861539 polymorphism and OSA under the allelic, homozygote, dominant, and recessive genetic models, but not under the heterozygote model. Additionally, the XRCC3 rs861539 polymorphism conferred susceptibility to OSA in Asians, but not in Caucasians. However, due to the lack of sufficient data, especially for Caucasian populations, these results should be interpreted with caution.
Between-study heterogeneity is to be expected in meta-analyses. In the current study, there was moderate heterogeneity in the dominant genetic model, the only model applied in all included studies, which could distort the results of the meta-analysis. In the subgroup analysis by ethnicity, the heterogeneity disappeared among both Asians and Caucasians. Therefore, it can be concluded that differences in the subjects' genetic backgrounds can result in heterogeneity.
Meta-analysis has advantages compared with individual studies; however, some potential limitations of the current meta-analysis should be considered. The first limitation concerns the number of included studies and their sample sizes, which were moderately small, restricting the ability to detect the possible risk associated with the XRCC3 rs861539 polymorphism with acceptable power. Second, out of four included studies, three were conducted on Asians and only one on Caucasians; therefore, the results must be interpreted carefully. Further studies on Caucasian populations and other ethnicities, such as West Asians, North Americans, and Africans, are needed to address ethnicity-related biases. Third, because we included only published papers written in English, publication bias may have occurred, even though statistical tests revealed no evidence of it. Finally, gene-gene and gene-environment interactions were not addressed in the current meta-analysis. The pathogenesis of OSA has a genetic and environmental basis, because in some cases OSA was associated with high doses of ionizing radiation from therapeutic or occupation-related exposures; however, most studies did not provide data stratified by these risk factors. In addition, in this meta-analysis, we pooled the overall outcomes based on individual unadjusted ORs without adjustment for other risk factors such as age, sex, environmental exposures, and OSA subtypes.
To the best of our knowledge, this is the first meta-analysis to examine the association between the XRCC3 rs861539 polymorphism and OSA risk. This meta-analysis suggests an association between the XRCC3 rs861539 polymorphism and OSA in Asians. However, more convincing evidence is required to draw a comprehensive conclusion. Therefore, well-designed studies with large samples and in different ethnicities are recommended to confirm these findings. | 2017-09-07T13:56:08.156Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "5d01f8ed7f523f3d621962a5c6cc79f3fe5c921f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5d01f8ed7f523f3d621962a5c6cc79f3fe5c921f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14707904 | pes2o/s2orc | v3-fos-license | Corticothalamic Spike Transfer via the L5B-POm Pathway in vivo
The cortex connects to the thalamus via extensive corticothalamic (CT) pathways, but their function in vivo is not well understood. We investigated “top-down” signaling from cortex to thalamus via the cortical layer 5B (L5B) to posterior medial nucleus (POm) pathway in the whisker system of the anesthetized mouse. While L5B CT inputs to POm are extremely strong in vitro, ongoing activity of L5 neurons in vivo might tonically depress these inputs and thereby block CT spike transfer. We find robust transfer of spikes from the cortex to the thalamus, mediated by few L5B-POm synapses. However, the gain of this pathway is not constant but instead is controlled by global cortical Up and Down states. We characterized in vivo CT spike transfer by analyzing unitary PSPs and found that a minority of PSPs drove POm spikes when CT gain peaked at the beginning of Up states. CT gain declined sharply during Up states due to frequency-dependent adaptation, resulting in periodic high gain–low gain oscillations. We estimate that POm neurons receive few (2–3) active L5B inputs. Thus, the L5B-POm pathway strongly amplifies the output of a few L5B neurons and locks thalamic POm sub- and suprathreshold activity to cortical L5B spiking.
Introduction
A major input to the mammalian thalamus originates in the cortex from corticothalamic (CT) projection neurons in Layers 5 (L5) and 6 (Hoogland et al. 1987;Sherman 2001;Killackey and Sherman 2003). L5 CT axons target "higher order" thalamic nuclei, where they form large ("giant") synapses with thalamic proximal dendrites (Hoogland et al. 1991;Sherman and Guillery 1996;Veinante, Lavallee, et al. 2000;Killackey and Sherman 2003). Anatomical studies suggest that while these synapses are large, they are also sparse (Bourassa et al. 1995). While counts of L5 CT inputs per POm neuron are lacking, these properties differentiate L5 CT synapses from L6 CT synapses, which are small and numerous (Sherman and Guillery 2006). In brain slices, unitary EPSPs evoked from a single L5B axon can trigger action potentials (APs) in target POm neurons (Groh et al. 2008;Seol and Kuner 2015). This cortical "drive" of POm has been supported by in vivo experiments, as blocking cortical activity showed that POm spiking is contingent upon intact barrel cortex (BC) (Diamond et al. 1992;Groh et al. 2014) and is correlated with cortical Up states (Slezia et al. 2011;Groh et al. 2014). However, the strength and adaptive properties of the CT driver pathway in vivo are unknown. Consequently, the efficacy of spike transfer from the cortex to the thalamus (the CT transfer function) has not been quantified in vivo, and it is unknown which, if any, L5B spike patterns evoke spikes in POm in the intact brain.
Putative CT spike transfer in vivo is likely to depend strongly on the spiking rate of individual L5B neurons, as L5B-POm synapses are characterized by pronounced, fast depression (Reichova and Sherman 2004;Groh et al. 2008;Seol and Kuner 2015); also see Li et al. (2003) for similar findings in the visual thalamus. Therefore, the strength of a synapse will depend on the spiking history of the upstream L5B neuron, and-as L5B neurons are the most spontaneously active neurons in the BC (de Kock et al. 2007;Oberlaender et al. 2012)-the transfer function of this pathway should adapt markedly. We hypothesized that frequencydependent synaptic depression could toggle CT spike transfer between different functional modes (Groh et al. 2008): in keeping with the original definition of "driver synapses" (Sherman and Guillery 2006), we refer here to "driver mode" as a fail-safe transfer mode between pairs of L5B and POm neurons, in which a single presynaptic L5B spike evokes one or more POm spikes. From in vitro measurements, this high gain mode is predicted to only occur for L5B spiking frequencies less than approximately 2 Hz, when the synapses are partially or fully recovered (Groh et al. 2008). In contrast, at higher frequencies, each L5B synapse would be depressed and the pathway would operate in a low gain mode, in which several coincident inputs are integrated to evoke POm spiking.
We address the properties of CT spike transfer in vivo by combining optogenetic manipulations with recordings of L5B and POm sub- and suprathreshold activity in urethane-anaesthetized mice. The results show that POm is driven by very sparse CT input, most likely of L5B origin. Furthermore, the L5B-POm pathway is not in a constant and stable state of depression, resulting in periodic transitions in CT gain following cortical Up and Down state activity.
Ethical Approval
All experiments were performed according to German animal welfare guidelines and were approved by the respective ethical committees.
Depth of anaesthesia was continuously monitored by eyelid reflex, respiration rate, and cortical LFP, and additional urethane (10% of the initial dose) was given when necessary. Respiration rates were usually between 100 and 140 breaths per minute. In the case of isoflurane anaesthesia, the concentration of anesthetic was adjusted to reach steady respiration rates around 100 breaths per minute. The skull was exposed, and small craniotomies above BC and POm were made (dura intact). For VGAT photostimulation experiments, the skull above BC was additionally thinned to permit better light penetration into the tissue. The head was stereotaxically aligned (Wimmer et al. 2004) for precise targeting of POm. Target coordinates relative to bregma were (lateral/posterior/depth; in mm) as follows: BC L5B: 3.0/1.1/0.7; POm: 1.25/1.7/2.8-3.0; Motor Cortex: 1.0/-1.0/0.6. In vivo juxtasomal recordings and biocytin fillings were made as described in Pinault (1996). In brief, 4.5-5.5 MΩ patch pipettes were pulled from borosilicate filamented glass (Hilgenberg, Germany) on a DMZ Universal puller (Zeitz Instruments, Germany). Pipettes were filled with (mM) 135 NaCl, 5.4 KCl, 1.8 CaCl2, 1 MgCl2, and 5 HEPES, pH adjusted to 7.2 with NaOH, with 20 mg/mL biocytin added. Bath solution was identical, except for biocytin. Single units were found by the increase of pipette resistance (2-2.5 times the initial resistance) measured in voltage clamp mode. An L5B and a POm cell were recorded simultaneously with an ELC-01X amplifier (NPI Electronics, Germany) for POm and an Axoclamp 2B (Molecular Devices, USA) for L5B. Unfiltered and band-pass filtered signals (high pass: 300 Hz, low pass: 9000 Hz) were digitized at 20 kHz with a CED Micro 1401 mkII board and acquired using Spike2 software (both CED, Cambridge, UK). Typically, recordings consisted of a single unit, which was filled at the end of the experiment with biocytin using current pulses (Pinault 1996). Whole-cell single neuron current clamp recordings in POm were made using the "blind patching" approach as described in Margrie et al. (2002). Pipette solution was (in mM) 130 K-gluconate, 10 HEPES, 10 Na-phosphocreatine, 10 Na-gluconate, 4 ATP-Mg2+, 4 NaCl, 0.3 GTP, 0.1 EGTA, 2 mg biocytin, osmolarity approximately 300 mOsm, adjusted to pH 7.2 with KOH.
Cell Selection Criteria and Cell Reconstructions
For all L5B recordings, we used a combined photo- and sensory stimulation protocol to validate neurons' locations: L5B neurons were accepted for analysis if 1) photostimuli applied to the cortical surface resulted in rapid, unadapting spiking responses that persisted for the duration of a long photostimulus (3 s), and 2) each neuron responded within 100 ms to whisker stimulation, as the majority of L5B neurons in BC respond to whisker stimulation within this time period (de Kock et al. 2007). This protocol ensured that each putative L5B neuron was both in L5B (photostimulation) and in BC (sensory response). In addition to these physiological parameters, L5B and POm neurons were also filled with biocytin for reconstruction of their locations and morphologies (Fig. 1 and see Supplementary Fig. 1).
After the experiments, mice were euthanized with an overdose of ketamine/xylazine and transcardially perfused with 4% PFA in phosphate-buffered saline. Four hours after fixation, the brain was cut into 100 µm coronal slices and stained for cytochrome C to reveal the VPM/POm border and with DAB to reveal the soma and dendrite of the recorded neuron; both protocols are found in Groh and Krieger (2011).
Six POm neurons and 12 ChR2-L5B neurons were recovered, and all showed dendritic parameters (Fig. 1 and see Supplementary Fig. 1 and Tables 1 and 2) consistent with previously published descriptions of these neurons (de Kock et al. 2007;Meyer et al. 2010).
Tracing L5B-ChR2 Projections to POm
For retrograde labelling of POm-projecting cortical neurons, a retrograde tracer (50 nL Cholera toxin B-Alexa 647 conjugate, Invitrogen) was stereotaxically injected into POm of thy1-ChR2 mice as described in detail in Wimmer et al. (2004). After 4 days, the animals were killed with an overdose of urethane (3 mg/g body weight) and perfused transcardially with PBS containing 4% PFA. The brain was removed, and 100 µm coronal sections of the somatosensory cortex and thalamus were obtained on a vibratome (HR2, Sigmann Electronic, Germany). Fluorescence images were acquired using an Olympus FV1000 (Hamburg, Germany) confocal microscope with a ×20 oil objective (NA 0.9).
In Vivo Photostimulation Setup
Stimulation of ChR2 or VGAT neurons was achieved by a custom-built laser setup consisting of a solid state laser (Sapphire, Coherent, Dieburg, Germany) with a wavelength of 488 nm and a maximal output power of 20 mW. Sub-millisecond control of laser pulses was achieved by an ultrafast shutter (Uniblitz, Rochester, NY, USA). The laser beam was focused with a collimator into one end of a multimode fiber (Thorlabs, Grünberg, Germany; numerical aperture = 0.48, inner diameter = 125 µm). For ChR2-L5B neuron activation, the maximal output power at the end of the fiber was 1 mW, resulting in a maximal power density of approximately 32 mW/mm2 on the brain surface. Shutter control was implemented with Spike2 software (CED, Cambridge, UK). The optical fiber was positioned at an angle of approximately 86° (from the horizontal plane) and at a distance of approximately 100 µm from the cortical surface. For each neuron, we recorded an average of 72 ± 58 or 74 ± 47 trials for juxtasomal and intracellular recordings, respectively. For BC VGAT photostimulation, the optical fiber was positioned at the same angle but at a distance of approximately 2.5 mm, to increase the stimulated area to a disc with a diameter of approximately 800 µm above BC. For robust cortical inhibition (see Fig. 3C), we used a 40 Hz series of laser pulses (12.5 ms on, 12.5 ms off) for 1 s with an approximate power density at the pia of 8.4 mW/mm2, based on the study by Zhao et al. (2011). For each neuron, we recorded an average of 53 ± 18 trials (1 s photostimulation trains).
Cortical LFP Recordings
To monitor cortical state, we acquired L5 local field potentials (LFP) simultaneously with single neuron recordings. Depth-resolved LFPs were recorded with a 16-channel probe (Neuronexus probe model: A1X16-3mm-100-177, Neuronexus, MI, USA). The probe was inserted 1.5 mm from the pia and a Teflon-coated silver wire chlorided at the tip was used as reference in the bath solution above the craniotomy. Signals were amplified and filtered with an extracellular amplifier (EXT-16DX, NPI Elektronics, Tamm, Germany). LFPs were band-pass filtered with 0.01 or 0.1 Hz and 500 Hz corner frequencies and amplified 1000-2000 times. All signals were digitized at 20 kHz with CED Micro 1401 mkII board and acquired using Spike2 software (both CED, Cambridge, UK). Only LFPs recorded at a depth of 750 µm, corresponding to L5B, were used for analysis. Same coordinates as above.
Muscimol Block of BC
To determine the specificity of L5B drive of POm, we blocked barrel cortex (n = 3, Fig. 3) via application of approximately 50 nL of 10 mM muscimol (Sigma Aldrich) injected into L5. Muscimol is a GABA-A receptor agonist and is widely used to locally inhibit neuronal activity in the intact brain (Letzkus et al. 2011;Xu et al. 2012). Under these conditions, muscimol spreads approximately 1 mm along the anterio-posterior axis (Letzkus et al. 2011), thus likely blocking activity in the entire barrel field, and possibly parts of S2 cortex, known to form giant synapses with POm neurons as well (Liao et al. 2010). After establishing a whole-cell recording in POm, an injection pipette (Blaubrand) was lowered into BC to a depth of 800 µm below the pia, and the drug was slowly pressure-injected into the cortex. Effects on the sub- and suprathreshold activity in POm were seen approximately 5-10 minutes after drug application. We monitored the LFP in motor cortex (MC) while recording from single POm neurons. Despite ongoing Up and Down state activity in MC, spikes and spontaneous large EPSPs in POm progressively disappeared 5-10 min after muscimol injection into BC. This treatment was not reversible within the time course of our experiments.
Data Analysis
Electrophysiology data were acquired using Spike2 software and then exported for analysis in Matlab version 9 (MathWorks, Natick, USA) using custom written software. Spike times were extracted by finding local maxima in the temporal derivative of recorded voltage traces (dV/dt) above a variable threshold (typically 40-50% of maximum dV/dt). Reported values are mean ± standard deviation, unless otherwise noted.
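The spike extraction step can be illustrated with a short Python sketch; this is not the authors' code, and the local-maximum search and the 1 ms refractory window are assumptions chosen to match the description above.

import numpy as np

def extract_spike_times(v, fs=20000.0, frac=0.45, refractory_ms=1.0):
    """Detect spike times as local maxima of dV/dt that exceed a fraction
    of the maximum dV/dt (~40-50% in the text); v is the voltage trace
    sampled at fs Hz."""
    dvdt = np.diff(np.asarray(v, float)) * fs     # temporal derivative
    thresh = frac * dvdt.max()
    is_peak = ((dvdt[1:-1] > thresh) &
               (dvdt[1:-1] >= dvdt[:-2]) &
               (dvdt[1:-1] > dvdt[2:]))
    spike_times, last = [], -np.inf
    for idx in np.flatnonzero(is_peak) + 1:
        t = idx / fs
        if t - last >= refractory_ms / 1000.0:    # assumed refractory window
            spike_times.append(t)
            last = t
    return np.asarray(spike_times)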
EPSP Extraction
We characterized POm sub- and suprathreshold responses to putative L5B spiking via whole-cell patch clamp recordings (n = 38 neurons; >50 000 EPSPs). EPSP amplitude was defined as the EPSP maximum, including all postsynaptic potentials such as low-threshold calcium spikes. EPSP times and maxima were extracted by finding crossings in the first derivative of the membrane potential and validated and/or corrected by hand.
Identification of Up States
Up states were selected by hand as large deflections in the LFP. To further standardize transition points across recordings and Up transitions with different rates of change, each individual LFP transition trace was normalized to a height of 1 and the transition point was then set to be the time at which the trace reached 50% of this maximum (see Supplementary Fig. 3). For the display figures, the LFP signal was converted to a dimensionless z-score and then inverted so that positive deflections correspond to "Up states" (Hahn et al. 2006).
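A minimal sketch of this standardization, assuming each hand-selected LFP transition segment is available as an array:

import numpy as np

def align_up_transition(lfp_segment, fs=20000.0):
    """Normalize a Down->Up LFP transition to unit height and return the
    50%-crossing time used as the standardized transition point."""
    seg = np.asarray(lfp_segment, float)
    seg = (seg - seg.min()) / (seg.max() - seg.min())   # height -> 1
    cross_idx = np.argmax(seg >= 0.5)                   # first 50% crossing
    return cross_idx / fs, seg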
EPSP Adaptation
To predict the adaptation state of the L5B-POm pathway, including synaptic and intrinsic factors, we constructed a simple model combining intracellular EPSP measurements and L5B spontaneous spiking statistics. For "single input" POm neurons, which 1) showed only one unadapted EPSP amplitude peak and 2) showed high correlation between EPSP amplitudes and inter-EPSP interval (IEI), we normalized all EPSP amplitudes by the average unadapted EPSP amplitude. We then plotted normalized EPSP amplitude versus IEI for a subset of single input neurons (n = 5) and fit a double exponential to this curve: $M_{\mathrm{pred}}(t) = e^{1 - t_{\mathrm{ISI}}/\tau_1} + e^{1 - t_{\mathrm{ISI}}/\tau_2}$, where $\tau_1$ and $\tau_2$ are the fitted time constants ($\tau_2 = 550$ ms), $t_{\mathrm{ISI}} = t - sp_t$, and $sp_t$ is the time of the most recent L5B spike relative to $t$. Intervals $t_{\mathrm{ISI}} > 2$ s were truncated to 2 s, and we set $M_{\mathrm{pred}} = 0$ for $t_{\mathrm{ISI}} = 0$, corresponding to a completely depressed synapse. We then used this function to convert experimentally measured L5B spike trains (juxtacellular recordings) into the predicted POm EPSP recovery state.
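A minimal Python sketch of this conversion is shown below. Because the printed fit constants are ambiguous, a saturating single-exponential recovery with a placeholder time constant stands in for the fitted double-exponential curve; only the boundary conditions (M = 0 at zero interval, truncation at 2 s) follow the text.

import numpy as np

def recovery_state(spike_times, tau=0.55, t_max=2.0):
    """Per-spike EPSP recovery state M in [0, 1] for an L5B spike train
    (times in seconds). Stand-in recovery: M = 1 - exp(-t_ISI / tau),
    with intervals truncated at t_max and M = 0 at zero interval."""
    st = np.sort(np.asarray(spike_times, float))
    isi = np.diff(st, prepend=st[0] - t_max)      # first spike: fully recovered
    return 1.0 - np.exp(-np.clip(isi, 0.0, t_max) / tau)

m = recovery_state([0.00, 0.05, 0.10, 0.80, 2.50])
# rapid spiking yields low M (depressed synapse); long pauses yield M near 1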
Predicting POm Suprathreshold Events
POm intrinsic properties are highly nonlinear and show significant intrinsic bursting. Our goal here was to predict the timing of POm output relative to cortical input, not the precise spike count, which depends on bursting mechanisms. To this end, instead of predicting discrete spike times, we predicted POm suprathreshold events, in which an "event" could consist of one or more spikes. We first used the predicted POm EPSP recovery state to look up the predicted EPSP amplitude for each L5B spike time (completely recovered amplitude = 1). We then added a scaled version of an unadapted EPSP at each time point corresponding to an input L5B spike. EPSPs were modeled as a difference of exponentials fit to unadapted (IEI > 700 ms), isolated (no subsequent EPSPs within a 50 ms window) experimentally measured EPSPs: $\mathrm{EPSP}(t) = e^{(1 - t/\tau_1)} - e^{(1 - t/\tau_2)}$, with $\tau_1 = 12.8$ ms and $\tau_2 = 4.8$ ms. Time constant fitting was done by minimizing the root mean-squared difference between the model EPSP and the target normalized voltage trace (normalized to a maximum of 1).
Predicted event rates were then found by finding regions of the predicted voltage trace $V_{\mathrm{pred}}$ greater than a threshold θ; subsequent regions above θ were combined, corresponding to a minimum interevent interval of 1.5 ms. Unsurprisingly, predicted rates were quite sensitive to θ. For θ < 1, unadapted single EPSPs can drive POm events, whereas for θ ≥ 1, either coincident independent L5B inputs or closely spaced EPSPs driven by the same input L5B neuron are required to drive POm output spikes. Predicted event rates were calculated as the number of above-threshold regions divided by the total length of the input L5B recording.
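The event prediction can be sketched as follows; this is a simplified illustration rather than the authors' code, and the 100 ms kernel support, sampling rate, and default θ (within the 60-80% range discussed below) are assumptions.

import numpy as np

def predict_pom_events(spike_times, m, fs=20000.0, theta=0.8,
                       tau1=0.0128, tau2=0.0048, min_iei=0.0015):
    """Add a template EPSP (difference of exponentials, peak-normalized
    to 1) scaled by each spike's recovery state m, then count regions of
    the predicted trace above theta, merging regions closer than min_iei
    (1.5 ms in the text). Returns (event rate in Hz, predicted trace)."""
    dur = spike_times[-1] + 0.1
    v = np.zeros(int(dur * fs))
    kt = np.arange(0, 0.1, 1 / fs)                       # 100 ms kernel
    kernel = np.exp(1 - kt / tau1) - np.exp(1 - kt / tau2)
    kernel /= kernel.max()                               # peak amplitude 1
    for st, mi in zip(spike_times, m):
        i0 = int(st * fs)
        n = min(len(kernel), len(v) - i0)
        v[i0:i0 + n] += mi * kernel[:n]
    above = np.flatnonzero(v > theta)
    if above.size == 0:
        return 0.0, v
    n_events = 1 + int((np.diff(above) > int(min_iei * fs)).sum())
    return n_events / dur, v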
Estimating Input Number Based on Correlation
For POm whole-cell recordings, we estimated input number based on the correlation coefficient r between POm EPSP amplitude and log10 inter-EPSP interval. This strategy follows from the assumption of strong depression of the L5B-POm synapses (Groh et al. 2008). Single inputs should have a large r, with an upper limit set by background noise from synaptic release noise (Groh et al. 2008) and membrane potential fluctuations controlling driving force and availability of the T-channel. It should be noted that this estimate is based on functional rather than anatomical data, that is, active L5B inputs (large and depressing) during spontaneous Up and Down states. The contribution of anatomical L6 inputs is negligible under these experimental conditions (see Velez-Fort et al. (2014)).
To explore the range of r expected for single and 2 input neurons, we predicted the EPSP size generated in response to our group of simultaneously recorded L5B spike trains (n = 9 pairs), and r between IEI and EPSP size was calculated for different levels of noise. For single inputs, all spike trains (n = 18) were used, and for double inputs, the paired EPSP trains were combined.
To extrapolate the predicted r values for >2 inputs, we generated mock spike trains by drawing from experimentally generated interspike interval distributions from up to 5 independent L5B recordings and then combining the EPSP trains and IEIs as above.
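The following sketch illustrates the r computation for a merged set of spike trains, using the same stand-in recovery function as above plus multiplicative amplitude noise; all parameter values are placeholders. Applied to a single recorded train, it should give r near the experimental ceiling, while merging two or more independent trains lowers r, mirroring Figure 8B,C.

import numpy as np

def r_for_merged_inputs(spike_trains, noise_sd=0.15, tau=0.55, t_max=2.0,
                        seed=0):
    """Merge N independent L5B spike trains into one EPSP train, assign
    each EPSP an amplitude from its own input's recovery state plus
    multiplicative noise, and return Pearson r between EPSP amplitude
    and log10 inter-EPSP interval."""
    rng = np.random.default_rng(seed)
    events = []
    for train in spike_trains:
        st = np.sort(np.asarray(train, float))
        isi = np.diff(st, prepend=st[0] - t_max)
        amp = 1.0 - np.exp(-np.clip(isi, 0.0, t_max) / tau)
        events.extend(zip(st, amp))
    events.sort()
    times = np.array([t for t, _ in events])
    amps = np.array([a for _, a in events])
    amps = amps * (1 + noise_sd * rng.standard_normal(amps.size))
    iei = np.diff(times)
    keep = iei > 0                                # drop coincident events
    return np.corrcoef(amps[1:][keep], np.log10(iei[keep]))[0, 1]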
Results
We first measured the cortical input and thalamic output of the L5B-POm pathway by recording simultaneously from L5B and POm neurons (n = 12 pairs) in the juxtacellular configuration (Fig. 1A). The individual L5B and POm neurons in the paired recordings were most likely not connected, because POm is sparsely innervated by L5B (Bourassa et al. 1995). To record from a defined group of L5B neurons in BC, we used the ChR2-expressing thy1 mouse (line 18) that has been used to specifically photostimulate L5 neurons in vivo (Arenkiel et al. 2007;Stroh et al. 2013;Vazquez et al. 2014). This allowed us to confine our cortical data set to a relatively homogeneous group of L5B neurons by searching for photo-responsive neurons in L5B during each experiment. Analysis of morphologies showed that ChR2-expressing neurons are thick-tufted L5B neurons (Table 2). Single L5B (n = 12) and POm (n = 15) neurons and simultaneously recorded L5B neuron pairs (n = 9 pairs) which met the above criteria were included in some analyses. A further set of recordings was done in whole-cell configuration from single POm neurons (n = 38) to quantify photo-evoked and spontaneous EPSPs.
L5B and POm Activity During Cortical Up and Down States
Cortical neurons follow spontaneous "Up state" cortical oscillations which occur during anesthesia (Timofeev et al. 1996;Steriade 1997;Constantinople and Bruno 2011). If the L5B-POm pathway supports efficacious CT spike transfer in vivo, then we expect to see correlated cortical and thalamic activity during such Up states. To first determine the relation between cortical Up states, L5B spikes, and POm spikes, we recorded simultaneously from L5B and POm neurons (n = 12 cortical/thalamic simultaneous recordings), as well as local field potential (LFP) in L5 of BC to monitor cortical Up states (schematic shown in Fig. 1A). L5B spiking was tightly correlated with cortical Up states. Interestingly, POm spiking was correlated with cortical Up states in a similar but more selective fashion. Both L5B and POm spiking occurred exclusively during Up states and peaked during Up state onsets. However, in contrast to L5B spiking throughout the entirety of each Up state, POm spikes were sparser and nearly always occurred at Down-Up state transitions (Fig. 1B).
To understand the changes in subthreshold activity which might underlie this marked difference between cortical and thalamic spiking, we simultaneously recorded POm membrane potential in whole-cell configuration and cortical LFP from L5 in BC. All POm neurons (n = 38) had large EPSPs evoked during spontaneous cortical Up states (Fig. 1C). In contrast, EPSPs were entirely absent during cortical Down states, matching the lack of spiking in L5B (Fig. 1B).
Spontaneous EPSPs in POm as shown in Figure 1E varied widely in amplitude (from 0.5 mV to larger than 20 mV, see Supplementary Fig. 7 for population distribution), with a median amplitude of 4.4 mV (1st quartile 2.6 mV, 3rd quartile: 7.3 mV). Larger EPSPs (>8 mV) often showed stereotyped slow depolarizations consistent with low-threshold calcium spikes (LTS) characteristic of thalamic relay neurons (Jahnsen and Llinas 1984;Landisman and Connors 2007;Groh et al. 2008). Such EPSPs typically triggered one or more APs, and these large AP-triggering EPSPs most often occurred at the beginning of Up states (first event in Fig. 1D). Furthermore, EPSPs showed strong adaptation, meaning that larger EPSPs were often followed at short-time intervals by small amplitude EPSPs (Fig. 1E).
To quantify these initial observations, we next used the Up transitions in the LFP to align and pool spiking, EPSP, and LFP data across recordings (see Methods and see Supplementary Fig. 3). Figure 2 compares the population average activity patterns in L5B and POm during cortical Up states (n = 16 L5B and n = 12 POm, juxtacellular; n = 22, POm intracellular). In all experiments, L5B and POm spiking was tightly coupled to spontaneous Up state transitions ( Fig. 2A) and absent during Down states. L5B spike rates (Fig. 2B) were higher than POm spike rates (Fig. 2C) by an approximate factor of 3 (mean spike rates: 1.9 ± 0.8 Hz and 0.63 ± 0.5 Hz, for L5B and POm, respectively, L5B, n = 16; POm, n = 12; 172-1964 Up states per recording, mean 583 ± 413).
Population EPSP analysis shows that POm EPSPs (Fig. 2D) and L5B spikes (Fig. 2B) follow a similar progression through the Up state: peaking at the beginning of the Up state and slowly declining for the duration, consistent with POm activity being dominated by large L5B EPSPs during spontaneous Up states. Mean spontaneous EPSP rate was 3.8 ± 2.1 Hz (n = 38), and EPSP amplitudes (Fig. 2E) peaked in the beginning and declined by approximately 40% throughout the Up state. The time course of this adaptation suggests that the strength of L5B-POm synapses is periodically modulated by cortical Up and Down states and the associated changes in L5B spiking, with the result that CT spike transfer is most effective at Up state transitions, when the L5B-POm synapse is maximally recovered after L5B inactivity during preceding Down states.
Previous in vitro work suggested that POm neurons might be driven by single L5B spikes from single L5B neurons or, when the L5B-POm synapses are depressed, integrate 2 or more L5B spikes (Groh et al. 2008). Here, in the in vivo intracellular data set, we categorized POm APs by the number of EPSPs in the preceding 30 ms window. A population median of 45% of all APs (1st and 3rd quartiles, 21% and 61%, respectively) was driven by a single EPSP (median amplitude = 8.7 mV; 1st and 3rd quartiles, 6.4 and 14.8 mV, respectively) and the remaining 55% by 2 or more EPSPs (median amplitude 5.0 mV; 1st and 3rd quartiles, 3.2 and 7.4 mV, respectively). Single EPSPs that triggered APs were nearly twice the amplitude of integrated EPSPs (P < 0.05, rank sum). This analysis suggests that, regardless of the number of anatomical L5B inputs, POm spikes can signal either the integration of 2 or more L5B spikes or the occurrence of single L5B spikes, and that EPSP adaptation transitions L5B-POm spike transfer between the 2 modes.
EPSPs and Spiking in POm Depend on Cortical Input
The tight coupling of L5B spikes and POm EPSPs (Figs 1 and 2) suggests a causal relation between L5B in BC and POm activity. To test this causality, we inhibited BC pharmacologically and optogenetically. Spontaneous large EPSPs and APs in POm were abolished by muscimol injection into BC, with EPSP rates declining from approximately 3 to 0 Hz (Fig. 3A,B). While muscimol injection abolished Up states in BC (see Supplementary Fig. 4), Up states persisted in motor cortex (MC) (Fig. 3A, middle), suggesting that the drug remained relatively restricted to somatosensory cortex. Similarly, inhibiting BC in a more spatially and temporally specific manner via cell-type-specific photostimulation of inhibitory VGAT interneurons (Fig. 3C) (Zhao et al. 2011) immediately and reversibly abolished spontaneous POm spiking (Fig. 3D,E). These data show that in the anesthetized animal, cortical input, most likely of BC origin, is required for POm spiking. These data are in agreement with previous, less region-specific manipulations such as cortical cooling (Diamond et al. 1992) and cortical spreading depression.
EPSPs in POm Are Evoked by Photostimulation of L5B Neurons in BC
To directly confirm the L5B origin of large EPSPs in POm (Reichova and Sherman 2004;Groh et al. 2008), we photostimulated L5 neurons in BC and recorded subthreshold responses in POm, as before. Photostimulation with short (5 ms, <32 mW/mm2) laser pulses applied to the surface of BC evoked sharp deflections in the L5 LFP and short latency, high probability spikes in L5B and POm neurons (Fig. 4A,B). To measure EPSP latencies and test whether EPSPs were unitary, we made whole-cell recordings of photo-evoked responses in POm (Fig. 4C). Under minimal stimulation conditions with low intensities, we observed failure trials with no responses interspersed with successful trials consisting of large, unitary EPSPs (Fig. 4D). In addition, these EPSPs were blocked by muscimol injections into BC (Fig. 4E), confirming that these events were driven by cortical input.
Additional cortical input to POm originates in cortical layer 6 (L6) (Hoogland et al. 1987;Bourassa et al. 1995;Killackey and Sherman 2003). However, our L5B photostimulation protocol did not activate L6 neurons, which do not express ChR2 in the thy-1 mouse line (Arenkiel et al. 2007), and secondary activation of L6 via L5 cortico-cortical pathways was only seen for laser strengths approximately an order of magnitude greater than those we used for our photostimulation experiments (see Supplementary Fig. 5). Additionally, both spontaneous and photo-evoked POm EPSPs are incompatible with L6-evoked inputs: L6 inputs to the thalamus evoke EPSPs that 1) are about an order of magnitude smaller than EPSPs evoked by L5B inputs, 2) scale linearly with stimulation strength, and 3) are accompanied by simultaneous hyperpolarization (Reichova and Sherman 2004;Landisman and Connors 2007;Mease et al. 2014).
Interaction Between Evoked and Spontaneous POm Activity
These data strongly suggest that photo-evoked EPSPs in POm result from direct input from L5B (Fig. 4C,D,F). We reasoned that if both spontaneous and photo-evoked POm EPSPs and spikes are triggered by the same L5B inputs, spontaneous and evoked events measured in a single POm neuron should show statistical interaction due to synaptic depression (Reichova and Sherman 2004;Groh et al. 2008).
Spontaneous EPSPs did indeed affect subsequent photo-evoked EPSPs, in that the amplitudes of photo-evoked EPSPs decreased with the occurrence of spontaneous EPSPs preceding the photostimulus (Fig. 4G). Consistent with frequency-dependent depression of the L5B-POm pathway (Li et al. 2003;Reichova and Sherman 2004;Groh et al. 2008), population analysis of photo-evoked EPSPs showed that EPSP amplitude increased with the time since preceding spontaneous EPSPs (Fig. 4H), showing significant interaction within a window of 500 ms. This timescale of adaptation matches that described previously in vitro (Groh et al. 2008). Similarly, on the suprathreshold level, spontaneous POm spiking decreased the probability of spiking responses to subsequent photostimuli (see Supplementary Fig. 6). Thus, in agreement with previous anatomical and functional data from the L5B-POm pathway (Hoogland et al. 1987;Diamond et al. 1992;Reichova and Sherman 2004;Groh et al. 2008), these in vivo interactions of spontaneous and evoked supra- and subthreshold activity suggest that both inputs originate in L5B of the BC.
Frequency-Dependent Adaptation of L5B-POm Pathway in vivo
The spontaneous and photo-evoked data show evidence of adaptation which should be strongly frequency dependent due to depression of the L5B-POm synapse (Reichova and Sherman 2004;Groh et al. 2008). We directly tested the in vivo frequency dependence of CT spike transmission with repeated (5) brief (5 ms) photostimuli presented at frequencies from 2 to 50 Hz (Fig. 5). L5B neurons spiked with high probability across the entire frequency range ( Fig. 5A-C, upper panels), while POm spike responses decreased with stimulation frequency (Fig. 5A-C, lower panels). Thus, the efficacy of CT spike transfer strongly adapts according to the frequency of L5B input, with the most pronounced CT gain adaptation occurring for frequencies of 10 Hz and more (Fig. 5C). Examining subthreshold adaptation in whole-cell POm recordings (Fig. 5D,E) shows that photo-evoked EPSPs adapt significantly to high frequency stimulation, although with occasional recovery likely due to T-type calcium channel deinactivation. In sum, this rapid gain adaptation allows the L5B-POm pathway to operate dynamically according to the spiking patterns of L5B neurons, as in the spontaneous Up state data (Fig. 2).
EPSP Adaptation Across the L5B-POm Pathway
The variability in EPSP amplitudes in individual POm recordings was high, spanning almost an order of magnitude (see Supplementary Fig. 7). While some degree of variability was due to varying membrane potential at EPSP onset (see Supplementary Fig. 7D), we reasoned that a large amount of amplitude variation was due to different degrees of depression in L5B-POm synapses induced by variable intervals between spontaneous input L5B spikes. In a given POm recording, intervals between input L5B spikes can be inferred from inter-EPSP intervals (IEIs) in the recorded recipient POm neuron. Assuming strong depression at the L5B-POm synapse (Groh et al. 2008), in a POm neuron receiving input from a single L5B neuron, EPSP size should increase with long IEIs that allow the synapse to recover from depression. We found that a subset of neurons indeed matched this expectation (Fig. 6A). These neurons could be identified by a characteristically strong correlation between EPSP amplitude and IEI (Fig. 6B), whereas the remainder of recordings showed a weaker correlation (Fig. 6C,D). We used this variation in adaptation to discriminate between POm neurons receiving different number of L5B inputs by calculating the correlation coefficient r between EPSP amplitude and log 10 IEI for each neuron. The logic is as follows: for a neuron with only one depressing input, EPSP amplitude should always be perfectly predicted by IEI (high r); in contrast, additional independent inputs will intersperse nonadapted EPSPs in the EPSP train and decrease r. A similar approach was used by Deschenes et al. (2003) to estimate the number of lemniscal inputs to VPM neurons.
Categorizing POm Neurons by Putative L5B Input Count
We used r to assign each POm neuron a category according to putative independent L5B input count. Nearly half (18/38) of the POm neurons showed a markedly simple relationship between EPSP amplitude and IEI: large EPSPs were always preceded by long IEIs, and small EPSPs occurred exclusively after short preceding IEIs (Fig. 6A). This reliable adaptation led to a high r between spontaneous IEI and EPSP amplitude (Fig. 6B). We categorized such neurons (r > 0.6) as "single input" neurons, as this high correlation could only arise if all observed EPSPs were driven by the same source L5B neuron (or if multiple L5B inputs were always perfectly synchronized, a very unlikely situation). Single input recordings also had a clearly defined minimum IEI (∼3 ms; see Fig. 6B, lower histogram). We interpret this minimum IEI as corresponding to the highest spiking rate of the single active input L5B neuron.
The remainder (20/38) of cells showed relatively weaker correlation (r < 0.6) between EPSP amplitude and preceding IEI (Fig. 6C,D) and were termed "multiple input" recordings. These recordings showed mixes of small and large EPSPs not unambiguously predicted by IEI (Fig. 6C, arrows), suggesting 2 or more active L5B inputs. In contrast to single input neurons, multiple input neurons showed a continuous distribution of IEIs approaching 0 ms (Fig. 6D, lower histogram), further suggesting that the EPSPs arose from multiple independent L5B inputs.
Predicting the CT Spike Transfer Function and the Number of Active L5B Inputs per POm Neuron
The data presented so far suggest that CT gain in the L5B-POm pathway is a function of synaptic depression. In the following, we use experimental data to construct a simple model to predict POm spiking in response to L5B spiking patterns.
The observation of "single input" POm neurons allowed us to quantify POm EPSP amplitude as a function of IEI and thereby the in vivo adaptation of L5B-POm inputs. We used this adaptation curve (see Supplementary Fig. 8A) to predict POm EPSP amplitudes (unitless, with maximum of 1, corresponding to a completely recovered input) for L5B spikes recorded during Up states (Fig. 7A). Figure 7B shows the recovery of EPSP amplitudes towards 1 between L5B spikes, and the subsequent "adaptation" to 0 at the time of each L5B spike. The time course of predicted EPSP amplitude (Fig. 7C, lower)-the effective CT subthreshold gain-closely followed the in vivo Up state in the LFP (Fig. 7C, upper), supporting our experimental finding that CT gain is controlled by L5B spiking history.
By using instantaneous EPSP adaptation state controlled by L5B spikes (Fig. 7B) as a multiplier for a template POm EPSP sampled from whole-cell recordings (see Methods), we could create predicted EPSP trains in response to experimentally measured L5B spike trains (Fig. 7D). Using these simulated EPSP trains, we next predicted POm spiking events in response to input L5B spiking patterns using a variable threshold θ (dashed lines in Fig. 7D). The time course of predicted POm spiking event times during Up states was similar to the observed experimental time course (Fig. 7E). Furthermore, predicted POm event rates best matched experimental values (∼0.5 Hz) for θ corresponding to EPSPs recovered to 60-80% of maximal amplitude (see Supplementary Fig. 8). These predictions are consistent with a situation in which POm spiking during Up states is driven largely by L5B inputs, with temporal dynamics determined by subthreshold EPSP adaptation.
Estimating L5B Functional Convergence in POm
We next used 2 approaches, simulated EPSP trains and ratios of experimentally measured spike and EPSP rates, to estimate the number of L5B inputs converging on single POm neurons.
The logic of the simulated EPSP approach is to calculate r values from model-generated EPSP trains in response to defined numbers of L5B input patterns and compare those with the experimental r values from our intracellular data set (Fig. 8A). r values depend on 1) the number of L5B inputs, with r decreasing as the number of active inputs increase and 2) the variation in experimentally measured EPSP amplitude at a given IEI (EPSP noise). To first test this approach, we made simultaneous recordings from pairs of L5B neurons (n = 9 pairs) and used these spike patterns to generate simulated EPSP trains. We then calculated r values from simulated EPSP trains (see Supplementary Fig. 8B) from either 1) single L5B neurons (n = 18, Fig. 8B black) or 2) from pairs of L5B neurons (n= 9, Fig. 8B, red).
Predicted r for single inputs was greater than that predicted for 2 simultaneous inputs, and r decreased with the addition of EPSP noise. At noise levels matching those observed in vivo (∼15%), predicted r for single inputs was in agreement with the maximal r measured in experimental data (r = 0.87). For 2 L5B inputs, r values were very similar to the median of all experimental r values, suggesting that the number of active L5B inputs per POm neuron may be around 2. Furthermore, these results support the validity of using r to discriminate between POm neurons with single and multiple inputs.
To test for 3 or more L5B inputs, we created artificial L5B spike trains by bootstrap resampling (Efron and Tibshirani 1991; 500 repetitions) from in vivo L5B spike trains to simulate POm EPSP trains for up to 5 independent L5B inputs. As in the paired protocol, r decreased with input count and EPSP noise, and up to 4 inputs were discriminable by r value (Fig. 8C). The experimental median r value was between the simulated r values from 2 and 3 L5B inputs, suggesting that POm neurons receive between 2 and 3 active L5B inputs. Comparing the simulated r values from increasing numbers of L5B inputs to experimentally measured r values allows an estimation of the number of active L5B inputs converging onto individual POm neurons (Fig. 8D). We found that roughly half of the cells in our sample received 1-2 inputs and the remainder 3 or more, resulting in a mean of 2.5 L5B inputs per POm neuron.
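The bootstrap procedure can be sketched roughly as follows: ISIs are resampled to build N independent artificial trains, each event's amplitude follows the adaptation of its own train, multiplicative Gaussian noise (≈15%) is added, and r is computed between log10 IEI and amplitude in the merged train. The exponential adaptation rule, the stand-in ISI distribution and all constants are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
TAU = 2.0                                     # s, assumed recovery time constant

def resampled_train(isis, n_spikes):
    """Artificial L5B spike train: bootstrap-resampled ISIs, cumulatively summed."""
    return np.cumsum(rng.choice(isis, size=n_spikes, replace=True))

def merged_iei_and_amplitude(trains, noise_sd=0.15):
    """Merge N independent trains. Each event's amplitude follows the adaptation
    of its own train; the IEI is measured in the merged (recorded-like) train."""
    times, amps = [], []
    for st in trains:
        times.append(st)
        amps.append(1.0 - np.exp(-np.diff(st, prepend=-np.inf) / TAU))
    times, amps = np.concatenate(times), np.concatenate(amps)
    order = np.argsort(times)
    times, amps = times[order], amps[order]
    amps = amps * (1.0 + noise_sd * rng.standard_normal(amps.size))  # EPSP noise
    ieis = np.clip(np.diff(times), 1e-4, None)   # guard against coincident events
    return ieis, amps[1:]

recorded_isis = rng.exponential(0.30, size=2000)   # stand-in for measured L5B ISIs
for n_inputs in range(1, 6):
    trains = [resampled_train(recorded_isis, 400) for _ in range(n_inputs)]
    ieis, amps = merged_iei_and_amplitude(trains)
    r = np.corrcoef(np.log10(ieis), amps)[0, 1]
    print(f"{n_inputs} input(s): r = {r:.2f}")   # r falls as inputs are added
```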
Next, we independently estimated L5B-POm convergence by comparing L5B spike and POm EPSP rates (Fig. 8D). From 500 bootstrap resamples of L5B spike trains, we calculate that 1, 2, 3, 4, and 5 L5B inputs should result in mean POm EPSP rates of 1.5 ± 0.8, 3.4 ± 1.2, 4.7 ± 1.2, 6.4 ± 1.6, and 8.3 ± 1.6 Hz, respectively. Thus, the mean experimental spontaneous POm EPSP rate of 3.8 ± 2.1 Hz (n = 38) measured here suggests that POm neurons on average receive input from 2-3 L5B neurons, in agreement with the estimation method using r. In summary, these estimates support a view in which L5B-POm functional convergence is sparse under conditions of slow cortical oscillations, with approximately 2.5 L5B neurons dominating the activity of postsynaptic targets in POm.
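The rate-based read-off itself amounts to a nearest-match comparison; the following lines simply restate the numbers quoted above.

```python
# Bootstrap-predicted mean POm EPSP rates (Hz) for 1-5 active L5B inputs (from text)
predicted_rate = {1: 1.5, 2: 3.4, 3: 4.7, 4: 6.4, 5: 8.3}
observed_rate = 3.8                     # Hz, mean spontaneous POm EPSP rate (n = 38)

n_best = min(predicted_rate, key=lambda n: abs(predicted_rate[n] - observed_rate))
print(f"{observed_rate} Hz falls between 2 and 3 inputs; nearest match: {n_best}")
```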
Discussion
The role of POm in the whisker system is not known, and recent independent demonstrations that whisker self-motion is poorly encoded in POm (Moore et al. 2015;Urbain et al. 2015) make POm even more puzzling. The absence of simple sensory modulation of POm activity highlights the possible importance of extra sensory inputs to higher order thalamus. Here, we investigate the input from cortical L5B to POm and ask how efficiently spikes can be transferred via this pathway in vivo. We determine the relation between the cortical activity patterns and CT gain and predict the convergence of L5B inputs on individual POm neurons.
We find that during low-frequency cortical oscillations typical for anaesthetized, sleeping, and "quietly wakeful" animals (Poulet and Petersen 2008;Constantinople and Bruno 2011;Vyazovskiy et al. 2011;Reimer et al. 2014), the POm membrane potential is characterized by the occurrence of large unitary ("giant") EPSPs ( Fig. 1C-E). In combination with a set of control experiments incorporating cell-type-specific photostimulation (Figs 3 and 4), pharmacology (Fig. 3), and EPSP analysis, these data provide evidence that during the cortical Up state oscillations occurring in vivo, spiking in POm is mainly driven by L5B.
Specificity of BC L5B Synaptic Input to POm
Previous anatomical (Hoogland et al. 1987;Bourassa et al. 1995;Killackey and Sherman 2003), synaptic physiology (Reichova and Sherman 2004;Groh et al. 2008), and in vivo (Diamond et al. 1992;Groh et al. 2014) studies demonstrated large ("giant") EPSPs in POm of BC-L5B origin. In addition to L5B neurons in BC, other sources may contribute to the POm activity investigated here: somatosensory cortex 2 (S2, Liao et al. (2010)), motor cortex (Hooks et al. 2013), and SpVi (Chiaia et al. 1991;Veinante, Jacquin, et al. 2000). These inputs are well-established on anatomical grounds, but physiological data about their contribution to POm activity during Up and Down state activity are missing. Here, we provide evidence that in the absence of sensory stimulation, POm activity is dominated by L5B neurons in BC.
Firstly, optogenetic control of L5B activity in BC evoked (Fig. 4) or eliminated (Fig. 3) large, unitary EPSPs in POm. Photo-evoked EPSPs had response latencies incompatible with polysynaptic activation (Fig. 4). Furthermore, L5B spikes in BC and POm EPSPs show very similar patterns during Up and Down states (Fig. 2).
Secondly, SpVi neurons in the brainstem also make large synapses in POm (Chiaia et al. 1991; Veinante, Jacquin, et al. 2000; Lavallee et al. 2005), but these inputs exhibit almost no background firing during anesthesia (Furuta et al. 2010; Groh et al. 2014) and are thus unlikely to be the origin of cortical Up state evoked activity in POm. The photo-evoked EPSPs had average latencies of approximately 3.5 ms and are thus unlikely to be triggered via multisynaptic activation of SpVi, which is activated by the cortex with much longer latencies of approximately 10 ms (Furuta et al. 2010).
Finally, L5B in S2 (Liao et al. 2010) and deep layers of motor cortex (Hooks et al. 2013) are additional sources of CT synapses in POm and may potentially contribute to the activity we describe here. While the optogenetic and pharmacological suppression of BC was relatively region specific, suggesting BC as the dominant input during Up and Down states (Fig. 3), better spatial control of cortical activity is needed to tease apart any potential contributions of S2 to POm activity.
The Gain of CT Transfer Function Is Dynamic
Synaptic depression is a well-established feature of the L5B-POm pathway (Reichova and Sherman 2004; Groh et al. 2008). However, the consequences of synaptic depression for CT spike transfer in vivo were unknown. L5B spontaneous spiking rates of 3-4 Hz predict that the L5B-POm pathway is in a constant state of depression which prevents high-gain CT spike transfer. However, the present in vivo data show that CT gain is not constant, but rather follows cortical Up and Down states, peaking at the transition point and declining sharply during the early phase of the Up state. Large single EPSPs occur mostly during the beginning phase of the Up state (Figs 1 and 2), especially the very large EPSPs that are most likely associated with T-type Ca2+ channel currents and bursting (Jahnsen and Llinas 1984; Seol and Kuner 2015). By evoking these "driver" EPSPs, isolated L5B spikes (i.e., spikes preceded by a Down state) have the highest chance to trigger one or more POm spikes; estimates from the intracellular data suggest that nearly half of APs are triggered by such "driver" EPSPs. Subsequently, as EPSP amplitudes decline during the Up state (Figs 2 and 7), 2 or more EPSPs must be integrated to trigger POm spiking; such integration can occur in single input neurons for EPSPs separated by short IEIs, or in multiple input neurons for near-coincident EPSPs. These data demonstrate that the L5B-POm pathway shows pronounced frequency-dependent adaptation in vivo, and it is likely that synaptic depression is a main contributing mechanism. A simple model based on a few experimentally derived rules could recreate the time course and essential features of the L5B-POm spike transfer (Fig. 7), showing that the dynamics of POm spiking during Up states is largely explained by EPSP adaptation driven by L5B spontaneous spiking. Even though in vivo adaptation does not reach the extremes measured in vitro (Groh et al. 2008), we find that EPSP adaptation has functional consequences for CT spike transfer and underlies the dynamic gain of this pathway.
[Figure 8 caption, panels B-D: (B) Mean ± SD of the correlation coefficient (r) between log10 IEI and predicted EPSP amplitude for single (black) and paired experimental (red) L5B spikes, as a function of EPSP amplitude noise (additive Gaussian noise). Vertical gray bars and horizontal lines show the experimentally measured noise level and correlation coefficient, respectively (median, first and third quartiles). POm EPSP noise was determined from unadapted EPSPs from "single input" whole-cell recordings: the median noise value at given IEIs was 15% (1st quartile: 13%, 3rd quartile: 18%). (C) As in (B), but calculated for 1-5 artificial L5B spike trains resampled from interspike intervals (ISIs) of experimentally recorded L5B neurons. Each marker shows the mean predicted r, calculated for random combinations of 1-5 recorded neurons, 20 000 ISI draws. (D) Estimated distributions of L5B input count on POm neurons predicted by 2 different independent calculations: ratios between L5B spike and POm EPSP rates (rate) or the correlation coefficient r between predicted EPSP amplitudes and IEI.]
Given the complex nonlinear properties of POm neurons (Landisman and Connors 2007) and the voltage and time dependence of thalamic intrinsic mechanisms such as the T-type calcium and HCN channels (Jahnsen and Llinas 1984;McCormick and Pape 1990;Sherman 2001), it is noteworthy that EPSP adaptation is ensured by multiple intrinsic mechanisms in combination with presynaptic depression. The amplitudes of temporally isolated "driver" EPSPs in particular were decreased by depolarization (see Supplementary Fig. 7D), consistent with the presence of a T-type calcium component. In agreement with recent in vitro T-type calcium knockdown findings (Seol and Kuner 2015), these data suggest that the T-type calcium current contributes significantly to thalamic excitability to specifically enhance isolated or low frequency events. Thus, the interplay between multiple pre-and postsynaptic mechanisms strongly suggests that adaptation is a key feature of the L5B-POm pathway.
Finally, it remains to be determined exactly how the in vivo EPSP adaptation we report here interacts with changes in membrane potential elicited by modulatory inputs, in particular from CT L6 pathways (Lam and Sherman 2010;Mease et al. 2014;Crandall et al. 2015) and subcortical inhibition (Veinante, Lavallee, et al. 2000;Bartho et al. 2002;Trageser and Keller 2004;Lavallee et al. 2005;Bartho et al. 2007).
Expected L5B-POm Spike Transfer in the Awake Animal
In the awake rat, L5B neurons spike at 3-4 Hz (Sakmann 2008, 2009; Oberlaender et al. 2012), predicting that this pathway may predominantly operate as an integrator of inputs. However, even at the intermediate gains expected at these rates, only a few simultaneous L5B inputs would be needed to elicit POm spikes. This is a very different situation compared with thalamocortical connections, in which many synchronous thalamic inputs are required to trigger cortical spiking (Gabernet et al. 2005; Bruno and Sakmann 2006; Jia et al. 2014). Furthermore, in the awake animal, cortical spiking occurs in structured patterns (Luczak et al. 2007) with periods of inactivity, suggesting that CT spike transfer may in principle occur with high gain in the awake state. It is likely that inputs from higher order cortical areas such as S2 (Liao et al. 2010) and deep layers of motor cortex (Hooks et al. 2013) contribute substantially to POm spiking in the awake animal. Furthermore, L6 CT neurons-which probably contributed very little to POm activity in this study, due to "ultrasparse" spontaneous firing rates of approximately 0.1 Hz (Velez-Fort et al. 2014)-likely play a more important role during wakefulness. While recent reports show that POm neurons are indeed quite active in the awake animal (Moore et al. 2015; Urbain et al. 2015) and produce relatively complex spike trains with long and short interspike intervals, the relationship between cortical and POm spiking described here remains to be investigated under nonanesthetized conditions.
Possible Role of the L5B-POm Pathway in Transferring Cortical Spike Output Through CT Circuits
It has been suggested that the majority of brain activity reflects "internal states," that is, spiking activity that is independent of sensory input, and that sensory inputs serve to modulate or suspend this activity (Llinas and Pare 1991;Raichle et al. 2001;Kenet et al. 2003;Ringach 2009;Destexhe 2011). In human fMRI studies, Raichle and colleagues (Zhang et al. 2008) report strong correlations between the cortex and the thalamus during spontaneous oscillations associated with the "default network state" (Raichle et al. 2001) of the resting brain. Spread of such internal cortical state throughout the cortico-thalamo-cortical network may employ CT signaling via higher order thalamic nuclei.
The idea that higher order nuclei route cortical activity to other cortical areas was first formulated by Sherman and colleagues (Sherman andGuillery 1996, 2006;Reichova and Sherman 2004).
Here we provide evidence that in vivo, the higher order nucleus POm is indeed strongly activated by cortical input from L5B, particularly isolated L5B spikes occurring after periods of silence. However, a direct measure of CT convergence, that is, count of the number of anatomical L5B inputs per POm neuron, has yet to be achieved. Here, as an indirect first estimate of CT convergence, we find that during Up/Down state oscillations, each POm neuron receives functional input from a low number of active L5B neurons. Estimates from 2 different methods suggest that under these experimental conditions, approximately one-third of the POm neurons have only one active L5B input, with an average of 2.5 L5B input neurons per POm neuron (Fig. 8). Thalamus-projecting L6 neurons are ultrasparse firing (Velez-Fort et al. 2014) and evoke small and slow EPSPs (Reichova and Sherman 2004;Landisman and Connors 2007), making it unlikely that L6 inputs contributed significantly to this convergence analysis. However, it should be noted that both the level of functional CT convergence and the contribution of L6 input are most likely dependent on behavioral state.
These results suggest that single or synchronized spikes of a few BC L5B neurons can be amplified at the CT driver synapse and "broadcast" via POm simultaneously to motor, primary, and secondary sensory cortical areas via the widespread projections POm makes to various cortical areas (Deschenes et al. 1998; Meyer et al. 2010; Theyel et al. 2010). Consistent with this amplification-and-broadcasting idea is the net excitatory effect of POm on cortical networks (Bureau et al. 2006; Petreanu et al. 2009; Theyel et al. 2010; Viaene et al. 2011; Gambino et al. 2014; Jouhanneau et al. 2014), which enhances and prolongs cortical sensory responses (Mease et al. 2016). | 2018-04-03T04:14:46.136Z | 2016-05-12T00:00:00.000 | {
"year": 2016,
"sha1": "8198537cfa7f3142d640b14d6a4a0d71724ddd5a",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/cercor/article-pdf/26/8/3461/17331412/bhw123.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8198537cfa7f3142d640b14d6a4a0d71724ddd5a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247107576 | pes2o/s2orc | v3-fos-license | Outcomes of subchorionic hematoma‐affected pregnancies in the infertile population
Abstract Objective To determine the implications of an incidentally noted subchorionic hematoma on pregnancy outcomes in the infertile population. Methods Retrospective cohort study at a tertiary care, university‐based facility. All patients with intrauterine pregnancy on initial obstetric ultrasound presenting to an infertility clinic between January 2015 and March 2018 (n = 1210), regardless of treatment cycle, were included. Nonviable pregnancies were excluded. The main outcome measured was association between subchorionic hematoma and first trimester miscarriage. Results The prevalence of subchorionic hematoma was 12.5% (n = 151) and did not differ by type of fertility treatment. There was no association between subchorionic hematoma and first trimester miscarriage; however, among patients with subchorionic hematoma, those who reported both bleeding and cramping had an increased probability of miscarriage compared to those without symptoms (0.62 vs. 0.12, P <0.001). The live birth rate in this sample was 81.3% and there were no statistically significant differences in pregnancy outcomes between those with and without subchorionic hematoma. Conclusion Among an infertile population, there was no increased risk of miscarriage when subchorionic hematoma was seen on early ultrasound; however, when patients noted both vaginal bleeding and cramping, their probability of miscarriage was significantly increased.
| INTRODUC TI ON
A subchorionic hematoma (SCH) is a common finding on obstetric ultrasound, and its prevalence is often higher in the infertile population, affecting between 18% and 40% of those pregnancies in recent studies. [1][2][3] SCHs appear on ultrasound as fluid collections between the chorion and the uterine wall, representing accumulated blood. 4 SCH may be noted incidentally at the time of the initial obstetric ultrasound or during subsequent ultrasounds, such as when women present with bleeding in the first trimester. Given the tendency for early pregnancy ultrasounds in women undergoing fertility treatment, we hypothesized that the detection of SCH may be increased in the infertile patient population without any impact on pregnancy outcomes.
The impact of SCH on early pregnancies has been disputed; prior studies in the fertile population have shown increased rates of pregnancy loss, [4][5][6] while recent studies demonstrate no increased risk. 1,7 Few studies have evaluated SCH among the infertile population, and these have focused primarily on pregnancies conceived by in vitro fertilization (IVF). Additionally, the majority of these studies have focused on identifying risk factors for SCH and not on evaluating the impact of SCH on pregnancy outcomes such as first trimester loss or live birth rate. Anderson et al. is the only study in the infertile population that included patients' symptoms in their analysis, but it did not analyze how symptoms impacted the rate of miscarriage. 3 As such, SCH is of unknown significance when considering pregnancy outcomes among the infertile population.
Given the high reported incidence of SCH among the infertile population, understanding the influence of SCH in these pregnancies is of paramount importance for clinical management, including counseling patients on expectations surrounding pregnancy outcomes.
Thus, the objectives of this study were (1) to determine risk factors for development of SCH in the infertile population, (2) to determine whether SCH was associated with first trimester pregnancy loss, and (3) to evaluate pregnancy outcomes of SCH-affected pregnancies.
| MATERIAL S AND ME THODS
This was a retrospective cohort study of all patients who presented to a single fertility clinic between January 2015 and March 2018 for an obstetrical ultrasound between 5 and 9 weeks of gestation. Current Procedural Terminology codes (76817, 76815, 76816, 76802, 76801) were used to identify all obstetric scans performed.
While ultrasound was routinely performed between 6 and 7 weeks estimated gestational age, it was performed as early as 5 weeks for those with ectopic risk factors or symptoms, and as late as 9 weeks for patients who conceived spontaneously. All images were obtained on a GE Voluson S8 ultrasound system using a combination of C1-5-RS transabdominal probes (2-5 MHz) and IC9-RS transvaginal probes (2.9-9.7 MHz). Pregnancy episodes, defined as the period from confirmation of an intrauterine pregnancy with a fetal heartbeat until the conclusion of that pregnancy, were included for analysis. Cases were excluded if there was a pregnancy of unknown location, an ectopic pregnancy, or a pregnancy failure, defined according to the guidelines endorsed by the American College of Obstetricians and Gynecologists. 8,9 For patients who had more than one pregnancy during this time period, subsequent pregnancies were included and considered separate pregnancy episodes in the data analysis if they met the inclusion criteria detailed above.
SCH was defined as a fluid collection visualized on ultrasound between the gestational sac and the uterine wall. The SCH was recorded as the average size and largest diameter by trained sonographers and all images were reviewed the same day by a Reproductive Endocrinologist. SCH were stratified into small, medium, and large sizes post hoc using the ratio of mean SCH diameter to mean gestational sac diameter with a small SCH comprising <5% of the gestational sac, a medium SCH comprising 5%-25% of the gestational sac, and a large SCH comprising >25% of the gestational sac. 4 During the ultrasound appointment, patients were routinely asked about the presence of bleeding or cramping symptoms by the sonographer and documented in the ultrasound report.
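For illustration, the post hoc stratification can be written as a small helper function; how ratios falling exactly on the 5% and 25% boundaries were assigned is an assumption, as the paper does not state it.

```python
def sch_size_category(sch_mean_diameter_mm: float, sac_mean_diameter_mm: float) -> str:
    """Post hoc size strata: ratio of mean SCH diameter to mean gestational sac
    diameter; <5% small, 5-25% medium, >25% large. Treatment of ratios exactly
    at 5% or 25% is an assumption."""
    ratio = sch_mean_diameter_mm / sac_mean_diameter_mm
    if ratio < 0.05:
        return "small"
    if ratio <= 0.25:
        return "medium"
    return "large"

print(sch_size_category(8.0, 25.0))   # ratio 0.32 -> 'large'
```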
Chart review was performed by three reviewers with random cross-checks to ensure inter-reviewer consistency. Data on patient demographics, fertility treatments, presence or absence of SCH, symptoms of vaginal bleeding or cramping in the first trimester, and pregnancy outcomes were extracted from the medical record. All fertility treatments were included: natural cycles, oral cycles (clomiphene or letrozole), injectable gonadotropin cycles, hybrid cycles, fresh IVF cycles, frozen IVF cycles, and donor egg cycles. The primary outcome for this study was miscarriage, defined as appropriately decreasing beta-hCG following a previously documented viable intrauterine pregnancy, a pregnancy ≥7 mm without a cardiac activity less than 10 weeks (embryonic demise), or a fetal demise or spontaneous loss of a pregnancy between 10-20 weeks. 9,10 Secondary outcomes included live birth rate and obstetrical complications.
All variables were analyzed using descriptive statistics such as median and proportions, where appropriate. The normality of continuous variables was assessed by inspecting skewness and kurtosis, and by the Shapiro-Wilk test. All continuous variables were non-normally distributed and thus expressed using median and interquartile range; comparisons were performed using Wilcoxon Rank test. Comparisons between categorical variables were performed using Chi-square or Fisher's exact test, where appropriate. Bivariate analysis was performed to compare demographic characteristics, fertility treatments, and pregnancy symptoms between patients with and without SCH. Prevalence of SCH by fertility treatment type was compared using Chi-square or Fisher's exact test. To examine the relationship between pregnancy symptoms (bleeding and/or cramping) and miscarriage among patients with SCH, we adjusted for independent predictors of miscarriage (recurrent pregnancy loss [RPL] and maternal age) and reported the results of the logistic regression models as predicted probabilities. Patients with cramping alone were not included in the logistic regression given the small number of patients with these findings. Bivariate analysis was also performed to compare pregnancy outcomes across patients with and without SCH. All data management and statistical analysis were performed using SAS software version 9.4 (SAS Institute Inc., Cary, NC, USA), and statistical significance interpreted using 95% confidence intervals.
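A minimal sketch of the adjusted model is given below, using Python's statsmodels rather than the SAS software actually used; the dataframe layout, column names and symptom-group labels are hypothetical stand-ins for the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def predicted_miscarriage_probabilities(df: pd.DataFrame) -> pd.Series:
    """Logit of miscarriage (0/1) on symptom group, adjusted for maternal age
    and recurrent pregnancy loss (rpl, 0/1); probabilities evaluated per
    symptom group at sample-mean covariates. All names are hypothetical."""
    fit = smf.logit(
        "miscarriage ~ C(symptoms, Treatment('none')) + maternal_age + rpl",
        data=df,
    ).fit(disp=0)
    grid = pd.DataFrame({
        "symptoms": ["none", "bleeding", "bleeding_and_cramping"],
        "maternal_age": df["maternal_age"].mean(),
        "rpl": df["rpl"].mean(),
    })
    return pd.Series(np.asarray(fit.predict(grid)), index=grid["symptoms"])
```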
Ethical approval of this study was granted by the University of Michigan Institutional Review Board (HUM00150576). Informed consent was waived as this is a retrospective review of existing data included in the standard care of patients.
| RE SULTS
A total of 1210 viable intrauterine pregnancies were included in the analysis. Women had a median age of 36.7 years and a median body mass index of 25.0 kg/m2, and were primarily non-Hispanic white (78.0%) and nulliparous (70.4%) (Table 1). The prevalence of SCH was 12.5% (n = 151), with 2.7% (n = 4), 8.0% (n = 12), and 89.4% (n = 135) of patients having a small, medium, and large SCH, respectively. Of the 151 patients with SCHs, the SCH was noted to stay the same size in 2.7% (n = 4), increase in size in 12.6% (n = 19), decrease in size in 9.3% (n = 14), and resolve in 15.9% (n = 24) of patients on subsequent ultrasounds. The remaining 90 patients either did not have additional ultrasounds at our center or the size of the SCH was not documented. In comparing characteristics of pregnancies with SCH to those without SCH, male factor infertility was more prevalent among patients with SCH (34.4% vs. 24.6%, P = 0.009), but there were no other differences in infertility diagnoses across SCH categories. There were no differences in stage of transfer (blastocyst or cleavage); trophectoderm grade (good, fair, and poor); or simplified SART grade (good, fair, and poor) between IVF patients with SCH and those without. 11 Use of 81-162 mg aspirin was more common among patients with SCH compared to those without (49.7% vs. 39.6%, P = 0.0095); however, there was no difference in the miscarriage rate for those using aspirin (data not shown). Patients with SCH more often reported vaginal bleeding (40.4% vs. 10.2%, P < 0.001) or both vaginal bleeding and cramping (15.9% vs. 9.1%, P = 0.009) compared to those without SCH.
The overall prevalence of first trimester miscarriage in this population was 17.4% (n = 210). There was no relationship between SCH and first trimester miscarriage: 19.2% (n = 29) of those with SCH and 17.1% (n = 181) without SCH ended with first trimester miscarriage (P = 0.521) (Table 2). Post hoc power analysis demonstrated 80% power (alpha error of 5%) to detect a 9.8% increase in miscarriage rate.
To examine the impact of reported symptoms on miscarriage in SCH-affected pregnancies, Figure 2 shows predicted probabilities of miscarriage calculated based on a logistic regression model, adjusted for maternal age and recurrent pregnancy loss. Patients with SCH who reported symptoms of bleeding had no significantly increased predicted probability of miscarriage compared to those with SCH and without bleeding (0.17 vs. 0.12, P = 0.366). However, among patients with SCH, those who reported bleeding and cramping had an increased probability of miscarriage compared to those without symptoms (0.62 vs. 0.12, P <0.001).
The live birth rate for the entire cohort was 81.3% and the median gestational age at delivery was 39 weeks. There were no statistically significant differences in pregnancy outcomes between patients with and without SCH, including live birth rates, gestational age at delivery, preterm birth, birth weight, abruption, placental anomalies, preterm premature rupture of membranes, preterm labor, and maternal comorbidities such as hypertensive disorders and diabetes (Table 2).
| DISCUSS ION
In this retrospective chart review of over 1200 pregnancies at a single fertility clinic, we found that the presence of SCH did not vary by fertility treatment cycle or infertility diagnosis, aside from male factor infertility. Importantly, we found that incidental SCH in the infertile population was not associated with an increased risk of miscarriage or adverse pregnancy outcomes. However, patients presenting with SCH and both vaginal bleeding and cramping were at an increased risk of miscarriage compared to patients with no symptoms or only vaginal bleeding.
Prior similarly designed studies in the infertile population have found conflicting results regarding the risk of SCH by fertility treatment cycle. Truong et al found no difference in the rate of SCH by fertility treatment cycle; however, they did not differentiate between fresh and frozen IVF. 2 Asato et al found that the frequency of SCH was significantly higher in the IVF group than in the non-IVF group, and upon further analysis, SCH was more prevalent with frozen rather than fresh embryo transfer. 12 A subsequent study found that SCH was more common with fresh rather than frozen embryo transfer, 13 while a recent study found no difference between these populations. 3 Zhou et al proposed that SCH was more common in fresh embryo transfer due to excess exogenous estrogen altering the uterine environment and placental implantation and development. However, a recent study of frozen embryo transfers found that the incidence of SCH was not associated with estradiol levels. 14 As such, the role of exogenous estrogen and the mechanism of SCH formation remain unclear in the infertile population. Indeed, it has been suggested that the formation of SCH is likely more complex than just estrogen and progesterone levels or protocols, and may be related to lack of the corpus luteum and factors it produces such as relaxin and vascular endothelial growth factor. 15 While our study did not account for route or dose of supplemental estrogen or progesterone, the finding that SCH did not vary by fertility treatment cycle suggests that it is also not simply related to the presence or absence of a corpus luteum.
Instead, factors associated with SCH included male factor infertility and aspirin use. The association of SCH with male factor infertility is a novel finding, as no other study has included infertility diagnoses in its analysis. While it is not entirely clear why male factor infertility would be associated with SCH, sperm defects have previously been associated with RPL. For example, increased DNA fragmentation has been associated with early pregnancy loss. 16 Notably, these studies did not assess for the presence of symptoms or whether they impacted the risk of miscarriage.
In our sample, when patients with SCH reported symptoms of vaginal bleeding, there was no increased probability of miscarriage compared to those without symptoms. However, when patients reported both bleeding and cramping, the probability of miscarriage was markedly increased. While this is a novel finding in the infertile population, it builds on research in the fertile population. Other studies have shown that, among fertile patients admitted for vaginal bleeding in the first trimester, those with SCH had significantly increased rates of miscarriage compared to those without SCH 19 and, of patients with SCH, those with vaginal bleeding that prompted antenatal admission were more likely to have a preterm birth. 20 Vaginal bleeding alone may not have been significant in our study due to differences in severity of symptoms, with our population having a wider range of vaginal bleeding, from spotting to heavy bleeding, compared to more significant bleeding necessitating inpatient admission. Intuitively, it makes sense that more severe symptoms portend a higher risk of miscarriage. Future research should elicit not just symptoms but also their severity, to understand how they impact the risk of miscarriage.
We found that SCH is not associated with adverse pregnancy outcomes in the infertile population. Unlike the meta-analysis by Tuuli et al, our study failed to demonstrate a significant increase in the incidence of placental abruption among patients with SCH; however, only 15 patients experienced placental abruption in our cohort. 5 Few studies have evaluated outcomes of SCH-affected pregnancies in the infertile population, and these findings have often been limited to birth weight, with contradictory conclusions. 3 Further research is necessary to better assess these outcomes.
The detection of an SCH on an early ultrasound is often distressing for patients. The findings of this study are thus helpful in informing a diverse, infertile patient population on the potential adverse outcomes of SCH. Patients with incidental SCH can be comforted that they are not at increased risk of miscarriage, while patients with both vaginal bleeding and cramping should be considered at higher risk for miscarriage and more thoroughly counseled. Furthermore, patients with SCH can be reassured that there did not appear to be associated adverse maternal or obstetrical outcomes. These findings should inform more focused counseling for highly anxious patients and providers. Future research is needed to examine the impact of these symptoms on SCH-affected pregnancies, with more attention to detail regarding the severity of symptoms.
In conclusion, risk factors for SCH in the infertile population include male factor infertility and aspirin use, but not fertility treatment type or embryo grade. There is no increased risk of miscarriage among pregnancies impacted by SCH, but if patients with SCH report both vaginal bleeding and cramping, their probability of miscarriage is increased. There are no differences in pregnancy outcomes for patients with SCH. These findings should reassure patients and providers with asymptomatic SCH and improve counseling for patients with symptomatic SCH.
ACK N OWLED G M ENTS
Sarah Block, BGS provided help with proofreading and manuscript formatting.
CO N FLI C T S O F I NTE R E S T
The authors report no conflicts of interest.
DATA AVA I L A B I L I T Y S TAT E M E N T
No. Research data are not shared. | 2022-02-26T06:23:41.647Z | 2022-02-25T00:00:00.000 | {
"year": 2022,
"sha1": "75421241b05acf374c669023c8859e6da22db4a5",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ijgo.14162",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "3e1588b60f428b2ef01d47a05d9a896755004d45",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51654685 | pes2o/s2orc | v3-fos-license | Does a mandatory non-medical switch from originator to biosimilar infliximab lead to increased use of outpatient healthcare resources? A register-based study in patients with inflammatory arthritis
Objectives National Danish guidelines in May 2015 dictated a mandatory switch from originator infliximab (INX) to biosimilar CT-P13 in patients with inflammatory rheumatic disease. We investigated if this non-medical switch changed use of outpatient hospital resources. Methods Observational cohort study. Switchers were identified in DANBIO. Rheumatic outpatient contacts, visits and services were identified in the National Patient Registry. The 6-month rates for (1) number of visits (or services) and (2) days with ≥1 visit (or service) were compared before/after switching (paired t-tests). Visits per week per patient before/after the switch date were analysed with graphical interrupted time-series analysis. Results In 769 switchers (372 males, median age 54 years (IQR 44–66)), 1484 outpatient contacts, 6718 visits and 9243 days with services (693 on switch date) were identified. Mean visit rate was 3.89 before and 3.95 after switch (p=0.35). Total number of services was 19 752 (2019 on switch date). For the 16 service categories, differences in mean rates before versus after switch were small and close to zero. Visits per week per patient appeared similar before/after switch with peaks every ≈8 weeks (standard INX infusion interval). Conclusion Changes were marginal with no clinically relevant increase in use of outpatient health care resources 6 months after compared with 6 months before mandatory switch from originator to biosimilar infliximab.
Original article
Does a mandatory non-medical switch from originator to biosimilar infliximab lead to increased use of outpatient healthcare resources? A register-based study in patients with inflammatory arthritis
Bente Glintborg, 1,2 Jan Sørensen, 3 Merete Lund Hetland 1,4
To cite: Glintborg B, Sørensen J, Hetland ML. Does a mandatory non-medical switch from originator to biosimilar infliximab lead to increased use of outpatient healthcare resources? A register-based study in patients with inflammatory arthritis.
What does this study add? ► We found no evidence of changes in the use of outpatient health resources following the switch from originator to biosimilar infliximab.
Availability of the cheaper biosimilar disease-modifying antirheumatic drugs (bsDMARDs) has raised financial incentives to change treatment practice. 1 2 Recent European League Against Rheumatism guidelines state that biosimilars should be included in treatment algorithms on equal terms with the originator. 3 However, recommendations are debated regarding non-medical switching (ie, switching for economic reasons in patients on stable treatment with the originator). [4][5][6] In May 2015, Danish nationwide guidelines dictated a mandatory switch from originator (Remicade, INX) to biosimilar infliximab (Remsima, CT-P13), since the cost of CT-P13 was 36% of that of INX. 7 Non-medical switching might potentially induce uncertainty for patients and lead to an increased number of contacts with the healthcare provider for patient education or closer monitoring of the disease. [8][9][10] This could impose additional costs on healthcare services, potentially off-setting savings from the cheaper product. Existing evidence provides no meaningful data about the cost consequences of switching. Previous budget impact and pharmacoeconomic analyses in autoimmune diseases have mainly included drug costs and prescription patterns and disregarded the impact on use of healthcare services. 2 11-13
The nationwide clinical DANBIO registry prospectively collates detailed clinical information on patients with inflammatory arthritis. 14 These data, combined with administrative national health registries, offer a unique opportunity to assess the use of healthcare resources in relation to switching. Thus, we aimed to investigate whether switching from INX to CT-P13 in patients with inflammatory arthritis affected outpatient consultation rates and services provided within departments of rheumatology 6 months after compared with 6 months before the switch.
Materials and methods
Study population
The study population was identified in DANBIO and included INX-treated patients (rheumatoid arthritis, psoriatic arthritis or axial spondyloarthritis) who switched from INX to CT-P13 between 20 March and 1 January 2016 (=switchers). 7 Treatment regimen and switch date were obtained from DANBIO. Switch date was validated by the local departments of rheumatology. 7 Vital status (=alive) by 1 August 2016 was confirmed through the central person register. Only patients alive during the full 6-month observation period were included.
The Danish National Patient Registry is virtually complete. It is organised with one record for each patient contact (which could be a series of visits for the same health problem). 15 Information on all hospital outpatient contacts (completed and ongoing) was obtained. Patients were included if they had ≥1 outpatient contact related to inflammatory joint disease at departments that registered patients in DANBIO. In Denmark, infusions with biological DMARDs are administered at these departments.
Each outpatient contact consists of a number of outpatient visits (ie, physical visits) with corresponding visit dates and records of services provided. Services are coded by date and type of service. Only records for visits and services dated 0-6 months before and after the switch date were included in this analysis. Services were categorised into 16 meaningful groups of relevance to INX treatment for inflammatory rheumatic diseases (shown in table 1). All visits and services provided by physicians and nurses (but not secretaries) related to outpatient contacts were included in the analysis.
Analyses were performed in Stata V.15.1 (StataCorp). For individual switch patients, the following rates were calculated for the 6-month period before and after switch: (1) days with ≥1 outpatient visit and (2) number of visits. Similar calculations were done for outpatient services. Visits and services performed on the switch date were analysed separately.
Differences in rates before versus after switch were compared by paired t-tests.
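For illustration, the paired comparison could be run as in the sketch below (the study used Stata; this version uses Python with hypothetical per-patient counts).

```python
import numpy as np
from scipy import stats

def compare_6_month_rates(before: np.ndarray, after: np.ndarray) -> dict:
    """Paired comparison of per-patient 6-month rates (e.g. visit counts)
    before versus after the switch date."""
    t, p = stats.ttest_rel(after, before)
    return {"mean_before": before.mean(), "mean_after": after.mean(),
            "t": t, "p": p}

# Toy example with hypothetical per-patient visit counts for 769 switchers
rng = np.random.default_rng(0)
before = rng.poisson(3.9, size=769).astype(float)
after = rng.poisson(3.95, size=769).astype(float)
print(compare_6_month_rates(before, after))
```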
Switchers were also categorised according to change in number of services (fewer/the same/more) before versus after switch, and the service rates were compared for these categories.
The weekly rate of outpatient visits before and after the switch date was analysed with graphical interrupted time-series analysis. 16
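A rough sketch of how the weekly per-patient visit rate could be tabulated from registry data is shown below; the column names and the ±26-week window are assumptions.

```python
import pandas as pd

def weekly_visit_rate(visits: pd.DataFrame, n_patients: int) -> pd.Series:
    """Visits per week per patient around the switch. `visits` is assumed to
    have one row per physical visit with datetime columns visit_date and
    switch_date; week 0 starts on the switch date, negative weeks lie before."""
    days = (visits["visit_date"] - visits["switch_date"]).dt.days
    week = (days // 7).astype(int)
    rate = week.value_counts().sort_index() / n_patients
    return rate.loc[-26:26]        # restrict to the ±6-month observation window
```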
Patient cohort and outpatient contacts
Among 802 switchers identified in DANBIO, 769 patients were included in the analysis (figure 1). In these patients, 5091 outpatient hospital contacts (completed and ongoing) were identified, of which 1484 contacts were related to INX treatment at departments using DANBIO (figure 1). The 769 included patients had a total of 6718 visits that occurred within 6 months before (n=2995, 45%), on (689, 10%), or 6 months after (n=3034, 45%) the switch date. Overall, 19 752 services were provided (2019 of these on the switch date).
The weekly rate of visits per patient appeared stable during the time period (figure 2). The peaks observed approximately every 8 weeks were consistent with clinical practice (standard INX infusion interval).
Days with services
The total number of days with services was 4131 before (mean 5.4 days, SD 2.8) and 4400 after switch (mean 5.8 days, SD 2.8) (p<0.01, paired t-test). After the switch, 259 patients (34%) had fewer (mean −2.4, SD 1.7), 169 patients (22%) had the same and 341 patients (45%) had more (mean 2.6, SD 2.0) days with services than before switch.
Figure 2 Weekly rates of physical visits per patient 6 months before and 6 months after switching.
Analysis of services provided
The rates of provided services declined after switch for INX treatment and increased for telephone consultations, patient guidance, intravenous medication, clinical investigations and controls, and observations. Numerical differences were, however, small and close to zero (table 1).
The number of patients with changes in provided services before versus after switch (fewer/more) appeared similar (table 1).
Discussion
In this study of 769 patients with inflammatory arthritis who switched from INX to CT-P13 in routine care, we found only minor, clinically insignificant changes in the use of outpatient health resources during the 6-month period after the switch compared with the period before the switch.
The biosimilars are expected to provide similar standard care at lower costs, thus facilitating better access to treatment and perhaps earlier treatment during the disease course, which has major potential societal implications. 4 8 The availability of bsDMARDs has been reported to cause changes in prescription practice, although the uptake of bsDMARDs varies markedly between countries. 8 Previous pharmacoeconomic and budget impact analyses have mainly included the direct costs of the medication. 11 13 To our knowledge, no previous studies have investigated the use of healthcare resources accompanying switching to bsDMARDs.
Discussions on how to use the cheaper biosimilar drugs in patients on stable treatment with the originator drug are ongoing. 5 6 According to the Danish guidelines, switching was mandatory and INX was no longer routinely available. 17 Within a few months, CT-P13 acquired the majority of the INX market share across indications in Denmark. 18 We have previously reported 1-year effectiveness outcomes in the patient cohort. 7 In the current study of the impact of non-medical switching on the use of healthcare resources, we used three different indicators: physical visits, days with services and number of services provided. None gave an indication of substantial changes after switching. The visit rate was unchanged, while the number of services increased slightly after the switch, with a minor increase in telephone consultations and clinical investigations/controls. However, these services were only provided for a small proportion of the patients (7%-16%). Although the use of telephone consultations increased slightly, the associated activity-based fee (DRG-tariffs (18€)) 19 is modest.
Previous studies have reported that patients and healthcare professionals often are somewhat reluctant to implement the use of bsDMARDs. 9 Negative expectations (=nocebo effects) are suspected to have an impact on treatment effects and costs. 10 20 21 It is reassuring that we found only minor changes in the use of healthcare resources, despite the switch being mandatory and that no specific strategy for patient information was set up. The fact that patients were informed about the switch at an already scheduled appointment could potentially reduce the nocebo effect-treatment was handled 'as usual'. Some patients may not have been aware of the switch, but this cannot be explored with the available data.
This study has strengths and limitations. The availability of valid and virtually complete data from routine clinical and administrative national registries was a strength. The validity of diagnoses and treatments in DANBIO is high. 22 The switch date in DANBIO coincided with a physical visit for the vast majority of patients, only ≈10% of patients had no physical visit (figure 2) or service provided on that date. This might be due to imprecise registration in DANBIO or errors in data coding. The registration of physical visits and days with services in the national patient registry has high validity. 15 Although there seemed to be some variation in the routines regarding registration of services between departments, the paired analyses were robust to such variation. The service rate during the 6-month period following switch seemed stable, which indicates that the time period that was chosen for the investigation of a potential impact of the switch was relevant. However, we have no information regarding the duration of the individual services provided; thus, if the switch induced longer and more elaborate consultations/phone calls, this has not been captured.
In conclusion, this study does not provide evidence of an increased use of outpatient healthcare resources in departments of rheumatology following the nationwide mandatory switch from originator to biosimilar INX. | 2018-08-01T20:48:44.203Z | 2018-07-01T00:00:00.000 | {
"year": 2018,
"sha1": "ffc6bbd579f7e13cc34cbd45de0735fb8dddc5e4",
"oa_license": "CCBYNC",
"oa_url": "https://rmdopen.bmj.com/content/rmdopen/4/2/e000710.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffc6bbd579f7e13cc34cbd45de0735fb8dddc5e4",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221700723 | pes2o/s2orc | v3-fos-license | Improving the Drilling Process by Its Adaptive Control
The experience from the use of NC machine tools has shown that it is not enough to optimise cutting conditions without a more detailed knowledge of the machining process [1], [7]. Numerical control of a machine tool opens new possibilities to optimise machining and gives the technologist more room to deal with the technological process in a creative way. This approach is demonstrated on the example of a simple helical drill. A similar approach can be adapted for milling when the cut width increases gradually, for thread turning, gear cutting and other frequent machining operations.
Introduction
Classical production machining methods are based on the cutting conditions vc, ap and f, which are usually kept constant during the entire machining of a certain workpiece area [8], [12]. The cutting speed changes, e.g., when turning the workpiece face at a constant spindle speed. The cutting depth varies when machining forgings and castings that have technological bevels [14]. The feed rate does not normally change during machining, as this ensures a constant quality of the machined surface. The stability of the cutting conditions also determines the machine time, for which the general relation is t = l/(n·f) [7], [12], where l is the length of the surface to be machined, n is the rotational frequency of the workpiece (or tool), and f is the feed per rotation of the workpiece (or tool) [6], [11]. A deeper analysis of the machining process shows that a purposeful change in the cutting conditions (n, f) can positively affect the cutting process and its results [5], [14].
Drilling process control
With classical helical drills (Fig. 1), the cutting speed and the tool geometric parameters change along the cutting edge. Besides those changes, the existence of the transverse (chisel) edge makes the drill's incision more difficult, from its first contact with the workpiece up to reaching full contact along the whole length of the cutting edges.
Fig. 1 Change of drill geometric parameters when incising the workpiece
The rake angle, on which the process of chip formation depends, changes considerably. Similarly unfavourable conditions occur when the drill leaves the hole, and this often leads to premature drill wear.
Fig. 2 shows the result of experiments aimed at measuring the axial cutting force when drilling steel and cast iron with drills made of high-speed steel. It can be seen that when the drill enters the cut, the axial cutting force (Fo+2Ff) grows steeply. This is caused by the fact that only the transverse (chisel) edge is engaged at first contact. The rake angle there is negative, which means a much higher degree of plastic deformation of the cut material as it is transformed into the chip. The engagement gradually spreads to both cutting edges, by which the forces Ff grow. The character of the increase of the axial cutting force depends on the rigidity of the technological system: the higher the rigidity, the steeper the increase of the axial force. When drilling steel, the zone of steady drilling is marked by an observable fluctuation of the force, which is the effect of the chip formation characteristic of steel (a continuous chip with shear slips affecting the whole chip cross-section). Higher dynamics of the cutting force appear when drilling cast iron, because crack formation occurs there together with the formation of a segmented chip. Similarly, when the drill leaves the hole, the decrease of the cutting force differs between cast iron and steel. When drilling cast iron, a more gradual decrease was recorded, which is probably the result of the gradual release of the elastically deformed system. For steel, this decrease is steeper.
Experimental tests have shown that the character of the cutting force change when entering and leaving the cut can be influenced by changing the drill rotation frequency or the feed, which can be provided by a correction of the NC machine control programme. Drill durability under standard cutting conditions and with a varying feed was compared experimentally. The tests were performed when drilling cast iron with coated high-speed steel HS12-1-2 tools of diameter 12 mm and length 40 mm. Standard cutting conditions recommend a feed of f = 0.21 mm. In the modified mode, the feeds when the drill was entering and leaving were decreased by 40-50%, while at the same time the feed when drilling in the central part of the hole was increased by 20-25%. The corresponding feeds were f1,3 = 0.18 mm and f2 = 0.25 mm. This procedure can be justified by the fact that during standard drilling, the selected feed is somewhat smaller than the allowable one due to the complex engagement conditions when the drill enters. Fig. 3 shows the technological cycle of drilling in the standard and modified modes. Fig. 4 compares the average durability of drill series when machining under standard and modified conditions.
Fig. 4 Tool durability when drilling under standard conditions (1) and with varying feed (2)
A significant increase of tool durability was recorded when drilling holes with a varying feed. At the same time, machining time decreased by ca 10%.
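To see how the segmented feeds translate into machine time, the sketch below evaluates t = l/(n·f) segment by segment; the spindle speed and the entry/exit segment lengths are assumed values, under which the computed saving comes out near the reported ca 10%.

```python
# Machine time per segment: t_i = l_i / (n * f_i). With the spindle speed held
# constant, the relative saving depends only on segment lengths and feeds.
N_RPM = 450.0                 # rev/min, assumed spindle speed (not stated)
HOLE = 40.0                   # mm, hole length from the test description
ENTRY = EXIT = 5.0            # mm, entry/exit segment lengths (assumed)

def machine_time(segments):
    """segments: iterable of (length_mm, feed_mm_per_rev); returns minutes."""
    return sum(l / (N_RPM * f) for l, f in segments)

t_standard = machine_time([(HOLE, 0.21)])
t_modified = machine_time([(ENTRY, 0.18), (HOLE - ENTRY - EXIT, 0.25), (EXIT, 0.18)])
print(f"saving: {100.0 * (1.0 - t_modified / t_standard):.1f}%")   # ~8-10%
```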
The following tests were oriented towards the influence of cutting speed on drill durability. The tests were performed when drilling holes in box-type parts made of grey cast iron with a casting skin on the side where the drill exits the cut. Standard helical drills of diameter 17.5 mm, made of the high-speed steels HS7-4-2-5 and HS12-1-2 and of sintered carbide K10, were used as tools. Holes 50 mm long were machined. Different cutting speeds were obtained by a continuous change of the spindle rotation frequency, for smooth drilling and exit from the cut; for standard machining, cutting speeds of 16.5 m.min-1 for high-speed steel drills and 26 m.min-1 for sintered carbide drills were selected from standards.
When the drill was leaving the cut, the cutting speed was decreased by 25%, i.e. to 12.5 m.min-1 for high-speed steel drills and 21 m.min-1 for sintered carbide drills. The feed remained constant, f = 0.32 and 0.28 mm.
Technological process of drilling in both cases is shown in Fig. 5.
Fig. 6 Average drill durability when drilling with standard modes (black columns) and varying cutting speed
Another series of tests was devoted to the conditions of smooth incision of drills into the cut material. The incision zone was divided into several sections in which the cutting speed was varied. The tests were performed with drills of diameter 12 mm, for which the incision length is 4.8 mm. This section was divided into three parts of 1.5 mm each. Based on an analysis of the machining scheme, a different cutting speed was selected for each section.
When incising in the first section under the most hindered conditions, where the transverse (chisel) edge deforms the material, the feed was reduced twofold and the spindle rotation frequency increased fourfold. Analogically, in the second section the feed was decreased by 20-25%, with a corresponding 1.5-fold increase of the spindle rotation frequency. In the third section, the conditions matched the standard ones.
The results of measuring the axial force are shown in Fig. 7. It can be seen that with the change of spindle rotation frequency and feed, a smoother course of the cutting force can be obtained; at the same time, a shortening of machine time by 30-35% can be reached.
A continuous change of the spindle rotation frequency presents a more effective method to improve the conditions of incision. Recent CNC systems allow the spindle rotation frequency and feed to be varied continuously, according to a prescribed law, over an arbitrary machining section. The results of the studies have proved that changing the feed and cutting speed as the drill enters and leaves the cut can considerably improve the character of the transition processes in this area. Another positive result of the experimental tests is that, besides improved drill operating conditions in the transition section, the probability of drill breakage also decreases.
Optimisation of cutting conditions when drilling in the central section of the hole
In the previous chapter, the optimisation of cutting conditions in the transition zones was described. Next, the optimisation of the drilling process in the section of stable machining will be dealt with.
The first part of the experiments was performed with feed regulation only. The range of regulation enabled by the adaptive control system was 20-120% in steps of 10%; that is, the standard feed is taken as 100%, and during regulation the feed can decrease by 80% or increase by 20%.
Fig. 8 shows the results of measuring the axial force when drilling the central part of the hole under standard drilling conditions and with the adaptive control system at different feeds. Traces a and b show that, in comparison with the standard mode, a smaller fluctuation of the cutting force cannot be obtained when adaptive control is used with the regulation range of 20-120% and a fixed power of 0.7 kW. At the same time, machine time increases by 20%. This can be explained by the fact that when drilling steel a continuous chip is formed, and after it breaks off the axial force decreases. When drilling with adaptive control, the system reacts to this force change and, owing to inertia, the regulation evokes an even higher force fluctuation, mainly for a wide regulation range. With a decreased regulation range (Fig. 8c, d, e), the dynamic component of the cutting force gradually decreases, and in the regulation range of 100-120% it is much smaller than in operation without regulation. At the same time, machine time shortens by ca 10%.
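The regulation principle can be sketched as a simple proportional feed-override loop that holds spindle power near the 0.7 kW set-point within the 20-120% window; the gain and the update law below are illustrative assumptions, not the actual control algorithm of the system used in the experiments.

```python
def feed_override_step(measured_power_kw, override,
                       target_kw=0.7, gain=0.2, lo=0.2, hi=1.2):
    """One update of a proportional feed-override regulator: raise the feed when
    spindle power is below the set-point, lower it when above, clamped to the
    20-120% window used in the experiments. Gain and update law are assumed."""
    error = (target_kw - measured_power_kw) / target_kw
    return min(hi, max(lo, override * (1.0 + gain * error)))

override = 1.0
for p_kw in [0.70, 0.55, 0.60, 0.80, 0.90, 0.70]:   # toy power trace (chip break-off)
    override = feed_override_step(p_kw, override)
    print(round(override, 3))
```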
Conclusion
The application of adaptive control of the machining process on CNC machine tools opens new possibilities to optimise the process. The result is increased tool durability and shortened machine time. The programmer is required to master the machining process and to know the importance of tool geometry and the influence of the cutting conditions on the results of machining. An important effect can be seen on workpieces where some geometric parameter, e.g. the width of the machined surface or the cross-section of the removed layer, changes in the course of machining. | 2020-08-20T10:05:05.811Z | 2020-08-18T00:00:00.000 | {
"year": 2020,
"sha1": "3c5b416c1c28dfccb8cd493a4e90753a58a7ddab",
"oa_license": "CCBYNC",
"oa_url": "http://journalmt.com/doi/10.21062/mft.2020.021.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4be9b8d416c1bcc8fa8ef6a017d88b5efb931ff6",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
265634829 | pes2o/s2orc | v3-fos-license | Analysis of Kevin Lynch's Theory of City Image (Case Study of The Tanah Abang Area in Jakarta)
Building an image of an area is important for strengthening its identity and serves as a reference for determining the orientation of appropriate spatial values. In this study, the researchers used a case study approach to explore data as a basis for giving in-depth meaning to the selected case, namely an analysis of Kevin Lynch's theory of city image in the Tanah Abang area. The researchers collected and described all the data found relating to the five main image-forming elements of the Tanah Abang area, which consist of paths, edges, nodes, districts, and landmarks. The researchers describe the Tanah Abang area as having five main access roads linking it to outer areas; six findings serving as area boundary markers, in the form of a river and median road dividers; seven districts divided according to their function; four main crossroad meeting points; and four landmarks as symbols by which the Tanah Abang area can be perceived. Broadly speaking, each of these image-forming elements has built the image of the Tanah Abang area as an economic area, a green recreation area, a transit location, and a disorderly location.
Introduction
Tanah Abang, an area that is part of Central Jakarta, is quite vital: it is one of the city's historical areas and one of the areas with the highest population mobility in Central Jakarta. Even so, the Tanah Abang area is currently known as a dense area with irregular spatial planning and as one of the sources of Jakarta's heavy traffic jams. Amid rampant development and privatization of the region, as well as the strong influence of globalization in Jakarta, the uniqueness or image of the Tanah Abang area will inevitably be affected (Mangunwijaya, 1998). Kevin Lynch, in his theory of the city image, explains that awareness of an area's image can be built through five main elements forming the image of the city, namely paths, edges, districts, nodes, and landmarks.
Tanah Abang itself is a historical area with a long story, and its formation has gone through several changes. According to Abdul Chaer in his book Tenabang Tempo Doeloe, Tanah Abang was previously a relatively empty area still thick with trees (Chaer, 2017). Before the colonial period, the area was still full of swamps and lay within the territory of the Pakuan Padjajaran Palace, centred in Bogor (Pasar Jaya, 1982); this is evidenced by the discovery of a slate commemorating the founding of the palace by Sri Baduga in 1333 AD. Entering the colonial period of the Dutch East Indies, the Tanah Abang area was included in the territory of the Batavia residency; at that time it was also known as Weltevreden, an area owned by Europeans in rotation. Originally owned by Anthony Pviljoen, most of the land in Tanah Abang was leased to ethnic Chinese groups, who at that time were farmers, to turn it into plantations; as a result, the area became known as a plantation area. Control of the area then passed to Coernelis Shasstelesin in 1697, after which Tanah Abang also became an elite residential centre for European groups living in Batavia. In 1733 the area was sold to a wealthy man named Justinus Vinck (Pasar Jaya, 1982), and the situation grew increasingly crowded with the establishment of the Tanah Abang market as we know it today. Tanah Abang was also known as a Chinatown in Batavia as a result of the passenstelsel and wijkenstelsel policies of the Batavian government, which restricted freedom to live and socialize in the Batavia area; the policy was issued not long after the Chinese unrest of 1740 (Setiono, 2008). The researchers consider the Tanah Abang area to have particular historical value: over several centuries it broadly developed a strong image as an economic centre producing agricultural and livestock commodities and as an elite residential area, and it has also been known as an area affected by ethnic conflict. This contrasts with the current condition of the Tanah Abang area, which is densely built, tends to be poorly organized, and has dense mobility. The researchers therefore want to examine how an image can be built for the current Tanah Abang area, using Kevin Lynch's theory of city image formation.
Region
A region is an area that has functional boundaries, which in other terminology is often described more specifically (Pontoh & Setiawan, 2008); areas delimited on the basis of function or use are therefore referred to as regions. An example is a residential area: in Government Regulation No. 14 of 2016, a residential area is an environment that lies outside other areas such as urban, rural, and protected areas. In Law No. 24/1992, a region is defined as a geographical spatial unit with all the elements related to it, so that the boundaries of a region are determined on the basis of its functional aspects. In a regional planning process, three main theories can be used. First, figure-ground analysis is used to examine the relationship between open space and the mass of physical buildings in order to read the pattern or composition of an area. Second, linkage is an approach based on the circulation networks of an area as the driving force of the area itself. Third, place is a tool that analyses the relationship between a place and its inhabitants in terms of the history, society, and culture of the area (Roger, 1986). In the spatial arrangement of administrative areas in a city, functions may clash owing to the dichotomy between regulations at the upper and lower levels, resulting in an irregular spatial layout (Thahir, 2018).
Tanah Abang
There are several versions of how the Tanah Abang area got its name. The first is based on the formal way of writing during the Dutch East Indies period, namely De Nabang: the word "Tanah" originates from the word "De", and Nabang is the name of a type of tree that grew on a hill in the Tanah Abang area at that time (Nurudin, 2015). In the vocabulary of the native population, De Nabang sounded like Tenabang, and eventually Tenabang became the common term for De Nabang; the term was later corrected to Tanah Abang by the Dutch East Indies railroad company in 1890.
In another version, the term Tanah Abang is connected with the attack by the troops of the Mataram Kingdom on Batavia in 1628, an attempt to seize the city, a port with a strategic position for trade. The Mataram troops attacked Batavia from the south, from the area now known as Tanah Abang, which at the time also served as their military base. Because the soil in the area was red, the Mataram troops called it "Tanah Abang" (in Javanese), meaning Tanah Merah, or red land (Pasar Jaya, 1982).
In the final version, the term Tanah Abang is linked to the story of two brothers. The younger brother wanted to have a house and asked his older brother, called Abang, who eventually built a house on land belonging to the younger brother; the story became popular, and the place is known to this day as Tanah Abang. Of the three existing versions, the second is the most rational, because Tanah Abang, meaning red land, received its name from Sultan Agung's troops of the Mataram Kingdom after their futile attack on Batavia in the 17th century. At that time, Tanah Abang was an area that sold many agricultural products in the Batavia capital region (Jakarta Post, 27/05/2015).
Image Formation of a Region
Departing from Kevin Lynch's theory of the image of the city, the formation of an area's image and public perception of the area will affect the image of the area itself. Kevin Lynch states that the image can be formed from five main elements: (1) Paths, the main elements forming the image of an area, covering the access contained in an area, or media within an area, as routes its inhabitants can take to reach a destination from one place to another; (2) Edges, which describe the boundaries of an area; although edges are not a dominant image-forming element, they can serve as benchmarks marking an area's boundaries; (3) Districts, elements that arise from the different character of each area according to its function; these elements usually describe the division of areas based on their dominant functional aspects; (4) Nodes, the meeting points between roads; nodes are quite important because they can influence the decision of a person or group in choosing a destination based on their interests; (5) Landmarks, the last image-forming element, considered important and very simple to understand, because they are usually a symbol or a striking sign of an area; these symbols or signs may take the form of lettering on buildings, statues, monuments, or trees, including luxury buildings. The symbol or sign is tied to a meaning produced by history, events, or the social and cultural meaning attached to it (Lynch, 1960).
In the typology of perception, internal and external factors are interrelated in identifying the image-forming elements of an area that has historical value (Pettricia, Wardhani, & Antariksa, 2014). In forming the image of an area, the technical elements that form the image of the city may be fulfilled and yet fail in principle to build the area's image: a study of the image-forming elements of the city of Bitung found several deficiencies in the fulfilment of these elements, so that the image of Bitung was not firmly established in the perceptions of its people (Wahab, Rondonuwu, & Poluan, 2018).
Research Method
Given the aim of this study, to explore a deeper meaning or lesson from particular cases, whether singular or plural, the research was conducted using a qualitative case study approach. A case study is research that seeks to describe an object in order to obtain a deep, comprehensive image or meaning (Yunus, 2010). The case study approach is especially suited to emic research, that is, presenting the subject's view of the thing being studied; it provides a thorough description in accordance with the conditions experienced by readers in everyday life, serves as an effective means of liaison between researchers and informants, and is open to assessing the context related to the meaning of the phenomenon under study (Mulyana, 2013). Administratively, Tanah Abang is a district-scale area in the Administrative City of Central Jakarta, DKI Jakarta. Geographically, Tanah Abang sub-district is located at 106° 47' 30" to 106° 49' 23" East Longitude and 6° 10' 53" to 6° 13' 45" South Latitude. Tanah Abang currently has a population of 175,107, of which 88,305 are male and 86,802 are female.
Paths
According to Kevin Lynch, paths are elements that people can directly experience when crossing an area; he gives simple examples such as road access passable by both vehicles and pedestrians. Tanah Abang has at least five connecting highways between the Tanah Abang area and the areas outside it, each providing alternative access for people entering or leaving Tanah Abang depending on their destination. Of the five connecting roads, two connect directly with the inner-city toll road, providing access to more distant areas. The five roads are as follows (the first, K.H. Mas Mansyur Street, is described in the caption of Figure 2): 2) Asia Afrika Street, on which there are two access points in and out of the elite part of Tanah Abang. Jl Asia Afrika connects the entry access from the Kebayoran Lama area, effectively serving as a business, sports, and government centre, and connects the exit access to the Palmerah and Kebayoran Baru areas. This entry access can easily be reached by private vehicle and on foot; there are no public facilities such as city buses on entry, but city buses are available on exit from the Tanah Abang area. On entering this road, the public is shown hotel and shopping buildings, Gelora Bung Karno Stadium (GBK), and central government buildings.
3) Jenderal Soedirman Street: this road actually has two entry and exit points, but entry into the Tanah Abang area is limited to one point; people can enter via the Kebayoran Baru area and are directed out toward the Menteng area. The road can be used by private vehicles, by public transport such as city buses and the MRT, and by pedestrians. Everyone passing along this access is presented with a broad street view, wide pedestrian access, and views of the sports centre.
4) Penjompongan Raya Street: this road has two access points in and out that connect Tanah Abang with the Palmerah area and with the inner-city toll road. 5) Gatot Subroto Street: this road has two access points in and out, connecting the Tanah Abang area with the Mampang Prapatan and Palmerah areas. It is usually used only for through traffic, because it offers few transit destinations: there are only a few business buildings and central government buildings. From the map analysis of the Tanah Abang area, the southern boundary with the Kebayoran Lama area is the Grogol River (Kali Grogol), and the boundary with the Kebayoran Baru area is the median of Jalan Jenderal Soedirman. On the western boundary with the Palmerah area, the marker is KS Tubun Street.
Edges
On the eastern boundary, bordering the Setia Budi area, the marker is the median of Jalan Jenderal Soedirman.
On the northern boundary, the marker bordering the Menteng area is the Cideng Canal, and the boundary with the Gambir area is Kebon Sirih Street.
Districts
According to Kevin Lynch, districts are elements that arise from the different character of each area according to its function; they usually describe the division of areas based on their dominant functional aspects, such as residential, business, government, and green open space areas. The Tanah Abang area is divided into at least seven dominant zones. The purple mark shows the distribution of office and trade areas, most of which are in the northern and eastern parts of the area. The yellow mark shows the distribution of settlement areas, divided into sub-district facilities, luxury housing, and vertical housing, dominant in the northern, western, and central parts. The dark green and light green marks show the distribution of parks, green roads, and green recreation in the centre and south, especially around the GBK stadium. The red mark shows the distribution of government areas in the southwestern part of the area, namely the DPR/MPR RI government complex. The brown mark shows the distribution of public service areas, consisting of health, education, socio-cultural, sports, public service, and terminal facilities, spread across almost all of Tanah Abang. The orange mark shows the distribution of mixed areas.
Nodes
According to Kevin Lynch, the node element is a meeting point between roads. The researchers identify five main meeting points in the Tanah Abang area, locations that connect several external access routes with the main access routes within the area.
The first meeting point is the Tanah Abang KRL Station, the busiest meeting point in the Tanah Abang area because it is connected to many other areas. The second is the crossroads in front of Tanah Abang Market, located directly in front of building A of Tanah Abang Market; it offers the public the Jl K.H. Wahid Hasyim route, with access in and out of the Tanah Abang area via the Setia Budi area, and the Jl K.H. Mas Mansyur route, with access in and out via the Menteng, Gambir, and Setia Budi areas.
The third meeting point is the crossroads in front of the Jakarta Convention Center (JCC) building, offering the Jl Gatot Subroto route, with access in and out of Tanah Abang via the Palmerah and Kebayoran Baru areas, and the Jl Gerbang Pemuda route, with access in and out via the Palmerah and Kebayoran Lama areas. The fourth is the crossroads at the Palmerah Station KRL crossing, offering the Jl Penjompongan Raya route, with access in and out of the central part of the Tanah Abang area, and the Jl Gatot Subroto route, giving access in and out of the western part, close to the Palmerah area. The fifth is the crossroads near Jalan Ciliwung and Karet KRL Station, offering access to and from the area through its central, northern, western, and eastern parts.
Landmarks
According to Kevin Lynch, landmarks are important elements and very simple to understand, because they are usually a symbol or a striking sign of an area; these symbols or signs may take the form of lettering on buildings, statues, monuments, or trees, including luxury buildings. The symbol or sign is tied to a meaning produced by history, events, or the social and cultural meaning attached to it.
Based on the results of an opinion survey of people who routinely carry out activities in the Tanah Abang area, the researchers found at least four landmarks considered most identified with Tanah Abang. The first is the Graha BNI building on the edge of Jenderal Sudirman Street, whose striking architectural form makes it easy to remember. The second landmark is the Tanah Abang Market building, the largest fabric and clothing retail market in Southeast Asia. In public perception, the Tanah Abang Market building is synonymous with the Tanah Abang area, but this landmark has also built a negative image of Tanah Abang because of its disorderly condition, caused mainly by the failure of street vendor management by the vendors and the Jakarta government itself (Hasanuddin, 2019). The third landmark is the Gelora Bung Karno Stadium (GBK), the centre for green areas, green recreation, and sports in Tanah Abang; the location is always crowded on holidays. The GBK Stadium is also a historical symbol of the strength of the Non-Aligned Movement of developing countries, which Soekarno wanted to show to the developed countries; its construction coincided with the construction of other historical buildings and monuments (Rizaldy, Syukur, & Humaidi, 2020). The last landmark is the Karet Bivouac Monument Park, a National Heroes' Cemetery located in the Karet Tengsin area; it has high historical value and quite striking architectural characteristics in the middle of the city.
Conclusion
In analysing Kevin Lynch's theory of the image formation of an area, the researchers traced data from the case study of the Tanah Abang area covering the five image-forming elements. The researchers recorded all findings, both physical buildings and community perceptions. For the paths aspect, the researchers found five main road accesses connecting the Tanah Abang area with six other areas outside it; several of these are wide roads with adequate facilities and infrastructure for private vehicles, public transport, and pedestrians. For the edges aspect, six boundary markers were found bordering the six surrounding areas; these markers take two forms, namely watercourses in the form of rivers and canals, and road dividers. For the zoning aspect, the researchers found at least seven functional areas based on their classification: office and trade areas, residential areas, public service areas, green park areas, creative areas, central government areas, and mixed areas. For the nodes aspect, the researchers found at least four intersection points serving as meeting points for residents' activities in the Tanah Abang area: the intersection in front of building A of Tanah Abang Market, the intersection near KRL Palmerah Station, the intersection near the JCC building, and the intersection near the Ciliwung River and Karet Station. For the landmarks aspect, the researchers found four main symbols representing people's perceptions of Tanah Abang: the Gelora Bung Karno Stadium (GBK), a centre for green recreation and sports with historical value; the Tanah Abang Market building, the largest textile market in Southeast Asia, which also gives a negative impression as a crowded and disorderly location; the Wisma BNI building, a luxury office building with a striking architectural character; and the Karet Bivouac Monument Cemetery (TPU), a National Heroes' Cemetery of high historical value.
Figure 2. Main Roads in the Tanah Abang Area. 1) K.H. Mas Mansyur Street: this road has two access points in and out that connect Tanah Abang with the area to its south and continue toward West Jakarta in the north. The road provides access for pedestrians and for public transport such as city buses. On entering the Tanah Abang area, the community is shown a view dominated by shops and small kiosks, and people who use this road usually aim to reach the market area.
Figure 3. Territorial Boundary Signs in the Tanah Abang Area
Figure 4. Zoning Maps in Tanah Abang Area
Figure 5. Main Meeting Point in Tanah Abang Area | 2023-12-05T16:02:46.017Z | 2023-07-25T00:00:00.000 | {
"year": 2023,
"sha1": "da8605664761f60b9d4d804960254138c8ee84ae",
"oa_license": "CCBYNCSA",
"oa_url": "https://jurnal.ahmar.id/index.php/daengku/article/download/1914/1249",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c03e9b502182d32eaf573d0ea43e4c9c20823e5c",
"s2fieldsofstudy": [
"Geography",
"Sociology"
],
"extfieldsofstudy": []
} |
102492430 | pes2o/s2orc | v3-fos-license | Mining Temporal Patterns to Discover Inter-Appliance Associations Using Smart Meter Data
With the emergence of the smart grid environment, smart meters are considered one of the main key enablers for developing energy management solutions in residential homes. Power consumption in the residential sector is shaped by the behavior of home residents through the use of their appliances, and respecting such behavior and preferences is essential for developing demand response programs. The main contribution of this paper is to discover the associations between appliances' usage by mining temporal association rules, in addition to applying a temporal clustering technique for grouping appliances with similar usage at a particular time. The proposed method is applied to a time-series dataset, the United Kingdom Domestic Appliance-Level Electricity (UK-DALE) dataset, and the achieved results discover appliance-appliance associations with similar usage patterns with respect to the 24 h of the day.
Introduction
Revolutionizing the smart grid environment has been the main focus of many governments as a consequence of the expeditious rise in energy demand [1]. A smart grid is a network of power plants, utilities, substations, and smart meters for transferring electricity in a bidirectional way [2]. Smart meters are considered to be the "cornerstone of the smart grid", as they send consumption data frequently on a time-interval basis [3]. The widespread deployment of smart meters has made smart meter data available, and as a consequence a massive amount of data is being generated. These precious data are analyzed to study and understand home residents' preferences. However, mining smart meter data is a challenging task. First, the continuous massive amount of data being generated requires mining data progressively, without mining the whole database whenever new data is transmitted. Second, extracted findings change continuously with time, so it is important to maintain previously discovered findings in addition to discovering new ones.
Demand response (DR) programs are proposed by utilities to ensure efficient use of energy. DR is the change of home residents' behavior and power usage in response to changes in electricity prices [4]. The key enabler for promoting these programs is to gain home residents' trust and respect their preferences when using their home appliances. Extracting such preferences can be achieved by analyzing smart meter data to find out the patterns by which the appliances are being used.
Home residents' usage behavior follows some regular routine patterns. Appliance usage patterns can be described by appliance-time associations and appliance-appliance associations. An appliance-time association is the correlation of using an appliance at a particular time. For example, a coffee machine is usually used at 8:00, so this appliance has a higher priority to be used at this time than any other appliance. An appliance-appliance association is the correlation of using two or more appliances together. For example, home theater usage is always associated with television (TV) usage, so this preference reveals that the two appliances should be working together, since it is useless to have the home theater on without using the TV. Thereby, DR programs should be designed based on user preferences, without lowering users' comfort level, to encourage them to use energy efficiently. Raising home residents' awareness and providing them with the needed knowledge regarding their consumption will guide them toward better usage behavior and efficient use of energy.
In the proposed work, appliance-appliance associations are represented using association rules and hierarchical clustering. Association rules are extracted by employing the Utility-oriented Temporal Association Rules Mining (UTARM) algorithm [5]. Agglomerative hierarchical clustering is used to group appliances with similar usage behavior together. There are two strategies of hierarchical clustering: divisive and agglomerative. Divisive clustering uses a top-down approach, where all objects initially form one cluster, which is then split going down. Agglomerative clustering is the opposite: each object initially has its own cluster, and clusters are merged going up based on similarity or dissimilarity measures [6]. The proposed work has been applied to the United Kingdom Domestic Appliance-Level Electricity (UK-DALE) dataset, which holds consumption logs for each appliance in five dwellings [7].
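As a concrete illustration of the agglomerative strategy described above, the following is a minimal sketch in Python using SciPy; the appliance names and the random per-hour profiles are hypothetical placeholders, not values taken from UK-DALE.

# Minimal sketch (hypothetical data): agglomerative hierarchical clustering of
# appliances by their 24-hour usage profiles. Appliances with similar profiles
# merge early (at a small distance) in the resulting dendrogram.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

appliances = ["tv", "home_theater", "kettle", "fridge"]
rng = np.random.default_rng(0)
profiles = rng.random((4, 24))        # one 24-hour usage vector per appliance
profiles[1] = profiles[0] + 0.05      # make home_theater track the tv closely

Z = linkage(profiles, method="average")   # bottom-up (agglomerative) merges
dendrogram(Z, labels=appliances, orientation="right")
plt.xlabel("dissimilarity distance")
plt.show()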
The rest of this paper is organized as follows: related work is presented in Section 2; Section 3 introduces the proposed algorithm; Section 4 evaluates the results; and finally, the conclusion and future work are given in Section 5.
Related Work
Researchers have been focusing on extracting the correlation between appliances' usage, known as appliance-appliance association or inter-appliance association. Two methods of correlation are studied: frequent pattern mining and sequential pattern mining. Both methods are similar, except that sequential pattern mining is sensitive to the order in which the appliances are used.
Regarding the sequential pattern mining approach, the authors in [8] presented the StrPMiner algorithm, using a batch-free approach to mine appliances' sequential patterns. The authors in [9] extracted appliances' sequential patterns using PrefixSpan on Apache Spark. Other works present methods for extracting the relation between home activities and the appliances used. In [10], the authors developed a rule-mining algorithm using the JMeasure metric to extract appliances that are associated with activities. In [11], the authors used the Sequential PAttern Discovery using Equivalence classes (SPADE) algorithm to extract appliances' sequential patterns and then fed its results to a proposed prediction model. In [12], the authors extracted appliances' priority based on the activity context, which may vary from one context to another. In [13], the authors developed a system that guarantees that the total power consumption will not exceed a certain limit by prioritizing the appliances based on user preference within the activity context and rescheduling the unneeded ones.
Regarding the frequent pattern mining approach, the authors in [14] designed an algorithm using the sliding window technique to extract frequent usage patterns and built a recommendation system using the extracted patterns. The authors in [15] presented a usage notation to develop the Correlation Pattern Mining System (CPMS), extending the PrefixSpan algorithm. Then, in [16], the algorithm was modified to consider appliances' usage probability.
Since the data is generated continuously by smart meters, it is essential to develop algorithms that mine usage patterns progressively, maintaining previously discovered association rules in addition to extracting new ones; this was achieved in [17] and [18]. In [17], the authors enhanced CPMS, which extends the PrefixSpan algorithm, to mine data progressively, and in [18], the authors developed an algorithm extending the pattern-growth approach to extract appliance-appliance associations progressively.
Most of the work using clustering techniques has focused on grouping customers with similar load profiles and has paid little attention to grouping appliances with similar behavior. The clustering of customers' load profiles has been studied in [19-27]. The clustering of appliances has been studied in [28,29]. The authors in [28] proposed a proof of concept for clustering appliances' usage with respect to time, but for only one week, using hierarchical clustering represented by a dendrogram. The authors in [29] extracted a load profile for each appliance, and appliances with similar load profiles were then grouped together.
Proposed Methodology
The proposed approach extends the UTARM algorithm [5] by relating appliances' association support values to a certain time. This time can be an hour, a day, a week, a month, or a season; the chosen time factor is used to partition the temporal database. In this context, we have used the hour as our time factor. We represent an appliance by its activity state as being OFF or ON, i.e., S = {0, 1}, and the value of power consumed is ignored.
The basic idea of extending the UTARM algorithm is that it calculates the association support value by taking temporal factors and utility factors into account. In our approach, the temporal factor is the hour, used as a partitioning factor, while the utility factor is the probability of using an appliance at a certain hour.
Temporal association mining is the process of extracting temporal association rules from time-series data. Temporal association rules extend frequent-item association rules with a time dimension. The basic idea of the time dimension is that each association rule can be valid for a period of time, sometimes called an exhibition period or a lifespan [30]. Thus, they discover a group of items that frequently appear together at a specific time and last for an exhibition period.
Utility-oriented mining is the process of extracting frequent itemsets subject to a weight or importance factor. Mining frequent itemsets assumes that all items in the itemset have the same weight, while in utility-oriented mining each item has a different weight reflecting users' preferences. The key to including the utility factor is to enhance the quality of the discovered association rules by considering their importance [31].
The proposed approach processes data in batches of 24 h at the end of each day. First, the raw data is preprocessed by generating a usage matrix; then, the utility value of each appliance is updated for each hour. Finally, the appliance-appliance associations are discovered using association rules and hierarchical clustering. The proposed approach is illustrated in Figure 1.
The proposed approach is achieved through three phases:
• Data Preparation;
• Calculating Appliances' Utility;
• Extracting Appliance-Appliance Associations.
Data Preparation
In this work, we have used the UK-DALE dataset [7]. The dataset holds consumption logs for five houses with different durations. Each raw consumption log consists of a timestamp and the power consumed in watts. Readings were logged every six seconds for each home at appliance level, producing a dataset with more than 1.1 billion records. We preprocess the data in chunks of 24 h. For each day, we store only one record in the database, holding the date and a generated usage matrix of size 24 * N, where N is the number of appliances. Each cell in the matrix holds a zero or a one, indicating the state of the corresponding appliance during the corresponding hour as OFF or ON, respectively. For example, house 1 holds data for 52 appliances over 4.3 years, from 2012 to 2017. The total number of logs generated by this house is 52 (appliances) * 4.3 (years) * 365 (days) * 24 (h) * 60 (min) * 10 (readings per minute), which is approximately one billion records. In our approach, we transform the logs of each day into a single record, so the total number of records is 4.3 (years) * 365 (days), around 1570 records for the same house, achieving much better performance and reducing the dataset size significantly.
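A minimal sketch of this preprocessing step is given below; the log layout (pandas columns "timestamp" and "power_w") and the 5 W ON-threshold are illustrative assumptions, since the paper states only that the state is recorded as 0/1 per hour.

# Minimal sketch (assumed log layout and threshold): collapse one appliance's
# day of 6-second readings into one column of the 24 x N usage matrix.
import numpy as np
import pandas as pd

def daily_states(logs: pd.DataFrame, on_threshold_w: float = 5.0) -> np.ndarray:
    """Return 24 binary states (one per hour) for one appliance on one day."""
    states = np.zeros(24, dtype=int)
    # Mark an hour ON if power ever exceeds the standby threshold in that hour.
    hourly_max = logs.groupby(logs["timestamp"].dt.hour)["power_w"].max()
    states[hourly_max.index] = (hourly_max.to_numpy() > on_threshold_w).astype(int)
    return states

# Stacking one such column per appliance yields the day's 24 x N usage matrix.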
Calculating Appliances' Utility
The aim of this phase is to study the preferences of home residents in using their appliances. These preferences are extracted by calculating the utility values for each appliance per hour. The utility values are calculated by extending the utility association rule mining introduced in UTARM. In our temporal database, each appliance (a) has a state (s) logged for each hour (h). The appliance state is represented by a one or a zero, indicating whether the appliance is active or not, respectively. The Internal Utility (IU), External Utility (EU), and Utility (U) values are calculated for each appliance per each hour of the 24 h, since we use the hour as our time granularity and partitioning factor. The IU is a quantitative value that measures the quantity of an item in an itemset; the quantity refers to the number of days on which the appliance is logged as active. Since the activity state is represented by a zero or a one, the IU is calculated as the summation of the state (s) values. The EU is a value that reflects the significance of an item in an itemset; the significance is represented by the probability of the appliance being active in an hour over all recorded days, i.e., the number of active days divided by the total number of days (n). The U is a subjective value that reflects the weight of an item in an itemset, expressed as a function of IU and EU: the U of an appliance is equal to the IU value multiplied by the EU value. The IU, EU, and U values are calculated using Equations (1)-(3), respectively:

IU_h(a) = Σ_{i=1..n} s_i (1)

where n is the total number of days, and s_i is the state of the appliance at hour h in day i.

EU_h(a) = number of active days / total number of days (2)

U_h(a) = IU_h(a) × EU_h(a) (3)

In the next step, the Transaction Weighted Utility (TWU) is the utility value determined for each partition. It is calculated by multiplying the maximum IU value and the maximum EU value generated by any appliance at each hour (h), as described in Equation (4):

TWU_h = max_a(IU_h(a)) × max_a(EU_h(a)) (4)
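The following minimal sketch shows this computation in Python, assuming the per-day 24 x N usage matrices from the preparation phase are stacked into a NumPy array; the function name and array layout are illustrative, not the paper's implementation.

# Minimal sketch (assumed array layout): compute IU, EU, U (Eqs. (1)-(3)) per
# hour and appliance, and the per-hour TWU (Eq. (4)), from the stacked daily
# usage matrices of shape (n_days, 24, N).
import numpy as np

def utility_tables(daily_matrices):
    days = np.stack(daily_matrices)          # (n_days, 24, N), entries 0/1
    iu = days.sum(axis=0)                    # Eq. (1): count of active days
    eu = iu / days.shape[0]                  # Eq. (2): probability of activity
    u = iu * eu                              # Eq. (3): appliance utility weight
    twu = iu.max(axis=1) * eu.max(axis=1)    # Eq. (4): one TWU value per hour
    return iu, eu, u, twu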
Extracting Appliance-Appliance Associations
The aim of this phase is to extract appliance-appliance association rules considering the utility values computed in the previous phase. The first step of the algorithm is to generate candidate 2-itemsets of home appliances, i.e., a1 and a2 represent a single candidate. For each candidate, the utility and temporal values are calculated per hour. The Frequency (FU) is the number of days on which a1 and a2 are both active at a certain hour. The utility value (U) of a candidate 2-itemset, described in Equation (5), is calculated as the summation of the appliances' utility values:

U_h(a1, a2) = U_h(a1) + U_h(a2) (5)

The Frequent Temporal Utility (FTU), which indicates the candidate support value and is described in Equation (6), is a function of the candidate 2-itemset's utility value and the TWU, where a1 and a2 are the appliances in the candidate 2-itemset and L is the number of appliances in the itemset. Algorithm 1 outlines the steps of the proposed approach extending the UTARM algorithm; it requires a minimum support (minsup) value, a threshold for eliminating infrequent patterns. The next step of the algorithm is to generate association rules for discovering appliance-appliance associations, i.e., appliances that are preferred to be used together, for example the washing machine and the dryer. Association rules are expressions of the form X ⇒ Y [32], indicating that the usage of appliance Y is associated with the usage of appliance X at hour h for an exhibition period. For each discovered association rule, support and confidence values are calculated. The support value, calculated using Equation (6), indicates the frequency of using the two appliances together at an hour. The confidence value, calculated using Equation (7), indicates the frequency of using appliance Y given that appliance X is used at hour h:

Conf_h(X ⇒ Y) = FTU_h(X, Y) / FTU_h(X) (7)

Algorithm 2 outlines the steps used for generating the appliances' association rules. It requires a minimum support (minsup) value and a minimum confidence (minconf) value: minsup is a threshold for processing only the frequent patterns, and minconf is a threshold for eliminating insignificant association rules. Finally, hierarchical clustering is applied to the candidate 1-itemsets using the FTU support value, which indicates the frequency of using an appliance at each hour. Hierarchical clustering groups appliances with similar usage behavior with respect to time.
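A minimal sketch of the rule-generation step is given below; the dictionary layout and the confidence computation (pair support divided by antecedent support, per Equation (7)) are illustrative assumptions rather than the paper's exact pseudocode for Algorithm 2.

# Minimal sketch (assumed data layout): per-hour appliance-appliance rule
# generation with minsup/minconf thresholds, in the spirit of Algorithm 2.
def mine_rules(pair_support, single_support, minsup=0.1, minconf=0.75):
    """pair_support[(h, a1, a2)] and single_support[(h, a)] hold FTU values."""
    rules = []
    for (h, a1, a2), sup in pair_support.items():
        if sup < minsup:
            continue  # prune infrequent candidate 2-itemsets
        for x, y in ((a1, a2), (a2, a1)):    # consider both rule directions
            denom = single_support.get((h, x), 0.0)
            conf = sup / denom if denom else 0.0
            if conf >= minconf:
                rules.append({"hour": h, "rule": f"{x} => {y}",
                              "support": sup, "confidence": conf})
    return rules

# Toy example: a pair that is frequent and confident at hour 22.
print(mine_rules({(22, "tv", "home_theater"): 0.4},
                 {(22, "tv"): 0.5, (22, "home_theater"): 0.45}))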
Evaluation and Results
In this section, a comprehensive analysis is conducted to explain our results. To the best of our knowledge, our proposed approach is the first to consider appliances' utility with respect to temporal mining. The architecture of the proposed approach succeeds in mining smart meter data progressively, without mining the whole database whenever new data is transmitted. The progressive approach is achieved by utilizing the previously computed utility data and mining only the newly generated data at the end of each day.
The clustering analysis is represented using a dendrogram. The horizontal axis of the dendrogram represents the similarity or dissimilarity distance between clusters, and the vertical axis represents the appliances clustered by similar usage, as shown in Figures 2-6.
The results of house 1 are represented in Figure 2. Some appliances are associated at the same hours: for example, the samsung_charger and bedroom_chargers form one cluster, and the amp_livingroom and subwoofer_livingroom form another cluster with similar support values.
The results of house 2 are represented in Figure 3. The cooker and rice_cooker form a cluster, which might reveal their joint usage during the cooking activity; this observation shows that home activities can be identified from appliances' usage.
The results of house 3 are represented in Figure 4.
If the similarity or dissimilarity threshold is set to one, three clusters are extracted: {electric_heater, kettle}, {projector}, and {laptop}. If the threshold is set to two, two clusters are extracted: {electric_heater, kettle, projector} and {laptop}.
The results of house 4 are represented in Figure 5.
It is noted that the freezer and the gas_boiler fall into one cluster, which makes sense, since these appliances have a thermostat component keeping them active throughout the 24 hours, so they have similar activity usage.
The results of house 5 are represented in Figure 6.
We can observe that appliances such as the toaster, kettle, stream_iron, and nespresso_pixie are grouped together having no support values, which means that these appliances have no frequent usage patterns.
Regarding the discovered association rules, we can observe that the priority of appliances' usage differs with respect to time; residents' behavior also changes over time, so some findings may expire. Thus, each association rule is associated with a certain hour and has an exhibition period indicating its validity. Moreover, appliances that are always active during the 24 h, for example the fridge, result in associations with all appliances at any time. In our work, we have set the minimum confidence value to 75% to limit the number of rules discovered.
The duration of the logged data of house 1 is around 4.3 years. The number of association rules discovered is 1280. Table 1 presents a sample of the discovered association rules of house 1.
The rules reveal that the subwoofer_livingroom, tv, and amp_livingroom are associated by being active together at hours 0, 21, 22, and 23. Also, the tv and kitchen_lights are associated at hours 22 and 23.
The duration of the logged data of house 2 is around seven months. It is found that the speakers, server, and router are highly associated with each other and are always active in the background during the 24 h. Table 2 presents a sample of the discovered association rules of house 2, excluding the speakers, server, and router, which are always active in the background. It is observed that the monitor and the laptop are associated; however, the confidence of laptop usage during monitor usage is higher than that of monitor usage during laptop usage. The duration of the logged data of house 3 is 37 days. Table 3 presents a sample of the discovered association rules of house 3. We can observe that the usage of the laptop is associated with the usage of the electric_heater, but with different confidence values depending on the hour: at hour 2 the confidence was 100%, while at hour 9 it was 71.8%. This is because the electric_heater was always active in the background and the laptop was associated with hours 2 and 9; thereby, an association between these two appliances is extracted at hours 2 and 9.
The duration of the logged data of house 4 is around five months. Table 4 presents a sample of the background associations in house 4. It is noted that appliances such as the tv_dvd_digibox_lamp, gas_boiler, and freezer are always working in the background.
The duration of the logged data of house 5 is around 4.5 months. Table 5 presents a sample of the association rules discovered for house 5, excluding appliances that are always active in the background. It is observed that the i7_desktop has a high confidence value for being used during the usage of the primary_tv, 24_inch_lcd, and oven. This is because the i7_desktop is highly associated with the hours from 11:00 to 00:00; thus, any appliance associated with these hours will result in an association with the i7_desktop.
The conducted results succeeded in extracting appliance associations using both hierarchical clustering and association rules mining. Table 6 shows a comparison between the two methods. From this comparison, hierarchical clustering can be described as a generic approach, and association rules as a specific approach, for extracting appliance-appliance associations.
The proposed approach is developed using Python and MongoDB on an Intel(R) Core(TM) i7-6500U CPU with 8.00 GB of RAM. The proposed approach was evaluated by comparing its results with the pattern-growth approach, since most of the previous work extends it. It succeeded in extracting associations that have a higher weight, in addition to achieving a better runtime performance. Figure 7 shows the runtime analysis for mining the data generated in only one day; the x-axis represents the number of appliances, and the y-axis represents the execution time in seconds. The experiment is performed on smart meter data generated in one day, since the data is processed at the end of each day. It is observed that the proposed approach and the pattern-growth approach have similar runtimes for a small number of appliances; however, the proposed approach performs better as the number of appliances increases. That is because the cost of building the frequent-pattern tree and pruning the infrequent patterns is high when the number of appliances increases.
The conducted results can be integrated with the DR management techniques developed in [33,34] through home energy management systems (HEMS), which can respond to DR programs and reschedule home appliances while taking the extracted preferences of home residents into consideration. For example, if two clusters of appliances are associated, and the first cluster has higher confidence values than the second, then at peak hours the HEMS should keep the appliance associations with higher confidence values active together and reschedule the others to another time.
Conclusions and Future Work
Developing DR programs has become of interest for saving energy in the residential sector. Energy is wasted by home residents owing to their lack of knowledge about their consumption, and raising their awareness will guide them toward an efficient use of energy. Preserving home residents' comfort level is a key factor in motivating them to respond to DR programs; thus, much research has been presented on mining smart meter data to extract home residents' preferences. The conducted results can be integrated with home energy management systems to respond to DR programs. In this work, we have extended the UTARM algorithm to discover associations between appliances. The basic idea of using UTARM is that an association is measured based on two factors: the temporal factor, which is the hour, and the utility factor, which is the weight of using an appliance at that hour. Our work mines data progressively at the end of each day in chunks of 24 h. Initially, the utility values are updated; then, the FTU support values are calculated, revealing an appliance's level of association with an hour. Then, agglomerative hierarchical clustering is applied using the FTU support values to group appliances with similar usage together. The achieved results are represented using a dendrogram.
The hierarchical clustering and the UTARM algorithm succeeded in discovering appliance-appliance associations. However, the UTARM algorithm identifies the validity of an association rule through its exhibition period, as some rules may expire as residents' behavior changes.
The conducted results can integrate with the DR management techniques developed in [33,34] through home energy management systems (HEMS).Thereby, they respond to DR programs and reschedule home appliances while keeping in consideration the extracted preferences of home residents.One example is if there are two clusters of appliances that are associated together, and the first cluster has higher confidence values than the second one.Then, at peak hours, HEMS should keep appliance associations that have higher confidence values active together and reschedule the others to another time.
Conclusions and Future Work
Developing DR programs has become an interest for saving energy in the residential sector.Energy is wasted by home residents due to their lack of knowledge about their consumption.Raising their awareness will guide them toward an efficient use of energy.Preserving home residents' comfort level is a key factor for motivating them to respond to DR programs.Thus, a lot of research is presented in order to mine smart meter data for extracting the preferences of home residents.The conducted results can be integrated with home energy management systems to respond to DR programs.
In this work, we have extended the UTARM algorithm to discover associations between appliances.The basic idea of using UTARM is that an association is measured based on two factors: the temporal factor, which was the hour, and the utility factor, which was the weight of using an appliance at the hour.Our work mine data progressively at the end of each day in chunks of 24 h.Initially, the utility values are updated; then, FTU support values are calculated, revealing the appliance association level to an hour.Then, hierarchical agglomerative clustering is applied using FTU support values to group appliances with similar usage together.The results achieved are represented using a dendogram.
The hierarchical clustering and UTARM algorithm succeeded in discovering appliance-appliance associations.However, the UTARM algorithm identifies the validity of the association rule through its exhibition period, as some rules may expire as residents' behavior changes.
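As a hedged sketch of the clustering step described above (the appliance names, the random stand-in FTU support matrix, and the average-linkage choice are illustrative assumptions, not the paper's data):

```python
# Rows are appliances, columns are 24 hourly FTU support values for one
# day; appliances with similar usage end up in the same dendrogram branch.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

appliances = ["fridge", "tv", "washer", "kettle"]   # hypothetical names
rng = np.random.default_rng(0)
ftu_support = rng.random((len(appliances), 24))     # stand-in for real values

z = linkage(ftu_support, method="average", metric="euclidean")
dendrogram(z, labels=appliances, no_plot=True)      # builds the hierarchy
print(fcluster(z, t=2, criterion="maxclust"))       # split into two usage groups
```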
Table 1. Sample of House 1 Appliance-Appliance Associations.
Table 2. Sample of House 2 Appliance-Appliance Associations.
Table 3. Sample of House 3 Appliance-Appliance Associations.
Table 4. Sample of House 4 Appliance-Appliance Associations.
Table 5. Sample of House 5 Appliance-Appliance Associations.
Table 6. Comparison between Hierarchical Clustering and Association Rules. | 2019-04-08T04:08:30.974Z | 2019-03-29T00:00:00.000 | {
"year": 2019,
"sha1": "eed952dd44dc88c5509d2c6530bd2ea3ab72b9bb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-2289/3/2/20/pdf?version=1553851957",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "eed952dd44dc88c5509d2c6530bd2ea3ab72b9bb",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
218642533 | pes2o/s2orc | v3-fos-license | A Cardioversion and Renal Dysfunction Cascade: Cardioversion for Atrial Fibrillation, Acute Kidney Injury, and Recurrence of Atrial Fibrillation
A 62-year-old woman with hypertension presented with progressively worsening shortness of breath due to acute decompensated heart failure with atrial fibrillation in rapid ventricular response. During admission, she was managed with diuretics and goal-directed medical therapy for heart failure, with successful direct current cardioversion (DCCV) for first-episode atrial fibrillation. However, one day after discharge, the patient presented with a recurrence of dyspnea, atrial fibrillation in rapid ventricular response, and a reduction in urine output with elevated serum creatinine. In this case report, we describe the syndrome of acute kidney injury following cardioversion for atrial fibrillation, known as AFCARD (Atrial Fibrillation Cardioversion Associated with Renal Dysfunction), highlight its incidence, and reflect on the role of renal dysfunction in the recurrence of atrial fibrillation after successful DCCV.
Introduction
Renal dysfunction following direct current cardioversion (DCCV) for atrial fibrillation (AF) is a recognized complication that presents with an interval worsening of renal function [1]. Post-cardioversion renal failure is heralded by renal hypoperfusion, which then results in a rise in serum creatinine [2,3]. This case report illustrates the occurrence of Atrial Fibrillation Cardioversion Associated with Renal Dysfunction (AFCARD), the importance of active surveillance for its occurrence in patients with atrial fibrillation following cardioversion, the predictors of AFCARD, and its effect on the recurrence of atrial fibrillation after cardioversion.
Case Presentation
A 62-year-old female with hypertension and diabetes presented to the emergency department with one month of shortness of breath, orthopnea, worsening exercise tolerance, paroxysmal nocturnal dyspnea, palpitations, and bilateral leg swelling. She had been compliant with her medications, which included nifedipine, irbesartan, and metformin.
An electrocardiogram revealed atrial fibrillation with a rapid ventricular rate (RVR) of 150 bpm, and a chest x-ray revealed bilateral pleural effusions with mild pulmonary vascular congestion. However, no pulmonary emboli were identified on computed tomography pulmonary angiogram (figures 1, 2). An echocardiogram showed a left ventricular ejection fraction of 55-65%, with grade II diastolic dysfunction, a moderate to severely dilated left atrium, severe mitral regurgitation, moderate tricuspid regurgitation (thought to be functional regurgitation, with no structural valve abnormality seen), a dilated inferior vena cava, and increased pulmonary artery systolic pressure (figure 3).
Associated aortic and mitral valve regurgitation.
The patient was diagnosed with acute decompensated diastolic heart failure with new-onset atrial fibrillation in rapid ventricular response, with a CHA2DS2-VASc score of 4. The patient was initially administered diltiazem for rate control, commenced on intravenous furosemide 40 mg twice daily, and later continued on metoprolol succinate 25 mg PO twice daily for rate control and rivaroxaban 15 mg PO daily. On day 7 of admission, she had significant improvement in symptoms and was clinically euvolemic, with atrial fibrillation in controlled ventricular response on metoprolol succinate. Transesophageal echocardiography (TEE) was performed, showing findings similar to the initial echocardiography but with no evidence of thrombus in the atrial appendage and interval improvement in mitral and tricuspid valve regurgitation. During the index admission, sinus rhythm was achieved with direct current cardioversion at 200 joules after the TEE, and the patient was afterward discharged on rivaroxaban, metoprolol succinate, amiodarone, losartan, and furosemide.
However, she was admitted 24 hours after discharge with shortness of breath that worsened with exertion, orthopnea, paroxysmal nocturnal dyspnea, and decreased urine output, despite being compliant with her discharge medications. She was dyspneic and required BiPAP, and was later switched to 2 L oxygen by nasal cannula. Her vital signs were as follows: HR of 93 bpm, RR of 45 cycles/min, BP of 116/95 mmHg. Examination revealed crackles at the mid lungs bilaterally and bilateral pitting edema. Laboratory investigations revealed BNP 130, BUN 56 mg/dL, Cr 2.2 mg/dL (initial Cr from the previous admission was 1.1 mg/dL), Na 130 mmol/L, WBC 6.2, Hb 13 g/dL. Urine microscopy showed many white blood cells, but no muddy casts. This admission was further complicated by bradycardia, hypotension, hyponatremia, and hyperkalemia, and was managed conservatively by withholding the ACEI/ARB and beta-blockers. The patient continued to receive intravenous furosemide.
A repeat transthoracic echocardiogram showed an ejection fraction of 55-65%, no wall motion abnormalities, Doppler parameters consistent with restrictive physiology indicative of decreased left ventricular diastolic compliance and/or increased left atrial pressure, and right ventricular volume and pressure overload as evidenced by diastolic and systolic flattening of the ventricular septum, with moderate mitral and tricuspid regurgitation and normal IVC size. Subsequently, the serum creatinine increased to 2.5 mg/dL and then plateaued before gradually trending downwards to 1.9 mg/dL after a few days (table 1). On the 4th day of admission, recurrence of atrial fibrillation was noted, despite the fact that the patient was on amiodarone for rhythm maintenance after DCCV. The patient was switched to metoprolol 12.5 mg, and amiodarone was discontinued. The patient was seen in clinic 3 months later with a creatinine level of 1.3 mg/dL, showing continuing renal improvement since discharge.
Discussion
Atrial fibrillation is the most common sustained arrhythmia encountered in daily practice [1].
While prospective studies have not demonstrated differences in mortality between rate control and rhythm control strategies, sinus rhythm may provide significant symptomatic improvement, and many patients with persistent atrial fibrillation undergo electrical or pharmacological cardioversion [4]. The indications for DCCV can be divided into two major categories: the treatment of acute tachyarrhythmias, and elective cardioversion for chronic atrial fibrillation and flutter. About 90% of cases of atrial fibrillation are successfully converted to sinus rhythm, whereas 95-97% of cases of ventricular tachycardia can be terminated by DCCV [5]. Another invasive procedure to restore and maintain sinus rhythm is catheter ablation, but it is used less frequently than cardioversion [6].
DCCV is associated with risks including thromboembolism, complications associated with sedation/anesthesia (e.g. aspiration or respiratory arrest), sinus bradycardia, hypotension, and, rarely, pulmonary edema [7]. However, renal dysfunction post-DCCV (otherwise called post-cardioversion renal failure) is less well described. In a 2013 study evaluating the incidence and prognosis of renal dysfunction following cardioversion of atrial fibrillation, 17% of patients were reported to have developed AFCARD [1]. In another prospective study, published in 2018 at the Mayo Clinic, the incidence was 5.7% [2]. In another study done in Jerusalem, post-cardioversion renal failure had an incidence rate of 9.7% [3].
Studies have not shown a consensus definition of AFCARD. However, several studies alluded to AFCARD as an absolute increase in serum creatinine of ≥0.3 mg/dL, or an increase of ≥50% compared to baseline, within 48 hours of DCCV [2]. One study defined AFCARD as a rise in serum creatinine of greater than 25% from baseline within a week following DCCV [1]. Another study defined post-cardioversion renal failure as a rise in serum creatinine of greater than 25%, or greater than 0.5 mg/dL, from baseline [3]. In the case described, the patient had been admitted and treated for heart failure secondary to new-onset AF. A week later, she underwent cardioversion and was discharged. Serum creatinine rose from 1.1 mg/dL (eGFR 50 mL/min/1.73 m², CKD stage 3A) at the previous admission to 2.2 mg/dL (eGFR 23 mL/min/1.73 m²) within 48 hours of DCCV during this admission, representing a 100% increase from baseline. A pointer to AFCARD in this patient was the timing of onset of the acute renal dysfunction in relation to cardioversion (her initial presenting symptoms had improved and urine output/renal function was normal prior to DCCV, and the patient was euvolemic on discharge post-DCCV).
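As a hedged illustration (not part of the original report), the AFCARD criterion of Ref. [2] quoted above can be expressed as a simple check; the function name is ours, and the example values echo the index case:

```python
# >=0.3 mg/dL absolute or >=50% relative creatinine rise within 48 h of DCCV.
def meets_afcard_criterion(cr_baseline, cr_post, hours_since_dccv):
    if hours_since_dccv > 48:
        return False
    rise = cr_post - cr_baseline
    return rise >= 0.3 or rise / cr_baseline >= 0.5

# The index case: 1.1 -> 2.2 mg/dL within 48 h of DCCV (a 100% increase).
print(meets_afcard_criterion(1.1, 2.2, 48))  # True
```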
Many risk factors have been shown to be associated with AFCARD, many of which our patient had. In a study by Hellman et al. to determine the incidence, timing, risk factors, and outcomes of post-cardioversion renal failure (PCRF), two strong predictors were congestive cardiac failure and chronic renal failure [3]. Similar to the case described, a review of a prospectively maintained database conducted over 14 years on patients undergoing DCCV at the Mayo Clinic found prior diuretic use, inpatient status, and a lower heart rate post-DCCV to be strong associations [2]. The index patient in this study had similar implicating risk factors, including chronic kidney disease. In addition, diabetes mellitus was seen as a predictor of AFCARD due to occult renal disease complicating renal function following DCCV [1].
In 2011, Schmidt et al. reported a series of 159 patients with persistent AF who underwent successful cardioversion, in whom renal function was assessed by estimated glomerular filtration rate (eGFR) at baseline and who were followed for one year in order to determine subsequent eGFR and recurrence of AF. The authors concluded that in patients with persistent AF who undergo successful cardioversion, impaired renal function is directly associated with a risk of AF recurrence [8]. In light of this finding, it is not surprising that the patient had a recurrence of AF shortly after the post-cardioversion syndrome. In one study, for example, the lower the eGFR, the greater the likelihood of AF recurrence: eGFR <30 mL/min, hazard ratio 6.82, P<0.001; eGFR 30-59 mL/min, hazard ratio 3.31, P=0.01; eGFR 60-90 mL/min, hazard ratio 2.10, P=0.13. In the same study, maintenance of sinus rhythm was associated with improvement in eGFR in patients with mild or moderate renal insufficiency [9].
The pathophysiology of AFCARD, though not clearly understood, is related to the hemodynamic changes that occur after cardioversion and the attainment of sinus rhythm, which result in renal hypoperfusion [1]. The outcomes of AFCARD include a higher incidence of advanced heart failure, diabetes mellitus, worsened chronic kidney disease, and increased mortality rates. In the Hellman et al. study, the 1-year survival rate for patients with PCRF was 50%, compared to 89% in controls [3].
There is currently an ongoing prospective study in Jerusalem, Israel, scheduled for completion in December 2020. It aims to evaluate the risk of acute renal failure following cardioversion. Importantly, hemodynamic changes, fluid balance, and sodium levels will be evaluated as potential mechanisms for both acute renal failure and pulmonary edema post cardioversion. This holds promise, as it may provide answers to the incompletely understood pathophysiology of Atrial Fibrillation Cardioversion Associated with Renal Dysfunction [10]. However, DCCV resulting in renal injury is a phenomenon that needs closer attention, and renal function may need to be evaluated closely, especially in patients with risk factors.
Conclusions
AFCARD is a recognized phenomenon and portends poor clinical outcomes, including a cascade of recurrent atrial fibrillation, heart failure, a decline in renal function, and increased mortality. Therefore, the anticipation of and monitoring for renal dysfunction following DCCV should be a part of post-cardioversion surveillance for patients with atrial fibrillation who achieve sinus rhythm after cardioversion.
"year": 2020,
"sha1": "d21c73684cc6531da8c564159935a4d4fda27800",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/27712-a-cardioversion-and-renal-dysfunction-cascade-cardioversion-for-atrial-fibrillation-acute-kidney-injury-and-recurrence-of-atrial-fibrillation.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20074e7594fab003b80951571abc1df3f197588d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119288899 | pes2o/s2orc | v3-fos-license | Thermal emissions and climate change: Cooler options for future energy technology
Global warming arises from 'temperature forcing', a net imbalance between energy fluxes entering and leaving the climate system and arising within it. Humanity introduces temperature forcing through greenhouse gas emissions, agriculture, and thermal emissions from fuel burning. Up to now, climate projections, which neglect thermal emissions, have typically foreseen maximum forcing around the year 2050, followed by a decline. In this paper we show that, if humanity's energy use grows at 1%/year, slower than in recent history, and if thermal emissions are not controlled through novel energy technology, temperature forcing will increase indefinitely unless combated by geoengineering. Alternatively, and more elegantly, humanity may use renewable sources such as wind, wave, tidal, ocean thermal, and solar energy that exploit energy flows already present in the climate system, or act as effective sinks for thermal energy.
Despite decades of significant global warming, humanity is only now beginning significantly to address the reduction of CO2 emissions caused by power generation and transport (1). It is now clear that CO2 emissions must be largely eliminated during the first half of this century in order to minimise the risk of dangerous climate change (2,3,4). However, what has not been widely understood is the likely climate impact of thermal emissions from power generation and use, which may cause significant additional warming beyond the middle of this century.
Energy technologies such as nuclear (fission or fusion), fossil fuels and geothermal power plants are human-made sources of heat energy which flows into Earth's climate system. Such thermal emissions contribute directly to Earth's heat budget and cause global warming. In contrast, most renewable energy technologies, such as wind, wave and tidal power, harvest energy from Earth's dissipative systems, and thus do not directly add to Earth's heat budget. Solar electricity generation, a promising and fast-expanding energy technology, acts in a more complex way because it exploits an existing energy flow (incoming solar radiation) but in so doing, for the purpose of efficient energy generation, it typically lowers the albedo of Earth's surface at the solar collector location, thus adding to Earth's heat budget. Still, the thermal impact of solar generation may be less than that of heat-based energy sources like nuclear and geothermal power, because solar collectors take the place of terrain which was already absorbing a significant proportion, typically from 60-90%, of incident solar energy.
The flow of human-made heat into the climate system plays only a small part in present-day global warming, but as the world moves to a low-carbon energy economy increasingly dominated by electricity generation, this transition, together with expected growth in consumption, will lead to serious warming effects in addition to those previously caused by human-made CO2. At present, the reduction of CO2 emissions must be humanity's paramount concern, and any cost-effective zero-carbon technology is preferable to a carbon-emitting one. However, by midcentury technologies will need to be in place to generate usable energy without significant thermal emissions integrated over the full cycle of generation, transmission and energy consumption. This consideration has major implications for long-range funding choices between competing energy technologies such as fusion, wind, and solar energy, which could potentially contribute substantial proportions of the world's energy supply from midcentury onwards.
In this paper we begin by considering the global temperature forcing arising from thermal emissions from heat-based energy sources: fuel burning and geothermal power.
The resultant forcing is compared to a typical estimate of CO2 forcing assuming responsible measures are taken to control CO2 emissions (2). Thermal emissions are shown to contribute increasingly to total forcing, threatening to prevent the decline in forcing from midcentury onwards which climate scientists have assumed will occur after CO2 emissions have fallen significantly (1). We then turn to an evaluation of the likely impact of solar energy, considering various scenarios for collector albedo based on different types of solar technology. One of these options could, speculatively, lead to solar power generation combined with a net negative temperature forcing. Finally, we consider the impact of ocean thermal energy conversion, which contributes a transient, but potentially large, negative temperature forcing.

Current global primary energy use is increasing at about 2%/yr (5), and apart from short-term variations is likely to continue increasing for the foreseeable future.
Following the assumptions of Ref. 6, we assume a constant growth rate of 1%/yr with a transition to a zero-carbon energy economy based on electrical generation, occurring during the period up to 2100. As a baseline for considering a transition to renewable technologies, we first evaluate a scenario where this transition is based on nuclear and/or fossil fuels with carbon capture and storage, assuming an electrical generation efficiency of 35-50% (7).
The resulting thermal forcing in W/m2 is plotted as the pale red band in Fig. 1, together with the CO2 forcing (black curve) resulting from emissions in the 'Coal Phase-Out' scenario of Kharecha and Hansen (2). Instead of peaking and subsequently decreasing from midcentury onwards as in Ref. (2), the total forcing from CO2 and thermal emissions stabilises for about 100 years at a level of nearly 3 W/m2, corresponding to an equilibrium temperature rise of about 3-4 °C (3), and then the forcing rises further (full red band). Based on virtually all accepted climate models, this would lead Earth into a period of dangerous climate change, either late this century or early during the next one.
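A rough, hedged sketch of the arithmetic behind such thermal-forcing curves follows; the 18 TW starting point, the 1%/yr growth rate, the reference year, and full dissipation as heat are illustrative assumptions rather than the paper's exact inputs:

```python
# Direct thermal forcing if all primary energy ends up as heat,
# spread over Earth's surface area.
EARTH_SURFACE_M2 = 5.1e14  # Earth's surface area, m^2

def thermal_forcing_w_m2(year, p0_tw=18.0, growth=0.01, ref_year=2020):
    power_w = p0_tw * 1e12 * (1.0 + growth) ** (year - ref_year)
    return power_w / EARTH_SURFACE_M2

for y in (2050, 2100, 2200, 2300):
    print(y, round(thermal_forcing_w_m2(y), 3))
```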
If heat-based energy sources could be fully supplanted by renewables such as wind, wave or solar energy, thermal emissions would be much less significant: only heat generated in plant construction and maintenance, and possibly second-order changes in the climate system owing to perturbations of natural energy flows by these energy conversion systems, would play a role. However, current forecasts suggest that such energy sources, while important, cannot supply all of humanity's energy needs, and much research, technology development and manufacturing is currently being devoted to solar-based electricity generation, with the photovoltaic (PV) market growing at a near-term projected rate of nearly 50% according to some estimates (8).

Fig. 1: Global temperature forcings due to human-made CO2 and thermal effects. The CO2 forcing is based on the 'Coal Phase-out' scenario of Kharecha and Hansen, and the thermal contribution ('Nuclear' or 'Solar') is based on 1%/yr growth of total energy use, with electrical generation efficiency in the indicated range, and non-electrical energy (phased out between 2020 and 2100) approximated as 100% efficient.
In a recently published Solar Grand Plan, Zweibel et al. have proposed a strategy to transform the US energy economy from its current fossil-fuel-rich mix to one dominated by solar power (6). In their scenario fuel costs are kept at acceptable values, and total energy production grows by 1%/yr during the course of this century. By 2100 the transition to a solar energy economy with modest contributions from wind and geothermal power is essentially complete, as solar electrical output plus other renewables reach over 90% of the total energy supply including transport. Zweibel et al. express the resulting heat input to the climate system in terms of the global area A covered by solar collectors. We now address the global temperature forcing that this heat input represents.
In order to do this we again assume 1%/yr growth of total energy use, and within this, following Zweibel et al., the solar contribution is assumed to grow by about 6%/yr between 2020 and 2100, after which it becomes the dominant energy source and grows by 1%/yr, constrained by our assumption on total energy use.

One way to compensate for the temperature forcing caused by solar power is to use a form of geoengineering (12) known as 'albedo engineering', in which an area of relatively low albedo is replaced by a high-albedo surface. For the highest solar efficiency of 50%, the area of high-albedo surface needed to compensate this is roughly an order of magnitude smaller than the solar collection area, depending on the terrain albedo where it is installed. Thus, if solar collectors are economically feasible, this additional technology is presumably feasible as well.
Another, more speculative, approach incorporates albedo engineering directly into PV technology by backing a thin active PV layer with a reflective, or partially reflective, substrate. Given a suitable choice of PV material and device structure, this enables much of the unused energy in the long-wavelength spectrum of sunlight (below the semiconducting band gap) to be reflected back out of the entry surface of the PV cell, thus raising its effective albedo. While a full evaluation of this effect is beyond the scope of this paper, we make some simplified model calculations. We consider an idealised PV cell with a simple active layer 1 (coloured blue) and a substrate layer 2 (coloured red) as illustrated in Fig. 3a. Photons with energy $E$ above the bandgap $E_g$ of layer 1 are assumed to be converted with 100% quantum efficiency, i.e. with energy efficiency $E_g/E$, while a proportion $A$ of photons below the bandgap energy are assumed to be absorbed in layer 2, i.e. not reflected back through the entry surface or transmitted through layer 2. For simplicity we assume that $A$ is independent of wavelength. The quantities $R$ and $T$ shown in Fig. 3a are the fractions of the sub-bandgap light reflected and transmitted, respectively, by layer 2.
In this simple model the energy absorbed by the cell is a sum of the energy converted to charge carriers in layer 1 and the energy absorbed in layer 2, i.e.

$E_{abs} = N_{>}\bar{E}_{>} + A\,N_{<}\bar{E}_{<}$,

where $N$ indicates the photon flux and $\bar{E}$ the mean photon energy in the relevant energy range, above (>) or below (<) the bandgap. The 'thermal efficiency' of our model cell, defined as the fraction of this absorbed energy converted to electrical energy, is

$\eta_{th} = \dfrac{N_{>}E_g}{N_{>}\bar{E}_{>} + A\,N_{<}\bar{E}_{<}}$.

In general this efficiency is higher than that defined in terms of electrical output divided by incident solar energy flux, because the denominator includes only the absorbed light.
For a reflective solar cell, i.e. one with no transmission through the back of the cell, $A = 1 - R$ and the effective albedo of the cell is given by the ratio of reflected to incident energy fluxes,

$a_{cell} = \dfrac{R\,N_{<}\bar{E}_{<}}{N_{>}\bar{E}_{>} + N_{<}\bar{E}_{<}}$.

With these definitions the thermal efficiency is in the range ~55-65%, higher than can be achieved with conventional energy technologies. In addition, cell albedo is typically higher than terrain albedo, offering a theoretical possibility that PV technology could produce a negative temperature forcing supporting global cooling.
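A hedged numerical sketch of the two-layer cell model above follows, approximating the solar spectrum by a 5778 K blackbody; the bandgap (1.1 eV, Si-like) and the substrate absorption A = 0.1 (i.e. R = 0.9) are illustrative assumptions:

```python
import numpy as np

def cell_properties(eg_ev, a_sub=0.1, t_sun=5778.0):
    """Return (thermal efficiency, effective cell albedo)."""
    kT = 8.617e-5 * t_sun                    # photon-gas temperature in eV
    e = np.linspace(0.01, 10.0, 20000)       # photon energy grid (eV)
    de = e[1] - e[0]
    n = e**2 / np.expm1(e / kT)              # blackbody photon flux density
    above = e >= eg_ev
    flux_above = np.sum(n[above] * e[above]) * de     # above-gap energy flux
    flux_below = np.sum(n[~above] * e[~above]) * de   # sub-gap energy flux
    electrical = eg_ev * np.sum(n[above]) * de        # Eg per above-gap photon
    eta_th = electrical / (flux_above + a_sub * flux_below)
    albedo = (1.0 - a_sub) * flux_below / (flux_above + flux_below)
    return eta_th, albedo

print(cell_properties(1.1))
```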
As an example, we present the time evolution of temperature forcing for a global solar grand plan using reflective PV technology with R=0.9 (A=0.1). The impact of such a technology is shown in Fig. 4; it offers the prospect of returning global temperature to historical levels during the next century. The curve for net global temperature forcing using the wider bandgaps now lies below the prediction for CO2 forcing because of the cooling effect of the wide-bandgap PV technology. Although the trend of these results is clear, the predicted curves for the different PV materials in Fig. 4 are not quantitatively exact, given the simplifications used in our analysis.
Finally, we show the potential impact of the developing technology of ocean thermal energy conversion (OTEC), which generates electricity by pumping heat from warm ocean-surface waters to the cooler, deeper ocean (13). Here the heat sunk per output electrical energy, $1/\varepsilon$, is high due to the relatively low thermodynamic conversion efficiency, $\varepsilon$, of heat pump technology. This determines the amount of heat taken from the climate system after accounting for use of electrical output power, $P\,(1/\varepsilon - 1)$, where $P$ is global electrical output and the term $-1$ accounts for heat created from electricity use. We assume, as in most climate models, that the ocean surface couples strongly (i.e. rapidly) to the atmosphere, we treat the slow transport of buried heat back to the ocean surface as negligible on the time scale of our predictions, and we assume $\varepsilon = 0.06$, close to the maximum expected from OTEC model simulations (14), so providing a conservative estimate of $1/\varepsilon - 1$.
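A minimal, hedged sketch of this OTEC bookkeeping; the 5 TW electrical output is an illustrative assumption, not a figure from the paper:

```python
# Negative forcing from net heat removal P * (1/eps - 1),
# spread over Earth's surface area.
EARTH_SURFACE_M2 = 5.1e14

def otec_forcing_w_m2(p_electric_tw, eps=0.06):
    heat_removed_w = p_electric_tw * 1e12 * (1.0 / eps - 1.0)
    return -heat_removed_w / EARTH_SURFACE_M2

print(otec_forcing_w_m2(5.0))  # about -0.15 W/m^2 for 5 TW at eps = 0.06
```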
The result, shown in Fig. 4 by the blue dashed curve, is dramatic. Even a substantially smaller contribution of OTEC to global energy generation, producing a proportionately smaller negative temperature forcing, could be an important contribution to stabilising global surface temperature. The key principle here is that heat is pumped to the deep ocean much faster than is achieved by natural ocean heat transport processes. In this way, OTEC, with an appropriate magnitude and spatial distribution of generating capacity, could help control and even reverse the rising trend of ocean surface temperature which is driving fast, potentially dangerous, climate feedbacks. These ideas appear, in outline, to offer a synergistic combination of power generation and environmentally compliant geoengineering for responsible future energy use.
We have shown that thermal effects from human energy consumption will play an increasingly significant role in global temperature forcing in the future. Consequently it is important to discriminate between renewable energy sources that inject heat into Earth's climate system (geothermal energy), those that rely on Earth's dissipative systems (wind, wave, tidal energy), and those that may potentially remove heat energy (suitably chosen solar technology, OTEC, and perhaps other future technologies).
Correct technology choices will reduce the magnitude and time period of future global warming caused by current CO2 emissions. Conversely, nuclear fusion, which may potentially come on stream as a significant energy source several decades hence, will be too late as a replacement for CO2-emitting technologies, and inherently (15) will not meet contemporaneous thermal emissions criteria for a sustainable global environment.
We suggest a re-evaluation by the science and engineering communities, taking thermal cycle analysis into account, so that the most promising future technologies for zero-carbon, thermally-compliant energy generation can be targeted for research and development during the next decade. | 2019-04-12T18:24:24.088Z | 2008-11-04T00:00:00.000 | {
"year": 2008,
"sha1": "aab6fdb862aa4bcf6048c85b8b8ba832bd30269b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "aab6fdb862aa4bcf6048c85b8b8ba832bd30269b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
102493209 | pes2o/s2orc | v3-fos-license | Structural and Electrochemical Properties of Dense Yttria-Doped Barium Zirconate Prepared by Solid-State Reactive Sintering
For practical utilization of proton-conducting ceramic fuel cells and electrolyzers, it is essential to lower the sintering temperature and processing time of BaZrO3-based proton conductors. We investigated the effect of sintering temperature and time on the structural and electrochemical properties of dense BaZr0.8Y0.2O3−δ (BZY) prepared by a solid-state reactive sintering process, using NiO as a sintering aid. The sintered BZY prepared from the micronized precursor powder exhibited a density higher than 93%, and an average grain size in the range of 0.6 to 1.4 μm. The orthorhombic BaY2NiO5 phase was also observed in the sintered BZY from the combined conventional and synchrotron X-ray diffraction measurements. Electrochemical impedance spectroscopy showed that the total proton conductivities of BZY can be modulated by the sintering temperature in a wet reducing atmosphere. The maximum total ion transport number achieved was 0.89 at 600 °C, and the maximum power density of the symmetric BZY electrolyte-supported cell with Pt electrodes was 5.24 mW·cm−2 at 900 °C.
Introduction
Y-doped BaZrO3 (BaZr1−xYxO3−δ) is a suitable material for use as an electrolyte in proton-conducting ceramic fuel cells (PCFCs) and electrolyzers, because of its excellent chemical stability and adequate proton conductivity [1-3]. The main advantages of PCFCs over conventional solid oxide fuel cells (which use an O2−-conducting electrolyte) are the lack of dilution of the fuel by the generated steam and a low operating temperature (below 600 °C). However, realizing dense BaZr1−xYxO3−δ requires a high sintering temperature (≥1600 °C) and a long sintering time (≥24 h) [4-7], which limits its practical utilization in PCFCs. Furthermore, this results in the volatilization of the barium component during the high-temperature sintering process, and consequent precipitation of a secondary phase, such as Y2O3 [8,9], which significantly reduces the proton conductivity of the BaZr1−xYxO3−δ electrolyte [4].
Therefore, for the development of practical PCFCs, increasing effort has been applied to lowering the temperature and time for BaZr1−xYxO3−δ sintering. Recently, various transition metal oxides have been explored as sintering aids to decrease the sintering temperature and time [6,10-16]. Babilo and Haile [6] demonstrated that adding small amounts of transition metal oxides to BaZr1−xYxO3−δ powder significantly enhanced the densification of BaZr1−xYxO3−δ. Tong et al. [17-19] suggested a solid-state reactive sintering (SSRS) process to produce refractory proton-conducting oxides by combining solid-state synthesis and reactive sintering processes. In the SSRS process, the addition of only a small amount of a sintering aid was sufficient to achieve proton-conducting oxides in the desired crystalline phase. In addition, further densification/grain growth of the synthesized phase to a fully dense specimen could be realized through a single-step process. NiO is one of the ideal sintering agents to reduce the BaZr0.8Y0.2O3−δ sintering temperature and processing time in SSRS [19]. The addition of 2 wt.% NiO as a sintering aid did not reduce the proton conductivity of BaZr0.8Y0.2O3−δ electrolytes prepared by the conventional solid-state reaction [20]; however, a detrimental effect of the NiO sintering aid on the transport properties of BaZr1−xYxO3−δ has also been reported in the literature [21,22]. Although there have been reports on the preparation, conductivity, and mechanical behavior of dense BaZr1−xYxO3−δ sintered using NiO sintering aids [17-19,23,24], a systematic investigation of the structural and electrochemical properties of dense BaZr0.8Y0.2O3−δ prepared by means of SSRS with a NiO sintering aid is still needed. In the present study, we investigate the effect of sintering conditions, including sintering time and temperature during SSRS, on various properties, such as the crystal/micro structure, total/grain/grain boundary conductivity, electromotive force, and fuel cell performance. To the best of our knowledge, this is the first systematic study on the structural and electrochemical properties of dense BaZr0.8Y0.2O3−δ prepared by SSRS with 2 wt.% NiO used as a sintering aid.

Stoichiometric amounts of BaCO3 (≥99%, Sigma-Aldrich), ZrO2 (99%, Junsei, Chuo-ku, Tokyo), and Y2O3 (99.99%, Sigma-Aldrich, St. Louis, MO, USA) with 2 wt.% NiO (99.99%, Sigma-Aldrich) as a sintering aid were ball-milled at 120 rpm in ethanol for 24 h with zirconia balls. The ball-milled powder was dried at 70 °C using a rotary evaporator (OSB-2000, Eyela, New York, NY, USA). To investigate the effect of particle size, the precursor powder was pulverized further using a planetary ball mill (Pulverisette 6, Fritsch, Idar-Oberstein, Germany) at 300 rpm for 5 h. The weight proportions of the precursor powder, ethanol, and zirconia ball mixture (a three-ball mixture with diameters of 1, 3, and 10 mm) were 1:2:5. The particle size of the precursor powder was analyzed using a particle size analyzer (LA-950 V2, Horiba, Fukuoka, Japan). The powder was uniaxially pressed into green pellets 25.4 mm in diameter under a pressure of 25 MPa for 1 min. The pellets were covered with calcined BaZr0.8Y0.2O3−δ powder to prevent volatilization of the barium, and sintered at temperatures ranging from 1435 to 1535 °C at 50 °C intervals for 5 to 25 h in ambient air. The relative density of the BZY pellets was measured in ethanol media using Archimedes' method. The phases and lattice parameters of the sintered pellets were characterized using X-ray diffraction (XRD, Rigaku 2200, Tokyo, Japan). The XRD patterns were obtained at room temperature in the 2θ range from 20 to 80°, and the corresponding lattice parameters were calculated using the FullProf program [25]. Synchrotron X-ray diffraction measurements were conducted using the high-resolution powder diffractometer at the 9B beamline of the Pohang Light Source (Pohang Accelerator Laboratory, Korea). A crushed BZY powder sample was exposed to a monochromatic 1.5184 Å X-ray beam, and the diffraction pattern was measured in the 2θ range from 10-130° at intervals of 0.01°. The data obtained were analyzed using the FullProf program [25]. The microstructure of the sintered pellets was investigated by scanning electron microscopy (SEM, S-4700, Hitachi, Tokyo, Japan). The chemical composition of the BZY pellets sintered for 15 h was measured using inductively coupled plasma optical emission spectroscopy (ICP-OES, ICP-OES 720, Agilent, Santa Clara, CA, USA).
Measurement of Electrochemical Properties of the Dense BaZr0.8Y0.2O3−δ
The BZY pellets were polished to a thickness of 1 mm using SiC sandpaper ((100-1200) grit), brush-coated on both sides using a Pt paste (6926, Heraeus, Hanau, Germany), and treated at 900 °C for 1 h to investigate their electrochemical properties. Electrochemical impedance spectroscopy (EIS, Metrohm, Autolab, Utrecht, Netherlands) measurements were performed at temperatures of 100 to 600 °C at 50 °C intervals under a humidified reducing atmosphere (3% H2 balanced in Ar, p(H2O) = 0.03 atm) in the frequency range of 1 MHz to 0.1 Hz. The EIS spectra were fitted and analyzed using Z-view software (Scribner Associates Inc., Southern Pines, NC, USA). To investigate the electromotive force (EMF) and current-voltage behavior, BZY discs were sealed onto an alumina reactor using a gold ring, and heated at 1050 °C.

Figure 1a shows the BZY pellets sintered using the conventional ball-milled precursor powder (left in Figure 1a) and with an additional planetary ball milling (right in Figure 1a). The BZY pellets produced from the conventional, only ball-milled precursor powder exhibited apparent bending and cracks after sintering. In contrast, when planetary ball milling was additionally employed after the conventional ball milling process, crack-free BZY pellets were obtained. Dilatometry experiments have shown that during SSRS, abrupt expansion of green BZY pellets followed by sintering shrinkage occurs, because of the release of CO2 from the precursor powder [26], which can cause cracking and bending of BZY pellets. Additional planetary ball milling to micronize the precursor powders prevents the crack formation and bending of BZY pellets during SSRS, as shown in Figure 1a. Figure 1b shows the particle size distribution of the precursor powders produced with and without additional planetary ball milling. The particle sizes of the ball-milled precursor powder are in the range of 0.2-10 µm, while those with additional planetary ball milling range from 0.2 to 4 µm. After the additional planetary ball milling, the median particle size decreased from 3.0 to 1.1 µm. At the same time, the size distribution also narrowed. Thus, it was confirmed that a smaller median particle size and narrower particle distribution of the precursor powder were beneficial in obtaining dense, crack-free BZY pellets by the SSRS method. Furthermore, when additional planetary ball milling was performed, regardless of the sintering temperature and time, denser BZY pellets with a density higher than 94% were obtained. On the other hand, dense BZY pellets could only be obtained with the ball-milled precursor powder at a high sintering temperature and long sintering time (1535 °C and 15 h, respectively). Table 1 summarizes the densities of the BZY pellets sintered by the SSRS process. For comparison, we also sintered BZY using the planetary ball-milled BaCO3, ZrO2, and Y2O3 powder, without the NiO sintering aid. The density of BZY sintered at 1485 °C for 15 h was only 58%, which confirms that the NiO (2 wt.%) plays an important role in realizing high-density BZY pellets.
In order to confirm the effect of sintering conditions during SSRS on the microstructures and phases of BZY, we modulated the sintering time and temperature. Figure 2 shows the SEM images of sintered BZY pellets produced with and without the additional planetary ball milling process at various sintering times and temperatures. The grain size gradually increased with higher sintering time and temperature in both cases (Table 2). However, the average grain size of the BZY pellets obtained using the planetary ball-milled precursor powder was in the range of 0.61-1.39 µm, while that of the BZY pellets obtained using the ball-milled precursor powder was in the range of 0.51-1.19 µm, as estimated from the SEM images. Furthermore, the BZY pellets sintered at 1435 °C for 5 and 15 h utilizing only conventional ball milling (Figure 2a,c) do not fully form a flat surface at the micrometer scale. The SEM images and measured densities confirmed that the sintered BZY produced from the coarser, only ball-milled precursor powder (without planetary ball milling) had a lower density than that produced from the finer powder generated by planetary ball milling. Figure 3 shows the SEM images of the sintered BZY as a function of the sintering time at 1485 °C for the planetary ball-milled precursor powder. This figure shows that with increasing sintering time from 5 to 25 h, the grain size of BZY increased from 0.83 to 1.16 µm. Such results agree with typically observed grain growth behavior.
Figure 4a shows the XRD patterns of the BZY samples prepared by sintering of planetary ball-milled precursor powders at 1435, 1485, and 1535 °C for 15 h, respectively. In the BZY sample sintered at 1435 °C for 15 h, cubic BZY perovskite (space group Pm3m) and unreacted cubic Y2O3 (space group Ia3) phases were observed, whereas the BZY samples sintered at 1485 and 1535 °C for 15 h showed cubic BZY with orthorhombic BaY2NiO5 (space group Immm) phases, and the unreacted cubic Y2O3 peaks mostly disappear. Such changes in the Y2O3 secondary phase at the different sintering temperatures suggest that it is necessary to perform SSRS at temperatures higher than 1485 °C.
To study the effect of the sintering time on the phase formation, BZY samples prepared from the planetary ball-milled precursor powder were sintered at 1485 °C for (5 to 25) h at 5 h intervals. The XRD patterns (Figure 4b) confirm that the sintering time does not significantly affect the crystalline phases of the sintered BZY. The lattice parameter was approximately 4.21 Å for all of the sintered BZY samples regardless of sintering time, which is in good agreement with the values reported in the literature [17-20,23,24]. The lattice parameter of BZY given in Table 3 decreased with Ba deficiency and/or Y2O3 secondary phase formation [27]; hence, it can be seen that 15 h is the minimum sintering time for obtaining dense BZY by means of SSRS with the least variation in the nominal BZY composition. In addition, the actual chemical compositions of the BZY sintered for 15 h were determined by ICP-OES. The compositions of BZY sintered at 1435 and 1485 °C are Ba0.99Zr0.79Y0.22O3−δ-0.044BaY2NiO5 and Ba0.98Zr0.80Y0.22O3−δ-0.043BaY2NiO5, respectively, while that of BZY sintered at 1535 °C is Ba0.93Zr0.82Y0.23O3−δ-0.046BaY2NiO5, under the assumption that Ni is present in the compound BaY2NiO5. Significant evaporation of Ba is observed for the BZY sintered at 1535 °C. Combined XRD and ICP-OES results suggest that dense BZY pellets could be prepared under the optimized sintering condition of 1485 °C for 15 h.

Azad et al. [28] have reported that two cubic phases are observed for 10 mol.% BaZr1−xYxO3−δ, due to the cross substitution of Y from B-sites onto the A-sites. In order to investigate the crystal structure in detail, a synchrotron X-ray diffraction pattern was obtained for the crushed powder from the sintered BZY (1485 °C for 15 h). Figure 5 shows the observed, calculated, and difference profiles of the synchrotron X-ray diffraction pattern. Table 4 summarizes the structural parameters and residual indices from the Rietveld refinement. The refinement results confirm that all of the diffraction peaks are consistent with single-phase cubic BZY and orthorhombic BaY2NiO5 as a minor impurity phase. The calculated weight fractions of BZY and BaY2NiO5 from the final Rietveld refinement run were 94 and 6%, respectively.
To evaluate the electrochemical properties of the dense BZY samples sintered at 1435, 1485, and 1535 °C for 15 h, EIS measurements were conducted in the temperature range 100 to 600 °C. Even though oxygen ions, protons, and holes are potential charge carriers in BaZrO3-based proton-conducting oxides, at temperatures below 600 °C under a reducing atmosphere protons play the major charge-carrier role, because the diffusion coefficient of protons is much higher than those of holes and oxygen ions [29]. Therefore, in the present study, proton conductivity was measured under a reducing wet atmosphere (3% H2 in Ar and pH2O = 0.03 atm). Cole-Cole plots are typically used to determine the grain (Rb and CPEb), grain boundary (Rgb and CPEgb), and electrode (Relec and CPEelec) contributions [6-8,13,30,31]. Figure 6 shows the Cole-Cole plots obtained for the sintered BZY samples. Figure 6a shows that the grain arc could not be clearly distinguished in the temperature range of 400 to 600 °C, because of the frequency limit (1 MHz) of the impedance analyzer. Therefore, only the grain resistance was obtained from the real-axis intercept at high frequency. In contrast, the grain arc was observed below 200 °C, as shown in Figure 6b, which enables the grain capacitance to be determined. Figure 6 also includes the equivalent circuit fitting. In the equivalent circuit, a constant-phase element (CPE) was used for fitting of the depressed arc, described by $Z_{CPE} = [Y_0(j\omega)^n]^{-1}$, where $\omega$ is the frequency, $Y_0$ is the non-Debye capacitance, and $n$ is the phase-angle parameter of the constant-phase element. The capacitance of each arc contribution can be calculated using $C = (R_0^{1-n} Y_0)^{1/n}$, where $R_0$ is the resistance parallel to the CPE.

Figure 7a plots the total proton conductivity of the BZY samples as a function of the inverse of temperature. Among the BZY samples, the sample sintered at 1435 °C exhibited rather low conductivity compared to the other samples throughout the entire temperature range. At 500 °C, total conductivity values of (2.28, 1.15, and 2.01) × 10−3 S·cm−1 were obtained for the samples sintered at 1485, 1435, and 1535 °C, respectively. The activation energies of the samples sintered at 1435, 1485, and 1535 °C were 0.51, 0.52, and 0.44 eV, respectively, in the temperature range of 100 to 500 °C.
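As a hedged sketch of the capacitance extraction described above (the R0, Y0, and n values below are placeholders of grain-boundary order, not fitted values from this study):

```python
# Effective capacitance of an (R || CPE) sub-circuit: C = (R0**(1-n) * Y0)**(1/n).
def effective_capacitance(r0_ohm, y0, n):
    return (r0_ohm ** (1.0 - n) * y0) ** (1.0 / n)

print(effective_capacitance(r0_ohm=1e5, y0=5e-9, n=0.85))  # ~1e-9 F
```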
The total conductivity of all of the BZY samples exhibited a change in slope at approximately 550 °C, because of the decrease in the concentration of protons in the BZY lattice [17,20]. The conductivity and activation energy values obtained in this study are consistent with the conductivity reported in the literature for BZY prepared by SSRS with a 1 wt.% NiO sintering aid (1.6 × 10−3 S·cm−1) [13]. Therefore, it can be concluded that the relatively high amount of NiO sintering aid used in this study had little impact on the total proton conductivity of the BZY in the temperature range (300 to 500) °C, except for the BZY sintered at 1435 °C. This suggests that the unincorporated Y2O3 phase plays a detrimental role in proton conduction, as indicated by the X-ray diffraction patterns (Figure 4a), since the unincorporated Y2O3 results in a decrease of the proton concentration of BZY. The conductivity values measured at temperatures below 200 °C were much lower than those measured at temperatures higher than 200 °C (by a factor of ~5 to 12), which suggests that the effect of the secondary and impurity phases (BaY2NiO5 and Y2O3) is apparent at temperatures below 200 °C. Table 5 summarizes the conductivities obtained in this study, and those reported in the literature for BZY at temperatures between 500 and 600 °C under wet inert and reducing atmospheres. The comparison of the total proton conductivities in this study and the literature indicates that a 2 wt.% NiO sintering aid is not significantly detrimental to proton conduction in the BZY electrolyte.

The brick layer model is typically used to describe the physical properties of polycrystalline materials. The specific grain boundary conductivity is given by $\sigma_{gb}^{sp} = \frac{L}{A R_{gb}}\cdot\frac{C_{bulk}}{C_{gb}}$, where $R_{gb}$ is the grain boundary resistance, $C_{gb}$ is the grain boundary capacitance, $C_{bulk}$ is the bulk capacitance, $L$ is the sample thickness, and $A$ is the electrode area. When the grain boundary capacitance cannot be extracted from the impedance spectra, the specific grain boundary conductivity is calculated using $\sigma_{gb}^{sp} = \frac{L}{A R_{gb}}\cdot\frac{\delta}{D}$, under the assumption that the grain boundary thickness and grain size remain effectively constant. Here, δ is the grain boundary thickness, and D is the grain size. Figure 7b shows the grain conductivity and specific grain boundary conductivity of the samples as a function of the inverse of temperature. The conductivity data reported in the literature [13] for BZY prepared by SSRS with a 1 wt.% NiO sintering aid are included for comparison. The grain conductivity was much higher than the grain boundary conductivity, which is consistent with the results reported in the literature for BaZrO3-based proton-conducting oxides. The specific grain boundary conductivity obtained in this study was significantly lower than that reported for BZY prepared by SSRS with a 1 wt.% NiO sintering aid [13] below 350 °C, whereas the grain conductivity was comparable. The significantly lower grain boundary conductivity obtained in this study might be due to the higher amount of the BaY2NiO5 secondary phase, resulting from the relatively high amount of the NiO sintering aid (2 wt.% NiO). This is consistent with the finding reported in the literature that the secondary phase is mainly segregated at the grain boundary when a sintering aid is introduced [32]. The activation energy values of the grain and specific grain boundary conductivities were in the ranges 0.33-0.39 and 0.35-0.45 eV, respectively, which is consistent with the values of 0.39 and 0.45 eV, respectively,
reported in the literature for BZY prepared using SSRS [13].
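A hedged sketch of the brick layer relations above; the geometry, resistance, and capacitance values are placeholders rather than measured data:

```python
def sigma_gb_specific(r_gb_ohm, c_bulk_f, c_gb_f, length_m, area_m2):
    """sigma_gb = (L / (A * R_gb)) * (C_bulk / C_gb), in S/m."""
    return (length_m / (area_m2 * r_gb_ohm)) * (c_bulk_f / c_gb_f)

def sigma_gb_geometric(r_gb_ohm, delta_m, grain_m, length_m, area_m2):
    """Fallback when C_gb is unavailable: (L / (A * R_gb)) * (delta / D)."""
    return (length_m / (area_m2 * r_gb_ohm)) * (delta_m / grain_m)

# 1 mm thick pellet, 0.5 cm^2 electrode, placeholder R_gb and capacitances:
print(sigma_gb_specific(1e5, 1e-11, 1e-9, 1e-3, 5e-5))
```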
Table 6 shows the bulk dielectric constant (εr), Debye length (λ), and Mott-Schottky depletion length (λ*) of BZY at 100 °C, calculated using $\varepsilon_r = \frac{C_{bulk} L}{\varepsilon_0 A}$, $\lambda = \sqrt{\frac{\varepsilon_0 \varepsilon_r k T}{2 e^2 C_H}}$, and $\lambda^* = 2\lambda\sqrt{\frac{e\Delta\phi(0)}{kT}}$, respectively. In these equations, $C_H$ is the proton concentration of BZY (2.50 × 10^26 m−3) estimated in the literature [20]; Δφ(0) is the barrier height at the center of the grain boundary, determined from $\frac{\sigma_{bulk}}{\sigma_{gb}} = \frac{\exp(e\Delta\phi(0)/kT)}{2\,e\Delta\phi(0)/kT}$; ε0 is the vacuum permittivity; A is the area of the sample; L is the thickness of the sample; k is Boltzmann's constant; and e is the electron charge. It is generally believed that the arc in the high-frequency region exhibits a capacitance value characteristic of the proton-conducting oxide grain (C ≈ 10−11 F), and that the arc in the intermediate-frequency region exhibits a value characteristic of the grain boundary (C ≈ 10−9 F) [6-8,13,30,31], in agreement with this study. The bulk dielectric constants of BZY obtained in this study were higher than those (37-155) reported in the literature [13,31,33,34]. However, the Debye length, Mott-Schottky depletion length, and barrier height values obtained in this study were fairly consistent with the ranges of values (0.26-0.35 nm, 0.5-1.4 nm, and 0.04-0.35 V, respectively) reported for BZY in the literature [13,33,34].
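As a hedged numerical sketch of these space-charge relations (the conductivity ratio, dielectric constant, and temperature below are placeholders, not the values in Table 6):

```python
# Solve sigma_bulk/sigma_gb = exp(u)/(2u) on the u > 1 branch for
# u = e*dphi(0)/kT, then evaluate lambda and lambda*.
import math

def barrier_height_v(ratio, t_k):
    f = lambda u: math.exp(u) / (2.0 * u) - ratio   # requires ratio > e/2
    lo, hi = 1.0, 50.0                              # bracket the u > 1 root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi) * 8.617e-5 * t_k         # dphi(0) in volts

def debye_and_depletion_m(eps_r, c_h_m3, t_k, dphi_v):
    eps0, e, kb = 8.854e-12, 1.602e-19, 1.381e-23
    lam = math.sqrt(eps0 * eps_r * kb * t_k / (2.0 * e**2 * c_h_m3))
    return lam, 2.0 * lam * math.sqrt(e * dphi_v / (kb * t_k))

dphi = barrier_height_v(ratio=100.0, t_k=373.0)
print(dphi, debye_and_depletion_m(60.0, 2.5e26, 373.0, dphi))
```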
Electromotive Force Characteristics and Fuel Cell Performance
The performance and characteristics of the BZY sample sintered at 1485 °C for 15 h, which exhibited the best conductivity, were investigated by means of electromotive force (EMF) and current-voltage measurements. The theoretical value of the EMF was calculated using $E = E_0 + \frac{RT}{2F}\ln\left(\frac{P_{H_2} P_{O_2}^{1/2}}{P_{H_2O}}\right)$, where $E_0 = -\frac{\Delta G_0}{2F}$, $\Delta G_0$ is the Gibbs free energy for standard conditions, and $P_{O_2}$, $P_{H_2}$, and $P_{H_2O}$ are the partial pressures of oxygen, hydrogen, and water, respectively. Figure 8a presents the measured and theoretical EMF values, together with the ion transport number (EMFmeasured/EMFtheoretical). The ion transport number was found to decrease with increasing temperature, from 0.89 to 0.7 at 600 to 900 °C, because of the decrease in the proton concentration in BZY with increasing temperature, which is consistent with the findings of previous studies [20,32]. Figure 8b illustrates the current-voltage behavior of the electrolyte-supported cell (Pt/BZY/Pt). The power density of the fuel cell increases with increasing temperature. The maximum power density value obtained in this study was 5.24 mW·cm−2 at 900 °C, which is comparable to the values reported in the literature for electrolyte-supported cells produced using Y-doped BaZrO3-based electrolytes (1-1.2 mm thick) and ZnO and CuO sintering aids [14,16].
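A hedged sketch of the EMF calculation above; the gas partial pressures and the standard Gibbs free energy (about -190 kJ/mol near 900 K) are illustrative assumptions, not the measured conditions:

```python
# Nernst EMF for the H2/O2 cell: E = E0 + (RT/2F) ln(pH2 * pO2**0.5 / pH2O).
import math

R_GAS, FARADAY = 8.314, 96485.0

def theoretical_emf_v(t_k, dg0_j_mol, p_h2, p_o2, p_h2o):
    e0 = -dg0_j_mol / (2.0 * FARADAY)
    return e0 + (R_GAS * t_k / (2.0 * FARADAY)) * math.log(p_h2 * p_o2**0.5 / p_h2o)

emf = theoretical_emf_v(873.0, -190e3, p_h2=0.97, p_o2=0.21, p_h2o=0.03)
print(emf, 0.95 / emf)  # theoretical EMF and an example ion transport number
```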
Figure 2. SEM images of BZY pellets sintered at different sintering temperatures and for different times, using ball-milled (left) and planetary ball-milled precursor powders (right).
Figure 4a shows the XRD patterns of the BZY samples prepared by sintering of planetary ball-milled precursor powders at 1435, 1485, and 1535 °C for 15 h, respectively. In the BZY sample sintered at 1435 °C for 15 h, cubic BZY perovskite (space group Pm3m) and unreacted cubic Y2O3 (space group Ia3) phases were observed, whereas the BZY samples sintered at 1485 and 1535 °C for 15 h showed cubic BZY with orthorhombic BaY2NiO5 (space group Immm) phases, and the unreacted cubic Y2O3 peak mostly disappeared. Such changes in the Y2O3 secondary phase at the different sintering temperatures suggest that it is necessary to perform SSRS at temperatures of 1485 °C or higher. To study the effect of the sintering time on the phase formation, BZY samples prepared from the planetary ball-milled precursor powder were sintered at 1485 °C for 5 to 25 h at 5 h intervals. The XRD patterns (Figure 4b) confirm that the sintering time does not significantly affect the crystalline phases of the sintered BZY. The lattice parameter was approximately 4.21 Å for all of the sintered BZY samples regardless of sintering time, which is in good agreement with the values reported in the literature [17–20,23,24]. The lattice parameter of BZY given in Table 3 decreased with Ba deficiency and/or Y2O3 secondary phase formation [27]; hence, it can be seen that 15 h is the minimum sintering time for obtaining dense BZY by means of SSRS with the least variation in the nominal BZY composition. In addition, the actual chemical compositions of BZY sintered for 15 h were determined by ICP-OES. The compositions of BZY sintered at 1435 and 1485 °C are Ba0.99Zr0.79Y0.22O3−δ–0.044BaY2NiO5 and Ba0.98Zr0.80Y0.22O3−δ–0.043BaY2NiO5, respectively, while that of BZY sintered at 1535 °C is Ba0.93Zr0.82Y0.23O3−δ–0.046BaY2NiO5, under the assumption that Ni is present in the compound BaY2NiO5. Significant evaporation of Ba is observed for the BZY sintered at 1535 °C. Combined XRD and ICP-OES results suggest that dense BZY pellets could be prepared at the optimized sintering condition of 1485 °C for 15 h.
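For reference, the ~4.21 Å cubic lattice parameter quoted above follows from Bragg's law; a minimal sketch (the reflection, its position, and the Cu Kα wavelength are illustrative assumptions):

```python
import math

def cubic_lattice_parameter(two_theta_deg, hkl, wavelength=1.5406):
    # a = wavelength * sqrt(h^2 + k^2 + l^2) / (2 sin(theta)), cubic cell
    h, k, l = hkl
    theta = math.radians(two_theta_deg / 2)
    return wavelength * math.sqrt(h * h + k * k + l * l) / (2 * math.sin(theta))

# e.g. a (110) reflection near 30.0 deg two-theta gives a ~ 4.21 A
print(f"a = {cubic_lattice_parameter(30.0, (1, 1, 0)):.3f} A")
```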
Figure 4 .
Figure 4. (a) XRD patterns of BZY pellets sintered using the planetary ball-milled precursor powder at different sintering temperature of 1435, 1485, and 1535 °C for 15 h, and (b) XRD patterns of BZY pellets sintered using the planetary ball-milled precursor powder at 1485 °C for different sintering times of 5, 10, 15, 20, and 25 h.
Figure 5 .
Figure 5. Synchrotron X-ray powder diffraction pattern for the crushed powder of the BZY pellet sintered at 1485 °C for 15 h.The observed data are represented by circles, and the solid lines are the result of Rietveld refinements for the cubic BZY and orthorhombic BaY2NiO5.The difference profile is also shown (lower line).Tick marks indicate the Bragg positions of BZY (upper) and BaY2NiO5 (lower).
Figure 6 .
Figure 6.Nyquist plots of BZY sintered at different temperature of (a,d) 1435 °C, (b,e) 1485 °C, and (c,f) 1535 °C for 15 h at 400-600 °C (left) and 100 °C (right) in wet 3% H2 balanced in Ar (p(H2O) = 0.03 atm).The solid line is the fitting data for an equivalent electrical circuit (inset).The numbers used to label the spectra denote the logarithmic values of the frequency (Hz).
Figure 7 .
Figure 7. (a) Total proton conductivity, and (b) bulk (filled symbols) and specific grain-boundary (empty symbols) conductivities of BZY sintered at different temperatures of 1435, 1485, and 1535 °C for 15 h as a function of the inverse temperature under wet 3% H2 balanced in Ar (p(H2O) = 0.03 atm).
Table 5 .
Proton conductivity of BaZr0.8Y0.2O3−δ at 500 and 600 °C under wet reducing atmosphere, in this study and in the literature.
Figure 8 .
Figure 8.(a) EMF values measured for the Pt/BZY/Pt cell, together with the theoretical electromotive force (EMF) and total ion transport number, and (b) current-voltage and power density curves of the BZY-supported cell with Pt/BZY/Pt configuration for an electrolyte thickness of 1 mm.
…°C for 1 h. Humidified air (p(H2O) = 0.03 atm) and H2 (p(H2O) = 0.03 atm) gas were supplied to the cathode and anode, respectively. EMF values and current–voltage curves were measured in the temperature range of 600 to 900 °C.
3. Results and Discussion
3.1. Sintering Behavior and Structure of BaZr0.8Y0.2O3−δ
Figure 1a shows a photograph of the sintered BaZr0.8Y0.2O3−δ (BZY) pellets produced from the precursor (BaCO3, ZrO2, Y2O3, and NiO) mixtures with only conventional ball milling (left in Figure
Table 2 .
Average grain size of sintered BZY pellets prepared using ball-milled (left) and planetary ball-milled precursor powders (right) at different sintering times and temperatures.
Table 3 .
Lattice parameters of sintered BZY pellets prepared using planetary ball-milled precursor powders at different sintering times and temperatures.
Table 6 .
Bulk pseudocapacitance (C_bulk), grain-boundary pseudocapacitance (C_GB), dielectric constant (ε_r), Debye length (λ), barrier height at the grain boundary core (Δφ(0)), and Mott-Schottky depletion width (λ*) of BZY at 100 °C under wet 3% H2 balanced in Ar (p(H2O) = 0.03 atm). | 2019-01-30T01:04:49.428Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "2f689253620bfca9bc5b26cebd964a305487d816",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/11/11/3083/pdf?version=1541672253",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2f689253620bfca9bc5b26cebd964a305487d816",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
7148305 | pes2o/s2orc | v3-fos-license | Acute and Chronic Effects of Dietary Lactose in Adult Rats Are not Explained by Residual Intestinal Lactase Activity
Neonatal rats have a high intestinal lactase activity, which declines around weaning. Yet, the effects of lactose-containing products are often studied in adult animals. This report is on the residual, post-weaning lactase activity and on the short- and long-term effects of lactose exposure in adult rats. Acutely, the postprandial plasma response to increasing doses of lactose was studied, and chronically, the effects of a 30% lactose diet fed from postnatal (PN) Day 15 onwards were evaluated. Intestinal lactase activity, as assessed both in vivo and in vitro, was compared between both test methods and diet groups (lactose vs. control). A 50%–75% decreased digestive capability towards lactose was observed from weaning into adulthood. Instillation of lactose in adult rats showed disproportionally low increases in plasma glucose levels and did not elicit an insulin response. However, gavages comprising maltodextrin gave rise to significant plasma glucose and insulin responses, indicative of a bias of the adult GI tract to digest glucose polymers. Despite the residual intestinal lactase activity shown, a 30% lactose diet was poorly digested by adult rats: the lactose diet rendered the animals less heavy and virtually devoid of body fat, whereas their cecum tripled in size, suggesting an increased bacterial fermentation. The observed acute and chronic effects of lactose exposure in adult rats cannot be explained by the residual intestinal lactase activity assessed.
Introduction
The main carbohydrate in milk, the first diet of all mammals, is the disaccharide lactose, which, together with the milk fat fraction, is the main energy source for the growing neonate. The responsible lactose-hydrolyzing enzyme lactase is expressed only in the enterocytes lining the small intestine, comprising the duodenum, jejunum and ileum, and is anchored upon intracellular maturation in the apical, brush border membrane in direct contact with the gut luminal content [1,2].
Most mammals show a temporary and programmed capability to digest lactose: intestinal lactase activity (due to the expression of the lactase gene) is maximal directly after birth and goes down quickly upon weaning [3,4]. This makes sense from both a functional and evolutionary point of view, as milk is normally the only and once-in-a-lifetime source of dietary lactose, the substrate of the lactase enzyme [5].
Yet, it was shown that intestinal lactase gene expression even increased in most mammals toward adulthood, although this expression usually does not result in functional lactase activity, probably due to a reduced or altered post-translational processing of the gene product [2,4,6]. Mammalian lactase gene expression hence seems transitional and shows a disparity with lactase activity or functionality.
Only in man, and even then restricted to specific ethnic groups, a fully functional intestinal lactase activity is maintained after weaning, in line with a continued consumption of lactose, mainly from dairy products [2,4].
In case of insufficient lactase activity, undigested lactose, regarded as "fiber", enters the large bowel, where it can be fermented/metabolized by the residing microflora [7,8]. In this way, the ingested lactose may give some benefit to the organism, although a significant dietary lactose overload may cause gut symptoms due to lactose malabsorption, yielding lactose intolerance. This prompted the current studies into the acute and chronic effects of lactose and lactose-containing milk-like products, such as infant milk formula (IMF): what are, in adult animals, the short- and long-term implications of exposure to lactose? Can adult animals be used to test IMF concepts?
In the current study, first, the time course and extent of the lactose intolerance into adulthood, i.e., the residual lactase activity, was monitored and assessed, both in the absence and continued exposure to dietary lactose (30% lactose diet). To this end, two independent methods to determine the functionality of the lactase enzyme were applied. In addition, the observed residual lactase activity was further studied by monitoring the acute postprandial plasma glucose response to intragastrically-applied lactose loads in adult rats.
Animals
Wistar rats from Harlan (Horst, NL) were used, housed in a climate-controlled animal care facility (at 21 ± 2 °C and 50 ± 5% humidity) with a 12/12 L/D cycle with lights on at 5 a.m. Animals were on AIN93-G (American Institute of Nutrition 'growth' diet) [9] chow (Teklad Global 18% Protein Rodent Diet, Harlan, Horst, NL) and had free access to food and tap water, unless stated otherwise. All animal procedures were approved by the local Animal Ethics Committee (DEC-Consult, Bilthoven, NL) and were according to laboratory animal care guidelines.
Study A: Intestinal Lactase Activity Assessment
Male and female rat pups born from Wistar dams were employed. Pregnant dams were obtained from Harlan (Horst, NL). On postnatal (PN) Day 2, five nests were culled to 4 pups of each gender per dam. Pups (4 nests) were weaned on PN Day 21 onto an AIN93-G-based chow diet containing 30% lactose (exchanged for maltodextrin) or to the control diet (1 nest) with no lactose. Animals were pair-housed (same gender) and weighed weekly. Diets were available for the pups from PN Day 15 onward. On PN Day 15, 28, 42 and 98 rats from each diet group were given a gavage treatment after a 14-h fast (mainly during the L-period) for in vivo lactase activity assessment (see below). The next morning, animals were euthanized by bleeding under inhalation anesthesia (isoflurane). An autopsy was performed, and various organs were collected and weighed. Small intestines were collected for in vitro lactase activity assessment (see below).
In Vivo Lactase Activity Test
The hydrolysis of the synthetic disaccharide galactosyl xylose (GX), a suitable substrate for the lactase enzyme, was used to monitor in vivo lactase activity. Via gavage, a saline solution containing 4 mg of the GX product (the GX-disaccharide mixture, see below) was administered, an amount shown to be adequate [10]. Upon gavage, animals were put in metabolic cages for 6 h to collect their urine, whose xylose content (assessed by ion exchange chromatography (IEC)) served as a measure of their intestinal lactase activity. The percentage of xylose excreted in the urine was assumed to be proportional to the prevailing lactase activity, as reported previously [10,11]. Theoretically, the maximal amount to be recovered was 25%: complete GX hydrolysis renders 50% xylose, of which only about 50% is excreted by the kidney, the rest being metabolized endogenously [10].
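A minimal sketch of the dose normalization implied above; reading the percentages as fractions of the administered GX dose is our interpretation of the text, not code from the study.

```python
def xylose_recovery(urinary_xylose_mg, gx_dose_mg=4.0):
    """6-h urinary xylose expressed against the administered GX dose.

    Complete hydrolysis yields 50% xylose by mass, of which only ~50%
    is excreted renally, so the theoretical ceiling is 25% of the dose [10].
    """
    pct_of_dose = 100.0 * urinary_xylose_mg / gx_dose_mg
    pct_of_ceiling = 100.0 * pct_of_dose / 25.0
    return pct_of_dose, pct_of_ceiling

print(xylose_recovery(0.4))  # 0.4 mg xylose -> 10% of dose, 40% of ceiling
```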
In Vitro Lactase Activity Test
The hydrolysis of lactose by a mucosal scraping preparation was used to determine the lactase (EC 3.2.1.23) activity directly, with the amount of glucose produced as a read-out, according to the method described by Dahlqvist [12]. Excised small intestines (from pylorus to cecum) were thoroughly rinsed with ice-cold saline (0.9% NaCl at 4 °C) to remove luminal contents and were cut open along the longitudinal axis. Mucosal scrapings from the entire small bowel were prepared using glass slides. The collected mucosa was diluted with distilled water (1:5) and homogenized using an Ultra-Turrax blender. The homogenate was kept cold and was spun at ~2000 × g for 10 min to remove large particles and debris. Next, the lactase activity was assessed in the supernatant: the assay used a lactose solution in maleate buffer (0.1 M, pH 6) incubated for 60 min at 37 °C with an equal amount of homogenate (final lactose concentration: 28 mM). Hereupon, glucose was assessed colorimetrically using the GOD-PAP (glucose oxidase-p-aminophenazone) method (Roche Diagnostics, Almere, NL). Lactase activity was expressed as arbitrary units (U/mL homogenate): 1 unit hydrolyses 1 µmole disaccharide per min. The prepared homogenates contained ~15 mg protein/mL, determined by the BCA assay (Pierce, Fisher Scientific/Emergo, Landsmeer, NL).
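A short sketch of the unit conversion behind this read-out; how the 1:1 substrate/homogenate dilution is referenced back to the undiluted homogenate is an assumption, exposed here as a parameter.

```python
def lactase_activity(glucose_mM, incubation_min=60.0, dilution=2.0):
    """Lactase activity (U/mL homogenate) from the GOD-PAP glucose readout.

    1 U hydrolyses 1 umol disaccharide/min; one lactose releases one
    glucose, and 1 mmol/L glucose equals 1 umol/mL in the assay mix.
    `dilution` = 2.0 refers the assay-mix concentration back to the
    undiluted homogenate for the 1:1 mix described above (assumption).
    """
    return glucose_mM * dilution / incubation_min

print(lactase_activity(3.0))  # 3 mM glucose after 60 min -> 0.1 U/mL
```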
Study B: Lactose Testing in Adult Rats
Individually housed male adult rats (initial body weight (BW) 225–250 g) were fitted with a permanent intra-gastric (i.g.) cannula and a jugular vein cannula, both according to local standard procedures [13]. The chronic cannulas allowed for frequent and stress-free i.g. administration of meals and venous blood sampling, respectively, enabling monitoring of the post-meal (postprandial) plasma response upon i.g. meal application. Instilled were carbohydrate solutions (total 2 g/kg BW) or reconstituted infant milk formula (IMF). IMF solutions (Nutrilon, Nutricia, Zoetermeer, NL) contained per 100 mL: 1.4 g protein, 3.5 g fat and 7.4 g carbohydrates (>97% lactose). Furthermore, the effect of adding maltodextrin (Glucidex DE19; 0.5 g/kg BW) to IMF was studied.
Animals were fasted for 4 h prior to treatment and received a load of 6 mL/350 g BW via the intragastric cannula, whereupon blood samples were taken. Blood samples (200 µL) were collected in chilled EDTA-coated tubes to avoid coagulation. Plasma was prepared by centrifugation (about 2500 × g for 15 min at 4 °C) and stored at −80 °C until being assayed. Plasma levels of glucose and insulin were assessed by the GOD-PAP assay (Roche Diagnostics, Almere, NL) and a specific rat insulin ELISA kit (DRG Diagnostics, Veghel, NL), respectively. The insulin ELISA had a detection limit of 22.6 pM; its intra- and inter-assay variabilities were 4.6% and 4.8%, respectively.
GX-Product Preparation
The GX product used is not commercially available and, hence, was prepared in our laboratory via enzymatic β-D-galactosylation of xylose, as described by Aragon et al. [14] with some modifications. Nitrophenyl-β-D-galactopyranoside (50 mM) and D-xylose (500 mM) were dissolved in warm (37 °C) 0.2 M phosphate buffer (pH 7), to which β-galactosidase from E. coli (312 U) was added (all chemicals from Sigma-Aldrich Chemie, Zwijndrecht, NL). After incubation for 22 h at 37 °C, the synthetized disaccharides (2-, 3- and 4-galactosyl xylose, GX) were purified from the crude reaction mixture: initially, filtration removed nitrophenol, and its derivatives were adsorbed to added charcoal. Acetonitrile was subsequently added to the filtrate up to 2%, and this mixture was poured onto a bed of 200 g charcoal (in a Büchner funnel) that had been activated earlier by treatment with 0.1% TFA in 80% acetonitrile ("activated charcoal"). After a wash step with 1 L 2% acetonitrile to remove xylose, the disaccharides adsorbed to the activated charcoal bed were eluted in the fourth elution step (100 mL each) with 25% acetonitrile. Evaporation rendered a product that consisted of 95 m/m% GX mixture and only 2.2 m/m% xylose. Figure 1A shows the IEC chromatogram of the obtained purified GX product. No attempts were made to further purify the co-eluting disaccharide mixture (2-, 3- and 4-galactosyl xylose, GX), as all three regioisomers are good lactase substrates [10]. The GX product mixture was shown to be entirely digested when incubated with lactase enzyme (see Figure 1B), rendering galactose, xylose and some unidentified digestion product (possibly from the enzyme solution).
Chromatography
GX-product evaluation and urinary xylose content was assessed by ion exchange chromatography (IEC) using a BioLC system (Dionex, Amsterdam, NL). Chromatography employed an isocratic run (72 min) with 15 mM NaOH as the eluent (1 mL/min) and used a Carbopac PA1 guard column (50 × 4 mm) in series with a Carbopac PA1 analytical column (250 × 4 mm). An ED40 detector (Au, 0.015 inch gasket) with a quadric-pulse analyzed the eluate composition. Every five runs, the column was rinsed with 330 mM NaAc followed by 300 mM NaOH, whereupon it was re-equilibrated to 15 mM NaOH again.
Statistics
Data are presented as the means ± SEM. Statistical analysis was performed using SPSS 15.0 (SPSS Benelux, Gorinchem, NL). The effect of treatment was tested using a repeated measures or univariate ANOVA, followed by LSD post hoc analysis. Correlations were evaluated by Pearson's test. Differences were considered significant at p < 0.05.
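For reproducibility, a minimal sketch of the analyses named above, run on synthetic stand-in data (not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.5, size=8)   # hypothetical diet-group readouts
lactose = rng.normal(9.0, 1.5, size=8)
invitro = rng.normal(5.0, 1.0, size=20)   # hypothetical paired method readouts
invivo = 0.4 * invitro + rng.normal(0.0, 1.0, size=20)

F, p_anova = stats.f_oneway(control, lactose)  # univariate (one-way) ANOVA
r, p_r = stats.pearsonr(invitro, invivo)       # Pearson's correlation
print(f"ANOVA F={F:.2f} (p={p_anova:.3f}); Pearson r={r:.2f} (p={p_r:.3f})")
```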
Study A: Growth
Feeding rats a diet with 30% lactose from weaning until PN Day 98 did not appear to affect body growth or body weight course (growth velocity) into adulthood: the animals on the lactose diet grew well and gained weight within the normal range of Wistar rats, although body weight was on average lower than that observed in the control group. At autopsy on PN Day 98, notably smaller fat deposition was observed compared to controls: virtually no abdominal or subcutaneous fat pads were observed. Furthermore, organ weights were affected: muscle and liver weights were decreased, whereas pancreas and cecum weight was increased in lactose-fed vs. control rats (Table 1).
Continued exposure to a lactose-containing diet did not alter the observed lactase activity. On PN Day 42, the lactase activity determined in the rats on the lactose diet was found to be statistically different from the controls. Oral application of the disaccharide GX to monitor lactase activity in vivo neither differed in time nor between diet groups (Figure 3). Control data on PN Day 98 were not available (no data: n.d.). The maximal urinary xylose recovery found was about 10% in 15-day-old, milk-suckling pups. According to this method, the residual lactase activity appears to be maintained at 50% into adulthood. The urinary xylose excretion did not align with the direct measurement of the intestinal lactase activity: urinary xylose levels did not show a clear fall after weaning as observed with the mucosal scrapings method. Correlation analysis, however, revealed a weak, but significant positive correlation between the two methods used (r = 0.41, p < 0.05).
Study B: Lactose Testing in Adult Rats
The postprandial responses to i.g. instilled loads (6 mL/350 g BW) of aqueous carbohydrate (mix) solutions in adult rats are depicted in Figure 4A,B. Maltodextrin, an easily digestible glucose polymer, administered i.g. in a total dose of 2 g/kg BW, i.e., about 0.7 g per animal, elicited a quick and significant response in both glucose (Figure 4A) and insulin (Figure 4B) plasma levels. The infant milk formula (IMF) used in the present tests (Figure 5) contains 7.4 g lactose per 100 mL, i.e., 0.45 g lactose in a 6-mL load. The effect of this (albeit lower) carbohydrate load on the plasma glucose and insulin response, as shown in Figure 4, is marginal and not statistically different from baseline variability.
However, the addition of only 25% of the above-mentioned maltodextrin dose (0.175 g per animal) to the lactose dose used (as present in IMF) resulted in a significant post-meal rise in plasma glucose and insulin levels (Figure 4). Hence, in case the carbohydrate load comprises lactose only, the postprandial glucose and insulin plasma response is minimal and much lower compared to carbohydrate loads composed of pure maltodextrin or a blend containing maltodextrin.
Next, IMF was tested: a mixed nutrient solution containing lipids, proteins and carbohydrates; the postprandial response was compared to instillation of carbohydrates only (Figure 4 vs. Figure 5). The presence of macronutrients other than carbohydrates highly affected the postprandial response. In particular, the postprandial plasma insulin response was markedly increased (Figure 5B) compared to the plasma glucose response observed (Figure 5A). Post-meal plasma glucose levels are twice as high, whereas plasma insulin levels even triple (cf. the lactose curve in Figure 4 with the IMF curve in Figure 5). Increasing the lactose load by preparing more concentrated IMF solutions ("IMF 2x" and "IMF 3x") did not significantly affect the height of the postprandial plasma response in glucose or insulin (Figure 5). Thus, tripling the energy and macronutrient density in a nutritive solution containing only lactose as a carbohydrate failed to enhance the postprandial glucose and insulin responses any further.
Again, addition of a small amount of maltodextrin (0.5 g/kg, i.e., ~0.2 g per animal) to an IMF solution ("IMF + Maltodex" in Figure 5) did cause post-meal plasma levels to rise significantly.
Discussion
We confirmed a major decline in lactase activity during development, although not to the extent found previously, i.e., to 5%–10% of its original level [1,15,16]. Hence, residual lactase activity was shown to be present in adulthood and was confirmed to be functional by a small rise in plasma glucose levels upon lactose intake, as also shown by others [4,17,18].
As shown previously in artificially-reared rat pups, the ingestion of natural milk seems to play a cardinal role in the maintenance of lactase activity during the suckling period [19], the lactose content in milk not being the most essential component in this respect, nor the only substrate for lactase present in milk [5].
Whether lactase activity levels are linked to lactose-containing milk consumption (as a natural source of lactose) remains controversial [15], despite studies supporting the adaptive theory (or induction hypothesis), which states that by keeping the substrate available through continued milk ingestion, newborn lactase levels are maintained [20]. We and others [4,16] found no data to corroborate this theory and showed the residual lactase activity in adult rats to be mainly "non-functional", eliciting merely a small, insignificant post-meal glucose plasma response after lactose ingestion.
The lactase activity in rats was monitored into adulthood by two independent methods: in addition to a direct in vitro lactase assay using tissue samples, an indirect, non-invasive in vivo method was applied. The activity derived from mucosal scrapings, averaged over the entire small intestine (and ignoring possible "hot spots"), showed ~25% of the lactase activity to be retained into adulthood (Figure 2), whereas the adult urinary xylose excretion remained >50% of pre-weaning levels (Figure 3). Furthermore, the mucosal tissue assessment showed an abrupt decrease, almost as an "on/off switch", a feature not mirrored at all by the in vivo method. The methods were previously shown to correlate well and to only differ in their kinetic parameters [10,11]. In all, despite the discrepancies found, both methods do indicate and confirm a considerable residual intestinal lactase activity to be present in adult rats, both in the absence of and with continued exposure to dietary lactose (30% lactose diet), as also reported previously [15,16,21]. Weaning studies in rats earlier showed the switch from milk to solid diet to occur gradually between postnatal Day 18 and 30, suggesting the decline of functional lactase activity in the course of natural weaning to be gradual as well [22], and more in line with our in vivo data (Figure 3).
Instillation of increasing doses of lactose (0.45 g to 1.35 g as part of a load of 6 mL of IMF solution) yielded only a minor, insignificant rise in post-meal plasma glucose levels (Figure 5A). Moreover, the addition of maltodextrin (25–35 w%) to a lactose load already induced a significant rise in postprandial plasma glucose levels (Figures 4A and 5A). These data show the preference and aptitude of the adult gut to digest and absorb complex glucose-based polysaccharides, such as maltodextrin. In contrast, lactose, although (potentially) digested by the residual lactase activity, apparently is neither efficiently nor rapidly enough absorbed, even when presented in a higher dosage, to result in significantly elevated plasma glucose levels. Furthermore, insulin levels show a skewed responsiveness to glucose-based carbohydrates compared to lactose (Figure 4B), even when lactose is given mixed with other macronutrients, which also elicit insulin release (Figure 5B).
Two issues deserve to be addressed in this context: Firstly, the glycemic index (GI) of the two carbohydrates is compared. Maltodextrin (in man) is a "fast sugar" (GI 100), whereas lactose is "slow" (GI 45). This already might explain part of the differences in the plasma responses observed. Secondly, the more concentrated IMF solutions instilled ( Figure 5) have a higher energy density and osmolality, which might both slow down the gastric emptying rate, adding to the GI effect, although the insulin response seems not affected, indicative of a similar gastric residence time.
The apparent non-functionality of the observed residual intestinal lactase activity was also confirmed in the chronic lactose feeding experiment: despite the low number of animals autopsied (see Table 1), the adult body and organ weight of animals fed a 30% lactose-containing diet indicated that the lactose was not available to the animals as a direct energy source, but as an indirect one (see below). Due to this energy restriction, the animals had less energy to store or to deposit as fat in their adipose tissue, which may explain the lack of fat at autopsy, as also reported previously [23,24]. Alternatively, an involvement of calcium absorption from the gut, which is promoted by lactose [18,24,25], is possible, as high prevailing calcium levels would counteract fat storage [26]. We did not assess calcium levels, nor did we evaluate if the animals on the lactose diet showed compensatory eating.
In line with the virtual absence of fat, an expansion of the cecal gut compartment was observed in the animals on the lactose diet (Table 1). This indicates that part of the indigestible dietary lactose was treated as a "fiber" and fermented likewise by the cecal and/or colonic microflora, yielding some, but much less (indirect), energy to the rats. Previously, it was estimated that about 43% of the lactose ingested passes into the colon [8]. As already known [7], adult rats can readily adapt to a high lactose intake without any clinical disorders: indeed, we did not observe loose stools or diarrhea as manifestations of abdominal discomfort or lactose intolerance in the lactose diet group during the period studied. Table 1 also shows liver and muscle weight to be similar (the protein content of the diets was equal), but mentions a bigger pancreas in the lactose diet group. This might possibly reflect the exocrine pancreas trying to boost digestive efficiency by a higher enzyme output.
Lactose-containing weaning diets did not preserve intestinal lactase activity in our rats, despite the adaptive theory claiming this [20]. To date, only peroral gene therapy [17] has been shown in rats to "cure" functional lactose malabsorption permanently through a persistent expression of a β-galactosidase transgene in gut epithelial cells. Treated rats showed no weight loss after two weeks on a lactose-only rat chow and showed a significant post-meal response in plasma glucose levels, both at seven days and at 120 days after gene therapy.
As is known, in man, the early postnatal diet impacts development and body composition in childhood and later life [19,27–29]. Nutritional intake during early age (up to four years) may be critical in this respect, as a transition from a relatively high fat intake to a relatively high carbohydrate intake occurs at this time [22,29]. As has been shown by observational studies, a higher carbohydrate intake, implying a higher dietary glycemic load (GL), promotes the development of obesity [30]. In this respect, the quality of the carbohydrates ingested seems vital, but to date, no prospective evidence exists to substantiate a pivotal role for carbohydrate quality in early childhood [29]. In addition, dietary factors, including caloric intake, but also a higher meal frequency, resulting in lower postprandial glucose and insulin plasma levels, have been reported to affect body composition beneficially [29]. To address the issue of carbohydrate quality in the early diet, we compared the effects of various carbohydrate loads comprising maltodextrin, lactose or their combination on the postprandial plasma glucose and insulin response and kinetics in adult rats. Next, whole meals (IMFs) containing lactose and/or maltodextrin were tested and demonstrated the "bias" of the adult GI tract towards the digestion of glucose polymers. Based on these results, we conclude that lactose-containing feeds, e.g., new IMF concepts, should not be tested in adult animals. Similarly, caution is warranted for (pre-)clinical studies that employ lactose-containing solutions or nutritive preparations in adult rats, or adult animals in general (or even adult men), because the adult digestive system is directed entirely at glucose-based carbohydrates. Conversely, sucrose or glucose, regularly added to baby and toddler nutrition as an inexpensive lactose alternative, may have detrimental effects on the maturation of the neonatal gut [31,32]. More preferably, human milk replacers (bottle feeds in general) and products designed and produced by the food industry for neonates and/or premature infants [28] should ideally only contain lactose as their major carbohydrate source.
Conclusions
In conclusion, lactose digestive capacity is transiently present in the rat and declines steeply after weaning. The residual lactase activity shown to be present in adult rats does not directly predict the capacity to digest and absorb lactose-containing food preparations. These findings should be taken into account when studying lactose-containing products in preclinical adult (rat) models. | 2015-09-18T23:22:04.000Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "b271b858ef686ab2e042385b87dda260d4d429fe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/7/7/5237/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b271b858ef686ab2e042385b87dda260d4d429fe",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4713832 | pes2o/s2orc | v3-fos-license | Transferring Rich Deep Features for Facial Beauty Prediction
Feature extraction plays a significant part in computer vision tasks. In this paper, we propose a method which transfers rich deep features from a pretrained model on face verification task and feeds the features into Bayesian ridge regression algorithm for facial beauty prediction. We leverage the deep neural networks that extracts more abstract features from stacked layers. Through simple but effective feature fusion strategy, our method achieves improved or comparable performance on SCUT-FBP dataset and ECCV HotOrNot dataset. Our experiments demonstrate the effectiveness of the proposed method and clarify the inner interpretability of facial beauty perception.
I. INTRODUCTION
Facial beauty analysis [1] has been widely used in many fields such as facial image beautification apps (e.g., MeiTu and Facetune), plastic surgery, and face-based pose analysis [2]. In the mobile computing era, billions of images per day are acquired and uploaded to social networks and online platforms, leading to the demand for better image processing and analyzing technology. Recently, thanks to big data and high-performance computational hardware, computational and data-driven approaches have been proposed for solving problems such as face recognition, facial expression recognition, facial beauty analysis, etc.
The existing methods resort to machine learning and computer vision techniques to analyze facial beauty and achieve promising results [3]. The methods often include image feature descriptors (such as HOG, SIFT, LBP, etc) and supervised machine learning predictors (such as SVM, KNN, DNN, LR, etc).
In order to explore the best facial beauty prediction approach that precisely maps high-level features into face beauty ratings, we propose a method that combines transfer learning and Bayesian regression. The method achieves the improved or comparable performance on SCUT-FBP dataset [4] and ECCV HotOrNot dataset [5].
The main contributions of this paper are as follows:
• We apply transfer learning to our facial beauty prediction problem for feature extraction. Experimental results show that the transferred deep features can attain more impressive performance compared with traditional image feature descriptors such as HOG, LBP and gray value features.
• We make a detailed analysis of deep features based on knowledge adaptation. Additionally, we perform an effective feature fusion strategy to build more informative facial features in our facial beauty prediction task.
• Studies have found that neural networks lack satisfactory interpretability. We make ablative studies by visualizing the face features and reveal the elements that influence facial beauty perception.
The rest of this paper is organized as follows. Section II reviews related works on facial descriptors and learning methods. Section III describes our proposed method in detail, including deep feature extraction and Bayesian ridge regression. Experimental results and comparisons are presented in Section IV, and Section V concludes this paper with a summary and future work.
A. Facial Descriptors and Machine Learning Predictors
Many researchers focus on developing new machine learning algorithms to achieve better classification or regression performance, while others focus on designing better facial feature descriptors. Zhang et al. [6] combine several low-level face representations and high-level features to form a feature vector and perform feature selection to optimize the feature set. Eisenthal et al. [7] use a vector of gray values created by concatenating the rows or columns of an image. Huang et al. [8] propose a method to learn hierarchical representations with convolutional deep belief networks. Xie et al. [4] resort to deep learning to train a predictor and achieve state-of-the-art performance. Amit et al. [9] use numerous facial features that describe facial geometry, color and texture to predict facial attractiveness. Lu et al. [10] detect face landmarks with ASM and then extract facial features based on Blocked-LBP, which achieved a Pearson Correlation of 0.874 on 400 high-quality female face images. Zhang et al. [11] compute geometric distances between feature points and ratio vectors composed of geometric distances, and then treat them as features for machine learning algorithms. Given the lack of abundant labeled images, it often takes a lot of time to fine-tune a deep neural network's architecture and parameters to achieve a competitive result while avoiding overfitting.
In addition, some research works towards developing or improving new machine learning algorithms. Eisenthal et al. [7] employ KNN and SVM as classifiers to rate faces belongs to different levels. Gan et al. [12] use deep self-taught learning to obtain hierarchical representations and learn the concept of facial beauty. Xu et al. [13] propose a method which constructs a convolutional neural network (CNN) for facial beauty prediction using a new deep cascaded fine tuning scheme with various face inputting channels. Wang et al. [14] use deep auto encoders to extract features and take a low-rank fusion method to integrate scores, and their method achieves promising results. Xu et al. [15] propose "psychologically inspired CNN (PI-CNN)" for automatically facial beauty prediction.
B. Deep CNN and Transfer Learning
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction [16]. A CNN is a type of neural network designed to process data that come in the form of multiple arrays. Deep learning has been used as a dramatically powerful tool in computer vision tasks such as image recognition [17], [18], [19], [20]. The features are automatically extracted via stacked layers. Neural networks are trained through the back-propagation algorithm to minimize the cost function.
Deep convolutional neural networks show a more extraordinary capacity for feature extraction than traditional handcrafted descriptors. However, we may need to design different network architectures and train the deep neural networks almost from scratch to satisfy our task, which imposes a heavy computational burden. Transfer learning allows us to fine-tune the higher layers based on a pretrained model, or even just treat the pretrained model as a feature extractor.
Yosinski et al. [21] show that initializing a network with transferred features from almost any number of layers can produce an improvement to the generalization even after finetuning to the target domain dataset. Yoshua Bengio et al. [22] explore why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario. Donahue et al. [23] show that the features extracted by deep convolutional neural networks pretrained on ImageNet can achieve much better performance than many algorithms on lots of classification tasks, which illustrates the great generality and transferability of deep convolutional neural networks.
A. VGG Network
We include a brief review of VGG, which is employed by our proposed method. VGG [18] consists of 16–19 weight layers and very small (3 × 3) convolution filters. Fig. 1 shows the overall architecture of the VGG16 network. Though the VGG network architecture is simple, it is widely used in many computer vision tasks. In our experiments, we take a VGG face model which is pretrained on a face verification task [24]. Although the original task is absolutely different from our facial beauty prediction task, it shows dramatically impressive performance. We believe the main reason for this can be attributed to the extraordinary feature representation power of deep CNNs. Fig. 1: Network architecture. We adopt VGG16 in our feature extraction procedure, which is composed of multiple small convolutional filters to extract more informative features compared with the bigger filters used in AlexNet [17] for the ImageNet recognition task.
B. Deep Feature Extraction
Several research works [22], [16] show that the deep convolutional neural networks can learn increasingly powerful representations as the feature hierarchy becomes deeper. However, due to the limited labeled face images, if we train a deep convolutional neural network directly, we may suffer from severe overfitting problems. Recently, transfer learning has aroused much attention [25], which enables us to finetune from a pretrained model or just treat the learned neural network as a feature extractor to satisfy our tasks [21].
We extract facial features with the VGG face model [24] pretrained on a face verification task. Although the target task differs from our facial beauty prediction task, the features achieve remarkable performance, which indicates the extraordinary feature representation power of CNNs to some degree. Research [21] shows that features in lower layers contain more detailed information, while features in higher layers represent more semantic meaning. Our method therefore concatenates features from a relatively low layer and a relatively high layer as our facial representation. We also use HOG, grayscale and LBP features in our experiments for comparison, to evaluate the feature extraction capacity of deep CNNs.
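A minimal Keras sketch of this extraction step; ImageNet VGG16 weights stand in here for the VGG-Face weights [24] used in the paper, with block4_conv1/block5_conv1 playing the roles of conv4_1/conv5_1.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

# ImageNet weights as a stand-in for the VGG-Face model of [24]
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
extractor = tf.keras.Model(
    inputs=base.input,
    outputs=[base.get_layer("block4_conv1").output,   # plays conv4_1
             base.get_layer("block5_conv1").output])  # plays conv5_1

def extract_features(images):
    """images: preprocessed float array (N, 224, 224, 3) -> (N, D) features."""
    f4, f5 = extractor.predict(images, verbose=0)
    n = len(images)
    return np.concatenate([f4.reshape(n, -1), f5.reshape(n, -1)], axis=1)
```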
C. Bayesian Ridge Regression
We feed the concatenated feature vectors into the Bayesian ridge regressor. Bayesian ridge regression includes regularization during the estimation procedure: the regularization term is not embedded in the cost function directly, but tuned to the data distribution. The L2 regularization used in Bayesian ridge regression is equivalent to maximizing a posterior estimate of the parameters w with precision λ−1 under a Gaussian prior. The output y is assumed to be Gaussian distributed around Xw in order to form a fully probabilistic model: $p(y|X, w, \alpha) = \mathcal{N}(y|Xw, \alpha^{-1})$. The Bayesian ridge regressor thus evaluates a probabilistic model of the regression problem, with the prior for the parameter w given by a spherical Gaussian: $p(w|\lambda) = \mathcal{N}(w|0, \lambda^{-1} I_p)$. The priors over α and λ are chosen to be Gamma distributions, the conjugate prior for the precision of a Gaussian.
The parameters w, α and λ are estimated jointly during the fit procedure. The remaining hyperparameters are the parameters of the Gamma priors over α and λ. All the parameters are tuned by maximizing the marginal log likelihood.
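A minimal scikit-learn sketch of this regression stage; the feature matrix and scores below are synthetic placeholders, not the SCUT-FBP data.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))                    # stand-in deep features
y = X @ rng.normal(size=64) + rng.normal(scale=0.5, size=400)  # stand-in scores

reg = BayesianRidge()    # alpha and lambda are estimated from the data
reg.fit(X, y)
mean, std = reg.predict(X[:5], return_std=True)   # predictive mean and std
print(mean.round(2), std.round(2))
```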
IV. EXPERIMENTS
We implement our method with TensorFlow [26] and Scikit-Learn [27] on an Ubuntu server with NVIDIA Tesla K80 GPU and Intel Xeon CPU.
A. SCUT-FBP Dataset
The SCUT-FBP dataset [4] contains images of 500 Asian females. Each image is scored by 10 raters; the main task is to build a computational model to predict the average score of the human portrait image.
Since the images in SCUT-FBP [4] are not of the same size and deep CNNs require fixed-size square inputs, we adopt three methods named "Crop", "Warp" and "Padding" to obtain square images. In the "Crop" setting, we detect the face using the detector provided by [28], crop the face region, and resize it to 224 × 224. In the "Warp" setting, we forcibly warp the image to a 224 × 224 image. In the "Padding" setting, we resize the longer side to 224 and zero-pad the shorter side to form a 224 × 224 image (see Fig. 2). We also normalize the input image by subtracting the mean and dividing by the standard deviation of the pixels. Furthermore, we manually crop the central region of the image and treat it as the input for our neural networks in case of failed face detection. On the SCUT-FBP dataset, we concatenate the conv5_1 and conv4_1 layers' features. The pipeline is shown in Fig. 3. Fig. 3: Pipeline of our proposed method. The face is detected and then fed into CNNs; we concatenate the conv4_1 and conv5_1 layers' feature maps and flatten them into feature vectors as the input of Bayesian ridge regression.
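A sketch of the three preprocessing settings with OpenCV; the top-left padding placement and the per-image normalization details are our reading of the description, not code from the paper.

```python
import cv2
import numpy as np

SIZE = 224

def crop(img, box):
    x, y, w, h = box                          # face box from a detector [28]
    return cv2.resize(img[y:y + h, x:x + w], (SIZE, SIZE))

def warp(img):
    return cv2.resize(img, (SIZE, SIZE))      # forced square resize

def pad(img):                                 # assumes a 3-channel image
    h, w = img.shape[:2]
    s = SIZE / max(h, w)                      # longer side -> 224
    r = cv2.resize(img, (int(round(w * s)), int(round(h * s))))
    out = np.zeros((SIZE, SIZE, 3), dtype=r.dtype)
    out[:r.shape[0], :r.shape[1]] = r         # zero-pad the shorter side
    return out

def normalize(img):
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)
```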
B. Performance Evaluation
In our experiment, we use Pearson Correlation (PC), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as the criteria for evaluating our method.
$MAE(X, h) = \frac{1}{m}\sum_{i=1}^{m}\left|h\left(x^{(i)}\right) - y^{(i)}\right|$
$RMSE(X, h) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(h\left(x^{(i)}\right) - y^{(i)}\right)^2}$
where m denotes the number of images, x^(i) denotes the input feature vector of image i, h(·) denotes the learning algorithm, and y^(i) denotes the ground truth attractiveness score of image i. MAE and RMSE measure the fit quality of the learning algorithms; the performance is better if the value is closer to zero. PC measures the linear correlation between h(x^(i)) and y^(i). Its value lies between 1 and −1, where 1 means absolutely positive linear correlation, 0 means no linear correlation, and −1 means absolutely negative linear correlation.
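The three criteria written out directly, as a NumPy sketch of the formulas above:

```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_pred - y_true)))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def pearson(y_true, y_pred):
    yt, yp = y_true - y_true.mean(), y_pred - y_pred.mean()
    return float(yt @ yp / np.sqrt((yt @ yt) * (yp @ yp)))

y = np.array([2.5, 3.0, 4.2, 1.8])  # toy ground-truth scores
p = np.array([2.7, 3.1, 3.9, 2.0])  # toy predictions
print(mae(y, p), rmse(y, p), pearson(y, p))
```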
In order to make the prediction more reliable and reproducible, we follow the protocol described in [4] for fair comparison. We randomly select 400 images as the training set and the remaining 100 images as the test set. Finally, we average the results of 5 such experiments as the final performance to remove sample variance. The results are shown in TABLE I. TABLE II shows the performance comparison with other methods; the best performance is marked in bold and the second best is highlighted with an underline (results of [4] and [15] are not given and are hence denoted with "-"). Our method ranks second on the SCUT-FBP [4] dataset.
C. Ablation Analysis
It is almost common sense in machine learning practice that "features matter". To illustrate the feature extraction capability of deep learning, we conduct experiments based on different features, including HOG, LBP, gray image and transferred deep features, for performance comparison and visualization:
• Raw Grayscale: we convert the RGB facial images into their corresponding grayscale ones, and the flattened pixel grayscale values are used as the feature.
• HOG: HOG is an image feature descriptor which is widely used in computer vision and image processing for object detection tasks. Details can be found in [29].
• LBP: LBP is a type of feature descriptor which cares more about texture details, and is widely used in many machine vision tasks.
In addition, we compare the feature performance from different layers to find which layer produces the most discriminative features (see Fig. 5).
Moreover, among the three preprocessing methods (Crop, Warp, and Padding), Crop achieves the best performance on SCUT-FBP, which indicates that the facial region plays a more significant part in beauty perception, while the background may act as noise in our facial beauty prediction task on the SCUT-FBP dataset (see TABLE IV). Fig. 5: Performance comparison between different layers: the performance gets better as the layer goes deeper, which means the deep CNN extracts more discriminative features; it decreases sharply after the max pooling operation, which may be attributed to heavy spatial information loss. Fig. 5 depicts that as the layer goes deeper the performance gets better, reaching its best at conv5_1, while when feature maps are flattened into vectors we see a sharp drop in performance, which may be attributed to the heavy spatial information loss. TABLE IV: Performance of different preprocessing methods ("Crop", "Warp", and "Padding") on SCUT-FBP; "Crop" achieves the best.
D. ECCV HotOrNot Dataset
The ECCV HotOrNot dataset [5] contains 2056 faces collected from the Internet. Each face is labeled with a score, and the dataset has already been split into 5 training and test sets. Unlike the SCUT-FBP dataset [4], the faces in the ECCV HotOrNot dataset [5] are more challenging because of variant postures, cluttered backgrounds, illumination changes, low resolution and unaligned faces, which make facial beauty prediction more difficult (see Fig. 6).
The ECCV HotOrNot dataset uses Pearson Correlation (PC) as the performance metric. We also list MAE and RMSE for more detailed comparison.
E. Ablation Study
We concatenate the conv5_2 and conv5_3 layers' feature maps and flatten them to form more informative features. The concatenated features are then fed into the Bayesian ridge regression algorithm [30].
We implement two means to evaluate the impact of preprocessing techniques. In solution A, we run the face detector [28] to detect 68 facial landmarks and the facial region. For grayscale images, we replicate the gray pixel value twice to form a three-channel RGB image. Then we calculate the inclination angle of the two-eye line to the horizontal, denoted as θ, from the eye coordinates. If |θ| > 0, we rotate the face around the central point by θ degrees and crop the facial region. The mean pixel value is subtracted from the cropped image, which is then normalized by its standard deviation. Solution B includes only mean subtraction and standard deviation division on the original images; no additional preprocessing is taken.
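A sketch of the solution-A rotation step with OpenCV; the eye coordinates would come from the 68 landmarks, and rotating about the image center is an assumption.

```python
import math
import cv2

def align_face(img, left_eye, right_eye):
    """Rotate the image so that the eye line becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    theta = math.degrees(math.atan2(dy, dx))  # inclination to the horizontal
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)
    return cv2.warpAffine(img, M, (w, h))
```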
We find that solution B achieves much better performance than solution A; the results can be found in TABLE V. We believe the main reason is that the annotators may also have taken extra information such as haircut, posture, and clothing into consideration while labeling these facial beauty scores, instead of just judging the face region.
Additionally, we define $\epsilon = |y_i - \hat{y}_i|$, which describes the error between the predicted facial beauty score ($\hat{y}_i$) and the ground truth beauty score ($y_i$). If $\epsilon \geq \tau_1$, we believe there is a relatively severe bias between the predicted value and the ground truth score. If $\epsilon \leq \tau_2$, we believe our algorithm fits these samples perfectly. In this part, we set $\tau_1 = 2.75$ and $\tau_2 = 0.02$ for detailed analysis (see Fig. 7 and Fig. 8). We believe the performance could be greatly improved through face alignment techniques. Besides, posture and facial expression may also contribute to beauty perception, because our algorithm fails to capture the samples with variant postures.
Table VI compares the Pearson Correlation of our proposed method with five state-of-the-art methods. Our method outperforms the other methods and achieves the best performance on the ECCV HotOrNot dataset without face alignment.
V. CONCLUSION
In this paper, we propose a method which extracts rich deep facial features through knowledge adaptation and then trains a Bayesian ridge regression algorithm for facial beauty prediction. Although the VGG model is pretrained for a totally different task, it captures more descriptive information than conventional hand-crafted features, and even outperforms many deep learning-based methods in our facial beauty prediction task, which shows the great generality of deep features in transfer learning. With our feature fusion strategy, our method outperforms other methods, achieving state-of-the-art performance on the ECCV HotOrNot dataset [5] without face alignment and comparable performance on the SCUT-FBP dataset [4]. In our future work, we plan to explore 3D face alignment and novel network architectures for extracting more descriptive features.

TABLE VI: Performance comparison on the ECCV HotOrNot dataset [5]; Pearson Correlation (PC) is used for evaluating performance. Our method (PC 0.468) achieves the best result on this dataset among the methods reported in [5], outperforming the previous best [14] (PC 0.437). | 2018-03-20T04:39:28.000Z | 2018-03-20T00:00:00.000 | {
"year": 2018,
"sha1": "9768b5245ee449b300d3da718a089e6a057a4e6d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4698a599425c3a6bae1c698456029519f8f2befe",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
201306427 | pes2o/s2orc | v3-fos-license | An Inversion Formula for Horizontal Conical Radon Transform
In this paper, we consider the conical Radon transform on all cones with horizontal central axis whose vertices are on a straight line. We derive an explicit inversion formula for such a transform. The inversion makes use of the vertical slice transform on the sphere and the V-line transform on the plane.
Introduction
Let us denote by C the set of all cones in R^n. Then, a (weighted) conical Radon transform of a function f ∈ C^∞(R^n) is the function T(f) : M ⊂ C → R defined by the weighted surface integral T(f)(c) = ∫_c w(x, c) f(x) dσ(x), where w(x, c) is a positive smooth weight function. The conical Radon transform has been actively studied thanks to its applications in Compton camera imaging (see [6, 21]). In Compton camera imaging, one has to invert a conical Radon transform in order to find the interior image of a biological object from the measurement of Compton scattering.
In the two-dimensional space (n = 2), the conical Radon transform becomes the V-line transform, which also arises in optical tomography [8]. There exist quite a few inversion formulas for the V-line transform (e.g., [3, 18, 23, 7, 2, 14]). In the three-dimensional space (n = 3), C is a six-dimensional manifold and there are many practical choices of M. Taking advantage of redundancy, i.e., choosing dim(M) > 3, was the topic of several works (see, e.g., [22, 16]). One, however, may wish to study the case dim(M) = 3 both for the mathematical interest and for practical setups of Compton camera imaging [5, 19, 1, 11, 17]. Several papers (e.g., [10, 12, 22]) gave inversion formulas for the conical transform in general dimensions. We mention that using spherical harmonics to compute series solutions is also a popular approach for inversion (see, e.g., [4, 15, 20]).
In this paper, we aim to reconstruct a function f ∈ C^∞(R^3) from all cones whose vertices lie on a vertical line and whose central axes are horizontal; see Fig. 1. The manifold M of such cones is three-dimensional. This formulation corresponds to Compton camera imaging with detectors on a line.
We define our (weighted) conical Radon transform T_k of a function f ∈ C_0^∞(R^3), where k ∈ N is fixed (Definition 2.1). In this paper, we investigate the inversion of T_k. In order to invert the conical transform introduced in the previous section, we introduce the weighted X-ray and vertical slice transforms.
The main results
We define the weighted X-ray transform X_0. The weighted X-ray transform has been used in inverting the conical transform in other setups [17].
The vertical slice transform was investigated by Gindikin [9]. It admits the following inversion formula (see [9, Theorem 2.1]): let ω ∈ S^2 and let g ∈ C(S^2) be even in the third coordinate; then g can be recovered from its vertical slice transform. The following lemma gives us the relationship between the weighted X-ray, the weighted conical Radon, and the vertical slice transforms. In the lemma, we have used the notation Γ(χ_k f)(z, ·) for the vertical slice transform of χ_k f(z, ·). Let us now prove the lemma.
This finishes the proof.
Choosing h = sin η sin γ, this finishes our proof.
Let us note that 2(X_0 f)_e(z, ϕ, η) is the integral of f along a V-line whose vertex is b(z); each branch makes an angle η with the horizontal plane and is the reflection of the other through the horizontal plane. We are now ready to compute the function f in R^3 from its conical transform in Definition 2.1. To this end, we will decompose R^3 into the union of half-planes H_e, where e is a horizontal unit vector. Here, H_e is the vertical half-plane passing through the z-axis and containing the unit vector e. We only need to compute f on each such half-plane. On each such half-plane we are given the V-line transform of the function f, which integrates f over all V-lines with horizontal central axis whose vertices are on the boundary of H_e. The inversion of such a transform can be reduced to that of the X-ray transform (see [3]). This idea will be used in our proof of Theorem 2.6 below. | 2019-08-22T08:42:08.000Z | 2019-08-22T00:00:00.000 | {
"year": 2019,
"sha1": "ece667f6dd7050260cb04880e4ab1fcd8470bdbf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ece667f6dd7050260cb04880e4ab1fcd8470bdbf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
249802143 | pes2o/s2orc | v3-fos-license | Exploring drought-responsive crucial genes in Sorghum
Summary Drought severely affects global food production. Sorghum is a typical drought-resistant model crop. Based on RNA-seq data for Sorghum with multiple time points and the gray correlation coefficient, this paper firstly selects candidate genes via mean variance test and constructs weighted gene differential co-expression networks (WGDCNs); then, based on guilt-by-rewiring principle, the WGDCNs and the hidden Markov random field model, drought-responsive crucial genes are identified for five developmental stages respectively. Enrichment and sequence alignment analysis reveal that the screened genes may play critical functional roles in drought responsiveness. A multilayer differential co-expression network for the screened genes reveals that Sorghum is very sensitive to pre-flowering drought. Furthermore, a crucial gene regulatory module is established, which regulates drought responsiveness via plant hormone signal transduction, MAPK cascades, and transcriptional regulations. The proposed method can well excavate crucial genes through RNA-seq data, which have implications in breeding of new varieties with improved drought tolerance.
INTRODUCTION
With the increase of the global population, food security has become a serious global problem. Drought is a typical abiotic stress that severely affects food security. It is reported that the impact of drought on crops is the most grievous among all abiotic stresses (Fahad et al., 2017; Jaiswal et al., 2021), and it is estimated that drought directly causes an average of $2.9 billion in losses annually (Fahad et al., 2017). An efficient way to guarantee food security is to optimize and cultivate crops that can adapt quickly to environmental changes (Council, 1996), such as drought stress. However, exploring the drought-resistant mechanisms of crops and the associated crucial genes is the first step toward cultivating novel drought-resistant varieties and alleviating the impact of drought on crop yields. It is well known that different crops have varied water demands to maintain growth and development. Compared with Corn, Barley, and Wheat, Sorghum is extremely resistant to drought and can survive for several weeks without water (House, 1985). Actually, Sorghum is characterized by its low water consumption, high water utilization, and high photosynthetic efficiency; thus, it is widely planted in arid and semi-arid areas, and it has become an ideal plant for probing drought responsiveness (Mace et al., 2013). The genome of Sorghum was first published in 2009 (Paterson et al., 2009), and considerable research on transcriptomic sequencing has subsequently been performed (Mace et al., 2013; Varoquaux et al., 2019; Zhang et al., 2019), which facilitates systematic exploration of its drought-resistant mechanisms from omics data (Ngara et al., 2021).
For omics data analysis, a challenging issue is to develop appropriate mathematical and statistical tools to explore useful bioinformatics. Various data-driven techniques have been developed, and great advances have been made during the last decades. The associated techniques include dimensional reduction and variable pre-selection, network reconstruction and network-based information mining, sophisticated model-based methods for crucial gene identification, and so on (Lü and Wang, 2020).

Hereinafter, we briefly review some related works on omics data analysis. First of all, massive omics data often contain many covariates but only a few samples; the considerable number of actually uncorrelated or independent covariates greatly hinders subsequent analysis and applications. Therefore, it is necessary to perform dimensional reduction or variable pre-filtering. RNA-seq data often include far more genes than samples.
Finally, as to model-based methods for crucial gene identification, many methods or algorithms have been reported, including the hidden Markov random field (HMRF) model (Besag, 1986). Generally, HMRF model can be used to describe the noncausal context relationship or spatial relations in physical phenomena. The HMRF model has been widely used in genome-wide association studies (GWAS). For example, based on the guiltby-association principle (Xu and Li, 2006;Wu et al., 2008;Jeong et al., 2001), Chen et al. incorporated biological pathway information into the HMRF model to screen informative GWAS signals (Chen et al., 2011). However, Chen et al. overlooked the dynamic feature of biological networks (Wu et al., 2008;Yang et al., 2011;Vanunu et al., 2010;Chen et al., 2009;Lee et al., 2011). Subsequently, Hou et al. integrated gene rewiring networks into the HMRF to study the Crohn and Parkinson diseases. They introduced the guilt-by-rewiring principle in the HMRF model to prioritizing genes. The method proposed by Hou et al. considered the dynamic characteristics of the networks (Hou et al., 2014), which is biologically meaningful. However, the existing models need to integrate multiple omics data, including gene expression data and GWAS data, which are inappropriate for the cases without required data. Moreover, the use of multiple omics data unavoidably introduces bias and batch effect, which inspire us to develop novel methods that merely rely on single omics data, such as RNA-seq data.
Motivated by the mentioned issues, we will explore the RNA-seq data for Sorghum under drought stress and with multiple time points. Firstly, the MV test is used to exclude genes that are independent with the response or phenotype; then, based on the expression data of the selected genes under treatments and controls and the gray correlation coefficient (GCC), weighted gene differential co-expression networks (WGDCNs) are constructed. Finally, combining the WGDCNs and the HMRF model, the posterior probabilities of genes that contribute to drought stress are obtained. GO enrichment analysis and gene sequence alignment analysis reveal that the screened crucial genes play critical functional roles during drought stress in Sorghum. The main contribution of this paper includes three aspects: 1) A method that integrates the WGDCN and the HMRF model is proposed to analyze RNA-seq data, which has the advantages of both considering the network structural information and the sophisticated statistical model; 2) The RNA-seq data for Sorghum under drought stress and with multiple time points are explored; droughtresponsive crucial genes are identified for different developmental stages, and their biological functions are investigated in detail; 3) A multilayer differential co-expression network and a possible gene regulatory module are established, which can be used to reveal certain mechanisms of drought responsiveness in Sorghum.
Method summary
Our goal is to statistically identify drought-responsive crucial genes in Sorghum. A summary of the proposed approach is depicted in Figure 1. Firstly, since Sorghum undergoes great phenotypical changes from week 3 to week 17, and to more precisely screen crucial genes at different developmental periods, we divide the RNA-seq data into five stages, each covering three weeks. This classification mainly considers the developmental features (Vanderlip and Reeves, 1972) of Sorghum and the balance of samples across stages. Secondly, for the processed data from each stage, we perform the MV test to exclude genes that are independent of the treatments, and retain genes with P_MV ≤ 0.01 as candidate genes for statistical analysis. Based on the RNA-seq data of the selected genes, a WGDCN is constructed for each stage. Finally, combining the WGDCN and the HMRF model (Method details), the posterior probability of each candidate gene is obtained for each stage. The obtained posterior probability reflects the association tendentiousness of a gene with drought stress. The candidate genes can be prioritized according to the posterior probabilities, and the top-ranked genes are deemed crucial ones.

Figure 1. Schematic flowchart of the proposed method to identify drought-responsive crucial genes in Sorghum. RNA-seq data under pre-flowering drought, post-flowering drought, and normal watering conditions are considered, and the samples are divided into five developmental stages; each stage covers samples from three successive weeks. The notation week_i^j stands for the sample of the j'th replicate at the i'th week. For each stage, the original RNA-seq data are first processed and filtered by the mean variance test; then, based on the gray correlation coefficient and the guilt-by-rewiring principle, a weighted gene differential co-expression network is constructed. Finally, based on the weighted gene differential co-expression network and the hidden Markov random field model, posterior probabilities for candidate genes are obtained. Drought-responsive crucial genes are genes with high posterior probabilities.
Rankings according to the proposed method have high discrimination ability
Based on the RNA-seq data of Sorghum, the MV test screens 6682, 12,276, 1712, 4672, and 3983 genes at the five stages, respectively, among which 335 genes are commonly selected at all five stages (Figure 2). In the former four stages, more differentially expressed genes (DEGs) are downregulated, whereas at Stage 5 the upregulated DEGs outnumber the downregulated ones. About half of the candidate genes are not differentially expressed (|log2(FC)| < 1 or P > 0.05). By incorporating the WGDCNs and the HMRF model, the posterior probabilities of the candidate genes are obtained. Our results reveal that a considerable number of genes have posterior probabilities ranging from 0.96 to 1, with slight differences between stages (Figures 2C and 2D). The distributions of P_MV (Figure 2C) and the posterior probabilities (Figure 2D) show reverse trends, and there are some differences between the two, especially for the last two stages.
In order to compare the discrimination abilities of the MV test and the HMRF model, we define the discrimination abilities of the MV test and the HMRF model at the l'th stage as

D_l^MV = U_l^MV / m_l,   D_l^HMRF = U_l^HMRF / m_l.

Here, m_l is the number of candidate genes at Stage l, U_l^MV is the number of unique rankings according to the MV test, and U_l^HMRF denotes the number of unique rankings from the HMRF model at the l'th stage. Table 1 shows that the discrimination abilities of the MV test are apparently lower than those of the HMRF model, which indicates that the HMRF model can more precisely distinguish the differences among genes.
Drought-responsive crucial genes and their functional analysis
Hereinafter, the top-20 ranked genes with high posterior probabilities at each developmental stage will be selected as crucial drought-responsive ones. The top-20 ranked genes account for approximately 0.09% of all genes detected in RNA-seq.

Figure 2. Information for candidate genes at the five stages. (A) Volcano plots for candidate genes at each stage. sig(Up/Down) represents significantly up-/down-regulated genes; FC denotes the fold change of gene expression between treatment and control; FC(Up/DownOnly) represents genes with log2(FC) ≥ 1 or log2(FC) ≤ −1 whose expression is nevertheless not significantly different between treatments and controls (P ≥ 0.05). P(Only) denotes genes with P < 0.05 and |log2(FC)| < 1; NoDiff denotes genes with both P ≥ 0.05 and |log2(FC)| < 1.

It is known that plants can cope with drought in various ways, such as metabolism (Bhargava and Sawant, 2012; Pinheiro and Chaves, 2011), biosynthesis (Capell et al., 2004; Ilhan et al., 2015), osmotic adjustment (Babita et al., 2010; Flowers and Yeo, 1986), stomatal closure, and reduction of photosynthetic rates (Pezeshki and Chambers, 1986). GO enrichment analysis reveals that the top-20 ranked genes are enriched in drought-related biological processes (Figures 3A-3E), including response to stimulus, response to stress (drought and oxidative stresses), and response to chemical. Figure 3F shows 15 enriched biological processes and the associated candidate genes. The 15 processes include response to stress/stimulus, regulation of response to watering, and cellular response to water deprivation. Among the associated genes, Sobic.001G0401300.v3.1 and Sobic.004G116300.v3.1 participate in many of the 15 biological processes; Sobic.001G079500.v3.1, Sobic.001G095700.v3.1, and Sobic.009G116700.v3.1 are involved in responding to water and water deprivation. However, the GO enrichment results for the bottom-20 ranked genes are quite different from those for the top-20 ranked ones; no apparent processes are associated with drought responsiveness (Figure S1). GO enrichment analysis suggests that the top-20 genes ranked by the HMRF may actually play a key role during drought responsiveness in Sorghum.
Among the identified crucial genes, based on sequence alignment analysis (Johnson et al., 2008) against the Arabidopsis genome, we find that many genes are homologous with known drought-related genes in Arabidopsis; for example, Sobic.003G229400.v3.1 is possibly homologous with MPK3 and MPK6 (Table S1).
Many studies have reported that MPKs play roles in regulating developmental processes and in responding to various stimuli in plants (Ma et al., 2017). Tsugama et al. reported that MPK6 can be directly regulated by drought, and that ROS-induced MPK6 activation serves as an upstream signal under drought conditions (Tsugama et al., 2012). Sobic.007G077466.v3.1 is homologous with WRKY66 and WRKY75, which belong to the WRKY transcription factor (TF) family. The WRKYs play important roles during stress responsiveness in plants (Wang et al., 2018a). Some other homologous genes in Arabidopsis include PDC1, PMH1, LEA, SOS6, IAA7, PBS1, ARSK1, ERD14, RBOHD, and so on. Many of them are involved in drought-related biological processes (including responding to water/water deprivation and cellular response to water deprivation), and some of them have been confirmed by previous studies (Table S1).
As a summary, GO enrichment analysis and sequence alignment analysis with the Arabidopsis genome reveal that the identified top-20 ranked genes are inextricably associated with drought stress, which indicates that the proposed method is efficient in identifying crucial drought-responsive genes in Sorghum.
Multilayer differential co-expression network analysis for the identified crucial genes
Hereinafter, based on the identified top-20 ranked genes, we construct a temporal multilayer differential co-expression network to explore the selected genes (Figure 4A). The multilayer network is constructed as follows. Firstly, we extract subnetworks of the WGDCNs for the top-20 ranked genes at each stage; the subnetwork for each stage serves as one layer, and the weights of intralayer edges are the same as those in the WGDCNs. Secondly, interlayer edges are added, which connect the same gene at two different layers. Structural analysis reveals that the temporal network at Stage 2 encompasses the largest average degree and average clustering coefficient, and it has the lowest average path length, which indicates that the associated network has the small-world property (Figure 4B). The subnetworks at Stages 4 and 5 are more densely connected than those at the other stages, which reveals that relatively more rewiring events among the selected genes have been triggered by drought stress at the reproductive growth stages. The expression profiles of the identified genes show some patterns in samples under treatment and control (Figure 4C).
To evaluate the overlap of nodes across layers, we compute the Jaccard similarity coefficient (Wang and Wang, 2022) according to

J(A_i, A_j) = |A_i ∩ A_j| / |A_i ∪ A_j|.

Here, A_i denotes the selected gene set at the i'th stage (i = 1, 2, 3, 4, 5). For the five-layer network, we obtain the pairwise Jaccard coefficients between stages (a minimal computation sketch follows this paragraph). The overlaps between different layers are very low, which may reveal that there are considerable differences in rewiring patterns among different developmental stages of Sorghum. Especially, the overlaps among the first three stages are quite low, which may be due to the fact that the first three stages are developmental growth stages; quick growth of the plants leads to great phenotypical differences, as well as great differences in the associated crucial genes. However, the overlap between Stage 4 and Stage 5 reaches 0.1282, which suggests that the two stages share comparably more common genes than the first three stages do. Several common genes in the last two stages continuously play functional roles under drought stress. Stage 3 shares the fewest common genes with the other stages, which well separates the pre-flowering period (Stages 1 and 2) from the post-flowering period (Stages 4 and 5). Actually, there are 57 crucial genes in the pre-flowering period and 31 crucial genes in the post-flowering period, which may indicate that drought responsiveness in Sorghum is more complex before flowering. Moreover, the genes screened at Stage 2 have overlaps with all of the other four stages, indicating that Stage 2 may be a very important developmental stage that closely relates to the whole life of Sorghum. In the face of a drought environment, we should pay special attention to the prevention of pre-flowering drought and enhance defensive measures at Stage 2 to reduce the influence of abiotic stress on crops.
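The overlap computation itself is straightforward; the sketch below uses toy placeholder gene sets in place of the actual top-20 lists.

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two gene sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Pairwise overlap of the stage-wise top-20 lists (toy placeholder sets).
stages = {1: {"g1", "g2"}, 2: {"g2", "g3"}, 3: {"g4"},
          4: {"g5", "g6"}, 5: {"g5", "g7"}}
for i in range(1, 6):
    for j in range(i + 1, 6):
        print(f"J(A{i}, A{j}) = {jaccard(stages[i], stages[j]):.4f}")
```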
Figure 4. Multilayer differential co-expression network analysis for the top-20 ranked genes at the five stages. (A) The constructed multilayer network for the selected genes. The interlayer edges connect the same gene in two layers; node sizes are proportional to their evidence of drought responsiveness (whether they are reported to be drought responsive (Table S1) or are annotated with drought-related GO biological processes; the largest nodes are both reported in existing references and functionally annotated). (B) Density, average degree, average path length, and average clustering coefficient for the network at each layer.
(C) Clustering analysis of the expression profiles for the top-20 ranked genes in all samples. The clustering analysis is based on normalized data and is performed with the average linkage method and Euclidean distance.
Crucial gene regulatory module for drought responsiveness in Sorghum
The multilayer GDCN only reflects differential co-expression patterns among genes; based on PlantRegMap, we can further predict possible gene interactions and explore the gene regulatory module for drought responsiveness in Sorghum (Figures S2 and S3; see also Figure 5A). Moreover, the expression of these genes changed more severely in roots than in leaves (Figure 5B). Further, based on the STRING database, the homologous genes in Arabidopsis are also connected in the protein-protein interaction (PPI) network (Figures 5A and S3). The PPI network consists of several well-known genes that have been reported to be closely related to drought stress, such as MPK6, MYC2, IAA7, ERD14, ERD10, WRKY75, and AUX1. The associated genes in the PPI network are involved in plant hormone signal transduction, the MAPK signaling pathway, and stress response. Enrichment analysis shows that genes in the homologous network are enriched in various stress-responsive processes, including response to desiccation, water deprivation, and water (Figure 5C).
The homologous genes in Figure 5A are not only involved in drought-related biological processes but also relate to hormone-related (including abscisic acid (ABA) and jasmonic acid) biological processes and MAPK cascades (Figure 5C). Actually, phytohormones play an important role in regulating drought stress. Plants can sense and respond to environmental changes via a series of hormone-mediated signal cascades. ABA is a common hormone in plants. It not only plays a key role during the growth and development of plants but also closely relates to drought. Actually, many genes in plants are regulated by both ABA-dependent and ABA-independent pathways to respond to drought (Riyazuddin et al., 2022; Yao et al., 2021), such as dehydrin (DHN) genes.
In fact, Sobic.009G116700.v3.1 and Sobic.004G286600.v3.1 are possibly homologous with ERD10 and ERD14 in the DHN family. The DHN proteins are highly hydrophilic and perform multifaceted roles in the protection of plant cells under drought stress. For ABA-dependent pathways, the signal of drought stress is perceived by different receptors which may lead to an accumulation of ABA and decreased contents of other plant hormones. The activated hormonal signaling cascade may trigger the expression of different DHN genes that participate in drought stress tolerance by inhibiting the ROS accumulation and lipid peroxidation and protecting the photosynthetic machinery (Riyazuddin et al., 2022). For ABA-independent pathways, it is reported that fully intrinsically disordered DHN ERD14 protein might protect and even activate redox enzymes through the direct effect on the activity of glutathione transferase PHI9 in Arabidopsis, and thus help plants to survive oxidative stress under drought stress (Nguyen et al., 2020). At the same time, MAPK cascades are an important signaling module in responding to drought. It is demonstrated that the MAPK pathway involves in mRNA decapping via MPK6-DCP1-DCP5 pathway, playing a role in dehydration stress response (Xu and Chua, 2014). Sobic.003G229400.v3.1 is highly homologous with MPK3 and MPK6, which indicates the role of Sobic.003G229400.v3.1 under drought stress in Sorghum.
In addition to the hormone signal transduction pathways and the MAPK cascades, the WRKY TF family also plays an important role in responding to various abiotic stresses. Sobic.007G077466.v3.1 is homologous with WRKY75, which is identified as a crucial gene at Stage 3. It is reported that WRKY75 can participate in regulating gibberellin-mediated flowering time through the interaction with DELLAs, and it is involved in the growth of roots. It is also reported that PtrWRKY75 acts upstream of PAL1 and directly regulates the expression of PAL1 by binding to its promoter; the activated PAL1 increases the accumulation of ROS by promoting the biosynthesis of salicylic acid, which eventually narrows the stomatal pore, thereby enhancing the drought resistance of plants (Zhang et al., 2020). Moreover, Sobic.001G079500.v3.1, Sobic.001G095700.v3.1, Sobic.004G286600.v3.1, and Sobic.009G116700.v3.1 take part in responding to water deprivation; these genes are directly or indirectly regulated by Sobic.003G058200.v3.1. Sobic.003G229400.v3.1, Sobic.007G077466.v3.1, and Sobic.009G085100.v3.1 are also regulated by Sobic.003G058200.v3.1.
DISCUSSION
With global warming and the intensifying contradiction between water supply and demand, drought has become the most important abiotic factor affecting food production in the world. However, drought-responsive mechanisms of crops are still largely unknown. Sorghum is a typical crop with strong drought resistance, which is an ideal crop to explore drought-responsive mechanisms. The investigations of Sorghum are of great significance in cultivating novel drought-resistant varieties, and in promoting sustainable agricultural development.
In this paper, to explore drought-responsive crucial genes from RNA-seq data of Sorghum, we establish rigorous statistical procedures. Firstly, in order to exclude redundant genes and reduce the subsequent computational burden, the MV test is performed on samples at each stage; genes that show certain dependence on the treatments are retained as candidate genes for subsequent analysis. Secondly, based on the GCC, we construct a WGDCN for the candidate genes at each stage. It is reported that the GCC is more robust against data processing, and it is appropriate for evaluating nonlinear relationships under small sample sizes. Finally, the WGDCN and the HMRF model are combined to calculate the posterior probabilities of candidate genes. GO enrichment analysis reveals that the identified top-20 genes are enriched in drought-related biological processes. Gene sequence alignment analysis reveals that some genes are highly homologous with drought-related genes in Arabidopsis. Multilayer differential co-expression network analysis shows that considerable crucial genes can trigger differential co-expression patterns at different stages. Further, based on the PPI network in Arabidopsis and the predicted gene interactions in Sorghum, a possible drought-responsive module in Sorghum is established and discussed.
Besides the proposed method, there are many other methods to explore the data in this paper. For example, we recently proposed an algorithm to construct gene differential co-expression networks, and based on the GDCN and the traditional degree, closeness, and betweenness centralities, crucial genes that may be associated with drought stress can also be explored (Bi and Wang, 2022). Comparing the results from the HMRF-based method and the GDCN-based method, 4, 5, 5, 18, and 16 common crucial genes are selected in the top-20 ranking lists at the five stages, respectively (Table 2), which demonstrates the consistency of the proposed method with existing ones. More importantly, several different potentially critical genes are screened by the proposed HMRF-based method, including Sobic.003G229400.v3.1, Sobic.009G116700.v3.1, Sobic.001G095700.v3.1, Sobic.007G077466.v3.1, and Sobic.009G085100.v3.1. The additionally selected genes are demonstrated to be more likely to play a key role in the drought responsiveness of Sorghum (see Figure 5), which further reveals the merit of the proposed method.
It is noted that some of the findings in this paper coincide with existing works (Paterson et al., 2009). It is reported that drought responsiveness in Sorghum involves many biological processes (Paterson et al., 2009), including the response to salicylic acid, response to jasmonic acid, defense response, response to fungus, and regulation of defense response. Enrichment analysis in this paper shows that the identified crucial genes are enriched in these biological processes. It is also reported that DEGs for pre-flowering stages are more numerous than those for post-flowering stages, and changes of gene expression in pre-flowering stages are far more complex (Paterson et al., 2009). In this paper, the number of identified crucial genes for pre-flowering stages is likewise larger than that for post-flowering stages, and the overlaps of the top-20 ranked genes among the three pre-flowering stages are quite low, which coincides with the existing work. These results further support the effectiveness of the findings in this paper.
There are several advantages of this study. Firstly, different from existing methods (Hou et al., 2014), the proposed method relies only on RNA-seq data, which is appropriate for cases without GWAS signals. Secondly, the resolution of the proposed method is higher than that of the MV test, which indicates that the HMRF can more precisely distinguish the crucialness of genes in responding to drought stress in Sorghum.
Thirdly, the GCC-based approach to the WGDCN is appropriate for cases with small sample sizes, which overcomes the deficiency of traditional PCC- or SCC-based methods. Fourthly, the associated investigations consider different developmental stages of Sorghum; crucial genes are analyzed via a temporal multilayer differential co-expression network and a predicted gene interaction network; and a crucial gene regulatory module is established, which regulates drought responsiveness via plant hormone signal transduction, MAPK cascades, and transcriptional regulations.
This paper only explores crucial drought-responsive genes in the root parts of Sorghum; it would be interesting to further consider the data from the leaf parts. Moreover, based on the time series data of Sorghum, it is possible to construct multilayer co-expression networks and to further explore useful bioinformatics. It is also interesting to establish methods based on time series analysis to further explore the considered data. It is also noted that the proposed method can be used to explore other omics data for various organisms. All of the mentioned issues will be our future research directions. As a summary, the associated investigations not only provide rigorous theoretical foundations for exploring crucial phenotype-related genes from RNA-seq data but also provide promising target genes for the molecular breeding of improved Sorghum varieties.
Limitations of the study
There are some limitations in the current investigation. Firstly, the setting of hyperparameters in the HMRF needs to be further improved. For simplicity, we set the hyperparameters of the posterior probabilities of nodes as t_1 = t_2 = 0.01 (Method details), which actually assumes that the contribution of two genes that are both associated with drought stress is the same as when they are both unassociated. Another parameter, h, is determined by the 90% quantile of the potentially associated state, which mainly considers the parameter settings in previous research (Hou et al., 2014) and the characteristics of the actual data. Secondly, since GO annotations of genes in Sorghum are still largely incomplete (Paterson et al., 2009), the functions of some of the identified genes are unknown. Detailed biological experimental validations of the selected crucial genes need to be further performed. Thirdly, the proposed method relies on several hard cutoff thresholds. The hard cutoff threshold for the GCC determines the density of the constructed WGDCN, and the P value from the MV test determines the retained candidate genes. The selection of the cutoff thresholds mainly considers the balance between computational burden and information loss. Finally, the samples are manually divided into five developmental stages, which makes the numbers of samples at different stages comparable and considers the phenotypic features of Sorghum. It will be an interesting topic to group samples according to some properly designed algorithms for optimal partitioning of ordered samples (Fisher, 1958).
Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Our study does not use typical experimental models in the life sciences.
RNA-seq data for Sorghum
The RNA-seq data for Sorghum are obtained from NCBI under accession number GSE128441, as part of the five-year EPICON project (Varoquaux et al., 2019). In the EPICON project, field-based temporal transcriptomic data for two genotypes of Sorghum were sequenced. The two genotypes are the pre-flowering drought-tolerant genotype RTx430 and the post-flowering tolerant variety BTx642 (Smith et al., 1985; Thomas and Howarth, 2000). Three experimental settings are considered: pre-flowering drought, post-flowering drought, and normal watering. Almost 400 samples, ranging from week 3 to week 17, were collected weekly from leaves and roots of the two genotypes. Each sample detects the expression of 22,066 genes on average.
We consider the root samples of BTx642 under pre-flowering drought, post-flowering drought, and normal watering conditions. The main reasons are as follows. Firstly, the BTx642 plants can stay green and perform active photosynthesis under drought stress, demonstrating obvious drought resistance (Rosenow et al., 1983). Secondly, roots not only play an important role in absorbing water and nutrients but are also pivotal in responding to various adverse environmental stresses, such as drought and low temperature (Takeuchi et al., 2011). When a plant encounters drought, its roots can promptly sense the coercive changes and quickly make adaptive adjustments for self-growth. Additionally, an existing study reports that roots of Sorghum encompass more DEGs than leaves under drought stress at the seedling stage (Zhang et al., 2019). In the following, in order to comprehensively explore crucial genes that respond to drought stress at different developmental stages, we combine pre-flowering (from week 3 to week 8) and post-flowering (from week 9 to week 17) samples.

Here, F̂_q(x) is the empirical conditional distribution function of X_i given the response variable Y = q, F̂(x) = n⁻¹ Σ_{v=1}^{n} I{x_vi ≤ x} is the empirical unconditional distribution function of X_i, and p̂_q = n⁻¹ Σ_{v=1}^{n} I{y_v = q} denotes the sample proportion of the q'th category; I(·) is an indicator function. A larger T_n^(i) provides stronger evidence against the null hypothesis H_0, indicating that the correlation between X_i and the binary response variable Y is higher.
For small sample sizes, Cui and Zhong (2019) developed a permutation test to obtain the P value for the MV test. The procedure is as follows.

Step 1: Compute the MV test statistic for the given sample {(x_ji, y_j); j = 1, 2, ..., n} as T_n^(i) = n·MV̂(X_i | Y).

Step 2: Generate a permuted response sample Y* = (y*_1, y*_2, ..., y*_n)^T from the original response vector, and compute the corresponding MV index T_n^(i)* = n·MV̂(X_i | Y*).

Step 3: Repeat Step 2 K times to obtain K permuted MV statistics, and take the proportion of permuted statistics that reach or exceed T_n^(i) as the permutation P value P_MV^(i). In this paper, for each gene, we set K = 5000. If P_MV^(i) ≤ 0.01, the null hypothesis is rejected and we have reason to believe that there is correlation between X_i and Y; otherwise, the i'th gene is deemed independent of Y and is neglected in the subsequent analysis. A minimal implementation sketch is given below.
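The following Python sketch illustrates the whole test. The empirical MV index is written in the standard form MV̂(X|Y) = Σ_q p̂_q · mean[(F̂_q(x) − F̂(x))²], which is our reading of the statistic rather than a verbatim reproduction of the original implementation.

```python
import numpy as np

def mv_index(x, y):
    """Empirical mean-variance (MV) index of expression vector x against
    a categorical response y: sum over classes of p_q * mean squared
    distance between conditional and unconditional ECDFs."""
    F = np.array([np.mean(x <= v) for v in x])          # unconditional ECDF
    mv = 0.0
    for q in np.unique(y):
        mask = y == q
        F_q = np.array([np.mean(x[mask] <= v) for v in x])  # conditional ECDF
        mv += mask.mean() * np.mean((F_q - F) ** 2)
    return mv

def mv_permutation_pvalue(x, y, K=5000, seed=0):
    """Permutation P value: the fraction of permuted statistics that
    reach or exceed the observed T_n = n * MV(X|Y)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = n * mv_index(x, y)
    t_perm = np.array([n * mv_index(x, rng.permutation(y))
                       for _ in range(K)])
    return float(np.mean(t_perm >= t_obs))

# Example: one gene across 20 samples with a binary treatment label.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 10), rng.normal(2, 1, 10)])
y = np.array([0] * 10 + [1] * 10)
print(mv_permutation_pvalue(x, y, K=500))    # small K for illustration
```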
Weighted gene differential co-expression network
To reveal whether a gene can trigger differential co-expression patterns, or rewiring, between treatment and control, weighted gene differential co-expression networks (WGDCNs) will be constructed. Since the Pearson correlation coefficient (PCC) (Hudson et al., 2009) and the Spearman correlation coefficient (SCC) (Sedgwick, 2014) both rely on considerable numbers of samples and are sensitive to data processing (Wang, 2021), the gray correlation coefficient (GCC) will be used to evaluate the co-expression relationships between genes. Specifically, when the p'th gene is taken as a reference, the GCC between the p'th and the q'th genes can be obtained according to (Chen and Liu, 2021)

r_pq = (1/n) Σ_{k=1}^{n} [ min_{s∈{1,...,m}} min_{t∈{1,...,n}} |x_tp − x_ts| + r · max_{s∈{1,...,m}} max_{t∈{1,...,n}} |x_tp − x_ts| ] / [ |x_kp − x_kq| + r · max_{s∈{1,...,m}} max_{t∈{1,...,n}} |x_tp − x_ts| ].
(Equation 2)
Here, r_pq ∈ [0, 1], q = 1, 2, ..., m; r is called the resolution ratio, which is usually taken as 0.5; and m denotes the number of genes with P_MV ≤ 0.01. Since the GCC relies on a reference sequence, generally r_pq ≠ r_qp.
To overcome this disadvantage, we correct the GCC between the p'th and the q'th genes as (r_pq + r_qp)/2. Samples under treatments and controls are considered separately, and we denote r_pq^treat and r_pq^control as the corrected GCCs between the two genes in treated and control samples, respectively.
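As an illustration, Equation 2 and the symmetrized correction can be computed as follows. Note that the reference gene itself is excluded from the min term to avoid a trivial zero, which is an implementation choice on our part rather than something stated in the text.

```python
import numpy as np

def gcc(X, p, q, rho=0.5):
    """Gray correlation coefficient of gene q against reference gene p,
    following Equation 2. X is an (n_samples, m_genes) expression matrix;
    rho is the resolution ratio."""
    diffs = np.abs(X[:, [p]] - X)             # |x_tp - x_ts| for all t, s
    others = np.arange(X.shape[1]) != p       # exclude the reference itself
    d_min = diffs[:, others].min()
    d_max = diffs.max()
    dk = np.abs(X[:, p] - X[:, q])            # |x_kp - x_kq| over samples k
    return np.mean((d_min + rho * d_max) / (dk + rho * d_max))

def corrected_gcc(X, p, q, rho=0.5):
    """Symmetrized GCC, (r_pq + r_qp) / 2."""
    return 0.5 * (gcc(X, p, q, rho) + gcc(X, q, p, rho))

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 5))                  # 12 samples x 5 genes
print(corrected_gcc(X, 0, 1))
```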
Based on the GCC, the WGDCN for a specific developmental stage is constructed as follows. We set r_0 = 0.9 as a hard threshold (such a hard threshold mainly considers the density of the constructed network and information loss). If the correlation between two genes satisfies (r_pq^treat − r_0)(r_pq^control − r_0) ≤ 0, then genes p and q are differentially co-expressed between treatments and controls, and an undirected edge between the two genes is added. The edge weight is defined as

rewire_pq = r_pq^treat − r_pq^control.
(Equation 3)
Here, rewire_pq reflects the importance/strength of rewiring between the two genes at the given developmental stage.
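Putting the threshold rule and Equation 3 together, the network construction can be sketched with networkx; the gene names and correlation matrices below are toy placeholders.

```python
import itertools
import networkx as nx
import numpy as np

def build_wgdcn(r_treat, r_control, genes, r0=0.9):
    """Add an edge when the pair crosses the co-expression threshold r0
    between conditions, i.e. (r_treat - r0)(r_control - r0) <= 0; the
    edge weight rewire_pq = r_treat - r_control follows Equation 3
    (the sign encodes the direction of rewiring)."""
    G = nx.Graph()
    G.add_nodes_from(genes)
    for p, q in itertools.combinations(range(len(genes)), 2):
        rt, rc = r_treat[p, q], r_control[p, q]
        if (rt - r0) * (rc - r0) <= 0:
            G.add_edge(genes[p], genes[q], weight=rt - rc)
    return G

# Toy symmetric corrected-GCC matrices for three genes.
r_t = np.array([[1.0, 0.95, 0.6], [0.95, 1.0, 0.7], [0.6, 0.7, 1.0]])
r_c = np.array([[1.0, 0.5, 0.92], [0.5, 1.0, 0.8], [0.92, 0.8, 1.0]])
G = build_wgdcn(r_t, r_c, ["g1", "g2", "g3"])
print(G.edges(data=True))   # g1-g2 and g1-g3 cross the threshold
```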
Hidden Markov random field model
Suppose G = (V, ε) is an undirected graph, where V = {1, 2, ..., m} is the set of nodes (genes) and ε is the edge set; e_pq = 1 if the p'th and the q'th genes are connected, and their connection strength is rewire_pq (Equation 3). Denote by u_p the true association status of the p'th gene with drought stress: u_p = +1 if gene p is associated with drought stress, and u_p = −1 otherwise. For simplicity, u_p is called the label of gene p, and U = {u_1, u_2, ..., u_m} is called the label vector, or a configuration, for the node set V.
Assuming that neighboring genes tend to have similar association status (Chen et al., 2011; Hou et al., 2014), the probability distribution of a network configuration can be described by an Ising model (Kindermann and Snell, 1980), which is defined as

P(u_1, u_2, ..., u_m) ∝ exp{ h Σ_p I(u_p = +1) + t_1 Σ_{e_pq = 1, rewire_pq > d} rewire_pq · I(u_p = +1, u_q = +1) − t_2 Σ_{e_pq = 1, rewire_pq > d} rewire_pq · I(u_p = −1, u_q = −1) }.   (Equation 4)

Here, I(·) is an indicator function, and h, t_1, t_2 are hyper-parameters. h is a constant, defined via the probability of being drought-stress associated if the gene is isolated; t_1 represents the contributions of the rewired drought-associated gene pairs, while t_2 reflects the contributions of gene pairs that are not associated with drought stress (Chen et al., 2011). We take d = 0.95; rewire_pq > d indicates that the rewiring between genes p and q under treatments and controls is significant. It is noted that an underlying biological hypothesis behind model (Equation 4) is that the co-expression difference of genes under two different experimental conditions can actually reflect their phenotype differences. That is, the model follows the guilt-by-rewiring principle (Hou et al., 2014).
Based on the formula of conditional probability, we obtain

P(u_p = +1 | u_Np) = P(u_p = +1, u_Np) / P(u_Np),   P(u_p = −1 | u_Np) = P(u_p = −1, u_Np) / P(u_Np).
Furthermore, the conditional distribution of the association status for gene p can be obtained as

logit P(u_p = +1 | u_Np) = h + t_1 Σ_{e_pq = 1, rewire_pq > d} rewire_pq · I(u_q = +1) + t_2 Σ_{e_pq = 1, rewire_pq > d} rewire_pq · I(u_q = −1).
(Equation 6)
Here, N_p = {q : ⟨p, q⟩ ∈ ε} denotes the neighbor set of gene p; u_Np denotes the label set of gene p's neighbors; and logit(P) = ln(P/(1 − P)) is the logit function.
Given the joint probability of the labels for all genes, the posterior probability of a network configuration can be inferred through the following Bayesian framework:

P(U | x) ∝ f(x | U) · P(U).   (Equation 7)

Here, f(x | U) = Π_{p: u_p = −1} f_0(x_p) · Π_{p: u_p = +1} f_1(x_p). We can also obtain that

P(u_p = +1 | x, u_Np) ∝ f_1(x_p) · P(u_p = +1 | u_Np),   P(u_p = −1 | x, u_Np) ∝ f_0(x_p) · P(u_p = −1 | u_Np).

In Equation 7, the observed data x = (x_1, x_2, ..., x_m) are taken as the normalized scores transformed from the P values of the MV test: x_p = Φ⁻¹[1 − P_MV(p)], where Φ is the cumulative distribution function of the standard normal distribution. DEGs are defined as genes with |log2(FC)| > 1 and P < 0.05. Here, FC denotes the fold change between the average expression value under treatments and that under controls. In this paper, GO biological processes with P < 0.1 are considered.
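To make the label-updating machinery concrete, the sketch below runs iterated conditional modes over Equations 6 and 7. The emission densities f_0 = N(0, 1) and f_1 = N(μ_1, 1), the value of h, and the initialization are assumptions made for illustration; the paper's actual estimation procedure is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def icm_labels(x, edges, weights, h=-2.0, t1=0.01, t2=0.01, d=0.95,
               mu1=2.0, n_iter=30):
    """Iterated-conditional-modes label update for the HMRF.
    x: normalized MV scores x_p = Phi^{-1}(1 - P_MV(p)).
    edges/weights: WGDCN edge list and rewiring strengths.
    f0 = N(0,1), f1 = N(mu1,1), and h are illustrative assumptions."""
    m = len(x)
    u = np.where(x > 1.0, 1, -1)               # crude initialization
    logf0 = norm.logpdf(x, loc=0.0, scale=1.0)
    logf1 = norm.logpdf(x, loc=mu1, scale=1.0)
    nbrs = [[] for _ in range(m)]
    for (p, q), w in zip(edges, weights):
        if w > d:                              # only strong rewiring counts
            nbrs[p].append((q, w))
            nbrs[q].append((p, w))
    for _ in range(n_iter):
        for p in range(m):
            s_pos = sum(w for q, w in nbrs[p] if u[q] == 1)
            s_neg = sum(w for q, w in nbrs[p] if u[q] == -1)
            # log-odds of u_p = +1 from Equation 6 plus the data term
            logodds = h + t1 * s_pos + t2 * s_neg + logf1[p] - logf0[p]
            u[p] = 1 if logodds > 0 else -1
    return u

# Toy run: five genes, two strongly rewired edges.
x = np.array([2.5, 2.2, 0.1, -0.3, 1.9])
print(icm_labels(x, edges=[(0, 1), (3, 4)], weights=[0.97, 0.96]))
```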
QUANTIFICATION AND STATISTICAL ANALYSIS
All data are analyzed using R (http://www.R-project.org/, version 3.6.1) and Gephi (https://gephi.org/, version 0.9.5). Statistical tests for each analysis can be found in each figure or the main text. Here, DEGs are defined as genes with |log2(FC)| > 1 and P < 0.05 (also see method details and Figure 2A). In the mean variance test, genes with P_MV^(i) ≤ 0.01 are retained as candidate genes (also see method details). | 2022-06-18T15:14:37.942Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "d526bfc47c410d32e43a74943da1012e8855c223",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "01c9adeff95ff0afcb8842a45a2f243429374c10",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
1906408 | pes2o/s2orc | v3-fos-license | A System for Generating Multiple Choice Questions: With a Novel Approach for Sentence Selection
Introduction
MCQ generation is the task of generating questions from various text inputs having prospective learning content. The MCQ is a popular assessment tool used widely at various levels of educational assessment. Apart from assessment, the MCQ also acts as an effective instrument in active learning. It has been shown that, in an active learning classroom framework, the conceptual understanding of students can be boosted by posing MCQs on the concepts just taught (Mazur, 1997; Nicol, 2007). Thus the MCQ is becoming an important aspect of next-generation learning, training, and assessment environments.
Generating Multiple Choice Questions manually is a time-consuming and tedious task which also requires domain expertise. Therefore, an automatic MCQ generation system can leverage the active learning and assessment process.
Consequently, automatic MCQ generation became a popular research topic and a number of systems have been developed (Coniam, 1997; Pino, Heilman, & Eskenazi, 2008). Generating an MCQ automatically consists of three major steps: (i) selection of sentences from which a question can be generated, (ii) identification of the keyword which is the correct answer, and (iii) generation of distractors, i.e., the wrong answers (Delphine Bernhard, 2010).
Not all sentences of a textual document can be candidates for question sentences, or stems. A sentence that contains sufficient, quality information can act as an MCQ stem; moreover, a keyword and corresponding distractors should be available. Hence the target is to select only the informative sentences from which factual MCQs can be generated for testing the content knowledge of the learner. Therefore, sentence selection plays a pioneering role in the automatic MCQ generation task. Unfortunately, in the literature we have found that the sentence selection task has not received adequate attention from researchers. As a result, sentence selection has been confined to a limited number of approaches that only use a set of rules or check for the occurrence of a set of pre-defined features and patterns. The success of such approaches depends on the quality of the rules or features, and thus they become extremely domain reliant.
In this paper we propose an efficient technique for informative sentence selection and generation of MCQs from the selected sentences. Here we select the informative sentences based on certain words that are important for defining the domain or topic, and on parse structure similarity. The proposed system is robust and is expected to work in a wide range of domains. As input to the system we consider Wikipedia and news articles, which are trusted sources of information. To generate an MCQ from a sentence, we first perform a set of pre-processing tasks such as converting complex and compound sentences into simple sentences and co-reference resolution. Then we use topic modeling as another pre-processing step that finds the subject words or topics of the domain and checks whether the sentence contains any of these topics. This reduces our overhead in subsequent steps. We have found that two sentences containing similar parse structures are generally of a similar type and carry the same type of facts. Therefore, the parse structure of a sentence may play an important role in sentence selection. We collect a set of MCQs available on the Internet in the domain of interest and form sentences from them. Here we like to mention that we have chosen the sports domain, especially cricket, as a case study because of the wide availability of existing MCQs in this domain. We obtain the parse structures of these sentences, and the common structures are saved as a reference set. Next we compare the parse tree of an input sentence with the reference set structures. If the sentence has structural similarity with any of the reference set structures, then it is considered an informative sentence for MCQ stem generation.
Next we perform other subtasks namely, keyword selection and distractor generation. Keyword selection is done by a rule based approach based on cricket domain specific words and named entities (NE) in the sentence. Generation of distractors is done using a gazetteer list based approach. The following sections present the details of the system.
Previous Work
Generating Multiple Choice Questions automatically is a relatively new and important research area, potentially useful in Education Technology. Here we first discuss a few systems for MCQ generation. Coniam (1997) presented one of the earlier attempts at MCQ generation. They used word frequencies from an analyzed corpus in the various phases of the development, matching the parts-of-speech and word frequency of each test item with options of a similar word class and word frequency to construct the test items. Mitkov and Ha (2003) and Mitkov et al. (2006) used NLP techniques like shallow parsing, term extraction, sentence transformation, and computation of semantic distance in their work on generating MCQs semi-automatically from an electronic text. They performed term extraction from the text using frequency counts, generated stems using a set of linguistic rules, and selected distractors by finding semantically close concepts using WordNet. Brown (2005) developed a system for the automatic generation of vocabulary assessment questions. They used WordNet for finding definitions, synonyms, antonyms, hypernyms, and hyponyms in order to generate the questions as well as the distractors. Aldabe et al. (2006) and Aldabe and Maritxalar (2010) developed systems to generate MCQs in the Basque language. They divided the task into six phases: selection of text (based on learners and length of texts), marking blanks (manually), generation of distractors, selection of distractors, evaluation with learners, and item analysis. Papasalouros et al. (2008) proposed an ontology-based approach for the development of an automatic MCQ system. Another reported system automatically generates questions from natural language text using discourse connectives.
As in this paper we focus on sentence selection, we next discuss the sentence selection strategies used in various works. For MCQ stem generation, different types of rules have been defined manually or semi-automatically for selecting informative sentences from a corpus; these are discussed as follows. Mitkov et al. (2006) selected a sentence if it contains at least one term, is finite, and is of SVO or SV structure. Karamanis et al. (2006) implemented a module to select clauses having certain specific terms and to filter out sentences having terms inappropriate for multiple choice test item generation (MCTIG). For sentence selection, Pino et al. (2008) used a set of criteria such as the number of clauses, a well-defined context, the probabilistic context-free grammar score, and the number of tokens. They also manually computed a sentence score based on the occurrence of these criteria in a given sentence and selected the sentence as informative if the score was higher than a threshold. Another work used a number of features for sentence selection, such as: whether it is the first sentence, whether it contains a token that occurs in the title, the position of the sentence in the document, whether it contains abbreviations or superlatives, length, number of nouns and pronouns, etc. But they have not clearly reported what the optimum values of these features should be, how the features are combined, or whether there is any relative weighting among the features. Kurtasov (2013) applied some predefined rules that allow selecting sentences of a particular type. For example, the system recognizes sentences containing definitions, which can be used to generate a certain category of test exercise. For automatic cloze-question generation, Narendra et al. (2013) directly used a summarizer, MEAD, for the selection of important sentences. Bhatia et al. (2013) used a pattern-based technique for identifying MCQ sentences from Wikipedia. Apart from these rule- and pattern-based approaches, we also found an attempt at using a supervised machine learning technique for stem selection by Correia et al. (2012). They used a set of features like parts-of-speech, chunk, named entity, sentence length, word position, acronym, verb domain, known-unknown word, etc. to run a Support Vector Machine (SVM) classifier. Another approach was presented by Majumder and Saha (2015), which used named entity recognition-based rule mining along with syntactic structure similarity for sentence selection.
Pre-processing on Input Text
An MCQ is generally made from a simple sentence, but we have found that many of the Wikipedia and news article sentences are long, complex, and compound in nature. Moreover, a number of these sentences have co-reference issues. Our system first aims to identify informative sentences from Wikipedia and news articles for stem generation. The proposed technique is based on parse structure similarity; hence the structure of the sentences plays a major role in the task. In order to obtain better structural similarity, we first apply a few pre-processing steps that are discussed below.
Co-reference Resolution and Simple Sentence Generation
The first preprocessing step we employ is transforming complex and compound sentences into simple form. Moreover, to resolve the co-reference issues, we perform co-reference resolution. Co-reference is defined as the referring to the same object (e.g., a person) by two or more expressions in a corpus. For generating a question, the referent must be identified in such sentences. We consider the following sentence as an example: The 2012 ICC World Twenty20 was the fourth ICC World Twenty20 competition that took place in Sri Lanka from 18 September to 7 October 2012 which was won by the West Indies.
This sentence is complex in nature and it has a co-reference problem: 'that' and 'which' are referring to '2012 ICC World Twenty20'. A simple sentence is built up from one independent clause, whereas a compound or complex sentence consists of at least two clauses. So the task is to split a complex or compound sentence into clauses that can form simple sentences.
To convert the sentence into simple form, we use the openly available Stanford CoreNLP Suite. The tool does not directly convert complex and compound sentences into simple ones; it provides the parse result of the example sentence in Stanford typed dependency (SD) notation (Marneffe et al., 2008). We analyze the dependency structure provided by the tool in order to perform the conversion. We use the Stanford Deterministic Coreference Resolution System, which is a module of the Stanford CoreNLP Suite, for co-reference resolution. Finally, we get the following simple sentences from the aforementioned example sentence (a sketch of the clause-splitting step is given after the examples).
Simple1: The 2012 ICC World Twenty20 was the fourth ICC World Twenty20 competition.
Simple2: The 2012 ICC World Twenty20 took place in Sri Lanka from 18 September to 7 October 2012.
Simple3: The 2012 ICC World Twenty20 was won by the West Indies.
Subject or Topic Word Identification and Potential Candidate Sentence Selection
The sentence selection strategy for MCQ stem generation is based on parse tree similarity. We need to compare each input sentence with a reference set of structures to select it as the basis of an MCQ. But the size of the input text is huge, so comparing this vast number of sentences with the reference structures would be a gigantic task. To reduce this overhead we take the help of topic modeling, which identifies the topic words of the domain; if a test sentence does not contain a topic word, it is rejected. We also found that sentences containing a topic word are more informative than sentences that do not contain any domain- or topic-specific words. This approach identifies a set of potential candidate sentences and simplifies the task of parse tree comparison. We use the openly available Topic Modeling Tool (TMT, http://code.google.com/p/topic-modeling-tool/) to identify the topic words as well as the distribution of these words over the sentences. We run the topic modeling tool on the Wikipedia pages and news articles considered as input for sentence selection and obtain the topic words. Some of the identified topic words are 'World Cup', 'World Twenty20', 'Champions Trophy', 'Knock Out Tournament', 'Indian Premier League (IPL)', etc. We then check whether an input sentence contains any of these topic words.
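The filter itself reduces to a simple containment check. Below is a minimal Python sketch; the topic list and the case-insensitive phrase matching are illustrative assumptions, as the exact matching procedure is not specified in the text.

```python
# Hypothetical sketch of the topic-word filter; matching details are assumed.
TOPIC_WORDS = ["World Cup", "World Twenty20", "Champions Trophy",
               "Knock Out Tournament", "Indian Premier League", "IPL"]

def is_candidate(sentence, topic_words=TOPIC_WORDS):
    """Keep a sentence only if it mentions at least one domain topic word."""
    lowered = sentence.lower()
    return any(topic.lower() in lowered for topic in topic_words)

sentences = [
    "The 2012 ICC World Twenty20 was won by the West Indies.",
    "The weather in Colombo was humid that week.",
]
candidates = [s for s in sentences if is_candidate(s)]
print(candidates)  # only the first sentence survives the filter
```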
Sentence Selection for MCQ Stem Generation
The syntactic structure can play a key role in sentence selection for MCQs. The parse tree of a particular question sentence can retrieve many informative sentences that have a similar structure. For example, the aforementioned Wikipedia sentence 'Simple3' (in Section 3.1) states the fact that a team has won a series/tournament. The parse structure of the sentence is similar to that of many sentences carrying the 'team wins series' fact. Sentences like '1983 ICC World Cup was won by India.' and '2006 ICC Champions Trophy was won by Australia.' have similar parse trees, and these can be retrieved if the parse structure shown in Figure 1 is considered a reference structure. Based on this observation, we aim to collect a set of such syntactic structures that can act as references for retrieving new sentences from the web.
Reference Sentence Formation
For the parse tree matching we require a reference set of parse structures with which the input sentences will be compared. We compile the reference set from existing MCQs. We found that in the sports domain a large number of MCQs are available on the Internet, and we collected about 400 MCQs for the reference set creation. As discussed earlier, an MCQ is mainly composed of a stem and a few options, and the stems are generally interrogative in nature. Our system is supposed to identify informative sentences from Wikipedia and news articles, and most of the sentences in Wikipedia pages and news articles are assertive. In order to measure structural similarity, the reference sentences and the input sentences should be in the same form. Therefore we convert the collected stems into assertive form. For this conversion we replace the 'wh' phrase or the blank space of the stem by the first alternative of the option set. We would like to mention that in this phase our target is to compile a reference set containing a number of grammatically correct sentences, not to extract the facts from the existing MCQs. Even if the first option is not the correct answer to the given question, our target of reference set creation is satisfied. The set of sentences generated using this approach is referred to as the 'reference sentences'.
Parse Tree Comparison
We generate the parse trees of the reference set sentences using the openly available Stanford Parser. In the sports domain, the questions (MCQs) deal with facts embedded in the sentences. Therefore, the tense information of a sentence is not very important for question formation, but tense differences alter the parse structure. For example, in 'In the 2012 season Sourav Ganguly has been appointed as the Captain for Pune Warriors India.' and 'In the 2013 season Graeme Smith was announced as the captain for Surrey County Cricket Club.', the two sentences describe a similar type of fact, but their parse structures differ due to the difference in verb form. This phenomenon occurs in the 'noun' subclasses as well: singular vs. plural noun, common vs. proper noun, etc. For the sake of parse tree matching we use a coarse-grained tagset in which a set of subcategories of a particular word class is mapped to one broader category. From the original Penn Treebank tagset (Santorini, 1990) used in the Stanford Parser we derive the new tagset and modify the sentences accordingly. For this purpose we first create the parse trees and then replace the tags in the parse structures according to the new tagset.
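The coarse-graining step amounts to relabelling the parse tree. The following sketch uses NLTK's Tree class; the particular tag mapping is an illustrative assumption, since the derived tagset itself is not published here.

```python
# Illustrative coarse-graining of Penn Treebank tags; this mapping table is an
# assumption, the paper's actual derived tagset is not given.
from nltk import Tree

COARSE = {"VB": "V", "VBD": "V", "VBG": "V", "VBN": "V", "VBP": "V", "VBZ": "V",
          "NN": "N", "NNS": "N", "NNP": "N", "NNPS": "N",
          "JJ": "ADJ", "JJR": "ADJ", "JJS": "ADJ"}

def coarsen(tree):
    """Return a copy of the parse tree with fine-grained POS tags collapsed."""
    if isinstance(tree, str):  # a leaf, i.e. the word itself
        return tree
    label = COARSE.get(tree.label(), tree.label())
    return Tree(label, [coarsen(child) for child in tree])

t = Tree.fromstring("(S (NP (NNP India)) (VP (VBD won)))")
print(coarsen(t))  # (S (NP (N India)) (VP (V won)))
```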
Once we get the parse trees of the reference sentences and the test sentences, we need to find the similarity among them. To find the similarity between these parse trees we propose the Parse Tree Matching (PTM) algorithm.
The algorithm essentially tries to determine whether two sentences have a similar structure. The parse tree matching algorithm considers only the non-leaf nodes during the matching process; the words that occur as leaves of the tree play no role in the matching.
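A minimal re-implementation sketch of this idea is given below, again using NLTK trees; the actual PTM algorithm may differ in details that the paper does not spell out.

```python
# Sketch of the PTM idea: two parse trees match if their internal (non-leaf)
# structure is identical; the words at the leaves are ignored.
from nltk import Tree

def ptm_match(t1, t2):
    """True if t1 and t2 have the same labelled non-leaf structure."""
    leaf1, leaf2 = isinstance(t1, str), isinstance(t2, str)
    if leaf1 or leaf2:  # leaf words are not compared, only their presence
        return leaf1 == leaf2
    if t1.label() != t2.label() or len(t1) != len(t2):
        return False
    return all(ptm_match(c1, c2) for c1, c2 in zip(t1, t2))

a = Tree.fromstring("(S (NP (N Cup)) (VP (V won) (PP (IN by) (NP (N India)))))")
b = Tree.fromstring("(S (NP (N Trophy)) (VP (V won) (PP (IN by) (NP (N Australia)))))")
print(ptm_match(a, b))  # True: same structure, different leaf words
```

Running this predicate pairwise over the reference set, as described next, leaves one representative tree per distinct structure.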
We found that some of the reference sentences have similar parse structures. Therefore, we first run the PTM algorithm among the parse trees generated from the reference set of sentences to find the unique set of structures. During this phase, argument 'T1' of the algorithm is the parse tree of one reference set sentence and argument 'T2' is the parse tree of another reference set sentence. We run this algorithm for several iterations, keeping 'T1' fixed and varying 'T2' over all the parse trees.
The sentences for which matches are found are basically of a similar type, so we keep only one of them in the reference set and discard the others. By applying this procedure we finally generate a reduced set of parse structures.
Once the reference structures are finalized, we use them to find new Wikipedia and news article sentences with a similar structure. For this purpose we run the proposed PTM algorithm repeatedly in the same way as above. Here we set argument 'T1' to the parse structure of a test sentence and argument 'T2' to a reference structure. We fix 'T1' and vary 'T2' over the reference structures until a match is found or the end of the reference set is reached. If a match is found, the sentence (whose structure is 'T1') is selected. After this phase we have successfully selected a set of sentences that is used to form MCQ stems. Keyword extraction and distractor generation are also performed on these selected sentences. Question formation, keyword identification, and distractor generation are discussed in the following sections.
Keyword Identification, Question Formation and Distractors Generation
An MCQ consists of a stem along with an option set that contains the keyword and the distractors. Therefore, we need to identify the keyword and form the distractors to generate a multiple choice question.
Keyword Identification
Keyword identification is the next phase, in which we select the word (or n-gram) that has the potential to become the right answer of the MCQ. We found that these potential sentences follow particular patterns involving specific named entities (NEs). For the identification of these keys we take the help of the named entity recognition (NER) system developed by Majumder and Saha (2014). Domain-specific words like tournament, series, trophy, captain, wicket, bowler, batsman, wicket-keeper, umpire, pitch, opening ceremony, etc. are also very important for identifying these patterns in the sentences; therefore, we also compiled a list of such domain-specific words. For example, the pattern "opening ceremony was held in" retrieves sentences containing the name of the location (city name or ground name) where the opening ceremony of a tournament was held; the key for this pattern is therefore the location name in the retrieved sentence. Similarly, the pattern "the man of the tournament" extracts sentences naming the player who received the man of the tournament award in a particular tournament; here the key for the pattern is the person name. The pattern "team won the tournament/series" retrieves the team or country name that won the series or tournament; the corresponding key is therefore the country, team, or franchise name. The sentences are tagged using the NER system and the corresponding entity is selected as the key.
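A simplified sketch of this pattern-to-key mapping is shown below; the pattern table and the (token span, NE type) interface are hypothetical stand-ins for the NER system of Majumder and Saha (2014).

```python
# Hypothetical pattern-based key selection; patterns and NE labels are assumed.
PATTERN_KEY_TYPE = {
    "opening ceremony was held in": "LOCATION",
    "the man of the tournament": "PERSON",
    "was won by": "ORGANIZATION",  # team/country/franchise name
}

def select_key(sentence, ner_tags):
    """Pick as key the entity whose NE type matches the triggered pattern.

    ner_tags: list of (token_span, ne_type) pairs, assumed to come from an
    external NER system.
    """
    for pattern, wanted_type in PATTERN_KEY_TYPE.items():
        if pattern in sentence:
            for span, ne_type in ner_tags:
                if ne_type == wanted_type:
                    return span
    return None

sent = "The 2012 ICC World Twenty20 was won by the West Indies."
tags = [("The 2012 ICC World Twenty20", "EVENT"),
        ("the West Indies", "ORGANIZATION")]
print(select_key(sent, tags))  # 'the West Indies'
```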
Question Formation
After the keyword is identified, we can form the question by replacing it with the proper 'wh'-word. We also consult the parse tree of the sentence to bring the 'wh'-word to the appropriate position in the stem of the MCQ. An appropriate 'wh'-word is selected for each type of keyword: for example, if the category is location the 'wh'-word is where; similarly, for person: who, for date: when, for number: how many, etc.
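The wh-word choice itself is a small lookup. The sketch below illustrates stem formation with a blank and the suggested wh-word; the plain string replacement glosses over the parse-tree-guided repositioning described above, and the category names are illustrative.

```python
# Hypothetical mapping from key type to wh-word, following the examples above.
WH_WORD = {"LOCATION": "where", "PERSON": "who", "DATE": "when",
           "NUMBER": "how many"}

def form_stem(sentence, key, key_type):
    """Blank out the key and suggest the wh-word for an interrogative stem."""
    return sentence.replace(key, "_____"), WH_WORD.get(key_type, "what")

stem, wh = form_stem("The 2012 ICC World Twenty20 was won by the West Indies.",
                     "the West Indies", "ORGANIZATION")
print(stem)  # 'The 2012 ICC World Twenty20 was won by _____.'
print(wh)    # 'what' (no specific wh-word is listed for organizations)
```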
Distractors Generation
Distractors are closely related to the keyword; they are the distractions from the right answer in an MCQ. In the cricket domain the majority of the distractors are named entities. We first identify the class of the key and then search for a few close members using a gazetteer-list-based approach.
We compile a few gazetteer lists from the web. In the cricket domain the major categories of keys (or distractors) are: person names (cricketer, bowler, batsman, wicketkeeper, captain, board president, team owner, etc.), organization names (country name, franchise name, cricket boards like the ICC, etc.), event names (cup, tournament, trophy, championship, etc.), and location names (cricket ground, city, etc.). For each name category we extract lists of names from relevant websites; for example, for cricketers we search the Wikipedia, Yahoo! Cricket, and Espncricinfo player lists. We then search for the key in these lists to determine its class.
For each name category we select a set of attributes. Wikipedia pages normally contain an information template on the title (at the top-right portion of the page) with a set of properties defining the class. Additionally, the majority of cricket-related pages contain a table summarizing the topic; the fields of these tables are extracted to become members of the attribute set. For example, for the category batsman, the attribute set may include date of birth, span, team name, batting style, last match, total runs, batting average, strike rate, number of centuries, number of half-centuries, highest score, etc. The detailed strategy is discussed below.
Next we search Wikipedia for a list of related tokens of the same category. For a cricketer key we run the search query "list of <national side> cricketers"; if the 'is-captain' attribute value is true, the query is "List of <national side> national cricket captains". From the resulting Wikipedia pages we extract a set of similar entities, defined as entities that share certain attribute values with the key. We have predefined a set of 'important' attributes for each class. For the cricketer class we consider the attributes country, span (overlapping), batting average (difference less than ten), or bowling average (difference less than five). Similarly, for the ground class we use only the country attribute; for the team class we consider the country and common trophy/tournament attributes as important. Entities that match on the important attributes are considered candidate distractors, and from these candidates we randomly pick three to four entities as the list of distractors.
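The sketch below illustrates this attribute-based candidate filtering for the cricketer class; the thresholds follow the text, while the data structures and sampling are our own simplification.

```python
# Sketch of attribute-based distractor selection; the attribute thresholds
# follow the paper, the dictionaries and sampling are illustrative.
import random

def is_similar(key, cand):
    """Candidate shares the 'important' attributes of a cricketer key."""
    return (cand["country"] == key["country"]
            and cand["span"][0] <= key["span"][1]   # overlapping careers
            and cand["span"][1] >= key["span"][0]
            and abs(cand["bat_avg"] - key["bat_avg"]) < 10)

def pick_distractors(key, pool, k=3):
    candidates = [c for c in pool
                  if c["name"] != key["name"] and is_similar(key, c)]
    return random.sample(candidates, min(k, len(candidates)))

key = {"name": "Player A", "country": "India", "span": (2005, 2015), "bat_avg": 45.0}
pool = [{"name": "Player B", "country": "India", "span": (2004, 2014), "bat_avg": 40.1},
        {"name": "Player C", "country": "India", "span": (2008, 2018), "bat_avg": 51.2},
        {"name": "Player D", "country": "Australia", "span": (2005, 2015), "bat_avg": 44.0}]
print(pick_distractors(key, pool))  # B and C qualify; D fails the country check
```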
Results and Discussion
We have already mentioned that the system is tested on cricket-related Wikipedia pages and news articles. In order to evaluate the performance of the sentence selection module we consider the quality of each retrieved sentence, that is, whether it is really able to act as an MCQ stem.
There is no benchmark or gold-standard data for this task. In order to evaluate the performance of the system we took a few Wikipedia pages and news articles as input and ran the system on them. The question formation capability of the retrieved sentences was examined by a set of human evaluators, who counted the number of sentences with the potential to become the basis of an MCQ ('correct retrievals'). The average percentage of correct retrievals is considered the accuracy of the system.
For computing the accuracy of the system we consider six Wikipedia pages: the pages on the 2003, 2007, and 2011 ICC Cricket World Cups, the ICC Champions Trophy, IPL 2014, and the T20 World Cup 2014. We also consider four sports news articles related to the T20 World Cup 2014 from The Times of India, a popular English daily of India, namely 'Sri Lankans Lord Over India', 'Yuvi cuts a sorry figure in final', 'Virat, the lone man standing for India', and 'Mahela, Sangakkara bow out on a high'. Only the text portions of these pages are taken as input, containing a total of ~795 sentences. From this input text, ~508 sentences were selected after the topic-word based filtering. We then applied the parse tree matching algorithm, which finally retained 112 sentences. These sentences were examined by five human evaluators, who considered 105, 104, 103, 106, and 104 sentences, respectively, as correct retrievals. The accuracy of the system is therefore 93.21%. Table 1 summarizes the accuracy of the system. From the evaluation scores given by the human evaluators it is clear that the proposed system is capable of retrieving quality sentences from an input document. In addition to the correct retrievals, the system also selects a few sentences that are not considered 'good' by the evaluators. We analyzed these sentences; two examples are listed below:
Netherlands and Canada were both appearing in the Cricket World Cup for the second time.
Ireland had been the best-performing associate member since the previous World Cup.
These sentences contain the topic words and match the reference set structures, but they are missing some important information, which leaves the expressed facts incomplete: the time or year related information is absent in both sentences. A modified topic modeling system could be used that treats a tournament name together with a year as a topic, but not the tournament name alone, without the year.
Compared with the existing technique (Majumder and Saha, 2015), the proposed technique identifies a larger number of sentences after the pre-processing and post-processing steps. Omitting the domain-specific word and NER-based rule mining restrictions not only makes the proposed system more domain independent but also lets it outperform the existing system in terms of the number of sentences selected.
Next we measure the performance of the overall MCQ system. After sentence selection, key selection and distractor generation are the major modules. We evaluate the performance of these modules using key selection accuracy (whether the key is selected properly) and distractor quality (whether the distractors are good). Again we employ the human evaluators to assess the system. The average evaluation accuracy of key selection is 83.03% (93 out of 112), and for distractor quality the accuracy is 91.07% (102 out of 112).
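As a worked check, the reported figures can be reproduced from the evaluator counts, assuming each count is divided by the 112 retrieved sentences and the resulting percentages are averaged:

```python
# Worked check of the reported accuracies under the stated assumption.
evaluator_counts = [105, 104, 103, 106, 104]  # 'correct retrievals' per evaluator
retrieved = 112

retrieval_acc = sum(c / retrieved for c in evaluator_counts) / len(evaluator_counts)
key_acc = 93 / retrieved
distractor_acc = 102 / retrieved

print(f"{retrieval_acc:.2%}")   # 93.21%
print(f"{key_acc:.2%}")         # 83.04% (reported as 83.03%)
print(f"{distractor_acc:.2%}")  # 91.07%
```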
A few examples of the generated MCQs are given below:
Conclusion
In this paper we have presented a novel technique for selecting informative sentences for multiple choice question generation from an input corpus. The proposed technique selects informative sentences based on topic words and parse structure similarity. The system also uses a set of pre-processing steps such as sentence simplification and coreference resolution. The selected sentences are used in the key selection and distractor generation modules to make a complete automatic MCQ system. We tested the system in the sports domain using Wikipedia pages and news articles as the input corpus, but we believe the system is generic and expect it to work well in other domains too. We have studied the false identifications in depth and observed that the accuracy of the system can be further improved by incorporating better pre-processing and post-processing steps. A deeper coreference resolution system could remove a number of semi-informative sentences, and better identification of domain-specific phrases or topics could also help handle a number of false detections. These observations will guide our future work.
"year": 2015,
"sha1": "39fb1b6052f8efa9d424723e15b73435d7756cfa",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W15-4410.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "39fb1b6052f8efa9d424723e15b73435d7756cfa",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Comparison of Synovial Fluid and Serum Procalcitonin for Diagnosis of Periprosthetic Joint Infection: A Pilot Study in 32 Patients
Background: Periprosthetic joint infection (PJI) remains challenging since a "gold standard" for diagnosis has not yet been established. This study aimed to evaluate the accuracy of synovial fluid procalcitonin (SF-PCT) and serum procalcitonin as diagnostic biomarkers for PJI and to compare their accuracy against standard methods. Methods: A prospective cohort study was conducted during 2015-2017 in 32 patients with painful hip or knee arthroplasty who underwent revision surgery. Relevant clinical and laboratory data were collected. PJI was diagnosed based on the 2013 international consensus criteria. A preoperative blood sample and intraoperatively acquired joint fluid were taken for PCT measurement with a standard assay. Diagnostic accuracy was analyzed by the receiver-operating characteristic curve and the area under the curve (AUC). Results: Twenty patients (62.5%) were classified as the PJI group, and 12 (37.5%) were classified as the aseptic loosening group. The median age was 68 years (range 38-87 years). The median values of SF-PCT and serum PCT in the PJI group were both significantly higher than those in the aseptic loosening group: the median serum PCT level (interquartile range: IQR) was 0.33 ng/mL (0.08-2.79 ng/mL) in the PJI group compared with 0.04 ng/mL (0.03-0.06 ng/mL) in the aseptic loosening group, and the median SF-PCT level (IQR) was 0.16 ng/mL (0.12-0.26 ng/mL) in the PJI group compared with 0.00 ng/mL (0.00-0.00 ng/mL) (p < 0.001 for both). SF-PCT, with a cut-off level of 0.08 ng/mL, had an AUC of 0.87, a sensitivity of 90.0%, a specificity of 83.3%, and a negative likelihood ratio (LR-) of 0.12. Serum PCT, with a standard cut-off level of 0.5 ng/mL, had an AUC of 0.70, a sensitivity of 40.0%, a specificity of 100.0%, and an LR- of 0.60. Conclusion: SF-PCT appears to be a reliable test and could be useful as an alternative indicator or in combination with standard methods for diagnosing PJI.
Introduction
Periprosthetic joint infection (PJI) is a serious complication after total joint arthroplasty resulting in devastating consequences, such as revision surgery, limb loss, or death [1][2][3]. However, the diagnosis of this condition is difficult and often delayed, especially with chronic or low-grade infections, due to the lack of a "gold standard" examination and the limited accuracy of current diagnostic methods. Therefore, a combination of preoperative and intraoperative markers, including synovial fluid cell count/differential, serum inflammatory markers, cultures, clinical signs, and tissue pathology, is required for PJI diagnosis. Recent studies of new diagnostic techniques demonstrated that some biomarkers, such as procalcitonin (PCT), interleukin-6 (IL-6), and alpha-defensin, are helpful and better markers for PJI [4][5][6]. Moreover, several studies also showed that synovial fluid biomarkers obtained directly from the infected joint are more reliable and accurate for diagnosing PJI than serum biomarkers and other existing tests [7,8].
PCT, the precursor of calcitonin, is a 116-amino-acid protein produced by neuroendocrine cells and the parafollicular cells of the thyroid. The serum PCT level is generally very low (< 0.05 ng/mL) in healthy subjects [9] but specifically elevated in bacterial and fungal infections [10]. It is also unresponsive or only mildly reactive to aseptic inflammation and viral infection [11]. Therefore, numerous studies have shown its ability to differentiate septic arthritis from aseptic conditions [12][13][14]. Regarding accuracy in PJI diagnosis, a recent meta-analysis showed that serum PCT had a pooled sensitivity and a pooled specificity for detecting PJI of 53% and 92%, respectively [15]. However, to the best of our knowledge, while serum PCT seems reliable [4,15,16], only a few studies have addressed the efficacy of synovial fluid PCT (SF-PCT) for PJI diagnosis [13], and its diagnostic utility has not been clearly established. The aim of this study was to assess synovial fluid and serum levels of PCT as diagnostic tools for PJI and to evaluate their diagnostic accuracy compared with the standard tests.
Study Design, Inclusion, and Exclusion Criteria.
This study was designed as a single-center prospective cohort study in a medical university hospital and was approved by the institutional review board (Faculty of Medicine Ramathibodi Hospital, Mahidol University: Protocol number ID 05-58-01). All patients signed informed consent forms prior to being enrolled. The study was conducted in accordance with the Declaration of Helsinki. Between 2015 and 2017, patients undergoing revision hip or knee arthroplasty were recruited into this prospective study. The inclusion criteria were (1) pain at the site of total hip or total knee arthroplasty that prompted a clinical evaluation for infection or possible revision hip or knee arthroplasty, (2) no history of previous septic arthritis treatment or previous septic revision surgery, (3) sufficient synovial fluid for the study methods, and (4) sufficient clinical and laboratory data for PJI classification according to the criteria of the International Consensus Meeting on Periprosthetic Joint Infection 2013 [17] (Tables 1 and 2). Patients were excluded if they had received any antibiotics or joint puncture treatments prior to enrollment in the current study. All patients underwent a standard diagnostic evaluation for PJI. Preoperative blood samples were taken for complete blood count (CBC), erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and PCT. Joint aspiration was done intraoperatively before opening the joint capsule, and the synovial fluid was sent for cell differentiation, cell count, Gram stain, aerobic culture, and PCT. Intraoperative frozen sections were performed. Periprosthetic tissue from five different areas (joint capsule, synovial lining, intramedullary material, granulation tissue, and bone fragments) was delivered for microbiology and histology.
Determination of the Levels of Serum and Synovial Fluid PCT
PCT levels were quantified using a standard quantitative PCT enzyme immunoassay kit according to the manufacturer's instructions (Elecsys BRAHMS PCT test, Roche Diagnostics Ltd., Switzerland) on the Roche Cobas e601 analyzer. The lower limit of detection was 0.02 ng/mL. The specimens, either blood or synovial fluid, were collected, kept at room temperature (10°C-25°C), and measured within 2 hours. When synovial fluid could not be measured within 2 hours, the specimen was kept at approximately 2°C-8°C and measured within 24 hours. Due to the high viscosity of synovial fluid, the specimen was diluted at a ratio of 1:4 (100 µL of synovial fluid sample with 300 µL of normal saline). The synovial PCT level was therefore calculated as the measured PCT value multiplied by 4, yielding values in steps such as 0.08, 0.12, and 0.16 ng/mL.
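As a small worked example of this dilution correction (with illustrative values, not patient data):

```python
# Sketch of the 1:4 dilution correction described above (factor 4).
DILUTION_FACTOR = 4        # 100 µL synovial fluid + 300 µL saline
ASSAY_LOWER_LIMIT = 0.02   # ng/mL, lower limit of detection of the assay

def synovial_pct(measured_ng_per_ml):
    """Back-calculate the undiluted synovial fluid PCT concentration."""
    if measured_ng_per_ml < ASSAY_LOWER_LIMIT:
        return 0.0         # below detection; reported as 0.00 ng/mL
    return measured_ng_per_ml * DILUTION_FACTOR

print(synovial_pct(0.04))  # 0.16 ng/mL, the median SF-PCT of the PJI group
```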
Statistical Analysis.
Statistical analysis was carried out with MedCalc Statistical Software version 15.8 (MedCalc Software bvba, Ostend, Belgium). Normally distributed continuous data were shown as mean ± standard deviation (SD), and non-normally distributed data as median with interquartile range (IQR) or range.
Results
A total of 32 patients (5 revision hip arthroplasties and 27 revision knee arthroplasties) were recruited into our prospective study between 2015 and 2017. According to the International Consensus Criteria on PJI [17], 20 patients (20 revision knee arthroplasties) were classified in the PJI group and 12 patients (5 revision hip arthroplasties and 7 revision knee arthroplasties) in the aseptic group. The patient characteristics are presented in Table 3. There were 7 males (22%) and 25 females (78%). The median patient age was 68 years (range 38-87 years). The mean BMI was 26.9 ± 4.0 kg/m², and the median CCI was 3 (range 0-9). Two patients had preexisting rheumatoid arthritis (1 patient in each group) and were receiving immunomodulating drugs. No significant differences existed in age, BMI, operated side, CCI, presence of systemic inflammatory disease, or concomitant immunomodulating drugs between the groups. However, the PJI group had significantly higher proportions of male gender and revision knee arthroplasties, as well as higher body temperature and serum WBC count, than the aseptic group (p < 0.05 for all).

Tables 4 and 5 present the relevant clinical and laboratory findings according to the PJI definition [17] in both groups and the microbiological findings of our study. Of the 20 patients with PJI, 13 (65%) had a positive synovial fluid culture and 14 (70%) had a positive tissue culture. The most common microorganisms in culture-positive PJI were streptococci (n = 7, 50%). The PJI group demonstrated significantly higher values of serum and synovial fluid infection markers than the aseptic loosening group (p < 0.001 for all). The median serum PCT level (interquartile range: IQR) in the PJI and aseptic groups was 0.33 (0.08 to 2.79) and 0.04 (0.03 to 0.06) ng/mL, respectively (p < 0.001). The median SF-PCT (IQR) in the PJI and aseptic groups was 0.16 (0.12 to 0.26) and 0.00 (0.00 to 0.00) ng/mL, respectively (p < 0.001). Within the aseptic loosening group, the median serum PCT and SF-PCT values from revision hip arthroplasties (0.04 and 0.00 ng/mL) did not differ significantly from those from revision knee arthroplasties (0.04 and 0.00 ng/mL) (p = 0.400 and p = 0.287, respectively).

Table 6 and Figure 1 show the diagnostic accuracy of serum PCT and SF-PCT for PJI diagnosis at each cut-off value and the ROC curve comparison between serum PCT and SF-PCT. The cut-off references of serum PCT were set at 0.1, 0.3, and 0.5 ng/mL, whereas those of SF-PCT were set at 0.08, 0.12, and 0.16 ng/mL. For the serum PCT test with the standard cut-off level of 0.5 ng/mL, the sensitivity, specificity, LR+, and LR- were 40.0%, 100.0%, not available, and 0.60, respectively. With the lower cut-off level of 0.1 ng/mL, the serum PCT test showed a sensitivity, specificity, LR+, and LR- of 65.0%, 91.7%, 7.80, and 0.38, respectively. The AUCs at the 0.5 and 0.1 ng/mL cut-off levels were 0.70 and 0.78, respectively. For the SF-PCT test, the cut-off value of 0.08 ng/mL resulted in a sensitivity of 90.0%, a specificity of 83.3%, an LR+ of 5.40, and an LR- of 0.12. The higher cut-off level of 0.12 ng/mL showed a sensitivity of 80.0%, a specificity of 91.7%, an LR+ of 9.40, and an LR- of 0.22. The AUCs at the 0.08 and 0.12 ng/mL cut-off levels were 0.87 and 0.86, respectively (Table 6).
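As a worked check of these figures, the SF-PCT metrics at the 0.08 ng/mL cut-off can be reproduced from the 2x2 counts implied by the group sizes, assuming 18 of 20 PJI and 10 of 12 aseptic patients were classified correctly:

```python
# Reconstruction of the SF-PCT diagnostic metrics at the 0.08 ng/mL cut-off,
# assuming 18/20 true positives and 10/12 true negatives (implied by the text).
tp, fn = 18, 2   # PJI patients above / below the cut-off
tn, fp = 10, 2   # aseptic patients below / above the cut-off

sensitivity = tp / (tp + fn)              # 0.90
specificity = tn / (tn + fp)              # 0.833
lr_pos = sensitivity / (1 - specificity)  # 5.4
lr_neg = (1 - sensitivity) / specificity  # 0.12

print(f"sens={sensitivity:.1%} spec={specificity:.1%} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```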
Discussion
Periprosthetic joint infection (PJI) is one of the most severe and costly complications following total joint arthroplasty. Although there is an international consensus on the definition of PJI, no single "gold standard" test currently exists for diagnosing PJI. Recently, many studies have reported the usefulness of synovial fluid cytokines, such as interleukin-6, C-reactive protein, and alpha-defensin, as alternative and better diagnostic markers for PJI compared to the standard technique [7,8,[18][19][20][21]. The overall sensitivity and specificity of these markers were more than 80% and 90%, respectively [7]. However, to our knowledge, although some biomarkers have demonstrated excellent diagnostic performance for PJI, comparisons of diagnostic accuracy between these biomarkers did not achieve statistical significance [8]. Moreover, according to the current evidence on these new biomarkers, serum PCT is a promising and reliable test, but the utility of synovial fluid PCT for detecting PJI has not been clearly demonstrated.
The results of this study show that both serum PCT and SF-PCT could be used as diagnostic biomarkers to support clinicians in differentiating PJI from aseptic loosening. The PJI group had significantly higher serum PCT and SF-PCT values, as did serum ESR and CRP (p < 0.001 for all), compared with the aseptic loosening group (Table 4). Using ROC curve analysis, the present study demonstrates that serum PCT with the standard cut-off level of 0.5 ng/mL (a sensitivity of 40%, a specificity of 100%, and an AUC of 0.70) is comparable to serum PCT in the previous meta-analysis (pooled sensitivity of 53%, pooled specificity of 92%, and AUC of 0.76) [15]. Additionally, this study also reveals that with a lower serum PCT cut-off level of 0.1 ng/mL, the diagnostic accuracy of serum PCT could be further improved to a sensitivity of 65% with persistently good specificity (92%) and an AUC of 0.78 (Table 5). However, serum PCT with a lower cut-off level should be used with caution and may require larger future studies before implementation.
Regarding the ROC curve analysis, SF-PCT showed the ability to be a more valuable biomarker than serum PCT for identifying PJI versus aseptic loosening. With a cut-off level of 0.08 ng/mL, SF-PCT showed the greatest accuracy, with a sensitivity of 90%, a specificity of 83%, an LR+ of 5.40, an LR- of 0.12, and an AUC of 0.87 (Table 5). The high sensitivity, high LR+, and very low LR- are all good indicators for ruling the diagnosis of PJI in and out [22], especially in preoperative and intraoperative settings. This is because the PCT test takes only 30 minutes to perform in the laboratory and may have the potential to become a point-of-care test in patients from whom synovial fluid is obtained. Additionally, this study also found that, in the PJI group, the concentration of PCT in blood (median value 0.33 ng/mL, interquartile range 0.08 ng/mL-2.79 ng/mL) seemed to be about two times greater than that in joint fluid (median value 0.16 ng/mL, interquartile range 0.12 ng/mL-0.26 ng/mL). However, the difference did not reach statistical significance (p = 0.20) (Table 4). This could imply that the cut-off reference for SF-PCT should differ from that for serum PCT, as is the case for other synovial fluid biomarkers [21].
Concerning the comparison of diagnostic performance between synovial fluid biomarkers for PJI diagnosis, although this study demonstrated good accuracy of SF-PCT with an AUC of 0.87, this diagnostic accuracy appeared to be slightly inferior to biomarkers from previous studies, such as CRP, IL-6, and alpha-defensin, with AUCs between 0.90 and 0.99 [18,20,23,24]. However, given the previously noted potential of the PCT test, we still recommend using SF-PCT as a complementary tool alongside the standard techniques for diagnosing PJI.
This study also had some limitations. Firstly, owing to its prospective cohort design at a single university hospital center, the sample size was relatively small and included both knee and hip patients. Future longitudinal studies with a larger sample size and a separate analysis for revision knee and hip arthroplasties would be required to determine the usefulness of SF-PCT for detecting PJI. Secondly, this study did not include patients with prior antibiotic therapy or with concomitant diseases that might affect SF-PCT, such as crystal-induced arthritis or malignancy [13]. Lastly, a diagnostic accuracy comparison between SF-PCT and other biomarkers was not performed; however, information on the other biomarkers has already been published.
Conclusion
The accuracy of SF-PCT was significantly higher than that of serum PCT. Therefore, SF-PCT may be used as an alternative indicator in the differential diagnosis of PJI from aseptic loosening in cases where patients are undergoing revision hip or knee arthroplasty. However, further prospective studies with a larger sample size are required to validate the usefulness of SF-PCT.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
All of the authors declare that they have no conflicts of interest.
Authors' Contributions
Paphon Sa-ngasoongsong and Siwadol Wongsak were the main researchers who designed and performed this study and prepared the manuscript. Chavarat Jarungvittayakon, Kawee Limsamutpetch, and Thanaphot Channoom were orthopaedic trauma surgeons who assisted in data collection and manuscript preparation. Viroj Kawinwonggowit was the senior orthopaedic consultant who assisted in the research process.
"year": 2018,
"sha1": "57fe5ead5d685695622e910dbafd7a7b013a61c3",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2018/8351308.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8fef17556c55fc9cfd1300ec65eb044db14d15b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of a Common Eight Base Pairs Duplication at the Exon 7-Intron 7 Junction on Splicing, Expression, and Function of OCT1
Organic cation transporter 1 (OCT1, SLC22A1) is localized in the sinusoidal membrane of human hepatocytes and mediates hepatic uptake of weakly basic or cationic drugs and endogenous compounds. Common amino acid substitutions in OCT1 have been associated with altered pharmacokinetics and efficacy of drugs like sumatriptan and fenoterol. Recently, the common splice variant rs35854239 has also been suggested to affect OCT1 function. rs35854239 represents an 8 bp duplication of the donor splice site at the exon 7-intron 7 junction. Here we quantified the extent to which this duplication affects OCT1 splicing and, as a consequence, the expression and function of OCT1. We used pyrosequencing and deep RNA-sequencing to quantify the effect of rs35854239 on splicing after minigene expression of this variant in HepG2 and Huh7 cells and directly in human liver samples. Further, we analyzed the effects of rs35854239 on total OCT1 mRNA expression, on the localization and activity of the resulting OCT1 protein, and on the pharmacokinetics of sumatriptan and fenoterol. The 8 bp duplication caused alternative splicing in 38% (deep RNA-sequencing) to 52% (pyrosequencing) of the minigene transcripts when analyzed in HepG2 and Huh7 cells. The alternatively spliced transcript encodes a truncated protein that, after transient transfection in HEK293 cells, was not localized in the plasma membrane and was not able to transport the OCT1 model substrate ASP+. In human liver, however, the alternatively spliced OCT1 transcript was detectable only at very low levels (0.3% in heterozygous and 0.6% in homozygous carriers of the 8 bp duplication, deep RNA-sequencing). The 8 bp duplication was associated with a significant reduction of OCT1 expression in the human liver, but explained only 9% of the general variability in OCT1 expression and was not associated with significant changes in the pharmacokinetics of sumatriptan and fenoterol. Therefore, the rs35854239 variant only partially changes splicing, causes moderate changes in OCT1 expression, and may be of only limited therapeutic relevance.
INTRODUCTION
OCT1 (SLC22A1) is by far the most strongly expressed transporter of organic cations in the sinusoidal membrane of the human liver (Nies et al., 2009; Wang et al., 2015; Drozdzik et al., 2019). OCT1 mediates the first step of hepatic clearance of weakly basic or positively charged drugs. Metformin, morphine, sumatriptan, fenoterol, and endogenous compounds like thiamine belong to the substrates transported by OCT1 (Wang et al., 2002; Tzvetkov et al., 2013; Chen et al., 2014; Matthaei et al., 2016; Tzvetkov et al., 2018). A loss of OCT1 function was shown to increase plasma concentrations of several drugs, including sumatriptan and fenoterol (Kerb et al., 2002; Shu et al., 2007; Tzvetkov et al., 2013; Arimany-Nardi et al., 2016; Matthaei et al., 2016; Tzvetkov et al., 2018). Depending on the administered drug, such an increase may bear the risk of toxic side effects and may affect drug efficacy.
OCT1 is encoded by the SLC22A1 gene, which is located on the long arm of human chromosome 6 (6q26) and contains 11 exons and 10 introns (Koehler et al., 1997; Zhang et al., 1997). The resulting OCT1 protein has 554 amino acids and is composed of 12 transmembrane helices (TMHs) with intracellularly localized N- and C-termini.
The SLC22A1 gene shows the highest genetic variability among the pharmacologically relevant members of the SLC22 family (Schaller and Lauschke, 2019). Fourteen single nucleotide polymorphisms (SNPs) result in amino acid substitutions. Thereof, four common amino acid substitutions (Arg61Cys, Cys88Arg, Gly401Ser, and Gly465Arg) and a deletion of Met420 are known to confer strongly reduced or completely abolished OCT1 activity (Kerb et al., 2002; Shu et al., 2003; Shu et al., 2008; Seitz et al., 2015). Nine percent of Europeans and White Americans are homozygous or compound heterozygous carriers of these reduced-function variants (Seitz et al., 2015). These individuals (also referred to as poor OCT1 transporters) have significantly altered pharmacokinetics, resulting in altered efficacy and toxicity of clinically relevant drugs like sumatriptan, fenoterol, tramadol, and morphine (Shu et al., 2008; Becker et al., 2011; Tzvetkov et al., 2011; Tzvetkov et al., 2012; Fukuda et al., 2013; Tzvetkov et al., 2013; Stamer et al., 2016).
Non-coding variants may also affect OCT1 activity, e.g., by altering OCT1 expression. Indeed, SLC22A1 expression varies strongly between individuals (Nies et al., 2009; O'Brien et al., 2013). OCT1 mRNA levels differ up to 113-fold and protein levels up to 83-fold between individuals (Nies et al., 2009). However, common promoter variants did not significantly affect SLC22A1 promoter activity or mRNA expression (Bokelmann et al., 2018).
Another explanation for the high variability in OCT1 expression may be related to genetic variants that cause alternative splicing. Indeed, the 8 base pair insertion/deletion variant rs35854239 (formerly also designated rs113569197 or rs36056065) was suggested to affect OCT1 expression and activity by altering splicing (Tarasova et al., 2012; Grinfeld et al., 2013; Kim et al., 2017). This variant is located at the junction of exon 7 and intron 7 of the SLC22A1 gene and represents an 8 bp duplication of the 5' part of intron 7 including the splice donor site (Figure 1A). The newly generated donor site results in an 8 bp longer transcript with a shift in the open reading frame and a premature stop codon.
The rs35854239 variant is genetically highly linked to the coding variant Met408Val (r² = 0.95). In several studies, Met408Val was associated with drug efficacy. This association has been explained by decreased cellular uptake altering the systemic concentrations or the concentrations of the drug at its site of action. However, multiple independent in vitro studies demonstrated that the Met408Val substitution does not directly affect OCT1 uptake (Kerb et al., 2002; Shu et al., 2003; Shu et al., 2007; Nies et al., 2014; Tzvetkov et al., 2014; Seitz et al., 2015). Thus, the rs35854239 variant may be the true cause of the observed associations of Met408Val with clinically relevant phenotypes.
The rs35854239 variant is very common. If functional, with its minor allele frequency of 40.6% in Europeans and White Americans, the rs35854239 variant would be the most frequent variant affecting OCT1 expression and activity. However, it is not clear whether the duplicated splice donor site always leads to alternative splicing, and to what extent the alternatively spliced transcript is functionally active.
In this study, we analyzed to what extent the SLC22A1 8 bp duplication (rs35854239) affects splicing and what the consequences of this variable splicing are for transporter function in vitro and in vivo. To this end, first, we quantified the alternatively spliced transcripts using both minigene assays and direct analyses of human liver samples. Second, we analyzed whether the protein resulting from the alternatively spliced transcript is active. Finally, we analyzed whether the rs35854239 variant is associated with changes in OCT1 mRNA and protein expression in human livers and with the pharmacokinetics of sumatriptan and fenoterol in humans.
Generation of an Alternatively Spliced OCT1 Plasmid
The 8 bp insertion in exon 7 that results from alternative splicing of rs35854239 was introduced into an OCT1-encoding pcDNA5/FRT vector by site-directed mutagenesis as described previously (Seitz et al., 2015). Primer pair 1 used for this purpose is listed in Supplementary table S1. The sequence was validated by capillary sequencing prior to transient transfection into HEK293 cells.
Cellular Uptake Experiments After Transient Transfection
FIGURE 1 | Effects of rs35854239 on SLC22A1 exon 7 minigene splicing in Huh7 and HepG2 cells. (A) Splicing at the exon 7-intron 7 junction. Splice donor sites within the intronic sequence are shown in red. The 8 bp insertion/deletion variant rs35854239 carries a second splice donor site that is proposed to be spliced alternatively. (B) Representation of the pSPL3b splicing vector consisting of two exons of the rabbit β-globulin gene under control of the SV40 promoter (the "minigene"). SLC22A1 exon 7 and its flanking intronic regions with or without rs35854239 (referred to as duplication or wild-type, respectively) were cloned between both exons of the rabbit β-globulin gene. Minigene constructs were transiently transfected into Huh7 and HepG2 cells, and mRNA was isolated 48 h after transfection. As positive control we used the CYP2C19*2 variant, for which alternative splicing is known (Morais et al., 1994). (C-E) Correctly and alternatively spliced OCT1 transcripts were visualized (C) and quantified using pyrosequencing (D) or deep RNA-sequencing (E). Percentages within boxes represent relative values of correctly spliced minigene transcripts. Data are shown as mean and standard errors of the mean of at least three independent experiments.

HEK293 cells were seeded at a density of 5 × 10⁵ cells per well in a 12-well plate precoated with poly-D-lysine. Twenty-four hours after seeding, cells were transfected with 2 µg of the alternatively spliced OCT1 vector DNA using Lipofectamine™ 2000 (Invitrogen, Darmstadt, Germany) according to the manufacturer's instructions. Transfection efficiency was evaluated by co-transfection with 0.5 µg of the green fluorescent protein-coding vector pGFP-tpz (Thermo Fisher Scientific). The next day, uptake experiments were performed at 37°C and pH 7.4 using HBSS+ (HBSS supplemented with 10 mM HEPES buffer). Cells were washed once with pre-warmed (37°C) HBSS+. Uptake was initiated by adding 20 µM ASP+ diluted in HBSS+ and stopped after two minutes by adding ice-cold HBSS+. Cells were washed twice with ice-cold HBSS+ and lysed with RIPA buffer. Fluorescence of ASP+ in lysates was measured with an excitation of 485 nm and emission of 612 nm using the Tecan infinite M200 Microplate Reader (Tecan Group Ltd., Männedorf, Switzerland). ASP+ fluorescence intensities were normalized to the total protein amount in the samples as measured using the bicinchoninic acid assay (Smith et al., 1985).
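A small sketch of this normalization step follows; the numbers are made up, and only the calculation mirrors the described procedure.

```python
# Illustrative normalization of ASP+ fluorescence to total protein (BCA assay).
def normalized_uptake(fluorescence_au, protein_ug):
    """ASP+ fluorescence per µg total protein."""
    return fluorescence_au / protein_ug

oct1_wt = normalized_uptake(fluorescence_au=12000, protein_ug=85.0)
empty_vector = normalized_uptake(fluorescence_au=1500, protein_ug=80.0)

fold_change = oct1_wt / empty_vector  # OCT1-mediated uptake vs. control
print(f"fold change over empty vector: {fold_change:.1f}")
```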
Generation of Minigene Constructs
Splicing of exon 7-intron 7 was analyzed in the splicing vector pSPL3b, further referred to as minigene. Exon 7-intron 7 of OCT1 for both rs35854239 genotypes was amplified with primer pair 2, listed in Supplementary table S1. The PCR product was cloned into the pSPL3b vector after restriction of the PCR product and the vector with PstI and EcoRV. The Met408 and Val408 were introduced by site-directed mutagenesis using primer pairs 3 and 4, respectively, listed in Supplementary table S1. The minigene constructs were validated by capillary sequencing and then used for transient transfection into Huh7 and HepG2 cells.
Transient Transfection of the Minigene Constructs
Huh7 and HepG2 cells were seeded in six-well plates at densities of 4 × 10⁵ and 1.7 × 10⁶ cells per well, respectively. After 24 hours, cells were transfected with 2 µg minigene vector DNA using Lipofectamine™ 2000 as described above. Transfection efficiency was evaluated by co-transfection with pGFP-tpz as described above. Forty-eight hours after transfection, cells were lysed and RNA was isolated using the RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer's instructions.
PCR Amplification of Spliced Exon 7 Variants
After RNA isolation from transfected cells, complementary DNA (cDNA) was synthesized using the MultiScribe ™ Reverse Transcriptase Kit (Applied Biosystems). Spliced exon 7 was amplified with primer pair 5 listed in Supplementary table S1. PCR products were separated by gel electrophoresis, bands were visualized under UV light and band intensities were quantified using the Fiji software (ImageJ version 1.52p, National Institutes of Health, Bethesda, United States).
Analysis of rs35854239 in Human Liver Samples
Human liver samples were obtained from normal liver tissue that had to be removed for technical reasons during liver surgery or from organ donors. Patients gave their informed consent for research use of the removed liver tissue, and the procedures were approved by the ethics committee of the University Medicine Göttingen, Georg-August-Universität Göttingen (application number 26/01/17) and the ethics committee of the Pomeranian Medical University (application number KB-0012/ 64/12). Deep-frozen human liver samples were homogenized using the Mikro dismembrator S (B. Braun, Melsungen, Germany) at 2500 rpm for 1 min. DNA was isolated using the DNeasy Blood & Tissue kit (QIAGEN) according to the manufacturer's instructions. For the genotyping of functionally relevant polymorphisms in the SLC22A1 gene, the single base primer extension method was used as described previously (Seitz et al., 2015) using primer 6 listed in Supplementary table S1. RNA from human liver samples was isolated from homogenates using the RNeasy Plus Mini Kit and cDNA was synthesized as described above.
Pyrosequencing
The ratio of correctly vs. alternatively spliced exon 7 in the transfected cell lines of hepatic origin and in human liver samples was analyzed using pyrosequencing. Spliced exon 7 from the minigene experiments and from cDNA from human liver samples was amplified using primer pair 7 (minigene) and primer pair 8 (liver samples) listed in Supplementary table S1. Samples were prepared using PyroMark™ Binding and Annealing Buffer (QIAGEN) and the PyroMark™ Vacuum Prep Station (Biotage, Uppsala, Sweden).
Deep RNA-Sequencing and Sequence Mapping
Next-generation DNA and RNA sequencing was performed with cDNA from minigene experiments and with DNA and cDNA from human liver samples. Exon 7 and its 3′ flanking region were first amplified using primer pairs 11 to 14 listed in Supplementary table S1. The PCR products were purified by magnetic separation using Agencourt® AMPure® XP reagent (Beckman Coulter GmbH, Krefeld, Germany). Unique indices were attached to the purified amplicons by PCR using the Nextera® XT Index Kit v2 (Illumina Inc., San Diego, United States). The samples were again cleaned up with Agencourt® AMPure® XP reagent. All samples were pooled in appropriate ratios. The pooled library was quantified using the Qubit® 2.0 fluorometer and the Qubit® dsDNA BR assay kit (Thermo Fisher Scientific) and diluted to a DNA concentration of 2 nM. DNA was denatured and diluted according to the manufacturer's instructions. As internal control and to increase variability within the sequencing run, 30% PhiX control (Illumina) was spiked in prior to denaturation. The sequencing run was performed using the MiSeq® Reagent Kit v3 (600 cycles) and paired-end 221 reads on the Illumina MiSeq™ (Illumina). The sequencing run was analyzed using the IGV v.2.6.3 software (Broad Institute, Cambridge, United States). The paired-end sequence reads were merged using PEAR (release 0.9.11; Zhang et al., 2014). The mapping to a reference sequence was performed with Bowtie2 (Langmead et al., 2012).
Immunocytochemistry
5 × 10⁵ HEK293 cells were seeded on poly-D-lysine coated cover slips and transfected as described above. One day after transfection, cells were washed twice with PBS and fixed with 100% ethanol for 20 min at -20°C. After washing three times with PBS, cell membranes were permeabilized with PBS containing 0.4% Tween 20. Cells were washed three times with PBS and subsequently blocked for 3 hours with blocking buffer (5% FCS in PBS). OCT1 was stained using the NBP1-51684 (2C5) antibody (Novus Biologicals, Abingdon, United Kingdom). Cells were co-stained with the EP 1845Y antibody (Abcam, Cambridge, United Kingdom) against the membrane marker Na+/K+-ATPase. The primary antibodies against OCT1 and Na+/K+-ATPase were diluted in blocking buffer at dilutions of 1:400 and 1:200, respectively. Per cover slip, 50 µL antibody solution was added; cells were covered with parafilm and incubated in a humid chamber overnight. The next day, after washing three times with PBS, fluorescently labeled secondary antibodies (Alexa Fluor® 546 goat anti-rabbit IgG (H + L), polyclonal, and Alexa Fluor® 488 goat anti-mouse IgG (H + L), polyclonal; Thermo Fisher Scientific) were diluted 1:400 in PBS, added, and incubated for 2 hours in the dark. Cells were washed three times with PBS and cover slips were mounted with Roti® Mount Fluor Care DAPI (Carl Roth, Karlsruhe, Germany). The staining was analyzed using the laser scanning microscope LSM780 (Carl Zeiss Microscopy GmbH, Oberkochen, Germany). Images were processed using the Fiji software.
Expression Data From Human Liver Samples
OCT1 mRNA and OCT1 protein expression data were extracted from a previous study describing the expression of OCT1 and OCT3 in human liver samples (Nies et al., 2009). Analysis was performed on the subset of samples (n = 90) from individuals who were non-cholestatic and had no hepatocellular, cholangiocellular, or gallbladder carcinoma (Schaeffeler et al., 2011; Nies et al., 2013). The study was approved by the ethics committees of the Charité, Humboldt University (Berlin, Germany) and the University of Tübingen (Tübingen, Germany) in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained from each patient.
Clinical Trial
The clinical trials on the effects of SLC22A1 genetic variants on fenoterol and sumatriptan pharmacokinetics have been described in detail before (Matthaei et al., 2016; Tzvetkov et al., 2018). The rs35854239 variant was genotyped using DNA available from those studies and the single base primer extension method as described previously (Seitz et al., 2015) with the SNaPshot primers listed in Supplementary table S1.
Statistical Analysis
Differences in OCT1 mRNA and protein expression or in drug plasma concentrations between homozygous wild-types and homozygous duplication allele carriers were tested using the Mann-Whitney U test. Differences between DNA and RNA allele frequencies in the allelic expression imbalance analyses were tested using the paired-sample t-test. All analyses were performed using SPSS Statistics version 25 (SPSS Inc., IBM, Chicago, IL). Statistical significance was defined as p < 0.05. Post-hoc power calculations for the clinical studies were performed with the G*Power software version 3.1.9.4 (Faul et al., 2007).
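A minimal sketch of such a group comparison using SciPy is shown below; the expression values are placeholders, not the study data.

```python
# Sketch of the Mann-Whitney U comparison described above; placeholder values.
from scipy.stats import mannwhitneyu

wild_type = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1]     # e.g., relative OCT1 mRNA levels
duplication = [1.2, 1.9, 0.8, 2.2, 1.5, 1.1]   # homozygous duplication carriers

stat, p = mannwhitneyu(wild_type, duplication, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")  # significant if p < 0.05
```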
Minigene Analyses of the Effects of SLC22A1 rs35854239 on Splicing
We used minigene assays to quantify the percentage of alternatively spliced transcripts from the 8 bp duplication allele of the rs35854239 variant. For this purpose, exon 7, including 306 bp upstream and 310 bp downstream flanking intronic regions, was cloned into the minigene vector pSPL3b between exons 1 and 2 of the rabbit β-globulin gene. In addition to the construct carrying the 8 bp duplication allele, the wild-type allele was also cloned and used as a control in the analyses (Figure 1B). Two independent minigene clones containing the duplication allele were analyzed to account for potential artifacts from the quality of the clones and the DNA preparation. The minigene constructs were transiently transfected into HepG2 and Huh7 cells, and the resulting correctly and alternatively spliced transcripts were quantified 48 hours later using three independent techniques: semi-quantitative PCR, pyrosequencing, and deep RNA-sequencing. In all cases, total RNA was first reverse transcribed into cDNA. For the semi-quantitative PCR, the spliced SLC22A1 exon 7 was amplified by PCR using primers within the flanking exons of the rabbit β-globulin gene. The PCR products were separated by gel electrophoresis to enable the selective identification of the correctly and the alternatively spliced transcripts, and the band intensities were quantified (Figure 1C). As expected, the wild-type allele was spliced 100% correctly. However, the duplication allele was spliced correctly in only 47% of transcripts (range 42-51%) in transiently transfected Huh7 cells and 52% (range 46-58%) in HepG2 cells (data not shown).
Next, we used pyrosequencing to quantify the ratio of alternatively spliced transcripts more precisely. The pyrosequencing quantification method was validated using calibration series of vectors encoding the correctly or alternatively spliced minigene (Supplementary Figure S1). The pyrosequencing-based quantification showed that in Huh7 cells, the duplication allele was correctly spliced in 49% (range 46-52%) of all transcripts (Figure 1D). This was highly comparable with the semi-quantitatively determined ratios above. In HepG2 cells, the duplication allele was correctly spliced in 57% (range 55-60%) of all transcripts.
Finally, the minigene insertion allele clone 1 spliced in Huh7 or HepG2 cells at 48 h was reanalyzed using deep massively parallel sequencing (Figure 1E). The average depth of targeted RNA sequencing was 59,135 reads (range 32,902-84,691). The quantification of reads carrying the 8 bp insertion as a result of alternative splicing showed a percentage of 62% correctly spliced minigene transcripts in both cell lines. These results confirm the alternative splicing of rs35854239. More importantly, they suggest that the 8 bp duplication causes erroneous splicing in only a part of the transcripts: estimated from the data of all experiments, at most 52% of the transcripts are erroneously spliced. Thus, our in vitro data suggest that even in homozygous carriers of the duplication, about half of the OCT1 transcripts will be correctly spliced.
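As an illustration of how such read counts translate into splicing fractions, the sketch below counts reads by junction sequence; the 8 bp strings are placeholders, not the actual SLC22A1 sequences.

```python
# Minimal sketch of estimating the fraction of correctly spliced transcripts
# from merged amplicon reads; junction strings are hypothetical placeholders.
CORRECT_JUNCTION = "CAGGATTT"       # hypothetical exon 7-exon 8 junction, 8 bp
ALTERNATIVE_INSERTION = "GTAAGTGA"  # hypothetical retained 8 bp duplication

def splicing_fractions(reads):
    correct = sum(1 for r in reads if CORRECT_JUNCTION in r)
    alternative = sum(1 for r in reads if ALTERNATIVE_INSERTION in r)
    total = correct + alternative
    return correct / total, alternative / total

reads = ["...CAGGATTT..."] * 62 + ["...GTAAGTGA..."] * 38
print(splicing_fractions(reads))  # (0.62, 0.38), matching the minigene result
```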
Effects of SLC22A1 Exon 7 Genetic Variants on rs35854239 Splicing
Within the SLC22A1 gene, exon 7 harbors the highest density of coding, functionally relevant polymorphisms. Thereof, the Met408Val substitution is almost completely linked to the rs35854239 duplication (Figure 2A). Under native conditions, it cannot be excluded that the coding variant substantially contributes to the effects on splicing. Here we took advantage of the minigene technique and addressed the effects of the two variants on splicing separately. The Val408Met substitution alone did not significantly affect splicing, either on the duplication or on the wild-type rs35854239 background (Figure 2B). Therefore, it can be concluded that the effects on splicing are caused entirely by the rs35854239 duplication, with no contribution from the highly linked Met408Val.
Functional Characterization of the Protein Encoded by the Alternatively Spliced Transcript
The alternative splicing leads to an exon 7 that is 8 bp longer, entailing a frameshift. This results in an altered amino acid sequence after codon 425, followed by a premature stop after seven amino acids. The resulting truncated OCT1 protein p.Asp426fs consists of the first nine TMHs only (Figure 3A).
To analyze whether the truncated OCT1 protein p.Asp426fs is able to function as an uptake transporter, the 8 bp insertion sequence was introduced between exon 7 and exon 8 of the OCT1-carrying pcDNA5 vector using site-directed mutagenesis. The resulting vector was transiently transfected into HEK293 cells, and the uptake of the model OCT1 substrate ASP+ was compared to that of wild-type OCT1. Three independent clones of p.Asp426fs were analyzed. The alternatively spliced OCT1 protein showed no transport activity (Figure 3B). ASP+ uptake of all alternatively spliced OCT1 clones was at the same level as that of the empty pcDNA5 vector, indicating no OCT1-mediated substrate uptake. Immunofluorescence staining revealed aberrant membrane localization of the alternatively spliced OCT1 (Figure 3C). This demonstrates that the truncated OCT1 protein resulting from alternative splicing of exon 7 is completely inactive and lacks correct membrane localization.
Effects of rs35854239 on Splicing in Human Liver Samples
Minigene analyses in cell lines of hepatic origin showed that the 8 bp duplication leads to a maximum of 52% alternatively spliced transcripts that encode a non-functional protein. In order to validate these results in vivo, we quantified the effects of the 8 bp duplication on OCT1 mRNA splicing in human liver. To this end, DNA from 24 liver samples was genotyped for rs35854239, and correct splicing of the exon 7-intron 7 junction was quantified using pyrosequencing and deep RNA-sequencing (Figure 4).
Using pyrosequencing, we observed 99% correctly spliced SLC22A1 mRNA irrespective of the genotype of the liver donors. Alternative splicing of OCT1 in liver samples from homozygous or heterozygous carriers of the 8 bp duplication allele reached a maximum of 2.1% (Figure 4A). This percentage is far below the results observed in the minigene experiments (Figure 1), but is still substantially higher than the 0.4% observed in SLC22A1 mRNA from donors with the wild-type genotype, which can only be spliced correctly.
Using deep RNA-sequencing, we detected very low levels of alternatively spliced transcripts that were, however, dependent on the rs35854239 genotype (Figure 4B). The average depth of sequencing was 74,326 reads per RNA sample (range 41,116-133,715). The liver samples from homozygous duplication allele carriers showed mean values of 0.58% alternatively spliced transcripts (range 0.19-1.14%). In heterozygous genotypes, alternative splicing was detected with a mean of 0.36% (range 0.08-0.83%). The samples of the homozygous wild-type allele carriers showed a mean of 0.02% alternatively spliced transcripts, indicating very low levels of possible contamination with this highly sensitive method. In conclusion, both techniques demonstrated that, despite the close to 50% probability of alternative splicing of the rs35854239 duplication allele estimated by the minigene assays, alternatively spliced OCT1 could only barely be detected in human liver samples. These results suggest that the alternatively spliced transcripts may be recognized and rapidly degraded under native conditions. To verify this, we performed an allelic expression imbalance analysis in the human liver samples. We took advantage of the strong genetic linkage between the duplication allele of rs35854239 and the A-allele of the coding variant Met408Val (rs628031, 1222A>G, r² = 0.95; Figure 5A). Based on this strong linkage, in heterozygous carriers of the rs35854239 duplication and Met408Val A-allele haplotype, degradation of alternatively spliced OCT1 could be detected as a lower abundance of the Met408Val A-allele in the RNA transcripts compared to the expected 50% of the DNA reads (Figure 5B). We used all nine liver samples from which both DNA and RNA were available and applied deep sequencing for quantification. While the A-allele was detected in 50% of the DNA reads (range 49-53%), its abundance in RNA was significantly decreased to 42% (range 40-44%, p = 2.77 × 10⁻⁷, paired t-test; Figure 5C). This result supports the degradation of the alternatively spliced transcripts and suggests that the presence of the rs35854239 duplication results in a reduction of OCT1 mRNA levels and, as a consequence, of OCT1 protein in general.

FIGURE 3 | Effects of the alternatively spliced OCT1 protein p.Asp426fs on OCT1 function. (A) Alternative splicing of rs35854239 leads to a premature stop after Asn431, resulting in an OCT1 protein that is truncated after transmembrane helix 9. (B) HEK293 cells transiently transfected with a vector coding for the alternatively spliced OCT1 p.Asp426fs were incubated for 2 min with 20 μM ASP+. The OCT1-mediated uptake was calculated as the fold change compared to control cells (transfected with the empty vector). Transfection of wild-type OCT1 served as a positive control for a functional transporter. Data are shown as mean and standard error of the mean (SEM) of three independent experiments. (C) Membrane localization was analyzed using immunofluorescence staining and confocal microscopy (63× magnification). The OCT1 antibody used for this purpose recognizes the intracellular loop of the protein between TMH6 and TMH7. OCT1 (green) was co-stained with Na⁺/K⁺-ATPase (red) as a membrane marker. Scale bar indicates 10 µm. TMH, transmembrane helix.
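The allelic expression imbalance analysis above reduces to a paired comparison of per-sample A-allele fractions in DNA versus RNA reads. A minimal sketch follows; the values are illustrative placeholders chosen to mimic the reported ranges, not the measured fractions from the nine liver samples.

```python
# Hypothetical sketch of the allelic expression imbalance test:
# per-sample Met408Val A-allele fractions in DNA vs. RNA reads,
# compared with a paired t-test. Values are illustrative only.
from scipy import stats

dna_a = [0.50, 0.49, 0.51, 0.53, 0.50, 0.49, 0.50, 0.52, 0.50]
rna_a = [0.42, 0.40, 0.43, 0.44, 0.41, 0.42, 0.43, 0.42, 0.41]

t_stat, p_value = stats.ttest_rel(dna_a, rna_a)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.2e}")
```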
In addition, more precise analyses of the sequencing reads suggest that correct "canonical" splicing may be preferred under native conditions. Indeed, reads of alternatively spliced RNA carrying the A-allele of Met408Val were barely detectable (Figure 5D). However, reads of correctly spliced RNA carrying the Met408Val A-allele amounted to 71.3% of the G-allele reads (range 65.8-76.7%), instead of the roughly 50% expected from the minigene analyses. This suggests that, in parallel to the degradation of the alternatively spliced transcripts, a preference for correct splicing of the rs35854239 duplication allele may also exist under native conditions.
To address this, we analyzed total OCT1 mRNA and protein expression in 73 human liver samples. The liver samples had been characterized for their OCT1 expression before (Nies et al., 2009). We included only those samples lacking the Arg61Cys substitution, which is known to significantly affect OCT1 protein levels in the liver (Supplementary Table S2 and Nies et al., 2009) by impairing correct membrane localization (Seitz et al., 2015). The OCT1 mRNA expression was, at the median, 47% lower in homozygous duplication than in homozygous wild-type rs35854239 allele carriers (medians of 0.014 and 0.026 transcripts per beta-actin transcript, respectively; p = 0.007; Figure 6A). The OCT1 protein levels were, at the median, 35% lower in homozygous duplication than in homozygous wild-type rs35854239 allele carriers (medians of 3.91 and 6.04, respectively; p = 0.045; Figure 6B). However, although statistically significant, the rs35854239 genotype could explain only 9% of the variability of mRNA and protein expression in this sample set.

Effects of rs35854239 Duplication on the Pharmacokinetics of Sumatriptan and Fenoterol

Finally, we analyzed to what extent the decrease of OCT1 expression in carriers of the rs35854239 duplication allele leads to changes in sumatriptan and fenoterol pharmacokinetics in humans. We took advantage of existing studies on the effects of OCT1 genotypes on the pharmacokinetics of both drugs (Matthaei et al., 2016; Tzvetkov et al., 2018) and analyzed them in the context of the rs35854239 genotype. The AUC of sumatriptan was slightly increased in homozygous rs35854239 duplication allele carriers compared to the wild-type (means of 7187 vs. 6277 min × ng/ml, respectively; Figure 7A). However, this increase was not significant and amounted to only 14% on average, compared to the 127% increase observed in poor OCT1 transporters (homozygous or compound heterozygous carriers of the coding variants Arg61Cys, Gly401Ser, and Gly465Arg) in the same study.
Moreover, the AUC of fenoterol was not higher in homozygous carriers of the rs35854239 duplication allele compared to the wild-type (means of 84.25 vs. 86.84 min × ng/ml, respectively; Figure 7B). In comparison, poor OCT1 transporters showed 1.89-fold higher AUCs for fenoterol. These data suggest that, compared to the well-known loss-of-function coding variants, the 8 bp duplication has only limited effects on drug pharmacokinetics.
DISCUSSION
The eight-base-pair duplication at the exon 7-intron 7 junction (rs35854239) has previously been suggested to cause erroneous splicing of OCT1 by introducing an alternative splice site in the intronic sequence of intron 7. In this study, we confirm the alternative splicing and provide more precise quantitative information about the effects on OCT1 expression and activity, in order to better estimate the contribution of this variant to the high inter-individual variability in OCT1 activity.
This study builds on the previous findings of Kim et al. (2017). We confirmed their finding that the 8 bp duplication causes alternative splicing, both by using minigene assays (Figure 1) and by detecting low levels of the alternatively spliced transcript in human liver samples (Figure 4). We also confirmed their finding that the alternatively spliced transcript does not lead to a functional protein (Figure 3).
The major contribution of this study beyond what was previously known is the precise quantification of the effects of the 8 bp duplication rs35854239. We used minigene analyses to quantify the effects on splicing (Figure 1) and to confirm that these effects are caused by the 8 bp duplication and not by the highly genetically linked variant Met408Val. We quantified the effects of the 8 bp duplication on total OCT1 expression in human liver at both the mRNA and protein levels (Figure 6), and finally we analyzed the association of the splice variant with the pharmacokinetics of drugs that are well-known OCT1 substrates (Figure 7). This enables a better evaluation of the contribution of the rs35854239 duplication to the high genetic, and thus functional, variability of OCT1 in humans.
Our data suggest that the 8 bp duplication allele can cause erroneous splicing of up to 50% of the transcripts, but, probably due to mRNA decay, the number of detectable erroneously spliced transcripts in the human liver is very low. Thus, homozygous carriers of the duplication allele are characterized by decreased expression of the correctly spliced transcripts, resulting in a median decrease of OCT1 protein expression by 35% in the human liver. However, the rs35854239 effects explained only 9% of the highly variable SLC22A1 mRNA expression in humans (Figure 6), and the 8 bp duplication was not associated with significant changes in the pharmacokinetics of known OCT1 substrates, i.e. sumatriptan and fenoterol (Figure 7). Apparently, the moderate decrease in OCT1 expression caused by the duplication is not sufficient to cause strong effects on drug pharmacokinetics (as demonstrated by analyzing the effects of the variant on the pharmacokinetics of sumatriptan and fenoterol in healthy individuals).

FIGURE 7 | Effects of rs35854239 on the pharmacokinetics of (A) sumatriptan and (B) fenoterol. The AUC of (A) sumatriptan and (B) fenoterol is depicted depending on the rs35854239 genotype and compared to the OCT1 phenotype (poor transporters). Poor transporters comprise homozygous or compound heterozygous carriers of the OCT1 alleles *3, *4, and *5, harboring the coding variants Arg61Cys, Gly401Ser, or Gly465Arg, respectively. Dup, rs35854239 duplication allele; WT, wild-type OCT1 allele; AUC, area under the plasma concentration-time curve. Boxplots show median, lower (25%) and upper (75%) quartiles.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. These data can be found in the NCBI repository, accession number PRJNA720275 (https://www.ncbi.nlm.nih.gov/sra/PRJNA720275).
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the University Medicine Göttingen, Georg-August-Universität Göttingen (application number 26/01/17), and by the ethics committees of the Charité, Humboldt University (Berlin, Germany) and the University of Tübingen (Tübingen, Germany). The patients/participants provided their written informed consent to participate in this study. | 2021-05-07T13:26:51.902Z | 2021-05-07T00:00:00.000 | {
"year": 2021,
"sha1": "db09ae78f3c4eeb2c73b203e22d35da46788b2dd",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.661480/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db09ae78f3c4eeb2c73b203e22d35da46788b2dd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9491758 | pes2o/s2orc | v3-fos-license | Potential Susceptibility Mutations in C Gene for Hepatitis B-Related Hepatocellular Carcinoma Identified by a Two-Stage Study in Qidong, China
A two-stage study was conducted to explore new potential mutations in the full genome of hepatitis B virus (HBV) affecting the progression of hepatocellular carcinoma (HCC) in Qidong, China. In stage 1, full genomes of HBV were compared between 30 HCC cases and 30 controls. In stage 2, an independent case–control study including 100 HCC cases and 100 controls was conducted to verify the relationship between hot-spot mutations and HCC development. Furthermore, a longitudinal study was conducted on 11 HCC cases with serial serum samples available before HCC diagnosis. A total of 10 mutations (including the pre-S2 start codon mutation and pre-S deletion in the pre-S gene; the G1613A, C1653T, A1762T, and G1764A mutations in the X gene; and the A2159G, A2189Y, G2203W, and C2288R mutations in the C gene) showed an increased risk of HCC. In the validation study, the pre-S deletion, C1653T, A1762T/G1764A, A2159G, A2189Y, G2203W, and C2288R mutations were associated with increased HCC risk in univariate analysis. Multivariate analysis indicated that the pre-S deletion, A1762T/G1764A, A2159G, and A2189Y mutations were independently related to HCC development. Moreover, a significant biological gradient of HCC risk by number of mutations in the C gene was observed. Longitudinal observation demonstrated a gradual accumulation of the above mutations during the progression of HCC.
Introduction
Hepatocellular carcinoma (HCC) is a global health problem, as it is the fifth most common cancer and the third leading cause of cancer-related death [1]. Etiologically, the majority of HCC cases develop in chronic hepatitis B virus (HBV) carriers, especially in East Asia and sub-Saharan Africa, where HBV is endemic [2]. Although an effective vaccine has been in use for about two decades, more than 350 million people in the world are chronic carriers of this virus [3]. To date, the exact mechanism underlying hepatocarcinogenesis in chronic HBV infection remains elusive.
The HBV genome comprises a partially double-stranded circular DNA molecule of approximately 3200 base pairs. This DNA strand encodes four overlapping open reading frames (ORFs): X, for the X protein; precore/core (C), for the nucleocapsid; pre-S/S, for the surface/envelope protein; and P, for the DNA polymerase [4]. HBV replicates through an RNA intermediate, using a reverse transcriptase that lacks a proofreading function. Thus, HBV exhibits replication errors at a much higher rate than other DNA viruses, and the estimated mutation rate is about 1 nucleotide/10,000 bases per year [5]. The effect of viral mutations on HCC pathogenesis has been investigated extensively, and several important high-risk mutations have been identified. The most convincing association between a viral mutation and the development of HCC is the A1762T/G1764A double mutation in the basal core promoter (BCP) [6,7]. Additionally, pre-S deletion, the T1753V mutation in the BCP, and the C1653T mutation in box-α of Enhancer II have been reported to be related to an increased risk of HCC in several reports [8-12]. However, studies of other potential predictive mutations based on comparative analysis of complete HBV genomes remain limited. Meanwhile, distinct clinical and virologic characteristics of HBV infection have been reported in different geographical parts of the world and are increasingly associated with the genetic diversity of the infecting virus [13]. The township of Qidong is one of the regions with the highest endemicity of HBV-related HCC in China. In this two-stage study, the complete HBV genome was initially analyzed in the sera of patients from a prospective cohort of male HBV carriers in Qidong, in order to explore new mutational biomarkers of HCC development in addition to traditional hot-spot mutations in the pre-S and X genes. Second, an independent validation study was conducted to confirm the relationship between the newly identified mutations and HCC risk. Furthermore, sequential serum sequencing of the C gene was carried out to assess the longitudinal evolution of mutations in the C gene during HCC development.
Comparison of Full Hepatitis B Virus (HBV) Genome between Hepatocellular Carcinoma (HCC) Cases and Controls
In stage 1, the full sequences of HBV in 30 HCC cases and 30 controls were determined by PCR direct sequencing. There were no statistically significant differences in age, genotype distribution, or seropositivity of hepatitis B e antigen (HBeAg) between HCC cases and controls (55.0 ± 9.0 versus 56.3 ± 8.9 years, p = 0.874 for age; 7/30 (23.3%) versus 8/30 (26.7%), p = 0.766 for seropositivity of HBeAg; 2:28 versus 3:27, p = 0.640 for the genotype B to genotype C ratio). The number of nucleotide substitutions in the full genome was calculated by comparison with the corresponding prototype sequence from GenBank (Version GI289976889 for genotype C and GI289976881 for genotype B, both from an HBV carrier in Qidong, China). The average number of nucleotide substitutions in the full genome was 66.3 ± 10.4 in HCC cases and 57.6 ± 7.8 in controls (p = 0.001). Table 1 shows the number of nucleotide substitutions in the various regions of the HBV genome. The HCC group had significantly more nucleotide substitutions in the pre-S2 (p = 0.015), X (p = 0.002), and pre-C/C (p = 0.016) regions. The P gene showed slightly more nucleotide substitutions in HCC cases than in controls (p = 0.112). Table 2 lists the frequencies of 24 hot-spot mutations observed in the full genome of HBV in HCC cases compared to controls. These include some well-studied mutations, such as the pre-S2 start codon mutation, pre-S deletion, and the C1653T, T1753V, A1762T/G1764A, and G1896A mutations. A total of 10 mutations showed significant differences between HCC cases and controls. Among these, two (the pre-S2 start codon mutation and pre-S deletion) were located in the pre-S gene, four point mutations (G1613A, C1653T, A1762T, and G1764A) in the X gene, and four (A2159G, A2189Y, G2203W, and C2288R) in the C gene. However, we did not observe a single point mutation in the S or P genes with a significantly different frequency between the HCC and control groups. These data suggested that high HCC risk mutations were not likely to be distributed evenly throughout the whole HBV sequence.
Validation of HCC-Related Hot Spot Mutations in the C Gene
In stage 2, to confirm the risk of the 10 mutations identified in stage 1 during the development of HCC, an independent case-control study was conducted with 100 HCC cases and 100 controls. After excluding subjects with poor sequence data or a negative PCR product (1 case and 2 controls), a total of 99 HCC cases and 98 controls were included in the final analysis. The demographic data of the 99 HCC patients and 98 controls are listed in Table 3. There were no statistically significant differences in age, history of cigarette smoking and alcohol consumption, seropositivity of HBeAg, or serum HBV DNA levels between HCC cases and controls. Genotype C dominated the HBV types in Qidong, accounting for 95.4% of HCC cases and 93.4% of controls. HCC cases and controls showed a similar genotype distribution (p > 0.05). When we examined HBV DNA sequences in the pre-S and Enh II/BCP regions, the pre-S deletion, C1653T, and A1762T/G1764A double mutations were significantly associated with HCC, with adjusted ORs from 1.929 to 2.385 (Table 3). The most frequently occurring mutation was the T1762/A1764 double mutation. However, the G1613A (OR, 1.769; 95% CI, 0.932-3.358) and pre-S2 start codon (OR, 2.233; 95% CI, 0.795-6.274) mutations were not related to a higher risk of developing HCC. Meanwhile, the frequencies of hot-spot mutations in the C gene in HCC cases versus controls were 6.1% versus 2.0% for the deletion, 15.2% versus 7.1% for T1938C, 11.1% versus 7.1% for C2002T, 13.1% versus 14.3% for T2045A, 35.4% versus 20.4% for A2159G, 40.4% versus 22.4% for A2189Y, 10.1% versus 2.0% for G2203W, and 22.2% versus 11.2% for C2288R. Similar to the results from the initial full-genome analysis, HCC patients had significantly higher frequencies of the A2159G, A2189Y, G2203W, and C2288R mutations than controls (p = 0.020, p = 0.006, p = 0.037, and p = 0.030, respectively). After adjustment for age, history of cigarette smoking, and alcohol consumption, unconditional logistic regression analyses showed adjusted ORs from 2.147 to 5.203 (Table 3). Therefore, using stepwise logistic regression analysis, the following were found to be independent risk factors of HCC: A2159G (
Association between HCC Risk and the Presence of Specific Mutation Patterns in C Gene
In the current study, the risk of combined mutations in the C gene (A2159G, A2189Y, G2203W, and C2288R) for HCC was explored. The different patterns of combined mutations are presented in Table 5. The wild type and single mutations were highly prevalent, found in 40.6% and 41.1% of the included subjects, respectively. Double mutations were relatively rare, occurring in only 13.7% of the subjects, and triple mutations were found in only 9 (4.6%) patients. Our data revealed that any combination of mutations was significantly associated with a higher risk of HCC. Compared to patients with wild-type infection, the adjusted OR was 2.904 (95% CI, 1.508-5.590) in those with a single mutation, 6.027 (95% CI, 2.232-16.275) with double mutations, and 8.630 (95% CI, 1.588-46.891) with triple mutations. We did not observe any subject with quadruple mutations at these four positions. A significant biological gradient of HCC risk by number of mutations in the C gene was observed.
Longitudinal Observation of C Gene Mutations during HCC Development
Most previous studies aiming to better understand the relationship between HBV mutations and HCC were conducted with a single blood sample taken at enrollment or upon HCC diagnosis. In this study, we further explored the HBV mutations in serial serum samples spanning years to assess the evolution of mutations in the C gene during HCC development. The analysis was focused on the putative HCC-related mutations identified in the cross-sectional study above. Among the 99 HCC cases with successful sequence data, 11 HCC cases with adequate sequential serum samples were selected for this longitudinal investigation of specific mutation patterns (A2159G, A2189Y, G2203W, and C2288R). Missing data points were due to negative PCR products. Table 6 demonstrates the evolution of C gene mutations during the progression of HCC. Among these 11 HCC cases, seven had at least one of these four high-risk nucleotide substitutions at HCC diagnosis, and five of the seven showed a gradual accumulation of these mutations in the C gene during follow-up. A reverse mutation was observed in only one patient. These results, together with those from our case-control study, indicated that the high HCC risk mutations in the C gene were not acquired at the beginning of HBV infection, but occurred during the long course of liver disease.
Discussion
HBV infection is a major risk factor for HCC occurrence; ≥75% of HCC cases are associated with HBV infection in China [14]. Compared with non-carriers, patients with chronic HBV infection have a more than 100-fold increased risk of developing HCC [15]. During the course of chronic HBV infection, a wide variety of liver diseases are observed, ranging from an asymptomatic carrier state to liver cirrhosis and HCC [16]. In our previous studies conducted in Qidong, pre-S deletion and specific mutations in the BCP were confirmed to be associated with a high risk of HCC occurrence [17,18]. However, the risk of mutations in other regions of the full HBV genome has seldom been reported. In the present study, the full HBV genome was analyzed in the serum of patients within a large cohort of male HBV carriers in Qidong. The number of nucleotide substitutions in the full-length sequence was significantly higher in HCC cases than in controls. Meanwhile, our data demonstrated that high HCC risk mutations were not likely to be distributed evenly throughout the complete HBV genome. The regions with significant differences in mutation number between HCC and control patients were (in rank order) X (p = 0.002), pre-S2 (p = 0.015), pre-C/C (p = 0.016), P (p = 0.112), pre-S1 (p = 0.483), and S (p = 0.636). Similar to previous studies, the pre-S deletion and pre-S2 start codon mutation in the pre-S gene, and the C1653T and A1762T/G1764A mutations in the X gene, were associated with a significantly higher risk of HCC development. Furthermore, in this full HBV genome comparison between 30 HCC cases and 30 controls, we also identified some rarely reported or new HCC-related mutations, including G1613A in the X gene, and the A2159G, A2189Y, G2203W, and C2288R mutations in the C gene. These new high-risk mutations, together with the confirmatory mutations in the pre-S and Enhancer II/BCP regions from previous studies, suggest that mutation combinations in the full genome sequence might serve as potential viral markers for predicting the development of HBV-related HCC.
Among the 10 HCC-related mutations identified in the full genome analysis in stage 1, four (A2159G, A2189Y, G2203W, and C2288R) were located in the C gene. Meanwhile, the carcinogenic risk of the pre-S deletion and pre-S2 start codon mutation in the pre-S gene, and of the C1653T and A1762T/G1764A mutations in the X gene, has been extensively investigated in this cohort [17,18]. Compared with studies focusing on mutations in the pre-S and X genes, only a few studies have investigated the effect of mutations in the C gene during natural HBV infection. The clinical effect of mutations in this region is less well elucidated, and the results have been inconsistent [19-22]. Thus, the temporal relationship between C gene mutations and HCC in chronic HBV infection needs to be further studied. In view of this, we carried out an independent validation study to confirm the findings from the initial full-length sequence comparison. After adjustment for age, history of cigarette smoking, and alcohol consumption, unconditional logistic regression analyses showed that the pre-S deletion, C1653T, A1762T/G1764A, A2159G, A2189Y, G2203W, and C2288R mutations were significantly associated with high HCC risk. Multivariate analysis indicated that the pre-S deletion, A1762T/G1764A, A2159G, and A2189Y mutations were independent risk factors for HCC progression.
To our knowledge, the clinical implications of these C gene mutations in HCC occurrence have been reported in very few studies. Ni et al. reported that children with HCC had more mutations in the C gene than chronic HBV carriers; the mutation sites at core codons 74, 87, and 159 were related to the development of HCC in a small-scale study [21]. In a nested case-control study within a prospective cohort from Taiwan, six mutations in the C gene (nt 1961, 1938, 2045, 2136, 2239, and 2441) were identified as being associated with a decreased risk of HCC after accounting for viral genotype. Meanwhile, these mutations were also related to a 0.7- to 1-log decrease in plasma viral load and a high rate of HBeAg seroconversion [19]. However, we did not observe this protective effect of C gene mutations on HCC development in the current study from Qidong in mainland China. We speculate that this is probably because of the distinct clinical and virologic characteristics of HBV infection in different geographical parts of the world, such as the different prevalent HBV genotypes and sub-genotypes in Taiwan and Qidong. Recently, Zhu et al. demonstrated that the A2189C and G2203W mutations were independent risk factors for HCC in another study from Qidong, with odds ratios of 3.99 and 9.70, respectively [22]. In accordance with the results of Zhu et al., in the present study we confirmed that the A2159G, A2189Y, G2203W, and C2288R mutations were associated with high HCC risk in univariate analysis. However, in multivariate analysis, the G2203W and C2288R mutations were not independent risk factors for predicting HCC occurrence. The exact mechanism of hepatocarcinogenesis relating to the above C gene mutations remains uncertain. Theoretically, hepatitis B core antigen (HBcAg) contains the principal target for cytotoxic T lymphocyte (CTL) attack, as well as various epitopes recognized by immune cells such as T or B lymphocytes. Amino acid substitutions in the C gene may alter the immune recognition sites of HBcAg, thereby allowing the virus to evade immune clearance, with a more direct impact on the natural course of hepatitis B [23-25]. The A2159G and A2189Y mutations are missense mutations resulting in amino acid changes at HBcAg codons 87 (S87G) and 97 (I97L/F), respectively. Because codon 87 is located in a B-cell epitope of the C gene, the missense substitution at codon 87 may alter the recognition site for B cells or antibodies and allow the virus to escape attacking antibodies [26]. The substitution at codon 97 was the most frequently detected substitution of the C gene in this study. Since codon 97 is located within a potent T-cell epitope, this substitution may lower antigen presentation and the secretion of immature HBV particles. Thus, the codon 97 substitution may inhibit the immune response and lead to successful maintenance of chronic infection in human HBV carriers [27,28]. The prolonged viral persistence causes continuous liver injury and subsequent regeneration, which significantly increases the risk for HCC.
The effect of combined mutations in the Enh II/BCP regions on increased HCC risk has been extensively documented in several studies [29-31]. It has been reported that most C gene mutations tend to cluster in the middle core region [32,33]. However, the majority of earlier studies primarily focused on the relationship between a single point mutation of the C gene and HCC. In the present study, the risk of combined mutations in the C gene, including the A2159G, A2189Y, G2203W, and C2288R mutations, for HCC was explored. To examine the potential value of the presence of HBV mutation patterns, either alone or in combination, we evaluated each mutation and combined mutations in the C gene for the prediction of HCC. The key finding of this study was that the number and pattern of multiple mutations in the C gene (A2159G, A2189Y, G2203W, and C2288R) showed additive combined effects related to HCC progression. Compared to patients with the wild type at the four hot-spot nucleotides, our data indicated that the presence of any mutation combination in the C gene was associated with an increased risk of HCC. The OR for HCC cases with any single hot-spot mutation was 2.904 (95% CI, 1.508-5.590); it increased to 6.027 (95% CI, 2.232-16.275) with double mutations, and to 8.630 (95% CI, 1.588-46.891) with triple mutations. A significant biological gradient of HCC risk with an increasing number of mutations in the C gene was observed. We then examined a series of serum samples spanning years before and after HCC diagnosis. The longitudinal observation demonstrated a sequential and accumulative combination of mutations in the C gene during the development of HCC. During the course of chronic HBV infection, it is speculated that the accumulation of complex HBV mutations may have a sequential and synergistic role in the development of HCC. Although the mechanism is unclear, this finding suggests that the detection of these combined mutations may aid in screening for high-HCC-risk subjects among chronic HBV carriers. Additionally, HCC mostly develops in patients with cirrhosis. Therefore, HCC and cirrhosis may share the same risk factors, including high HBV DNA levels, certain genotypes, and naturally occurring viral mutations. In view of this, we speculate that such mutations in the C gene accompany the progression of advanced liver disease, not only HCC but also liver cirrhosis.
There are also some limitations that should be considered in the present study. First, most analyses of HBV mutations were based on a single blood sample obtained at the diagnosis of HCC, so we could not assess the effect of changes in mutation status on the development of HCC. Second, the direct sequencing method only reveals the predominant strains in the host, and it may underestimate the real mutation level in patients, as mixed infection with different viral strains is common. Third, as an important risk factor of HCC, liver cirrhosis or fibrosis was not evaluated in this study. Finally, the generalizability of the results is limited because all the study subjects were male; a larger cohort and a longer follow-up time are needed for a similar study in females. Given these limitations, our results and conclusions should be interpreted with caution, and future studies assessing the risk of C gene mutations for the development of HCC should be designed to overcome them.
Study Population
This study was based on a prospective cohort in Qidong County, Jiangsu Province, China. The recruitment of this cohort has been described elsewhere [17,18]. Briefly, a total of 2387 males living in 17 townships of Qidong who were seropositive for hepatitis B surface antigen (HBsAg) and free of HCC at recruitment were followed up from August 1996 to October 2006. Study participants were scheduled to receive serum α-fetoprotein (AFP) measurements, conventional liver function tests, and ultrasonography every 6 to 12 months. The diagnosis of HCC was based on the following criteria: a positive imaging result together with a serum AFP level ≥400 ng/mL; a histopathological examination; or a positive lesion detected by at least two different imaging techniques (US, CT, MRI, or hepatic angiography). Several HCC cases qualified based on more than one criterion. At entry into the program, each study participant provided written informed consent and completed a structured questionnaire to obtain information on demographic characteristics and lifetime habits of alcohol and tobacco consumption. Serum samples collected at interview were stored at −70 °C before analysis. The study was approved by the Clinical Research Ethics Committee of the Affiliated Hospital of Nantong University, Jiangsu Province, China (date of approval, 18 May 1996; permission code, 1996025). The study was conducted according to the tenets of the Declaration of Helsinki.
Cases and Controls
The HCC data were obtained from medical records and searches of computerized death certification and cancer registry systems. To ensure complete ascertainment, we also contacted relatives to identify cases. For the complete HBV genome analysis, 30 HCC cases and 30 non-HCC control patients were enrolled. For further validation of mutations in the C gene, we recruited an independent set of 100 HCC patients and 100 chronic hepatitis (CH) patients as controls. We excluded HCC cases diagnosed within the first two years of follow-up. Meanwhile, the controls from the cohort of HBsAg carriers were all alive and had not been diagnosed with HCC throughout the follow-up period from August 1996 to October 2006. The control patients were individually matched to the HCC cases by age (within two years). No subjects receiving antiviral therapy were included. All 260 participants were positive for serum HBsAg and HBV DNA. For the longitudinal investigation, serial serum samples from 11 HCC patients were analyzed. Subjects were excluded if they had poor sequence data in the C gene (one case and two controls in the validation set). Consequently, a total of 129 cases and 128 controls were included in the analysis.
Serology
Serum HBsAg, HBeAg, and anti-HCV antibody were tested with commercially available enzyme immunoassay kits (Shanghai Kehua Bio-engineering Co., Ltd., Shanghai, China). The serum alanine aminotransferase (ALT) level was determined by the ultraviolet lactate dehydrogenase (UV-LDH) method (Shanghai Kehua Bio-engineering Co., Ltd.). Serum HBV DNA levels were determined using a fluorescence quantitative polymerase chain reaction (FQ-PCR) detection system (TaqMan; Roche Applied Science, Indianapolis, IN, USA), according to the manufacturer's instructions. The lower limit of detection was 100 IU/mL.
Amplification and Sequencing of the Full HBV Genome and Pre-S/Enh II/BCP/C Regions
HBV DNA was extracted from 200 µL serum samples using a commercial kit (Shanghai Shenyou Biotech Company, Shanghai, China). The HBV full-length sequence was amplified by PCR using FLP1 (5′-TTTTTCACCTCTGCCTAATCATCTCA-3′ (nt 1821-1846), sense) and FLP2 (5′-AAAGTTGCATGGTGCTGGTGAA-3′ (nt 1823-1802), antisense) as primers. The PCR reaction was carried out in 50 µL containing 5 µL 10× buffer, 4 µL 2.5 mmol/L deoxynucleoside triphosphates (dNTPs), 2 µL 10 µmol/L sense and antisense primers, and 1.5 U Platinum Taq DNA polymerase (Invitrogen, Shanghai, China). PCR was performed as follows: 95 °C for 2 min; 35 cycles of 95 °C for 30 s, 56 °C for 30 s, and 68 °C for 3 min; and finally, 68 °C for 10 min. PCR products were purified (BioDev-Tech. Co., Ltd., Beijing, China) and cloned into the pMD18-T vector (TaKaRa Bio, Dalian, China) for sequencing. Sequencing was conducted with an ABI 3700 sequencer and a commercial kit (Applied Biosystems, Foster City, CA, USA) using pMD18-T vector universal primers and HBV-specific primers. For C region sequence analysis, the C gene corresponding to nucleotides 1901 to 2450 was amplified by nested PCR. First-round PCR primers were (5′-TTCACCTCTGCCTAATCATCTC-3′ (nt 1824-1845), sense) and (5′-TCTCCTGTTTTCATTTACTGTAA-3′ (nt 2624-2602), antisense). Second-round PCR primers were (5′-TCCAAGCTGTGCCTTGGGTG-3′ (nt 1871-1890), sense) and (5′-GAAGAATAAAGCCCAGTAAA-3′ (nt 2500-2481), antisense). PCR was performed under the same conditions described above, except for the primers used. Nested PCR for the amplification of the pre-S, Enhancer II, and basal core promoter regions was performed as previously described [17,18]. All necessary precautions to prevent cross-contamination were taken, and negative controls were included in each assay. Amplified products were directly sequenced in both the forward and reverse directions using an ABI 3700 sequencer and a commercial kit (Applied Biosystems). Sequences of the complete genome or C gene and the deduced amino acid sequences were aligned and compared using the software MEGA version 4.1 (Biodesign Ins., Phoenix, AZ, USA).
HBV Genotyping
HBV genotyping was determined by phylogenetic analysis using the sequences of the HBV C gene. The standard genome sequences used for comparison were downloaded from GenBank. HBV nucleotide sequences were multiple-aligned with the Clustal X program (MEGA version 4.1 software). Genetic distances were estimated by Kimura's two-parameter method, and phylogenetic trees were constructed by the neighbor-joining method. To confirm the reliability of the phylogenetic tree analysis, bootstrap re-sampling and reconstruction with 1000 replicates were used. These analyses were carried out using MEGA version 4.1 software.
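For reference, Kimura's two-parameter distance underlying the neighbor-joining tree can be computed directly from two aligned sequences by separating transitions from transversions. The sketch below is a minimal implementation under simplifying assumptions (aligned input, with gaps and ambiguous bases skipped); it is an illustration, not the code used in the study.

```python
import math

def kimura_2p(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    # Keep only unambiguous, ungapped positions.
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= purines or {a, b} <= pyrimidines))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    p, q = transitions / n, transversions / n
    # d = -1/2 ln(1 - 2P - Q) - 1/4 ln(1 - 2Q)
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)
```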
Statistical Analysis
Data are presented as means ± SD, proportions, or medians (range). To compare values between two groups, Pearson's χ² or Fisher's exact tests were performed for categorical variables, and Student's t-test was used for continuous variables with normal distributions. Binary unconditional logistic regression models were used to estimate the odds ratios (ORs) of HCC associated with HBV-related factors and the corresponding 95% confidence intervals (CIs). Potential confounders, including age, history of cigarette smoking, and alcohol consumption, were adjusted for. Multivariate analyses with stepwise logistic regression were used to determine the independent factors correlated with HCC risk. All statistical tests were two-tailed, and p < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS 11.5 for Windows (SPSS Inc., Chicago, IL, USA).
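As an illustration of the regression step, the sketch below fits an unconditional logistic model of HCC status on a single mutation indicator with the stated confounders and reports the adjusted OR with its 95% CI. The synthetic data frame and its column names are hypothetical placeholders, not the study data.

```python
# Hypothetical sketch of the adjusted odds-ratio estimation with statsmodels;
# the data frame and its 0/1 columns are placeholders for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "hcc": rng.integers(0, 2, n),        # case/control status
    "mutation": rng.integers(0, 2, n),   # presence of a given C gene mutation
    "age": rng.normal(55, 9, n),
    "smoking": rng.integers(0, 2, n),
    "alcohol": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["mutation", "age", "smoking", "alcohol"]])
fit = sm.Logit(df["hcc"], X).fit(disp=0)

or_mut = np.exp(fit.params["mutation"])
ci_low, ci_high = np.exp(fit.conf_int().loc["mutation"])
print(f"adjusted OR = {or_mut:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```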
Conclusions
In summary, the pilot full genome analysis of HBV provided initial information for the identification of potential mutations in the C gene related to HCC. An independent case-control study then revealed that the A2159G and A2189Y mutations were independent risk factors for HCC in chronically HBV-infected subjects in Qidong. A combined examination of these mutations might help to predict the clinical outcomes of chronic HBV carriers more precisely, thus helping those at high risk of HCC to benefit from early diagnosis and intervention. | 2016-10-11T18:22:25.667Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "02a0bc093a62d367080871e633cd8a26977b77d6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/10/1708/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "02a0bc093a62d367080871e633cd8a26977b77d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3120432 | pes2o/s2orc | v3-fos-license | Semi-visible Jets: Dark Matter Undercover at the LHC
The dark matter may be a composite particle that is accessible via a weakly coupled portal. If these hidden-sector states are produced at the Large Hadron Collider (LHC), they would undergo a QCD-like shower. This would result in a spray of stable invisible dark matter along with unstable states that decay back to the Standard Model. Such "semi-visible" jets arise, for example, when their production and decay are driven by a leptophobic $Z'$ resonance; the resulting signature is characterized by significant missing energy aligned along the direction of one of the jets. These events are vetoed by the current suite of searches employed by the LHC, resulting in low acceptance. This Letter will demonstrate that the transverse mass, computed using the final-state jets and the missing energy, provides a powerful discriminator between the signal and the QCD background. Assuming that the $Z'$ couples to the Standard Model quarks with the same strength as the $Z^0$, the proposed search can discover (exclude) $Z'$ masses up to 2.5 TeV (3.5 TeV) with 100 fb$^{-1}$ of 14 TeV data at the LHC.
The existence of dark matter provides one of the strongest motivations for physics beyond the Standard Model, and its discovery is one of the core missions of the Large Hadron Collider (LHC) program. Under the assumption that the dark-matter particle is neutral and stable, it escapes the detector and manifests as missing transverse energy (E_T). The LHC collaborations have developed a comprehensive search strategy to look for signals with significant E_T, accompanied by jets and/or leptons (see, e.g., [1] for a review). These searches are typically cast in terms of a Simplified Model [2] for supersymmetry or an effective theory of dark-matter interactions [3,4]. Yet if one relaxes the assumption that the dark sector is weakly coupled, a new class of dark-matter signatures emerges that evades this entire suite of analyses. Namely, it is possible that the dark matter has been lurking undercover within hadronic jets. The purpose of this Letter is to propose a straightforward discovery strategy for these "semi-visible" jets.
Another possibility is that the final state resulting from strongly coupled hidden sectors may contain a new type of jet object: a semi-visible jet. In this case, the dark matter is produced in a QCD-like parton shower along with other light degrees of freedom that decay hadronically. The result is a multijet+E_T signature where one of the jets is closely aligned with the E_T. A cornerstone of the standard multijet+E_T searches is to require a minimum angular separation between the jets and E_T to remove QCD background contamination arising from jet-energy mismeasurement [34,35]. This implies that events containing semi-visible jets have a low acceptance for the currently implemented suite of searches.
To further illustrate this point, Fig. 1 compares selected observables for QCD with those for example weakly coupled and strongly coupled dark-matter models. The weakly coupled model is derived from supersymmetric theories and results from pair production of 1.5 TeV scalar quark partners. Each squark decays to a jet and a 1 GeV neutral dark-matter particle. The signal from the strongly coupled model, which will be described more fully later, comes from the production of a 3 TeV resonance which then decays to a pair of dark-sector particles that subsequently shower and hadronize, yielding semi-visible jets. Both these examples yield topologies with jets and missing energy. As the left panel shows, the weakly coupled (labeled WIMP) and strongly coupled (labeled semi-visible jet) dark-sector models produce considerable E_T, with tails that extend beyond the QCD distribution. However, ∆φ ≡ min{∆φ(j_1, E_T), ∆φ(j_2, E_T)}, where j_1,2 are the two hardest jets, differs between these models, as illustrated in the right panel. The ∆φ distribution falls relatively steeply for the strongly coupled case, while it remains relatively flat for the weakly coupled scenario. Typical LHC searches require ∆φ ≳ 0.4 [34,35]. For illustration, after requiring E_T > 500 GeV and ∆φ > 0.4, the acceptance of the WIMP (semi-visible) example is ∼40% (1%).
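For concreteness, the ∆φ observable discussed above can be computed as in the generic sketch below (jets assumed to be p_T-ordered); this is an illustration, not code from the analysis itself.

```python
import numpy as np

def min_dphi(met_phi, jet_phis, n_jets=2):
    """Minimum azimuthal separation between the MET direction and the
    n hardest jets (jet_phis assumed pT-ordered)."""
    dphi = np.abs(np.asarray(jet_phis[:n_jets]) - met_phi)
    dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)  # wrap to [0, pi]
    return float(dphi.min())

# A semi-visible jet tends to give small values: MET aligned with a jet.
print(min_dphi(0.1, [0.2, -2.9]))  # ~0.1
```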
To regain sensitivity to final states containing semi-visible jets, the cut on the angular separation ∆φ must be removed. This comes at the expense of an unsuppressed QCD multijet background, which must be eliminated using other techniques. In this Letter, we focus on the case where the dark sector is accessed via a heavy resonance. In such scenarios, one can take advantage of structure in the transverse mass, calculated using the final-state jets and E_T, to distinguish the signal from QCD. The strategy employed is similar to others proposed for semi-visible Higgs decays [36].
We now introduce example messenger and dark-sector Lagrangians which will enable us to analyze the LHC sensitivity for semi-visible jets. In particular, the example studied below first appeared in the context of "Hidden Valleys" [7]. This example is presented for illustration and concreteness; semi-visible jets will be among the LHC signatures for a vast class of dark-sector models.
The messenger sector is described by a simple phenomenological model for a TeV-scale U(1) gauge boson. The new leptophobic Z′ gauge boson couples to the SM baryon current J^µ_SM. Note that the Z′ is treated as a Stueckelberg field; the Higgs sector has been neglected, as it is not relevant for the LHC phenomenology discussed below. We also ignore the additional matter that must exist in order to render the U(1) of baryon number anomaly free. The dark sector is an SU(2)_d gauge theory with coupling α_d and two scalar quark flavors χ_i = χ_{1,2} with masses M_i. The scalar-quark coupling to the Z′ is g_d^Z′. In general, the couplings g_d^Z′ and g_SM^Z′ do not have to be comparable; we focus on the case where g_d^Z′ is large, so that the Z′ decays frequently into the dark sector.
The SU(2)_d confines at a scale Λ_d ≪ M_Z′. A QCD-like dark shower occurs when M_i² ∼ Λ_d², so that many dark gluons and scalar quarks are produced, which subsequently hadronize. Some of these dark hadrons are stable, while others decay back to the SM via an off-shell Z′. The detailed spectrum of the dark hadrons depends on non-perturbative physics. Nonetheless, some properties of the low-energy states can be inferred from symmetry arguments. There are two accidental symmetries: a dark-isospin number U(1)_{1−2} and a dark-baryon number U(1)_{1+2}, where "1" and "2" refer to the χ_i flavor index. For example, the mesons χ₁†χ₁ and χ₂†χ₂ are not charged under either of these symmetries, and are thus unstable. The other mesons (χ₁†χ₂ and χ₂†χ₁) carry dark isospin and are stable.
By construction, this phenomenological model only contains terms and interactions that have a direct impact on the jet distributions and on the missing transverse energy. The strength of the dark shower, parametrized by α_d, plays a critical role. The coupling α_d controls how many dark hadrons are emitted in the shower, as well as their p_T distributions, which has a direct and measurable impact on the jet observables. In addition, the mass scale of the dark quarks is relevant, affecting the jet masses.
The number of dark-matter particles produced in the shower impacts E_T. It is useful to parametrize these effects in terms of the quantity r_inv, defined as the average fraction of stable (invisible) dark hadrons among all dark hadrons produced in the shower. The value of r_inv depends on the details of the dark-sector model. For the model described above with M_1² = M_2², the average proportions of the stable and unstable hadrons are equal, implying r_inv ≃ 0.5. This assumes that the hadronization process is flavor-blind and that the dark-quark masses are degenerate, and it ignores baryon production, which is suppressed by a factor of 1/N_c², where N_c is the number of dark colors.
A mass splitting between the flavors can lead to variations in r_inv. Assuming M_2 ≥ M_1, in the Lund string model [37], fragmentation into the heavier dark-quark pairs is suppressed relative to the lighter ones by the factor exp[−π(M_2² − M_1²)/Λ_d²]. Because of the exponential dependence of the fragmentation process, r_inv is very sensitive to small splittings of the dark-quark masses.
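To make this sensitivity explicit, the sketch below evaluates the exponential suppression for a few mass splittings, assuming the Lund-model form quoted above with a string tension of order Λ_d²; the functional form and parameter values are assumptions for illustration only.

```python
import math

def lund_suppression(m1, m2, lambda_d):
    """Relative suppression of producing the heavier (m2) vs. the lighter (m1)
    dark-quark pair, assuming an exp(-pi m^2 / Lambda_d^2) Lund-model weight."""
    return math.exp(-math.pi * (m2**2 - m1**2) / lambda_d**2)

# Splitting in units of Lambda_d, with M1 = Lambda_d: even a 10% splitting
# roughly halves the heavier flavor's production rate.
for dm in (0.0, 0.05, 0.1, 0.2):
    print(f"dM = {dm:.2f} -> suppression {lund_suppression(1.0, 1.0 + dm, 1.0):.3f}")
```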
To perform a detailed collider study, u ū, d d̄ → Z′ → χ†χ events were simulated for the 14 TeV LHC using PYTHIA8 [38] with the default CTEQ6 parton distribution functions. The dark-sector shower was simulated using the Hidden Valley PYTHIA module [28,29], modified to include the running of α_d as was done for [33], with subsequent hadronization into mesons of mass M_d. Each meson had a probability r_inv of being a dark-matter particle. The non-dark-matter particles could decay to all four light quarks with equal probability. The possible decays of dark baryons/mesons into each other were neglected. The resulting particles were processed through DELPHES3, with the default CMS settings [39], including particle flow.
Anti-k_T R = 0.5 jets [40] were constructed and then reclustered into two large jets [41] using the Cambridge/Aachen (CA) algorithm [42] with R = 1.1. One could perform a resonance search using the invariant mass M_jj² = (p_1 + p_2)², where p_{1,2} are the momenta of the two final large jets j_{1,2}. However, the M_jj variable degrades when there are a significant number of dark-matter particles. A variable that incorporates the missing momentum is the transverse mass, M_T² = M_jj² + 2(√(M_jj² + p_{T,jj}²) E_T − p_{T,jj} · E_T), where p_{T,jj} · E_T denotes the dot product of the transverse-momentum vectors of the dijet system and the missing energy. In a detector with perfect resolution, M_jj ≤ M_T ≤ M_Z′. Figure 2 shows the distributions of M_jj, M_T, and M_mc after event selection. M_mc is the reconstructed M_Z′ computed from all the reclustered jets and the truth-level dark-matter four-vectors. M_T can yield a narrower, more prominent peak and be closer to M_mc, depending on the choice of α_d and r_inv. The top panels of Fig. 2 show sample events for the different signals. The dark-sector particle multiplicity decreases for smaller α_d. As r_inv is increased, the signal degrades because more stable mesons are produced and more information is lost. Note that when α_d is large enough, the radiation will not be fully captured unless the jet radius is made larger, perhaps at the expense of increasing the sensitivity to pile-up.
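The transverse mass can be evaluated directly from the dijet four-momentum and the missing transverse momentum vector; a minimal sketch follows, assuming the standard transverse-mass definition given above.

```python
import numpy as np

def transverse_mass(m_jj, pt_jj_vec, met_vec):
    """Transverse mass of the dijet + MET system, using
    M_T^2 = M_jj^2 + 2 * (E_T(jj) * |MET| - pT(jj) . MET),
    with E_T(jj) = sqrt(M_jj^2 + |pT(jj)|^2)."""
    pt_jj_vec = np.asarray(pt_jj_vec, dtype=float)
    met_vec = np.asarray(met_vec, dtype=float)
    et_jj = np.sqrt(m_jj**2 + pt_jj_vec @ pt_jj_vec)
    met = np.sqrt(met_vec @ met_vec)
    mt2 = m_jj**2 + 2.0 * (et_jj * met - pt_jj_vec @ met_vec)
    return np.sqrt(max(mt2, 0.0))

# MET recoiling against the dijet system pushes M_T above M_jj,
# toward the full resonance mass (all values in GeV).
print(transverse_mass(2000.0, [300.0, 0.0], [-150.0, 0.0]))
```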
To estimate the reach at the LHC, we simulated 60 × 10⁶ QCD events, 5 × 10⁶ W±/Z + jj events, and 5 × 10⁶ tt̄ events. All samples were binned in H_T in order to increase statistics in the high-M_T tails [43], using MadGraph5 [44] at parton level and PYTHIA8 for the shower and hadronization. The dominant backgrounds after event selection are QCD and W±/Z + jj. For the signal, 25,000 events were generated for each choice of M_Z′ in increments of 500 GeV, using the benchmark parameters in Table I. An 8 TeV sample was used to validate the QCD background and the limit-setting procedure [45] against the CMS dijet resonance search [46]. The E_T distribution was also validated [47].
The R = 1.1 jets capture the wider radiation pattern expected from dark-shower dynamics. The cut on the pseudorapidity difference removes t-channel QCD [46,48]. The lepton veto and ∆φ requirements suppress electroweak backgrounds. Finally, the E_T/M_T cut effectively acts as a missing-energy requirement; cutting on the dimensionless ratio avoids sculpting the M_T distribution.
After applying these cuts, a bump hunt was performed using M_T. At small M_T, the dominant background comes from QCD. When M_T ≳ 3 TeV, the background is dominated by W±/Z + jj, where the gauge bosons decay leptonically. Following the dijet resonance searches at CMS [46] and ATLAS [48], the resulting background distribution was parametrized using a fitting function.
Assuming the background exactly follows the fit obtained from simulation, the exclusion reach for the signal benchmark can be computed. Figure 3 shows the results for 100 fb⁻¹ of 14 TeV LHC data as a function of M_Z′ for the benchmark parameters (Table I). We assume a 10% width for the Z′, as computed using the benchmark parameters. The production cross section times branching ratio for a Z′ with the same couplings as the SM Z⁰ is shown as a reference. A Z′ with SM couplings can be probed up to masses of ∼3.6 TeV.
We estimate that the dijet limit on σ × Br(Z′ → q q̄) is comparable to the limit obtained for the dark-sector decay mode. For g_d^Z′ ∼ 1, the branching ratio to the dark sector varies from 80% to 50% along the expected exclusion bound as the Z′ mass increases. Thus, the model would be discovered in the semi-visible jet channel before it would be observed in the irreducible dijet channel; this conclusion only gets stronger with more integrated luminosity.
In the simulations, we assume prompt decays for the dark mesons. For a sufficiently heavy Z′ and small couplings, the dark-meson decays could yield displaced vertices. Requiring that the lab-frame decay length be O(1 mm) and assuming that the dark meson can decay into all four light quarks, a lower bound on the couplings can be obtained (Eq. (5)); in the corresponding expression, B ∼ 10 is the average boost factor computed from the benchmark simulation. Eq. (5) gives the lower purple region in Fig. 3. However, it is important to emphasize that modifications of the search strategy can still be effective in this region.
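As a rough numerical check of the prompt-decay criterion, the lab-frame decay length is approximately B·cτ; the sketch below evaluates it for an assumed proper lifetime, since the lifetime itself depends on the couplings and masses through Eq. (5).

```python
# Rough check of the prompt-decay criterion: lab decay length ~ B * c * tau.
C_MM_PER_S = 2.998e11  # speed of light in mm/s

def lab_decay_length_mm(tau_s, boost=10.0):
    """Approximate lab-frame decay length for average boost factor B."""
    return boost * C_MM_PER_S * tau_s

# e.g. an assumed proper lifetime of 3e-13 s gives ~0.9 mm,
# right at the O(1 mm) displaced-vertex boundary.
print(lab_decay_length_mm(3e-13))
```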
This Letter proposed a new search strategy for the discovery of hidden-sector physics in resonance searches. In particular, the focus was on dark-sector showers that result in novel semi-visible jets: objects that are composed of SM hadrons and dark matter. We argued that this generic signature could arise from a large class of strongly coupled dark-matter models. Furthermore, we gave a simplified parametrization that allows for a systematic treatment of the signature space. Finally, we provided expected exclusion limits using a bump hunt in transverse mass. A Z′ with SM-size couplings to quarks could be probed up to ∼3.6 TeV. There are two main extensions that can be explored that may require new strategies beyond the one discussed here. First, one can lift the restriction that the only SM states produced in the shower are quarks, and allow for leptons, photons, and/or heavy-flavor particles. Second, one can consider other production modes. In this case, the semi-visible jets may not be aligned with the E_T, and additional variables using jet substructure, along the lines of [49], displaced vertices, and/or the presence of low-mass resonances may be necessary.
With the LHC Run II on the horizon, it is important to rethink the program of dark-matter searches to guarantee that a wide range of new-physics scenarios are covered. Non-trivial dynamics in the dark-matter sector is one of the many fantastic and unexpected ways that new physics can emerge. This Letter provides a simple approach in preparation for this possibility. | 2015-07-14T20:23:59.000Z | 2015-02-27T00:00:00.000 | {
"year": 2015,
"sha1": "bb82dfc8e4eca08e3565621fe93d318bb3770946",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.115.171804",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "bb82dfc8e4eca08e3565621fe93d318bb3770946",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
23827318 | pes2o/s2orc | v3-fos-license | Factors affecting the likelihood of monkeypox's emergence and spread in the post-smallpox era
Highlights ► We evaluate whether monkeypox can fill the niche left vacant by smallpox eradication. ► We discuss ecologic and epidemiologic limits that could impede monkeypox's emergence. ► We assess genetic constraints that may hamper monkeypox from becoming a human-adapted virus.
Factors affecting the likelihood of monkeypox's emergence and spread in the post-smallpox era §
Mary G Reynolds, Darin S Carroll and Kevin L Karem
In 1980, the World Health Assembly announced that smallpox had been successfully eradicated as a disease of humans. The disease clinically and immunologically most similar to smallpox is monkeypox, a zoonosis endemic to moist forested regions in West and Central Africa. Smallpox vaccine provided protection against both infections. Monkeypox virus is a less efficient human pathogen than the agent of smallpox, but absent smallpox and the population-wide immunity engendered during eradication efforts, could monkeypox now gain a foothold in human communities? We discuss possible ecologic and epidemiologic limitations that could impede monkeypox's emergence as a significant pathogen of humans, and evaluate whether genetic constraints are sufficient to diminish monkeypox virus' capacity for enhanced specificity as a parasite of humans.
Background
The history of vaccination begins with the use of an animal virus to immunize humans against smallpox [1]. It ends with this same practice. By the close of 1979, the concerted application of vaccinia virus-based vaccine in at-risk populations had effectively interrupted the spread of smallpox, resulting in the eradication of naturally occurring disease throughout the world. This was possible because of antigenic similarities between vaccinia and variola (the agent of smallpox) viruses, and the fact that Variola is human-specific, leaving no potential for zoonotic reservoirs.
Vaccinia and variola are Orthopoxviruses. Orthopoxviruses encompass an array of pathogens that elicit serologic cross-reactivity, among which only one, variola, is an exclusive parasite of humans. Several zoonotic Orthopoxviruses (including vaccinia virus, cowpox virus, and monkeypox virus) can infect humans opportunistically (in the event of an encounter between a virus-infected animal and a susceptible individual), but none manifest variola's capacity for relatively efficient inter-human spread, with the possible exception of monkeypox.
Initial observations of a 'smallpox-like' illness caused by infection with monkeypox virus rather than variola were made in 1970, during the final stages of smallpox eradication [2]. The discovery occurred during a time of intensified effort to verify that smallpox had been eliminated from regions of West and Central Africa that had been deemed 'smallpox-free'. The two diseases, smallpox and monkeypox, share a distinctive clinical presentation and almost certainly existed historically in sympatry across what is recognized now as the endemic range for monkeypox (Figure 1). But, in the absence of laboratory testing to specify the etiologic agent responsible for the condition, it is likely that most Orthopoxvirus-associated 'smallpox-like' illnesses were assumed to be smallpox; smallpox being broadly distributed and extremely well known.
It is now more than 30 years since the WHO recommended cessation of routine smallpox vaccination. Very few individuals born since eradication have received smallpox vaccination, and among those over 30 years of age who did receive vaccination, immunity is waning. This increasing deficit of human immunity raises the specter of whether, under these conditions, monkeypox might emerge as a more significant human pathogen, perhaps even 'replacing' smallpox. Indeed, recent reports of increasing monkeypox incidence in the Democratic Republic of the Congo (DRC) [3], as well as sporadic occurrences in neighboring countries, imply that this may be a possibility [4][5][6]. But before concluding that smallpox eradication and the cessation of vaccination have opened an ecologic, or immunologic, niche for monkeypox to exploit, it seems reasonable to address the following questions. First, during the era before smallpox eradication, was the level of immunity in human populations, engendered either by smallpox vaccination or by the circulation of smallpox, responsible in some way for suppressing the emergence and spread of monkeypox? Or, are there in fact particular characteristics intrinsic to monkeypox (ecologic requirements, genetic determinants, among others) that have served to establish fundamental limits on the virus' capacity to emerge and spread beyond its current geographic confines? And if so, are there mechanisms or opportunities that could allow monkeypox to overcome these limitations? (§ Disclaimer: The findings and conclusions in this report are those of the author(s) and do not necessarily represent the views of the funding agency.)
Immunologic niche
Smallpox (variola major) is associated with higher fatality rates than monkeypox, but the clinical presentation of monkeypox is difficult to distinguish from discrete, ordinary smallpox (Figure 2), and smallpox vaccine is protective against both. A debate continues as to the duration of immunity provided by smallpox vaccine in the absence of periodic boosting [7][8][9], but it is inarguable that lifelong protection from re-infection was a lasting indemnity for having survived smallpox [10]. Presumably, a smallpox survivor would also possess life-long protection against infection with monkeypox virus (and vice versa), thus an individual infected with one virus would be permanently removed from the pool of susceptible hosts for the other.
In contrast, smallpox vaccination provides only limited-term protection from infection with either monkeypox or variola. Could smallpox circulation in monkeypox-endemic regions of Africa then have been sufficient to impede the spread of human monkeypox? What level of vaccination would have been required to achieve the same effect?
During a 40-year period from 1919 to 1958, an estimated 122 600 cases of smallpox were reported by the Colonial authorities in DRC (Belgian Congo), on average ≈3065 cases per year [11]. During that same period, roughly 78 million vaccinations and re-vaccinations were administered, the vast majority after 1945. In the 7 years immediately before the inception of the concerted vaccination programs, 14 000 total cases of smallpox were reported (from a population of ≈7.7 million persons [11]). In time, vaccination had a clear impact on smallpox, ultimately leading to eradication, and collaterally on potential human monkeypox infections. However, in the absence of data describing the incidence of human monkeypox infections before and during much of the eradication era, it is difficult to disentangle the respective contributions of smallpox-derived population (herd) immunity and vaccine-induced immunity to the incidence of human monkeypox disease. Despite the absence of quality control of vaccines before the 1960s, the combination of smallpox- and vaccine-derived immunity would have provided protection against monkeypox infection. Since vaccination rates exceeded smallpox case rates in central Africa during this period, it is easy to imagine an inverse relationship between smallpox vaccination and human monkeypox incidence during this period and immediately following the eradication of smallpox.
Between 1967 and 1971, at the height of the smallpox eradication efforts, an estimated 15 236 000 doses of vaccine were provided to 21 countries in West and Central Africa for purposes of vaccination or re-vaccination against smallpox [12]. The sheer numbers of immunizations doubtless had an impact on the incidence, and possibly the geographic distribution of not just smallpox, but of other Orthopoxvirus-associated human infections including monkeypox. No human monkeypox cases have been reported from West Africa since 1981 [13], though evidence points to the fact that monkeypox virus still circulates enzootically [14,15]. And in the years immediately following smallpox eradication (1985-95), reports of monkeypox from the Congo Basin declined measurably [16].
It is difficult to assess what the ecologic impact of variola might have been on monkeypox virus over earlier time periods, when the viruses were circulating independent of the influence of vaccine. As a solely human pathogen, variola's ability to persist in populations is vulnerable to immunologically driven interruptions in human-to-human transmission chains. By contrast, monkeypox is a sylvatic zoonosis, and human infections are incidental and probably of little consequence to the overall persistence of the virus in nature. The current endemic distribution of monkeypox is in all likelihood governed by the distribution of its principal host(s). However, the incidence of human infection is also dependent on cross-protective immunity stemming from the vaccination campaign and previous exposure to variola or other Orthopoxviruses. Monkeypox virus is, however, capable of infecting a broad range of hosts, and spillover into a new permissive host with a more cosmopolitan distribution could, in theory, contribute to the virus' emerging as a threat to humans.
[Figure 1. Map depicting the distribution of smallpox in Africa 1954-1958, by country, before the inception of global eradication efforts ('Smallpox 1958', light green), and during the latter stages of eradication, at the time human monkeypox was discovered ('Smallpox 1971', dark green) [12]. Countries reporting at least one case of human monkeypox through 1990 are depicted with cross-hatching. Image courtesy of Benjamin Monroe, CDC.]
Ecologic context of monkeypox
Broad host range zoonotic agents have been highlighted as being more likely to be emerging or re-emerging human pathogens. Over 50% of zoonotic viruses with 3 or more types of non-human hosts have been classified as emerging agents [17]. Monkeypox virus can infect an array of mammalian taxa including Sciurid, Glirid and Nesomyid rodents (Cynomys sp., Funisciurus sp., Graphiurus sp., Cricetomys sp.), marsupials (Monodelphis domestica, Didelphis marsupialis), and primates (Callithrix jacchus, Homo sapiens) [18,19]. In each of the examples provided, infections occurred without experimental induction by humans, but for most, human intervention was responsible for bringing the species in question into proximity with the virus. Monkeypox virus has only been isolated once from an animal captured in its natural environment: in 1986 monkeypox virus was isolated from the carcass of a Funisciurus squirrel found in Equateur Province of DRC [20]. The host range of naturally occurring (sylvatic) monkeypox remains undefined, but given its capacity to infect many different types of animals, it is likely to exceed the 3-host threshold.
Large mammals, gazelles and primates, have been singled out as potentially important sources of human infection in Central Africa [21], but the consistency of associations between rodent hosts and viruses across the Orthopoxvirus clade suggests that a rodent reservoir (or reservoirs) would be more likely for monkeypox [1,22] (Table 1). The perpetuation of acute viral infections in small populations is often theorized to necessitate either virus persistence or latency in the host (which is not characteristic of Orthopoxvirus infections) or high host turnover [23], which again points toward a rodent reservoir. Rodent fauna, such as squirrels (Funisciurus sp., Heliosciurus sp.) and Cricetomys, that are known to be susceptible to monkeypox virus infection, and that exploit food sources and refuges in areas adjacent to forest margins and human communities in DRC, are perhaps the most likely reservoirs and agents of virus transmission to humans [24,25]. Virus spillover into a more widely distributed sister taxon could raise concerns about the spread of disease beyond Africa.
In artificial settings, the common European squirrel Sciurus vulgaris has proven to be susceptible to infection with monkeypox virus [26], and the North American Sciurid rodent Cynomys ludovicianus has proven to be not only susceptible to infection but also capable of transmitting infection to humans [14,19]. The more common commensal rodents, Rattus spp. and Mus spp., are not considered to be especially susceptible to monkeypox virus infection, although monkeypox virus can be propagated in several inbred strains of mice and in immature animals [27,28].
In the absence of virus spillover and perpetuation in a readily susceptible, broadly distributed animal host, the spread of monkeypox beyond its areas of current endemicity in Africa would be dependent on human-to-human transmission, which prompts the question: is the inter-human transmission of monkeypox sufficiently robust for this to occur?
Inter-human transmission potential
Whether monkeypox virus can exploit humans as a viable maintenance host will inevitably depend on the virus' capacity for sustained inter-human transmission. Epidemiologic modeling studies performed in the 1980s led to the conclusion that it would be highly improbable for monkeypox to become established in human populations owing to the virus' intrinsic lack of transmissibility [29,30]. The stochastic models used in these studies incorporated numerical estimates for contact and transmission rate variables that were derived from directly observed data [31]. Observations collected from 1980 to 1984 in DRC showed that people living in communities at risk for monkeypox had on average 10.7 close contacts (with 50% being high-risk household contacts), that secondary attack rates were approximately 6.7 times higher for unvaccinated contacts than vaccinated contacts, and that approximately 70% of the population had been vaccinated. Assuming these conditions, only 2% of model simulations resulted in a third-generation virus transmission event, and no iterations resulted in perpetuation beyond the 6th generation of spread. And even assuming 'worst case scenario' conditions, whereby vaccine-derived immunity in the starting population was 0%, the resultant number of cases per simulation increased by approximately a factor of 4, but still no simulations resulted in indefinite, sustained virus transmission [30]; the R 0 never reached 1.
[Table 1. Examples of (non-primate) mammalian species that are susceptible to infection with monkeypox virus, and their suitability as vectors of infection to humans.]
The basic reproductive rate of an infection, R 0, describes the inherent transmissibility of an infection within a population which has no prior immunity [32]; effectively, however, the value is subject to influence by population demographics, contact patterns, and heterogeneities of susceptibility among individuals. Employing a straightforward calculation of the number of new cases generated by a single monkeypox infection [29], the R 0 of the modeled scenario above could pass the threshold of 1 by simply augmenting the total number of close contacts from 10.7 to 13.7. Alternatively, increasing the proportion of contacts that are high-risk household contacts from 50% to 80% achieves the same outcome. Thus, within this framework (which assumes an absence of vaccine-derived immunity), fairly minor shifts in the epidemiologic context of monkeypox could tip the balance in favor of sustained spread even in the absence of other ecologic or evolutionary modifications.
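The arithmetic in this paragraph can be reproduced directly. The sketch below back-solves the implied per-contact transmission probability from the stated threshold of 13.7 contacts (an inference from the text, not published data), and runs a toy branching process matching the quoted contact and vaccination figures; the household/non-household split is ignored for simplicity:

```python
import numpy as np

# Figures quoted in the text (DRC observations, 1980-1984):
N_CONTACTS = 10.7      # average close contacts per case
SAR_RATIO = 6.7        # unvaccinated/vaccinated secondary attack rate
VAX_COVERAGE = 0.70    # fraction of the population vaccinated

# The stated threshold (R0 = 1 at 13.7 contacts, no vaccination) implies an
# effective per-contact transmission probability for unvaccinated contacts:
p_unvax = 1.0 / 13.7   # ~0.073; an inference from the text, not data
p_vax = p_unvax / SAR_RATIO

def r0(n_contacts, vax_coverage):
    """Mean secondary cases per case under homogeneous mixing."""
    p = vax_coverage * p_vax + (1.0 - vax_coverage) * p_unvax
    return n_contacts * p

print(round(r0(N_CONTACTS, 0.0), 2))   # ~0.78: subcritical with no immunity
print(round(r0(13.7, 0.0), 2))         # ~1.00: the threshold in the text

def chain_generations(n_contacts, vax_coverage, rng, max_gen=50):
    """Toy branching process: generations survived by one introduction."""
    p = vax_coverage * p_vax + (1.0 - vax_coverage) * p_unvax
    cases, gen = 1, 0
    while cases > 0 and gen < max_gen:
        contacts = rng.poisson(n_contacts * cases)  # exposures this round
        cases = rng.binomial(contacts, p)           # new infections
        gen += 1
    return gen

rng = np.random.default_rng(1)
gens = np.array([chain_generations(N_CONTACTS, VAX_COVERAGE, rng)
                 for _ in range(10_000)])
print((gens >= 3).mean())  # few chains reach a third generation
```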
Obtaining modern estimates of secondary contact rates and knowledge of human contact patterns in monkeypox-endemic areas will be important for assessing the epidemiologic potential of monkeypox for sustained inter-human transmission in contemporary at-risk communities. Regardless, however, of the current reproductive rate of monkeypox in human populations, probabilistic arguments suggest that a zoonotic pathogen with an R 0 near to one (such as monkeypox) retains a greater potential to evolve to a state of higher transmissibility as transmission chains lengthen and as the number of primary introductions increases [33]. Under this scenario, evolutionary advancements could accrue in stepwise fashion through individual character state changes, provided each step were to confer an incremental advantage in transmissibility (fitness) [33]. For example, an initial (hypothetical) virus mutation that enhances seeding and proliferation of virus in the epithelium of the human throat, followed by a second mutation that potentiates irritation and coughing, could provide a theoretical fitness advantage at each step; whereas reversing the steps would likely not.
Zoonotic pathogens of intermediate transmissibility to humans such as monkeypox may be well positioned to derive selective advantage (for heightened transmissibility) from minor gains in host specialization. But, would increasing the inter-human transmission potential of monkeypox necessarily require increased specialization for humans and, if so, would that in turn necessarily lead toward recapitulation of a pathogen with the virulence and characteristics of variola?
Evolutionary constraints
Though monkeypox and (discrete ordinary) smallpox would be difficult to distinguish from one another in a clinical setting, there are subtle clues that point toward one illness as opposed to the other. Lymphadenopathy, for example, is a prominent feature of monkeypox [34,35] yet was nearly absent in smallpox patients. Nodal swelling has been described with smallpox [36,37], but the underlying process for this (localized edema) is distinct from the process of lymphoid hyperplasia (lymphocyte proliferation) observed in non-human primates infected with monkeypox virus [38,39]. Other functional differences affecting immune evasion and manipulation of the host immune system are predicted based on genome-level comparisons between variola and monkeypox viruses.
A core set of 90 conserved genes has been proposed as the 'minimum essential genome' of all Chordopoxviruses (the subfamily that encompasses those poxviruses that parasitize vertebrate animals) [40]. This set accounts for only ≈50% of the haploid gene content of variola virus [41]. A typical Orthopoxvirus such as variola or monkeypox will have, in addition, genes associated with host specificity, immunomodulation and subcellular trafficking (for example), as well as a complement of open reading frames (orfs) with unknown function, regions of non-coding sequence, and long inverted terminal repeats (ITRs). Fluctuations in gene content (gene gain, gene loss) can provide opportunities for Orthopoxvirus adaptation to alternative hosts [42]. In fact, broad-scale evaluation of Orthopoxvirus genomes suggests that it is not uncommon for genes that have been acquired or lost to be those associated with host-specific properties [40,42].
In general, monkeypox virus genomes have, or have retained, considerably more DNA content than variola. A comparison of the Zaire-96 strain of monkeypox [41,43] and the Kuwait-1967 strain of variola captures trends present across a broader sampling of each species: here, the monkeypox genome includes 4 additional genes and is ≈11 000 nucleotides longer than the variola genome; it has ≈10.5× longer ITRs, and extra coding sequences within the ITRs (whereas variola has none) [41]. Variola unquestionably has one of the most significantly size-restricted genomes of all the Orthopoxviruses, yet it is not a trimmed-down version of monkeypox. Variola has (depending on the analysis) up to 9 defined coding sequences that monkeypox viruses do not have, or of which monkeypox viruses have only retained fragments [44]. In contrast, monkeypox has ≈16 defined orfs not present in variola [44,45] (Table 2).
Several of the loci found in variola that are missing or truncated in monkeypox are hypothesized to play a role in immune evasion and virulence. For example, the variola genome harbors a virulence-associated gene (C3L) that expresses an inhibitor of complement enzymes. The ortholog of this gene (D14L) is either missing or expressed as a truncated (but functional) protein in monkeypox viruses [45,46]. The question of how pivotal the protein is to establishing robust Orthopoxvirus infections in humans is still the subject of investigation, but the smallpox protein is presumed to modulate a critical feature of the host innate immune system early during infection [45,46,47,48]. (Experimental attempts to demonstrate the functional importance of this locus to other orthopoxvirus virulence phenotypes, either by adding the locus to deficient genetic backgrounds or by ablating the function from a virulent background, have generated inconsistent results [49,50].) If the gene complement of monkeypox is lacking certain essential coding sequences related to host specialization, monkeypox virus' larger genome size and unique orfs could theoretically provide enough genetic plasticity to overcome the limitation. For instance, deficiencies in certain variola-specific functions could be met through alternative pathways, that is, functional pathways for immune evasion or inhibition that differ from variola's, yet ultimately impact the same target within the host.
A scan of the genome indicates that monkeypox viruses are deficient with respect to full-length orthologs for the two prominent loci in variola that influence interferon resistance (E3L, K3L) [44,45]. Yet, host-expression microarrays generated following infection of primary human monocytes with monkeypox virus unambiguously demonstrated diminution of interferon-associated host gene expression [51]. Thus, although monkeypox virus lacks full-length orthologs for these variola genes associated with interferon resistance, suppression of host interferon-induced gene expression is still achieved. This particular phenomenon, though not fully characterized, provides one example of a virus phenotype, common to both variola and monkeypox, that is manifest through non-equivalent processes. The inhibition of host interleukin-1 beta (IL-1β) may be another.
Variola expresses a full-length IL-1β antagonist protein (C10L ortholog) that binds at its C-terminal end to host IL-1β receptors, effectively preventing or diminishing host cell activation by the cytokine [52]. Only the N-terminal portion of the protein is expressed by monkeypox virus [45], suggesting that the monkeypox protein would not demonstrate IL-1β receptor binding capacity. However, some Central African strains of monkeypox appear to possess the capacity to interfere with host cell activation by IL-1β. These variants of monkeypox virus putatively express a protein (B15R) that binds directly to IL-1β, rather than to its host cell receptor [44,46,52]. If borne out by functional studies, this could constitute an alternative means, not found in variola, of achieving the same host immune-modulatory effect.
Conceivably, further adaptation of monkeypox virus to humans, if it happens at all, could arrive through gene gain, or through nucleotide changes and optimization of these non-equivalent, redundant pathways (convergent evolution).
Conclusion
If the question initially posed was 'What is the intrinsic potential of monkeypox to fill the void left by the eradication of smallpox?', we conclude here with a mixed assessment. The scope of human immunity generated by eradication-era vaccinations unquestionably had an impact on the prevalence and distribution of both monkeypox and smallpox. But only smallpox was eradicable through the human vaccination program. The immunologic picture appears favorable for the resurgence of monkeypox in disease-endemic areas, owing to increasing population-level vulnerability, but several factors inherent to the genetic makeup and ecology of monkeypox virus would seem to diminish the probability that this disease will spread to a significant degree outside the moist tropical forests of West and Central Africa.
[Table 2. Loci with known function in variola that are missing or truncated in monkeypox.]
The 2003 outbreak of monkeypox in the United States, which began with the importation of infected animals from West Africa, provided a stark example of how spillover and propagation in a permissive animal could, at least temporarily, expand the range of monkeypox. Yet the most plausible animal taxa for monkeypox virus propagation and spread (Sciurid rodents, for example) are likely to be inefficient transmitters of infection to humans. Conversely, the taxa more frequently implicated in transmission of zoonotic diseases to humans (Mus and Rattus) are not particularly susceptible to infection with monkeypox virus. It is arguable that for emergence to occur, gains in transmission efficiency and in the capability of monkeypox virus to exploit humans as hosts would be required. The path to achieving these gains (and an R 0 > 1 in human populations) could involve relatively minor changes to the epidemiology of the disease (e.g., increasing the number of high-risk contacts by ≈20%) or evolutionary modifications that enhance infection success and specificity in human hosts. But, in the immediate future, neither path is likely to lead to the recapitulation of a pathogen with the same virulence properties as smallpox.
In the meantime, monkeypox will continue to be a significant public health concern for people living in endemic areas. Waning immunity, inadequate housing and health infrastructure, and the lack of alternatives to bush meat consumption all likely contribute to increasing the concern that monkeypox may re-emerge in Central Africa. This in turn contributes to fears about export of the virus to neighboring countries. Appropriate and effective interventions are urgently needed to prevent ongoing human infections. By focusing on disease prevention efforts in areas already affected by monkeypox, we may ultimately diminish the probability that monkeypox will be a future threat in other environments. | 2018-04-03T00:39:12.206Z | 2012-03-05T00:00:00.000 | {
"year": 2012,
"sha1": "31ff6f22ee33f12e2b5cbb0ab69faf998ca26b72",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.coviro.2012.02.004",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5dd20c16d785b176f27cfc6d31eb0f5ad8f2400",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
146988547 | pes2o/s2orc | v3-fos-license | FORMATION OF STUDENT PERSONALITY’S PHYSICAL CULTURE AS SUBJECT OF PROFESSIONAL FUNCTIONING
Purpose: to generalize the experience of professional training of future specialists at higher educational establishments, oriented toward the formation of students' personal physical culture. Material: we questioned students (n=50) and institute teachers (n=30). Results: it was found that, to increase the effectiveness of future specialists' professional training, it is important to orient the educational process toward the formation of the student's personal physical culture. It was also noticed that the professional fitness of future specialists is strongly influenced by the implementation of modern technologies for forming students' physical culture in the educational process. Aesthetically oriented means of physical education are of great health-related and recreational significance. Conclusions: the educational process should be oriented toward supporting active motor functioning and motivation for practicing physical exercise and a healthy lifestyle.
Introduction
Ukraine's entry into the European educational and scientific space is accompanied by the reform of higher education and rising requirements for the professionalism of graduates of higher educational establishments under conditions of market competition [9, p. 1]. It should be stressed that at the present stage of society's development, the educational system is being modernized toward personality-oriented teaching. In this context, the organization of future specialists' professional training acquires the character of dialogue, cooperation, and joint creativity. Alongside the health problems of the young generation, more attention is being paid to the professional training of specialists at higher educational establishments, which opens wide opportunities for students to prepare for successful self-realization.
Despite a significant body of research, the problem of forming the student's personal physical culture remains urgent. The problem of increasing the effectiveness of future specialists' professional training remains unsolved.
Purpose, tasks of the work, material and methods. The purpose of the research is to analyze the scientific-methodological literature on the topic and to study advanced experience in the professional training of future specialists oriented toward the formation of the student's personal physical culture. Material and methods: the research methods are the analysis and generalization of scientific-methodological literature and a questionnaire survey of students of Luhansk Taras Shevchenko National University. Twenty students of the institute of economics and business and thirty first-year students of the faculty of physical education and sports participated in the survey.
Results of the research. Continuous education ensures the constant progress, perfection and creative renewal of a specialist throughout his or her life. The objective content of education is determined by the social order, as well as by the tasks that society sets for education. The subjective content is expressed in individual-personal meaning and is based on the principles of the active, systemic, individual and differentiated approaches [2].
In the opinion of many authors [1][2][3][4][5][6][7][8][9], one of the after-effects of scientific-technical progress is the growth of mankind's volume of knowledge. Students also belong to the category of mental workers. In recent years the flow of scientific information has increased significantly, requiring its processing within short time frames. Besides, the application of different means of education is expanding. All of this intensifies the educational process at higher educational establishments and raises the requirements for the professional training of future specialists in the sphere of physical culture and management.
The sense of "personality's physical culture" concept in context of future specialist's professional training is regarded by us in the following way: Combination of demands, motives, knowledge, oriented on formation of sound, successful personality and physical perfectness; Formation of professionally important qualities; Training of motor skills; Ability to realize learning, scientific, motor, health related-physical culture and sport functioning for ensuring healthy life style; Teaching of mobilization, relaxation, body perfection techniques. In such context there is a demand in radical changes of HEE physical education system on the base of understanding the sense, purpose, tasks and content of pedagogic process, functioning of physical culture instructors. It is caused by understanding the fact that physical education shall not be reduced to compensation of motor functioning deficit. Motor functioning deficit results in lack of individuality of educational process, averaging of requirements to students' physical fitness. The ideas of personality's progress, student with his (her) individual features as the highest value shall be in the base of educational system. With it physical education system shall create maximally favorable conditions for students' complex development (spiritual, aesthetic, motor) for their conscious practicing healthy life style.
In modern conditions, regular physical culture practice is undoubtedly an effective means of strengthening health, preventing disease and increasing the organism's resistance. It is also a means of favorably influencing the formation of the young generation's active lifestyle, developing interest in social information and expanding informational contacts.
Analysis of the economics and business students' attitudes toward physical culture classes showed that physical exercise as a means of increasing working capacity is used by only a small proportion of students (27.4%). Only 23.8% of students attend optional physical education classes. For 32.3% of students, physical culture is not a component of the personality's general development. The remaining students had not thought about it.
When questioning physical culture instructors, we found that the professional training of future specialists in marketing and management is influenced to a large extent by the implementation of various innovative technologies for forming students' physical culture in the educational process. In this case, aerobics, fitness, bodybuilding and the like serve as means of physical and aesthetic education.
Let us compare the opinions of students and teachers on which means of physical education students prefer (see Table 1).
[Table 1. Means of physical education preferred by students and teachers, including volleyball, handball, sport dances, and hiking and orienteering.]
As students' answers show, the most popular and interesting kinds of physical exercise were the aesthetically oriented ones: aerobics, fitness, bodybuilding and oriental martial arts. Thus, the present time requires implementing in the educational process means of physical education that are interesting to students, as well as means that facilitate the formation of the student's personal physical culture, oriented toward interconnected physical and aesthetic training.
Observations of young specialists' social adaptation in working collectives showed that a higher level of physical culture and sports qualification among graduates (provided they know foreign languages) facilitates better and more effective use of their potential in production activities.
In the institute of physical education and sports, rather few class hours are allocated to foreign languages in the specialties "physical education", "human health", "physical rehabilitation", "Olympic and professional sports" and "fitness and recreation". Independent study of foreign languages does not attract students.
In studying this problem, we note that employers pay special attention to a high level of culture and knowledge of foreign languages (especially in the management and marketing sphere). Management is a system of rational administration of production activities directed at achieving planned results; it is a field of human knowledge that helps realize effective administration [5]. We regard innovative management as the management of innovative processes in a physical culture and sports organization.
In the opinion of many authors [3,5,6,7], an innovation is the final result of innovative activity. It is realized in the form of a new (improved) product or technological process, or as a new approach to rendering social-cultural services. It should be noted that the spectrum of innovations in the functioning of physical culture and sports organizations is rather varied. It can be classified according to different principles: technological parameters, type and degree of novelty, spheres of functioning, etc.
Discussion
The physical culture of students at higher educational establishments was researched through the formation of future teachers' motivation for a healthy lifestyle and the creation of the required educational environment at the higher educational establishment. Such an environment is directed at increasing students' interest in their own health and at developing the student as a personality, an individuality, and an active subject of professional functioning.
The results of our research confirmed the data of other authors [2,3,4,9] that high indicators in the system of preparing harmoniously developed specialists cannot be achieved without a scientific approach to the organization of physical education at higher educational establishments. Humanistic, ethical and pedagogic ideas should form the basis of such an approach. Accordingly, humanistically oriented education should not restrict the personality's independence. Such education should rest on a person's internal, natural striving for self-perfection and give him or her the opportunity to choose and independently solve problems connected with physical self-perfection.
To achieve the above, it is necessary to change the orientation of students' physical education, which is currently focused on physical training and physical fitness. The orientation of students' physical education should imply the formation of a system of special knowledge that would permit students to consciously organize their life activity and attach them to the values of health-related physical culture and recreation.
Conclusions
Analysis of theoretical research and practical experience showed that the formation of the student's personal physical culture, in order to prepare students for creative interaction and successful self-realization, envisages the following:
1) Its implementation in academic, scientific, health-related physical culture and sports activity through an independently chosen system of knowledge: the formation of moral, humanistic relations, the development of pedagogic tact, the mastering of administrative functions in the sphere of physical culture and management; active participation in students' scientific-practical conferences, Olympiads, competitions, master classes and forums, which create conditions for students' complex progress.
2) The transition of teacher and student to a technology of pedagogic cooperation for strengthening health and increasing working capacity. The system of curricular, extracurricular and independent classes should be oriented toward: the individualization and integral character of learning; proper mastery of foreign languages; and the implementation of modern informational and innovative technologies and means of health-related physical culture and recreation.
3) The student's personal physical culture is reflected in his or her attitude toward physical culture values. Here the main place is taken by active motor functioning and motivation for practicing physical exercise and a healthy lifestyle.
It was determined that the main methodological tools for forming the personality's physical culture are aesthetically oriented means of physical education. They are mechanisms of influence on a person's inner essence, spirituality, emotionality and expressiveness. Such an approach has great health-related and recreational significance.
Conflict of interests
The author declares that there is no conflict of interests. | 2019-05-08T13:29:40.499Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "b3e209c7d951a10a91d14f6a9eed0e37526f4101",
"oa_license": "CCBY",
"oa_url": "http://www.sportedu.org.ua/html/journal/2015-N6/pdf-en/15oovopf.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e2c56741b28d2bbd5a0fd2b6aa4d6d72d35e1a6b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
118975843 | pes2o/s2orc | v3-fos-license | SHARP-INTERFACE APPROACH FOR SIMULATING SOLID-STATE DEWETTING IN THREE DIMENSIONS
The problem of simulating solid-state dewetting of thin films in three dimensions (3D) by using a sharp-interface approach is considered in this paper. Based on the thermodynamic variation, a speed method is used for calculating the first variation of the total surface energy functional. The speed method offers more advantages than the traditional use of parameterized curves (or surfaces); e.g., it is more intrinsic, and its variational structure (related to the Cahn-Hoffman ξ-vector) is clearer and more direct. By making use of the first variation, necessary conditions for the equilibrium shape of the solid-state dewetting problem are given, and a kinetic sharp-interface model which includes the surface energy anisotropy is also proposed. This sharp-interface model describes the interface evolution in 3D, which occurs through surface diffusion and contact line migration. By solving the proposed model, we perform extensive numerical simulations to investigate the evolution of patterned films, e.g., the evolution of a short cuboid and the pinch-off of a long cuboid. Numerical simulations in 3D demonstrate the accuracy and efficacy of the sharp-interface approach in capturing many of the complexities observed in solid-state dewetting experiments.
Modeling solid-state dewetting has been an active research area and has become increasingly important in recent decades. In general, surface diffusion and contact line migration have been recognized as the two main kinetic features of the evolution during solid-state dewetting [4,26]. In 1986, Srolovitz and Safran [51] proposed a simplified sharp-interface model to study hole growth during dewetting under three assumptions, i.e., isotropic surface energy, small slope profile and cylindrical symmetry. Based on the above model, Wong et al. designed a "marker particle" numerical scheme to investigate the two-dimensional retraction of a discontinuous film (a film with a step) and the evolution of a perturbed cylindrical wire on a substrate [58,14]. These earlier studies focused on isotropic surface energy, although recent experiments have demonstrated that crystalline anisotropy can play an important role in solid-state dewetting. To include the surface energy anisotropy, many approaches have been proposed in recent years, such as a discrete model [13], a kinetic Monte Carlo model [42,15], a crystalline model [9,65] and continuum models based on partial differential equations [5,25,26,55]. From a mathematical perspective, theoretical solid-state dewetting studies can be categorized into two major problems: one focuses on the equilibrium of solid particles on substrates [4,33]; the other focuses on investigating the kinetic evolution of solid-state dewetting [25,26,55]. In this paper, we will develop a sharp-interface approach for studying these problems of solid-state dewetting in 3D.
Under isothermal conditions, the equilibrium shape for a free-standing solid particle can be formulated by minimizing the interfacial energy subject to the constraint of a constant volume:

(1.1)    min_Ω W := ∫_S γ(n) dS,   subject to |Ω| = const,

where Ω ⊂ R³ is the domain enclosed by a closed surface S, and γ(n) is the surface energy (density) with n = (n_1, n_2, n_3)^T representing the crystallographic orientation. Based on the γ-plot, the equilibrium shape can be geometrically constructed via the well-known Wulff (Gibbs-Wulff) construction [59]. The resulting Wulff shape is the inner convex region bounded by all planes that are perpendicular to the orientation n and at a distance of γ(n) from the origin. The Winterbottom construction [57,5] was subsequently proposed to handle the case of particles on substrates by truncating the Wulff shape with a flat plane; where the Wulff shape is truncated depends on the wettability of the substrate. Meanwhile, many theories [7,8] demonstrated that the derivative of γ(n) plays an important role in investigating equilibrium and kinetic problems for solid particles with anisotropic surface energies. In 1972, Cahn and Hoffman developed the theory of the ξ-vector [22,7] to describe the surface energy anisotropy of solid materials. It is defined based on a homogeneous extension of γ(n):

(1.2)    ξ(n) := ∇γ̂(p)|_{p=n},   with γ̂(p) := |p| γ(p/|p|),

where |p| := √(p_1² + p_2² + p_3²) for p = (p_1, p_2, p_3)^T ∈ R³\{0}. Under this extension, γ̂(p) satisfies

(1.3)    γ̂(λp) = |λ| γ̂(p),   ∇γ̂(p) · p = γ̂(p),   ∀λ ≠ 0, p ∈ R³\{0}.
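The Euler relation in (1.3) gives a convenient numerical check of the ξ-vector: its projection onto n must reproduce γ(n). Below is a minimal sketch, assuming the cubic anisotropy considered later in Fig. 1.1(b) and using finite differences in place of an analytic gradient; the step size h and anisotropy strength a are illustrative choices only:

```python
import numpy as np

def gamma(n, a=0.25):
    """Cubic surface energy: gamma(n) = 1 + a*(n1^4 + n2^4 + n3^4)."""
    return 1.0 + a * np.sum(n**4)

def gamma_hat(p, a=0.25):
    """Homogeneous degree-one extension: gamma_hat(p) = |p| gamma(p/|p|)."""
    r = np.linalg.norm(p)
    return r * gamma(p / r, a)

def xi(n, a=0.25, h=1e-6):
    """Cahn-Hoffman vector xi = grad gamma_hat at p = n, here approximated
    with central finite differences for illustration."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (gamma_hat(n + e, a) - gamma_hat(n - e, a)) / (2.0 * h)
    return g

n = np.array([1.0, 2.0, 2.0]) / 3.0        # a unit normal vector
print(np.dot(xi(n), n), gamma(n))           # equal, as Eq. (1.3) implies
```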
Compared to the traditional use of the scalar function γ (or γ-plot), the ξ-vector formulation has some advantages in the description of equilibrium shapes and thermodynamic evolution for crystalline interfaces [26,56]. From (1.3), we have ξ · n = γ(n), so the magnitude of the normal component of ξ equals γ(n). Meanwhile, the ξ-plot shares a similar geometry with the Wulff shape, and it can be regarded as a mathematical representation of the equilibrium shape [7,41,47] when the 1/γ-plot is convex (i.e., weakly anisotropic). Fig. 1.1 depicts the γ-plot, 1/γ-plot and ξ-plot for four different types of surface energy anisotropies: (a) isotropic surface energy, i.e., γ(n) ≡ 1; (b) cubic surface energy γ(n) = 1 + a(n_1^4 + n_2^4 + n_3^4) with a representing the degree of anisotropy; (c) ellipsoidal surface energy γ(n) = √(a_1² n_1² + a_2² n_2² + a_3² n_3²); (d) "cusped" surface energy defined as γ(n) = |n_1| + |n_2| + |n_3|. In materials science applications, the surface energy could be piecewise smooth and have some "cusped" points, where it is not differentiable [5,19,41]. A typical example is the "cusped" surface energy defined above. For these cases, we can regularize the surface energy with a small parameter 0 < ε ≪ 1 to ensure the applicability of the sharp-interface approach proposed in this paper, e.g.,

    γ_ε(n) = Σ_{i=1}^3 √(n_i² + ε²).

The Cahn-Hoffman ξ-vector has been recently utilized to describe the solid-state dewetting problem in two dimensions (2D) [26]. Based on the thermodynamic variation, the authors derived a sharp-interface approach via the ξ-vector formulation for describing the kinetic evolution of solid-state dewetting in 2D. In this approach, the moving interface is described as a parametrization over a time-independent domain, and the variation is performed by considering an infinitesimal perturbation with respect to an open interface curve coupled with contact points [26]. However, when we want to generalize this approach to 3D, we realize that calculating the thermodynamic variation for the 3D problem via parameterized surfaces would be very different. First, the calculations of the variation in 3D via the surface parametrization approach would become complicated and extremely tedious, and they unavoidably involve a lot of knowledge about differential geometry. Second, for the solid-state dewetting problem, the infinitesimal perturbation of a surface in the tangential direction plays an important role in investigating the contact line migration along the substrate [25,4], and it would make the calculations even more complicated. Third, complicated calculations often make people lose sight of the nature of the problem, and we need to investigate and make use of the variational structure of the problem. These difficulties motivate us to look for a new approach to calculating the thermodynamic variation of solid-state dewetting in 3D. In the literature, the shape optimization problem is popular in the design of industrial structures. The speed method and shape derivatives have been widely utilized to perform the shape sensitivity analysis of shape optimization problems [49,21,12]. This approach avoids the parametrization of a surface and is able to deal with perturbations along arbitrary directions, and it is the desired tool we are searching for.
Therefore, based on the ξ-vector formulation and the speed method, the objectives of this paper are as follows: (i) to calculate the thermodynamic variation of the energy functional for solid-state dewetting in 3D; (ii) to provide a rigorous derivation of the thermodynamic description of the equilibrium shape for solid-state dewetting in 3D; (iii) to develop a sharp-interface model which includes surface diffusion and contact line migration for simulating kinetic evolution of solid-state dewetting in 3D; and (iv) to present numerical simulations to investigate important characteristics of the morphological evolution for solid-state dewetting observed in experiments.
The rest of the paper is organized as follows. In Section 2, we briefly introduce the speed method and shape derivatives, and then apply them to calculate the first variation of the total free energy functional. In Section 3, we rigorously derive the necessary conditions for the equilibrium shape and explicitly give an expression for the equilibrium shape by using a parametric formula. In Section 4, based on the thermodynamic variation, a sharp-interface model is proposed for simulating solid-state dewetting of thin films in 3D. Subsequently, we perform numerical simulations to demonstrate the accuracy and efficacy of our proposed model in Section 5. Finally, we draw some conclusions in Section 6.
2. Thermodynamic variation. The solid-state dewetting problem can be illustrated as in Fig. 2.1, where a solid thin film (in blue) can dewet or agglomerate on a flat rigid substrate (in gray) due to capillarity effects. The total interfacial free energy of the system can be written as [4,26]

(2.1)    W = ∫_{S_FV} γ_FV dS + ∫_{S_FS} γ_FS dS + ∫_{S_VS} γ_VS dS,

where S_FV := S, S_FS and S_VS represent the film/vapor, film/substrate and vapor/substrate interfaces, respectively, and γ_FV, γ_FS and γ_VS represent the corresponding surface energy densities. In solid-state dewetting problems, we often assume that γ_FS, γ_VS are two constants, and γ_FV is a function of the orientation of the film/vapor interface, i.e., γ_FV := γ(n) with n representing the unit normal vector of the film/vapor interface, which points outwards to the vapor phase. The film/vapor interface is here described by an open two-dimensional surface S with boundary Γ (i.e., the contact line), which is a closed plane curve on the flat substrate S_sub.
Assume that we consider a bounded domain with size L_x × L_y on the substrate (shown in Fig. 2.1), and we label the surface area enclosed by the contact line Γ as A(Γ); then the total interfacial free energy of the system can be calculated as

(2.2)    W = ∫_S γ(n) dS + γ_FS A(Γ) + γ_VS (L_x L_y − A(Γ)).

By dropping off the constant term L_x L_y γ_VS, we can simplify the total interfacial free energy (still labeled as W) into the following two parts, i.e., the film/vapor interface energy term W_int and the substrate energy term W_sub,

(2.3)    W = W_int + W_sub := ∫_S γ(n) dS + (γ_FS − γ_VS) A(Γ).

As shown in Fig. 2.1, we introduce three unit vectors n_Γ, τ_Γ and c_Γ, which are defined along the boundary Γ. More precisely, n_Γ is the outer unit normal vector of the plane curve Γ on the substrate S_sub; τ_Γ is the unit tangent vector of Γ on the substrate plane S_sub, which points anticlockwise when looking from top to bottom; c_Γ is called the co-normal vector, which is normal to Γ and tangent to the surface S, and points downwards. For any point x ∈ S (with x = (x_1, x_2, x_3)^T or (x, y, z)^T), if we label T_x S and N_x S as the tangent and normal vector spaces to S at x, respectively, then the following properties are valid:

    T_x S ⊕ N_x S = R³,   n(x) ∈ N_x S,   ∀x ∈ S;   τ_Γ, c_Γ ∈ T_x S,   c_Γ · τ_Γ = 0,   ∀x ∈ Γ.
Definition 2.1. Suppose that S ⊂ R³ is a two-dimensional smooth manifold, and a function f is defined on S such that f ∈ C²(S). Let n = (n_1, n_2, n_3)^T be the unit outer normal vector of S, and let f̃ be an extension of f in a neighbourhood of S such that f̃ is differentiable; then the surface gradient of f on S is defined as

    ∇_S f := ∇f̃ − (∇f̃ · n) n,

with ∇ denoting the usual gradient in R³. It is easy to show that ∇_S f is independent of the extension of f and depends only on the value of f on S. If we denote ∇_S as a vector operator, i.e., ∇_S =: (D_1, D_2, D_3)^T, then we can easily obtain

    D_i x_j = δ_ij − n_i n_j,   1 ≤ i, j ≤ 3,

where x = (x_1, x_2, x_3)^T is the position vector of the surface and δ_ij is the Kronecker delta. The surface divergence of a vector-valued function g = (g_1, g_2, g_3)^T ∈ [C¹(S)]³ is defined as

    ∇_S · g := Σ_{i=1}^3 D_i g_i.

Moreover, the Laplace-Beltrami operator on S can be expressed as

    Δ_S f := ∇_S · (∇_S f).

In the definition of the surface gradient, since the normal component has been subtracted from ∇f, ∇_S f can be viewed as the tangential component of ∇f, and thus we have ∇_S f · n = 0 and ∇_S f(x) ∈ T_x S, ∀x ∈ S. Note that it can be rigorously proved that Definition 2.1 is consistent with the conventional definition in differential geometry [16], and it generalizes the definition domain of surface divergence from vectors in the tangent vector spaces to any vector in R³.
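Since the surface gradient in Definition 2.1 is just the projection of the usual gradient onto the tangent plane, it can be computed with the operator I − n nᵀ. The following sketch (a hypothetical helper, not from the paper) checks the stated property ∇_S f · n = 0 on the unit sphere, where the outer normal at x is n = x:

```python
import numpy as np

def surface_gradient(grad_f, n):
    """Surface gradient per Definition 2.1: subtract the normal component,
    i.e. grad_S f = (I - n n^T) grad f for a unit normal n."""
    n = n / np.linalg.norm(n)
    return grad_f - np.dot(grad_f, n) * n

# On the unit sphere, the extension f(x) = z has grad f = (0, 0, 1), and
# at a surface point x the outer normal is n = x:
x = np.array([0.6, 0.0, 0.8])
gS = surface_gradient(np.array([0.0, 0.0, 1.0]), x)
print(np.dot(gS, x))   # ~0: the surface gradient is tangential
```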
On the other hand, the integration by parts on an open smooth surface S with smooth boundary Γ reads as (see Theorem 2.10 in [16]; we omit the proof here)

    ∫_S ∇_S f dS = ∫_S f H n dS + ∫_Γ f c_Γ dΓ,

where n and c_Γ are the normal and co-normal vectors (shown in Fig. 2.1), respectively, and H is the mean curvature, which is defined as the surface divergence of the unit normal vector, i.e., H = ∇_S · n. Similarly, by the above equation and Definition 2.1, we can obtain the integration by parts for a vector field F = (f_1, f_2, f_3)^T ∈ R³ on an open smooth surface S with smooth boundary Γ,

    ∫_S ∇_S · F dS = ∫_Γ F · c_Γ dΓ + ∫_S H (F · n) dS.

If F lies in the tangent vector space of S, i.e., F · n = 0, then the second term vanishes.
Furthermore, by using the product rule ∇_S(fg) = g ∇_S f + f ∇_S g, we can obtain

(2.10)    ∫_S f ∇_S g dS = −∫_S g ∇_S f dS + ∫_S f g H n dS + ∫_Γ f g c_Γ dΓ.

In a simple case, if S is a flat surface (i.e., H = 0) with a plane boundary curve Γ, then Eq. (2.10) reduces to

    ∫_S f ∇_S g dS = −∫_S g ∇_S f dS + ∫_Γ f g c_Γ dΓ,

which is the Gauss-Green theorem in multivariable calculus, because ∇_S f collapses to the gradient of f in 2D, and c_Γ collapses to the unit outer normal vector of Γ.
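On a closed surface the boundary term drops out, and the first integration-by-parts identity can then be checked by Monte Carlo sampling. The sketch below does this on the unit sphere, where H = ∇_S · n = 2, using f(x) = z; the sample size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
n = rng.normal(size=(N, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # uniform points on S^2

# Closed unit sphere: Gamma is empty and H = 2, so the identity reduces
# to  int_S grad_S f dS = int_S f H n dS.  Take f(x) = z:
f = n[:, 2]
grad_S_f = np.array([0.0, 0.0, 1.0]) - f[:, None] * n   # e_z - (e_z.n) n
lhs = grad_S_f.mean(axis=0) * 4.0 * np.pi
rhs = (2.0 * f[:, None] * n).mean(axis=0) * 4.0 * np.pi
print(lhs, rhs)   # both approach (0, 0, 8*pi/3)
```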
2.2. The speed method and shape derivative. In this section, the objective is to calculate the first variation of the energy (or shape) functional defined in (2.3). To this end, we first introduce an independent parameter ǫ ∈ [0, ǫ 0 ) to parameterize a family of perturbations of a given domain D ⊂ R 3 , where the parameter ǫ controls the amplitude of the perturbation and ǫ 0 is the maximum perturbation amplitude. Furthermore, we assume that the domain D is of class C k with k ≥ 2.
More precisely, we consider a domain D ⊂ R³ with a piecewise smooth boundary ∂D; then we can construct a family of transformations T_ǫ which are one-to-one and map D̄ onto D̄, i.e.,

    T_ǫ : X ∈ D̄ → x(X, ǫ) ∈ D̄,

where ǫ is the small perturbation parameter. Generally, we assume that T_ǫ and its inverse T_ǫ^{-1} are sufficiently smooth with respect to both ǫ and X. Given any point X ∈ D̄ (with X = (X_1, X_2, X_3)^T) and ǫ ∈ [0, ǫ_0), we can define the point x = T_ǫ(X), which moves along the trajectory. Here, the point X represents the Lagrangian (or material) coordinate, while x is the Eulerian (or actual) coordinate. Therefore, the speed vector field V(x, ǫ) at point x is defined as

    V(x, ǫ) := ∂x(X, ǫ)/∂ǫ,   with x = x(X, ǫ) = T_ǫ(X).

On the other hand, the transformation T_ǫ can be uniquely determined by the speed vector field V via the following ordinary differential equation (ODE):

    dx(X, ǫ)/dǫ = V(x(X, ǫ), ǫ),   x(X, 0) = X.

Therefore, the transformation T_ǫ and the smooth vector field V uniquely determine each other. For a smooth vector field V, e.g., V ∈ C(C^k(D̄, R³); [0, ǫ_0)), the equivalence between the transformation T_ǫ and the speed vector field V has been strictly established by Theorem 2.16 in [49]. In the following, we use T_ǫ(V) to denote the transformation associated with the vector field V. For simplicity, we also denote V_0 = V(X, 0). Let J(G) be a shape functional defined on a shape G ⊂ D̄, where G could be a three-dimensional domain (e.g., Ω) or a two-dimensional manifold (e.g., a surface S).
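The ODE above can be integrated numerically to recover T_ǫ from a given speed field. A minimal sketch, with a rotation field chosen purely for illustration (so the exact flow is known in closed form):

```python
import numpy as np
from scipy.integrate import solve_ivp

def V(eps, x):
    """A smooth speed field: a rigid rotation about the z-axis,
    chosen purely for illustration."""
    return np.array([-x[1], x[0], 0.0])

def T(eps, X):
    """Recover T_eps(X) by integrating dx/deps = V(x, eps), x(0) = X."""
    sol = solve_ivp(V, (0.0, eps), X, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

X = np.array([1.0, 0.0, 0.5])
print(T(0.3, X))                          # numerically integrated flow
print([np.cos(0.3), np.sin(0.3), 0.5])    # exact flow of this field
```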
The first variation of the functional J(G) at G in the direction of a speed vector field V ∈ C(C^k(D̄, D̄); [0, ǫ_0)) is given as the Eulerian derivative:

    δJ(G; V) := lim_{ǫ→0⁺} [J(G_ǫ) − J(G)]/ǫ,   with G_ǫ := T_ǫ(V)(G).

To obtain the first variation, and based on the transformation, we first define the material derivative and shape derivative of a function on a domain Ω or a surface S. For more details about the shape differential calculus, we refer to the book by Sokolowski and Zolesio [49].

Definition 2.2 ([49]). The material derivative ψ̇(Ω; V) of ψ on a domain Ω in the direction of a speed vector field V is defined as

    ψ̇(Ω; V) := lim_{ǫ→0⁺} [ψ(Ω_ǫ) ∘ T_ǫ(V) − ψ(Ω)]/ǫ.

Similarly, the material derivative ϕ̇(S; V) of ϕ on a surface S in the direction V is defined as

    ϕ̇(S; V) := lim_{ǫ→0⁺} [ϕ(S_ǫ) ∘ T_ǫ(V) − ϕ(S)]/ǫ.

Definition 2.3 ([49]). The shape derivative ψ′(Ω; V) of ψ defined on a domain Ω in the direction V is defined as

    ψ′(Ω; V) := ψ̇(Ω; V) − ∇ψ · V_0.

Similarly, the shape derivative ϕ′(S; V) of ϕ defined on a surface S in the direction V is defined as

    ϕ′(S; V) := ϕ̇(S; V) − ∇_S ϕ · V_0.

Proposition 2.1. Let Ω ⊂ R³ be a smooth bounded domain in D̄ with smooth boundary ∂Ω, and V be a speed vector field such that V ∈ C(C^k(D̄, D̄); [0, ǫ_0)). Suppose that ψ = ψ(Ω) is given such that the material derivative ψ̇(Ω; V) and the shape derivative ψ′(Ω; V) exist. Then, the shape functional J(Ω) = ∫_Ω ψ(Ω) dΩ is shape differentiable and we have

    δJ(Ω; V) = ∫_Ω ψ′(Ω; V) dΩ + ∫_{∂Ω} ψ (V_0 · n) dS.

Proof. By using the change of variables x = T_ǫ(V)(X) for J(Ω_ǫ), we have

    J(Ω_ǫ) = ∫_{Ω_ǫ} ψ(Ω_ǫ) dΩ_ǫ = ∫_Ω ψ(Ω_ǫ) ∘ T_ǫ(V) det(DT_ǫ) dΩ.

By noting the fact that d/dǫ det(DT_ǫ)|_{ǫ=0} = ∇ · V_0, we obtain

(2.24)    δJ(Ω; V) = ∫_Ω ψ̇(Ω; V) dΩ + ∫_Ω ψ (∇ · V_0) dΩ.

By using the definition of the shape derivative ψ′(Ω; V) of ψ(Ω) on a domain Ω, i.e., ψ′(Ω; V) = ψ̇(Ω; V) − ∇ψ · V_0, and integration by parts for the second term of Eq. (2.24), we immediately obtain

    δJ(Ω; V) = ∫_Ω ψ′(Ω; V) dΩ + ∫_{∂Ω} ψ (V_0 · n) dS,

which completes the proof.
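As a quick sanity check of Proposition 2.1, take ψ ≡ 1 (the volume functional) on the unit ball and the dilation field V(x) = x, so that ψ′ = 0 and V_0 · n = 1 on ∂Ω; the proposition then predicts δJ = |∂Ω| = 4π. A small numerical sketch (the step size is arbitrary):

```python
import numpy as np

def vol(eps):
    """Volume of T_eps(unit ball) under the dilation field V(x) = x,
    whose flow is T_eps(X) = exp(eps) * X."""
    return 4.0 / 3.0 * np.pi * np.exp(3.0 * eps)

eps = 1e-6
print((vol(eps) - vol(0.0)) / eps)   # ~12.566...
print(4.0 * np.pi)                   # the predicted boundary integral
```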
Remark 2.1. If $V_0 \cdot n = 0$ on the boundary $\partial\Omega$, the first variation of the functional reduces to
$$\delta J(\Omega; V) = \int_\Omega \psi'(\Omega; V)\, d\Omega.$$

Remark 2.2. The material derivative can be regarded as the derivative with respect to the geometry in a moving coordinate system. Therefore, it is very natural to subtract the term $\nabla\psi \cdot V_0$ from the material derivative $\dot\psi$ to define the shape derivative $\psi'$ with respect to the geometry in stationary coordinates. If $\psi(\Omega)$ is independent of the geometric object $\Omega$, then we have $\psi'(\Omega; V) = 0$.
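For instance, if $\psi(x) = f(x)$ is a fixed function that does not depend on $\Omega$, then $\dot\psi = \nabla f \cdot V_0$ and hence $\psi'(\Omega; V) = 0$, so Proposition 2.1 recovers the classical Hadamard formula
$$\frac{d}{d\varepsilon}\Big|_{\varepsilon=0} \int_{\Omega_\varepsilon} f\, dx = \int_{\partial\Omega} f\, (V_0 \cdot n)\, dS,$$
i.e., only the normal component of the boundary velocity contributes to the first variation.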
Furthermore, the definition of the shape derivative for a function $\varphi(S)$ defined over a two-dimensional manifold $S$ ensures that the shape derivative does not depend on the particular extension of $\varphi$ to a neighbourhood of $S$. We propose the following proposition to show that the first variation of a functional on $S$ is closely related to the shape derivative.
Proposition 2.2. Let $S$ be a two-dimensional smooth manifold in $\bar{D}$ with smooth boundary $\Gamma$, and let $V$ be a speed vector field such that $V \in C\big(C^k(\bar{D}, \bar{D}); [0, \varepsilon_0)\big)$. Suppose that $\varphi = \varphi(S)$ is given such that the material derivative $\dot\varphi(S; V)$ and the shape derivative $\varphi'(S; V)$ exist. Then the shape functional $J(S) = \int_S \varphi(S)\, dS$ is shape differentiable and we have
$$\delta J(S; V) = \int_S \varphi'(S; V)\, dS + \int_S \varphi\, H\, (V_0 \cdot n)\, dS + \int_\Gamma \varphi\, (V_0 \cdot c_\Gamma)\, d\Gamma, \tag{2.26}$$
where $H$ is the mean curvature of the surface $S$, and $c_\Gamma$ is the unit co-normal vector.
Furthermore, if $\varphi(S) = \psi(\Omega)\big|_S$, then we have
$$\delta J(S; V) = \int_S \Big[\psi'(\Omega; V)\big|_S + \Big(\frac{\partial \psi}{\partial n} + \psi H\Big)(V_0 \cdot n)\Big]\, dS + \int_\Gamma \psi\, (V_0 \cdot c_\Gamma)\, d\Gamma. \tag{2.27}$$

Proof. According to the change of variables $x = T_\varepsilon(V)(X)$, and by using the transformation $T_\varepsilon(V)$, the shape functional $J(S_\varepsilon)$ over the perturbed surfaces $S_\varepsilon$ can be expressed as follows:
$$J(S_\varepsilon) = \int_S \varphi(S_\varepsilon) \circ T_\varepsilon(V)\, \omega(X, \varepsilon)\, dS,$$
where $\omega(X, \varepsilon)$ is defined as
$$\omega(X, \varepsilon) := \det\big(\partial_X T_\varepsilon(V)\big)\, \big|\partial_X T_\varepsilon(V)^{-T}\, n\big|.$$
Note that the following expressions hold according to Lemma 2.49 on page 80 of [49]:
$$\omega(X, 0) = 1, \qquad \frac{\partial}{\partial\varepsilon}\Big|_{\varepsilon=0}\, \omega(X, \varepsilon) = \nabla_S \cdot V_0.$$
Therefore, based on the definition of the first variation, we have
$$\delta J(S; V) = \int_S \big(\dot\varphi(S; V) + \varphi\, \nabla_S \cdot V_0\big)\, dS.$$
By using integration by parts on the surface and also making use of Eq. (2.21), we obtain
$$\delta J(S; V) = \int_S \big(\varphi' + \nabla_S \cdot (\varphi\, V_0)\big)\, dS = \int_S \varphi'\, dS + \int_S \varphi\, H\, (V_0 \cdot n)\, dS + \int_\Gamma \varphi\, (V_0 \cdot c_\Gamma)\, d\Gamma,$$
which gives (2.26). Furthermore, if we assume that $\psi$ is a function defined on the domain $\Omega$ such that its restriction on $S$ is equal to the function $\varphi(S)$, namely $\varphi(S) = \psi(\Omega)\big|_S$, then $\dot\varphi = \dot\psi\big|_S$, and hence
$$\varphi'(S; V) = \dot\psi - \nabla_S \psi \cdot V_0 = \psi'(\Omega; V)\big|_S + \frac{\partial \psi}{\partial n}\, (V_0 \cdot n),$$
which, inserted into (2.26), yields (2.27) and completes the proof.
Remark 2.3. If $S$ is a closed surface, then the boundary term over $\Gamma$ in (2.27) vanishes. Similar results for a closed curve or surface can be found in [12,21].
In the following, we will apply (2.27) in Proposition 2.2 to calculate the first variation of the energy (or shape) functional defined in (2.3), where the integrand is the surface energy density $\gamma(n)$. To calculate the shape derivatives and obtain the first variation, we shall make use of the signed distance function, which is a powerful tool in shape sensitivity analysis. Consider a closed domain $\Omega \subset \mathbb{R}^3$ with a smooth boundary surface $\partial\Omega$; then the signed distance function is defined as
$$b(x) := \begin{cases} -\,d(x, \partial\Omega), & x \in \Omega, \\ 0, & x \in \partial\Omega, \\ d(x, \partial\Omega), & x \notin \Omega. \end{cases} \tag{2.34}$$
Here, $d(x, \partial\Omega) = \inf_{y \in \partial\Omega} \|x - y\|$. The signed distance function $b(x)$ can be used to determine the unit outer normal vector $n$ and the mean curvature $H$ on the boundary surface $\partial\Omega$. More precisely, we can extend the functions $n$ and $H$, which are defined on $\partial\Omega$, in terms of $b(x)$ in a tubular neighbourhood, such that
$$n(x) = \nabla b(x), \qquad H(x) = \Delta b(x).$$
The shape derivative of the signed distance function in the direction of a vector field $V$ is calculated as $b'(\Omega; V) = -V_0 \cdot n$ (see [12,21] for more details). Moreover, based on this extension, the shape derivatives of the two extension functions restricted on $\partial\Omega$ are also obtained (see Lemma 3.1 in [12]), i.e.,
$$n'(\partial\Omega; V) = -\nabla_S (V_0 \cdot n), \qquad H'(\partial\Omega; V) = -\Delta_S (V_0 \cdot n).$$

2.3. First variation. By applying Eq. (2.27) and making use of the shape derivative of the unit outer normal vector, we obtain the following lemma.
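As a simple illustration, for the ball $\Omega = \{x \in \mathbb{R}^3 : \|x\| < R\}$ the signed distance function is $b(x) = \|x\| - R$, so that $n(x) = \nabla b(x) = x/\|x\|$ and $H(x) = \Delta b(x) = 2/\|x\|$, which on $\partial\Omega$ correctly recovers the unit outer normal and the mean curvature $H = 2/R$ of a sphere.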
Lemma 2.1. Assume that $S \subset \bar{D}$ is a two-dimensional smooth manifold with smooth boundary $\Gamma$. Let $n$ be the unit outer normal vector of $S$, and let $V$ be a speed vector field such that $V \in C\big(C^k(\bar{D}, \bar{D}); [0, \varepsilon_0)\big)$. If the shape functional is $J(S) = \int_S \gamma(n)\, dS$ with a surface energy (density) $\gamma(n)$, then the first variation of $J(S)$ is given as
$$\delta J(S; V) = \int_S (\nabla_S \cdot \xi)\, (V_0 \cdot n)\, dS + \int_\Gamma V_0 \cdot c_\Gamma^\gamma\, d\Gamma,$$
where $\xi := \xi(n)$ is the Cahn-Hoffman vector, defined previously in Eq. (1.2), $V_0 \cdot n$ represents the deformation velocity along the outer normal direction of the interface $S$, and the vector $c_\Gamma^\gamma := (\xi \cdot n)\, c_\Gamma - (\xi \cdot c_\Gamma)\, n$, with $c_\Gamma$ representing the unit co-normal vector (shown in Fig. 2.1).
Proof. We first assume that $\hat\gamma(p)$ is the homogeneous extension of $\gamma(n)$, where the domain of definition of the function $\gamma(n)$ changes from unit vectors $n$ to arbitrary non-zero vectors $p \in \mathbb{R}^3$. We next consider a bounded domain $\Omega \subset \mathbb{R}^3$ such that $S \subset \partial\Omega$. Then, based on the signed distance function defined in (2.34), we can use $\nabla b(x) \in \mathbb{R}^3$ as an extension of the normal vector $n$ in a neighbourhood of $S$. Thus we can reformulate
$$J(S) = \int_S \psi(\Omega)\, dS, \qquad \text{with } \psi(\Omega) := \hat\gamma\big(\nabla b(x)\big).$$
Using the chain rule for shape derivatives, the definition of the Cahn-Hoffman $\xi$-vector in (1.2), and $b'(\Omega; V) = -V_0 \cdot n$, we conclude that the following expression holds:
$$\psi'(\Omega; V) = \nabla_p \hat\gamma(\nabla b) \cdot \nabla b' = -\,\xi \cdot \nabla (V_0 \cdot n).$$
Moreover, by noting the fact that $|\nabla b(x)| = 1$, we obtain $\nabla b \cdot \nabla b' = 0$, so that $\nabla b'$ is purely tangential on $S$; in addition, $\partial_n \psi = (\nabla^2 b\, \xi) \cdot \nabla b = 0$. Hence, applying (2.27) yields
$$\delta J(S; V) = -\int_S \xi \cdot \nabla_S (V_0 \cdot n)\, dS + \int_S \gamma(n)\, H\, (V_0 \cdot n)\, dS + \int_\Gamma \gamma(n)\, (V_0 \cdot c_\Gamma)\, d\Gamma.$$
For the first term, by using integration by parts on the surface (cf. Eq. (2.10)), we obtain
$$-\int_S \xi \cdot \nabla_S (V_0 \cdot n)\, dS = \int_S (\nabla_S \cdot \xi)\, (V_0 \cdot n)\, dS - \int_S H\, (\xi \cdot n)\, (V_0 \cdot n)\, dS - \int_\Gamma (\xi \cdot c_\Gamma)\, (V_0 \cdot n)\, d\Gamma.$$
Based on Eq. (1.3), we have $\gamma(n) = \xi \cdot n$. Thus we can rewrite the second and third terms above as
$$\int_S (\xi \cdot n)\, H\, (V_0 \cdot n)\, dS, \qquad \int_\Gamma (\xi \cdot n)\, (V_0 \cdot c_\Gamma)\, d\Gamma.$$
Finally, by combining the above three terms together, we immediately have
$$\delta J(S; V) = \int_S (\nabla_S \cdot \xi)\, (V_0 \cdot n)\, dS + \int_\Gamma V_0 \cdot \big[(\xi \cdot n)\, c_\Gamma - (\xi \cdot c_\Gamma)\, n\big]\, d\Gamma,$$
with $c_\Gamma^\gamma = (\xi \cdot n)\, c_\Gamma - (\xi \cdot c_\Gamma)\, n$. By using the above lemma, we can easily obtain the first variation of the energy functional for solid-state dewetting problems defined in (2.3).
Theorem 2.1. The first variation of the free energy (or shape) functional (2.3) used in solid-state dewetting problems with respect to a smooth vector field $V$ can be written as:
$$\delta W(S; V) = \int_S (\nabla_S \cdot \xi)\, (V_0 \cdot n)\, dS + \int_\Gamma \big(c_\Gamma^\gamma \cdot n_\Gamma - \sigma\big)\, (V_0 \cdot n_\Gamma)\, d\Gamma, \tag{2.47}$$
where $n_\Gamma$ is the unit outer normal of the contact line curve $\Gamma$ on the substrate (shown in Fig. 2.1), and $\sigma := (\gamma_{VS} - \gamma_{FS})/\gamma_0$.

Proof. From (2.3), we observe that the total free energy consists of two parts: the film/vapor interface energy $W_{int}$ and the substrate energy $W_{sub}$. First, by using Lemma 2.1, we can directly obtain the first variation of the film/vapor interface energy $W_{int}$ as follows:
$$\delta W_{int}(S; V) = \int_S (\nabla_S \cdot \xi)\, (V_0 \cdot n)\, dS + \int_\Gamma V_0 \cdot c_\Gamma^\gamma\, d\Gamma. \tag{2.48}$$
Here, $c_\Gamma^\gamma$ is a linear combination of $c_\Gamma$ and $n$, which is defined on the contact line $\Gamma$. Since both $c_\Gamma$ and $n$ are orthogonal to the tangent vector $\tau_\Gamma$ of the contact line, as shown in Fig. 2.1, we have
$$c_\Gamma^\gamma = (c_\Gamma^\gamma \cdot n_\Gamma)\, n_\Gamma + (c_\Gamma^\gamma \cdot e_3)\, e_3,$$
with $e_3$ the unit normal of the substrate plane. For the solid-state dewetting problems studied in this paper, we assume that the contact line $\Gamma$ must move along the substrate plane $S_{sub}$, i.e.,
$$V_0 \cdot e_3 = 0 \quad \text{on } \Gamma. \tag{2.49}$$
Therefore, for any $x \in \Gamma$, $V_0(x)$ can be decomposed into two vectors along the directions $n_\Gamma(x)$ and $\tau_\Gamma(x)$, i.e., $V_0 = k_1 n_\Gamma + k_2 \tau_\Gamma$, where $k_1$ and $k_2$ represent the corresponding components. By making use of (2.49), we can obtain
$$V_0 \cdot c_\Gamma^\gamma = (c_\Gamma^\gamma \cdot n_\Gamma)\, (V_0 \cdot n_\Gamma).$$
Thus we can reformulate (2.48) as
$$\delta W_{int}(S; V) = \int_S (\nabla_S \cdot \xi)\, (V_0 \cdot n)\, dS + \int_\Gamma (c_\Gamma^\gamma \cdot n_\Gamma)\, (V_0 \cdot n_\Gamma)\, d\Gamma. \tag{2.51}$$
On the other hand, we can rewrite the substrate energy $W_{sub}$ as
$$W_{sub} = (\gamma_{FS} - \gamma_{VS})\, A(\Gamma) + \text{const}, \tag{2.52}$$
where $A(\Gamma)$ denotes the area of the substrate region enclosed by the contact line $\Gamma$. By using Proposition 2.2, and noting that the integrand $\varphi$ in (2.26) is a constant and $S_{FS}$ is a flat surface with a plane boundary curve $\Gamma$ (i.e., $H = 0$ and $n_\Gamma$ is the unit co-normal vector of the flat surface $S_{FS}$), we directly have
$$\delta W_{sub}(S; V) = (\gamma_{FS} - \gamma_{VS}) \int_\Gamma V_0 \cdot n_\Gamma\, d\Gamma = -\,\sigma \gamma_0 \int_\Gamma V_0 \cdot n_\Gamma\, d\Gamma. \tag{2.53}$$
By combining Eqs. (2.51) and (2.53) (with the energy scaled by $\gamma_0$), we obtain (2.47), which completes the proof.

Remark 2.4. The variational result given by (2.47) tells us that the rate of change of the total interfacial free energy is contributed by two parts: one part results from the change of the interface $S$, and it is proportional to the weighted mean curvature (i.e., $\nabla_S \cdot \xi$) [53] and the rate of change of the volume (i.e., $(V_0 \cdot n)\, dS$, the normal velocity times the surface area element); the other part comes from the change of the contact line $\Gamma$.
Remark 2.5. In the 2D case, the variational result given by (2.47) in Theorem 2.1 reduces to the variational result presented in the reference [26].
Remark 2.6. When the substrate is curved in 3D, the variational result given by (2.47) in Theorem 2.1 is still valid. A discussion similar to that given in the reference [24] for curved substrates in 2D can be carried out.
3. Equilibrium shapes. The equilibrium shape of the solid-state dewetting problem can be stated as the following constrained minimization problem [4,25]:
$$\min_{S}\; W(S) \qquad \text{subject to } |\Omega| = C, \tag{3.1}$$
where $C > 0$ is a prescribed constant representing the total volume of the dewetted particle, and $\Omega$ represents the domain (or the particle) enclosed by the interface $S$ and the substrate plane $S_{sub}$. The Lagrangian for the above optimization problem can be defined as
$$L(S, \lambda) = W(S) - \lambda\, \big(|\Omega| - C\big),$$
with $\lambda$ representing the Lagrange multiplier. The first variation of the total volume term can be obtained by simply choosing the integrand $\psi(x) \equiv 1$, $\forall x \in \Omega$, in (2.22) of Proposition 2.1, which gives $\delta|\Omega|(S; V) = \int_S (V_0 \cdot n)\, dS$. Therefore, by combining with Eq. (2.47), the first variation of the Lagrangian with respect to a smooth vector field $V$ can be given as
$$\delta L(S; V) = \int_S \big(\nabla_S \cdot \xi - \lambda\big)\, (V_0 \cdot n)\, dS + \int_\Gamma \big(c_\Gamma^\gamma \cdot n_\Gamma - \sigma\big)\, (V_0 \cdot n_\Gamma)\, d\Gamma. \tag{3.3}$$
Based on the above first variation, we have the following theorem, which yields the necessary conditions for the equilibrium shape of the solid-state dewetting problem.

Theorem 3.1. Assume that a two-dimensional manifold $S_e$ with smooth boundary $\Gamma_e$ is the equilibrium shape of the solid-state dewetting problem (3.1). Then the following conditions must be satisfied:
$$\nabla_S \cdot \xi = \lambda \quad \text{on } S_e, \tag{3.4a}$$
$$c_\Gamma^\gamma \cdot n_\Gamma = \sigma \quad \text{on } \Gamma_e, \tag{3.4b}$$
where the constant $\lambda$ is determined by the prescribed total volume, i.e., the constant $C$.
Proof. If S e is the equilibrium shape, then (3.3) must vanish at S = S e for any smooth vector field V. Therefore, we immediately obtain the above two necessary conditions.
For isotropic surface energy, i.e., $\gamma(n) \equiv 1$ (scaled by a constant $\gamma_0$), we have $\xi = n$ and $c_\Gamma^\gamma = c_\Gamma$. By simple calculations, Eq. (3.4a) reduces to the condition of constant mean curvature. Denote by $\Gamma_e$ the boundary of $S_e$; for arbitrary $x \in \Gamma_e$, let $\theta_i(x)$ represent the equilibrium contact angle at the boundary point $x$. Then, Eq. (3.4b) reduces to
$$\cos \theta_i = \sigma, \tag{3.5}$$
where the (dimensionless) material constant $\sigma := (\gamma_{VS} - \gamma_{FS})/\gamma_0$; this is the well-known isotropic Young equation [64].
In the anisotropic case, noting that $c_\Gamma^\gamma = (\xi \cdot n)\, c_\Gamma - (\xi \cdot c_\Gamma)\, n$, we can rewrite Eq. (3.4b) as
$$(\xi \cdot n)\, (c_\Gamma \cdot n_\Gamma) - (\xi \cdot c_\Gamma)\, (n \cdot n_\Gamma) = \sigma, \quad \text{i.e.,} \quad \gamma(n)\, \cos\theta_i - (\xi \cdot c_\Gamma)\, \sin\theta_i = \sigma,$$
which is consistent with the anisotropic Young equation discussed for the solid-state dewetting problem in 2D [4,55]. If $X := X(\theta, \varphi)$ represents the position vector of a surface, we have $\nabla_S \cdot X = 2$ by using Definition 2.1. Therefore, if we use the $\xi$-plot to represent the position vector of the equilibrium shape, then the necessary condition (3.4a) is automatically satisfied. From one side, this is the reason why the $\xi$-plot can yield equilibrium shapes for free-standing solid particles (as shown in Fig. 1.1). Furthermore, based on the recent work on the generalized Winterbottom construction [4,57], we can construct an analytical expression for the equilibrium shape which also satisfies the contact angle condition (3.4b). First, parameterizing the unit normal vector by spherical angles as $n(\theta, \varphi) = (\sin\theta \cos\varphi, \sin\theta \sin\varphi, \cos\theta)^T$, we define a domain of definition $U_\varphi$ for $\theta$ under a fixed value $\varphi$ as
$$U_\varphi := \big\{\theta \in [0, \pi] : \; \xi_3\big(n(\theta, \varphi)\big) \ge \sigma \big\}. \tag{3.10}$$
Based on Theorem 3.1, we can explicitly construct the equilibrium shape in the parametric form $S_e(\theta, \varphi) := X(\theta, \varphi) = (x(\theta, \varphi), y(\theta, \varphi), z(\theta, \varphi))^T$ as
$$x(\theta, \varphi) = \lambda\, \xi_1\big(n(\theta, \varphi)\big), \quad y(\theta, \varphi) = \lambda\, \xi_2\big(n(\theta, \varphi)\big), \quad z(\theta, \varphi) = \lambda\, \big[\xi_3\big(n(\theta, \varphi)\big) - \sigma\big], \tag{3.11}$$
where $\varphi \in [0, 2\pi]$, $\theta \in U_\varphi$, and $\lambda$ is the scaling constant determined by the total volume $|\Omega|$. Based on the formula (3.11), the equilibrium shape under different types of surface energy anisotropies, e.g., the cubic anisotropy and the regularized "cusped" anisotropy defined in Eq. (1.4), can be easily constructed. Fig. 3.2(a)-(c) depicts the equilibrium shapes for isotropic surface energy with the material constant $\sigma$ chosen as $\sigma = \cos(\pi/3), \cos(\pi/2), \cos(3\pi/4)$, respectively. It clearly demonstrates the effect of the material constant $\sigma$ on the equilibrium shape by influencing the equilibrium contact angle via Eq. (3.5). Moreover, we also present equilibrium shapes for the cubic anisotropic surface energy, i.e., $\gamma(n) = 1 + a(n_1^4 + n_2^4 + n_3^4)$, and the regularized "cusped" surface energy defined in Eq. (1.4) with $\sigma = \cos(3\pi/4)$ in Fig. 3.2(d)-(e). The anisotropy for Fig. 3.2(f) is obtained from the regularized "cusped" surface energy by an anti-clockwise rotation about the x-axis by 45 degrees under the right-hand rule. We can observe that this rotation results in a corresponding rotation of the equilibrium shape.
4. A sharp-interface model and its properties.
In this section, we propose a kinetic sharp-interface model for simulating solid-state dewetting of thin films with anisotropic surface energies, and then we show that the proposed model satisfies mass conservation and energy dissipation.
4.1. The model. Based on Eq. (2.47) in Theorem 2.1, we can define the first variations of the total interfacial energy functional with respect to the film/vapor interface $S$ and to its boundary curve (i.e., the contact line $\Gamma$) as
$$\frac{\delta W}{\delta S} = \nabla_S \cdot \xi, \qquad \frac{\delta W}{\delta \Gamma} = c_\Gamma^\gamma \cdot n_\Gamma - \sigma.$$
From the Gibbs-Thomson relation [38,52], the chemical potential can be defined as
$$\mu = \Omega_0\, \gamma_0\, \big(\nabla_S \cdot \xi\big),$$
with $\Omega_0$ representing the atomic volume. The normal velocity of the moving interface is controlled by surface diffusion [7,38,55,25], and it can be defined as follows by Fick's laws of diffusion [3]:
$$J = -\frac{D_s \nu}{k_B T_e}\, \nabla_S\, \mu, \qquad v_n = -\Omega_0\, \big(\nabla_S \cdot J\big) = \frac{D_s \nu \Omega_0}{k_B T_e}\, \Delta_S\, \mu. \tag{4.3}$$
In these expressions, $J$ is the mass flux of atoms, $D_s$ is the surface diffusivity, $k_B T_e$ is the thermal energy, $\nu$ is the number of diffusing atoms per unit area, and $\nabla_S$ ($\Delta_S$) is the surface gradient (surface Laplace-Beltrami operator). In addition to the surface diffusion which controls the motion of the moving interface, we still need the boundary condition for the moving contact line. Following the idea for simulating solid-state dewetting in 2D [55,25,24], we assume that the normal velocity of the contact line $\Gamma$ is simply given by the energy gradient flow, which is determined by the time-dependent Ginzburg-Landau kinetic equation, i.e.,
$$v_\Gamma = -\eta\, \big(c_\Gamma^\gamma \cdot n_\Gamma - \sigma\big),$$
with $0 < \eta < \infty$ denoting the contact line mobility, which can be thought of as the reciprocal of a constant friction coefficient. For the physical explanation behind this approach, please refer to the recent paper [55]. We choose the characteristic length scale and the characteristic surface energy scale as $h_0$ and $\gamma_0$, respectively, and the time scale as $h_0^4/(B\gamma_0)$ with $B = D_s \nu \Omega_0^2 / (k_B T_e)$. Let $X(\cdot, t) = (x(\cdot, t), y(\cdot, t), z(\cdot, t))^T$ be a local parameterization of the moving film/vapor interface $S$; then we can obtain a dimensionless kinetic sharp-interface model for solid-state dewetting of a thin film via the following Cahn-Hoffman $\xi$-vector formulation:
$$\partial_t X = \big(\Delta_S\, \mu\big)\, n, \tag{4.5}$$
$$\mu = \nabla_S \cdot \xi, \qquad \xi = \nabla_p\, \hat\gamma(p)\big|_{p = n}, \tag{4.6}$$
where $t$ is the time, $n$ is the unit outer normal vector of $S$, and $\xi := \xi(n)$ is the Cahn-Hoffman vector (scaled by $\gamma_0$). The governing equations (4.5)-(4.6) are subject to the following boundary conditions at the moving contact line $\Gamma$: (i) the contact line condition
$$z_\Gamma(\cdot, t) = 0, \tag{4.8}$$
(ii) the relaxed contact angle condition
$$\partial_t X_\Gamma = -\eta\, \big(c_\Gamma^\gamma \cdot n_\Gamma - \sigma\big)\, n_\Gamma, \tag{4.9}$$
and (iii) the zero-mass flux condition
$$\big(c_\Gamma \cdot \nabla_S\, \mu\big)\big|_\Gamma = 0, \tag{4.10}$$
where $X_\Gamma$ denotes the position vector of the contact line. Here, for simplicity, we still use the same notation for all the dimensionless variables.
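To make the $\xi$-vector formulation concrete, the following minimal Python sketch (an illustration only, not part of the original model; the anisotropy strength a = 0.25 is an arbitrary example value) evaluates $\xi = \nabla_p \hat\gamma(p)|_{p=n}$ analytically for the cubic anisotropy $\gamma(n) = 1 + a(n_1^4 + n_2^4 + n_3^4)$, using the degree-one homogeneous extension $\hat\gamma(p) = |p|\, \gamma(p/|p|)$; the identity $\gamma(n) = \xi \cdot n$ from Eq. (1.3) serves as a built-in consistency check.

import numpy as np

def gamma_hat(p, a=0.25):
    # Degree-one homogeneous extension of gamma(n) = 1 + a*(n1^4 + n2^4 + n3^4)
    r = np.linalg.norm(p)
    return r + a * np.sum(p**4) / r**3

def xi_vector(n, a=0.25):
    # Cahn-Hoffman xi-vector: gradient of gamma_hat evaluated at p = n (|n| = 1)
    s4 = np.sum(n**4)
    return n + a * (4.0 * n**3 - 3.0 * s4 * n)

n = np.array([1.0, 2.0, 2.0])
n /= np.linalg.norm(n)
xi = xi_vector(n)
assert np.isclose(xi @ n, gamma_hat(n))                    # Eq. (1.3): gamma(n) = xi . n
assert np.isclose(gamma_hat(3.0 * n), 3.0 * gamma_hat(n))  # degree-one homogeneity
print("xi(n) =", xi)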
Remark 4.2. The contact line condition in Eq. (4.8) ensures that the contact line must move along the substrate plane. Because the contact line Γ lies on the substrate (i.e., Oxy plane), the third component of n Γ is always zero, i.e., n Γ,3 = 0. As long as the initial condition satisfies z Γ (·, 0) = 0, it can automatically satisfy the boundary condition (i) z Γ (·, t) = 0, ∀ t > 0 by using the boundary condition (ii). The last boundary condition (iii) ensures that the total volume/mass of the thin film is conserved during the evolution, i.e., no-mass flux at the moving contact line.
Remark 4.3. The above governing equation is well-posed when the surface energy is isotropic or weakly anisotropic. But when the surface energy is strongly anisotropic, some missing orientations will appear on equilibrium shapes [47,50]; in this case, the governing equation becomes ill-posed, and it can be regularized by adding regularization terms such that the regularized sharp-interface model is well-posed [25,5]. For the analytical criteria about the classification of surface energy anisotropy in 3D, interested readers could refer to [47].
4.2. Mass conservation and energy dissipation.
In the following, we rigorously prove that the proposed sharp-interface model satisfies mass conservation and total free energy dissipation during the evolution.
Proposition 4.1. Assume that $X(\cdot, t)$ is the solution of the sharp-interface model, i.e., Eqs. (4.5)-(4.6) with boundary conditions (4.8)-(4.10), and denote $S(t) := X(\cdot, t)$ as the moving film/vapor interface. Then the total volume (or mass) of the thin film, labeled as $|\Omega(t)|$, is conserved, i.e.,
$$\frac{d}{dt}\, |\Omega(t)| \equiv 0, \qquad t \ge 0.$$
Furthermore, the (dimensionless) total interfacial free energy of the system is non-increasing during the evolution, i.e.,
$$W(t) \le W(t_1) \le W(0), \qquad t \ge t_1 \ge 0.$$

Proof. By making use of the first variation (2.22), simply choosing the integrand $\psi(x) \equiv 1$, $\forall x \in \Omega$, and using the governing equation (4.5), we can calculate the time derivative of the total volume as (noting that $V_0 = \partial_t X$)
$$\frac{d}{dt}\, |\Omega(t)| = \int_{S(t)} (\partial_t X \cdot n)\, dS = \int_{S(t)} \Delta_S\, \mu\, dS = \int_{\Gamma(t)} \big(c_\Gamma \cdot \nabla_S\, \mu\big)\, d\Gamma = 0,$$
where the last equality comes from integration by parts and the zero-mass flux condition (4.10), and it indicates that the total volume/mass is conserved. To obtain the time derivative of the (dimensionless) total free energy, by making use of Theorem 2.1 and Eq. (2.47), but replacing the perturbation variable $\varepsilon$ with the time variable $t$, we can immediately obtain
$$\frac{d}{dt}\, W(t) = \int_{S(t)} \big(\nabla_S \cdot \xi\big)\, (\partial_t X \cdot n)\, dS + \int_{\Gamma(t)} \big(c_\Gamma^\gamma \cdot n_\Gamma - \sigma\big)\, (\partial_t X_\Gamma \cdot n_\Gamma)\, d\Gamma.$$
By substituting the governing equations and the relaxed contact angle boundary condition, i.e.,
$$\partial_t X_\Gamma \cdot n_\Gamma = -\eta\, \big(c_\Gamma^\gamma \cdot n_\Gamma - \sigma\big), \tag{4.14}$$
into the above equation and using integration by parts and the zero-mass flux condition, we obtain
$$\frac{d}{dt}\, W(t) = \int_{S(t)} \mu\, \Delta_S\, \mu\, dS - \frac{1}{\eta} \int_{\Gamma(t)} \big(\partial_t X_\Gamma \cdot n_\Gamma\big)^2\, d\Gamma = -\int_{S(t)} |\nabla_S\, \mu|^2\, dS - \frac{1}{\eta} \int_{\Gamma(t)} \big(\partial_t X_\Gamma \cdot n_\Gamma\big)^2\, d\Gamma \le 0,$$
where the constant $\eta > 0$. The last inequality immediately implies the energy dissipation.
Remark 4.4. In the above proof, we need to calculate the time derivatives of the total volume and of the total free energy. These two derivatives can be easily obtained by making use of the speed method and the first variation presented in Section 2. In Section 2, we considered any type of smooth perturbation. In fact, a family of evolving interface surfaces $\{S(t)\}_{t \ge 0}$ can also be thought of as a type of perturbation, simply by replacing the perturbation variable $\varepsilon$ with the time variable $t$. Therefore, the time derivatives can be directly obtained by using the first variations of the total volume functional and of the total free energy functional.
5. Numerical results.
In this section, we perform numerical simulations for solid-state dewetting in 3D to investigate the morphological evolution of thin films in various cases. We implement the parametric finite element method (PFEM) [5,6] for solving the proposed sharp-interface model in 3D. For a detailed introduction to the numerical algorithms of PFEM in 3D, interested readers may refer to [6].
5.1. Equilibrium convergence.
We have presented a mathematical description of the equilibrium shape in Section 3. Here, we present some numerical convergence results to equilibrium shapes by numerically solving the proposed kinetic sharp-interface model.
From the relaxed contact angle boundary condition (4.9), which describes the migration of the contact line, we know that the contact line mobility $\eta$ precisely controls the relaxation rate of the contact angle towards its equilibrium state: a larger $\eta$ accelerates the relaxation process [55]. Here, we numerically investigate the effect of $\eta$ on the evolution of the dynamic contact angle. Fig. 5.1 depicts the evolution of the dynamic contact angle and of the normalized total free energy $W(t)/W(0)$ under different choices of the contact line mobility $\eta$, where the initial thin film is chosen as a unit cube. From the figure, we can observe that a larger mobility $\eta$ accelerates the relaxation process, such that the contact angles evolve faster towards the equilibrium contact angle $3\pi/4$. As shown in Fig. 5.1, the energy decays faster for larger mobility, but the contact angle finally converges to the same equilibrium value. This indicates that the equilibrium contact angle, as well as the equilibrium shape, is independent of the choice of the contact line mobility $\eta$. In the following numerical simulations, the contact line mobility is chosen to be very large (e.g., $\eta = 100$). This choice of $\eta$ results in a very quick convergence to the equilibrium contact angle (defined by Eq. (3.4b)). A detailed investigation of the influence of the parameter $\eta$ on the solid-state dewetting evolution process and on equilibrium shapes was performed in 2D [55].
We next show a convergence result between the numerical equilibrium shape, obtained by solving the proposed sharp-interface model, and the theoretical equilibrium shape. The initial shape is chosen as a (1, 2, 1) cuboid, and we numerically evolve it to its equilibrium state by using different meshes, which are given by sets of small isosceles right triangles. If we define the mesh size indicator $h$ as the length of the hypotenuse of the isosceles right triangle, then "Mesh 1" represents the initial mesh with $h = h_0 = 0.125$, and the time step is chosen as $\tau = \tau_0 = 0.00125$ for the numerical computation. Meanwhile, the time steps for "Mesh 2" ($h = h_0/2$) and "Mesh 3" ($h = h_0/4$) are chosen as $\tau = \tau_0/4$ and $\tau = \tau_0/16$, respectively. For a better comparison, we plot the cross-section profiles along the x-direction for the numerical equilibrium shapes and the theoretical equilibrium shape. As shown in Fig. 5.2, we can clearly observe that, as the computational mesh size gradually decreases, the numerical equilibrium shapes uniformly converge to the theoretical equilibrium shape (constructed by Eq. (3.11)).
5.2. Kinetic evolution.
First, we focus on the case of isotropic surface energy, i.e., $\gamma(n) \equiv 1$. We start with numerical examples for an initially short cuboid island with (2, 2, 1) representing its length, width and height, respectively (as shown in Fig. 5.3(a)). The computational parameter is chosen as $\sigma = \cos(5\pi/6)$. As can be seen in Fig. 5.3, we show several snapshots of the morphology evolution of the short cuboid towards its equilibrium shape. As time evolves, the initially sharp corners and edges of the island become smooth in a very short time (Fig. 5.3(b)), and finally the island film forms a spherical shape as its equilibrium shape (Fig. 5.3(f)).
Short cuboid island films tend to form a single spherical island as the equilibrium shape minimizing the total free energy (i.e., the minimal surface area). However, the morphological evolution of long cuboid islands can be quite different. Due to the Plateau-Rayleigh instability [31,45,36], long cuboid islands can pinch off and break up into a number of small isolated particles on the substrate before they form a single spherical equilibrium shape. In order to investigate this phenomenon, we perform a simulation by fixing the same material constant $\sigma = \cos(3\pi/4)$ and choosing the initial island film as a long cuboid with dimensions (1, 12, 1). For the isotropic case, as can be seen in Fig. 5.4, the island quickly evolves into a cylinder-like shape during the evolution; it then accumulates more and more material near the two edges, while its neck becomes thinner and thinner; finally, it pinches off at the neck and breaks up into two small isolated islands on the substrate. For cubic anisotropic surface energies, long cuboid islands exhibit a similar pinch-off process to the isotropic surface energy case. We test the numerical example for an initially cuboid island with the same material constant and initial shape, as shown in Fig. 5.5. From the figure, we observe that it finally forms three isolated small islands, while only two isolated small islands are produced in the isotropic surface energy case. This indicates that, for this type of cubic anisotropic surface energy, the solid island tends to dewet more easily than in the isotropic case.

[Fig. 5.6. Several snapshots during the evolution of an initially square island film with isotropic surface energy towards its equilibrium shape: (a) t = 0; (b) t = 0.004; (c) t = 0.008; (d) t = 0.012; (e) t = 0.020; (f) t = 0.080, where the initial shape is chosen as a (3.2, 3.2, 0.1) cuboid, and the material constant $\sigma = \cos(5\pi/6)$.]
Finally, we investigate the morphological evolution of square island films of size (m, m, h) on a flat substrate. We start by simulating the evolution of an initially small square island of size (3.2, 3.2, 0.1), with the material constant chosen as $\sigma = \cos(5\pi/6)$. As can be seen in Fig. 5.6, the four corners of the square island retract much more slowly than the middle points of the four edges at the beginning, resulting in an almost cross-shaped island film (see Fig. 5.6(d)). This "mass accumulation" phenomenon at the corners has also been observed in experiments [54,61,63] and in numerical simulations by a phase-field approach [23,39]. Subsequently, because the square island is small, the retracting corners eventually catch up with the edges, and the contact line begins to move towards a circular shape in order to form a spherical equilibrium shape. During the evolution, we can also observe that a valley forms at the center of the island, but it finally disappears. To observe a possible pinch-off phenomenon, we enlarge the square size and simulate the evolution of an initially large square island of size (6.4, 6.4, 0.1) (shown in Fig. 5.7). From the figure, we observe that the valley at the center becomes deeper and deeper until it eventually touches the substrate, producing a hole in the center of the island. We stop the numerical simulation at the moment when a new mesh point touches the substrate. For a better illustration, in Fig. 5.8 we also plot several snapshots of the corresponding cross-section profile of the island film during the evolution.
6. Conclusions. We proposed a sharp-interface approach for simulating solid-state dewetting of thin films in three dimensions (3D), and this approach can handle the effect of surface energy anisotropy. Based on the Cahn-Hoffman $\xi$-vector formulation and the speed method, we rigorously derived the first variation of the total free energy functional of the solid-state dewetting problem. From the first variation, necessary conditions for the equilibrium shape of solid-state dewetting were given in a mathematically rigorous way. Furthermore, a kinetic sharp-interface model was proposed for simulating solid-state dewetting of thin films in 3D. The governing equations describe the interface evolution, which is controlled by surface diffusion and contact line migration. Numerous numerical examples were performed for solving the sharp-interface model, and the numerical results reproduced the complex features of solid thin film dewetting observed in experiments, such as edge retraction, hole formation, faceting, corner accumulation, pinch-off and the Rayleigh instability.
"year": 2019,
"sha1": "42f731cf1c97e9c7bc5f577861f6ea13e141b442",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9ebcb58ee103b8b6ed7b811d65fa062cb42d50ad",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
The Relationship of Metabolic Syndrome with Stress, Coronary Heart Disease and Pulmonary Function - An Occupational Cohort-Based Study
Background and Aims: Higher levels of stress impact the prevalence of metabolic syndrome (MetS) and coronary heart disease. The association between MetS, impaired pulmonary function and a low level of physical activity is still pending assessment in subjects exposed to stress. The study aimed to examine whether higher levels of stress might be related to MetS and to the presence of coronary plaque, as well as whether MetS might affect pulmonary function.

Design and Methods: The study embraced 235 police officers (mean age 40.97 years) from the south of Poland. Anthropometric and biochemical variables were measured; MetS was diagnosed using the International Diabetes Federation criteria. Computed tomography coronary angiography of the coronary arteries, exercise ECG, measurements of brachial flow-mediated dilation, and carotid artery intima-media thickness were completed. In order to measure the self-perception of stress, the 10-item Perceived Stress Scale (PSS-10) was applied. Pulmonary function and physical activity levels were also addressed. Multivariate logistic regression analyses were applied to determine the relationships between: 1/ the incidence of coronary plaque and MetS per se, MetS components and the number of classical cardiovascular risk factors, 2/ perceived stress and MetS, 3/ MetS and pulmonary function parameters.

Results: Coronary artery atherosclerosis was less strongly associated with MetS (OR = 2.62, 95% CI 1.24-5.52; p = 0.011) than with the co-existence of classical cardiovascular risk factors (OR = 5.67, 95% CI 1.07-29.85, p = 0.03 for 3 risk factors, and OR = 9.05, 95% CI 1.24-66.23, p = 0.02 for 6 risk factors, respectively). Perceived stress increased MetS prevalence (OR = 1.07, 95% CI 1.03-1.13; p = 0.03) and impacted coronary plaque prevalence (OR = 1.05, 95% CI 1.001-1.10; p = 0.04). Leisure-time physical activity reduced the chances of developing MetS (OR = 0.98, 95% CI 0.96-0.99; p = 0.02). MetS subjects had significantly lower values of certain pulmonary function parameters.

Conclusions: Exposure to job-specific stress among police officers increased the prevalence of MetS and impacted coronary plaque presence. MetS subjects had worse pulmonary function parameters. Early-stage, comprehensive therapeutic intervention may reduce the overall risk of cardiovascular events and prevent pulmonary function impairment in this specific occupational population.
Introduction
Cardiovascular diseases (CVD) account for a substantial number of deaths worldwide, including in Poland. In developed countries, metabolic syndrome (MetS) affects up to 25% of the population and continues to spread, becoming a major clinical and public health problem, mainly due to its association with CVD [1]. In 2005, the American Diabetes Association and the European Association for the Study of Diabetes emphasized the need to identify the CVD risk associated with MetS [2]. Meta-analyses of prospective studies revealed the risk of cardiovascular morbidity and mortality to be 2-fold higher in patients with MetS [3,4]. There is an ongoing debate about the viability of MetS as a predictor of CVD risk, and about whether it might be a more effective predictor than individual risk factors.
A relationship of individual lifestyle, in particular smoking, psychosocial or work stress, and lack of regular physical activity, with morbidity and mortality due to CVD has been demonstrated [5-10]. Physical inactivity is a health behavior strongly associated with obesity and MetS [11]. Despite the common assertion that lack of physical activity is harmful to health, a large proportion of the world's population remains physically inactive. The lack of regular physical activity, or physical inactivity, is responsible for 6-10% of major non-communicable diseases, whereas individual life expectancy may increase in subjects with a higher physical activity level [12].
In clinical practice the risk of atherosclerosis is established through the identification of key risk factors for CVD. The assessment of endothelial function (flow-mediated dilation; FMD) and the measurement of carotid artery intima-media thickness (IMT) can also be helpful in assessing the advancement of atherosclerosis and prediction of future CVD outcomes [13,14].
The most commonly used non-invasive techniques for assessing the severity of atherosclerosis are the calcium score and computed tomography coronary angiography (CTCA). Despite many controversies regarding the diagnostic efficacy of CTCA, some reports praise its appreciable potential for assessing the presence of coronary plaque and, to some extent, the severity of atherosclerosis in MetS subjects [15,16]. Several studies demonstrated a clear linkage between work-related stress and the key risk factors for CVD [17,18]. The very nature of police work imposes a significant psychological burden on police officers. Only a few studies have investigated this occupational group, yet none of them attempted to offer a broad assessment of atherosclerotic coronary plaque presence with the aid of CTCA [17,19,20].
It is for this reason that an integrated prevention strategy, based on a systematic evaluation of the total risk of disease at an individual level, appears vital for this particular occupational group. Such a strategy is expected to facilitate reaching the subjects earlier in the course of a vascular disease, and possibly also to mitigate the attendant risk factors, help reduce the clinical manifestations of CVD, as well as other consequences of obesity and MetS (e.g. impairment of pulmonary function).
It was demonstrated that subjects with pulmonary function impairment had a higher risk of MetS and its individual components than subjects with normal pulmonary function [21]. A positive independent relationship between pulmonary function impairment and abdominal obesity was established [22]. Forced expiratory volume in 1 second (FEV1) was regarded as an independent predictor of MetS, although restriction was also related to this syndrome [23-25]. Low pulmonary function was associated with MetS also within the general population [25].
The present cohort study is focused on the police officers, as the individuals particularly exposed to a highly stressful occupation. Therefore many aspects of this study may well be translated into developing an optimal preventive strategy for this particular occupational group. We primarily aimed to investigate whether MetS and its components, as well as CVD risk factors, were in any way correlated with the presence of coronary artery plaque. We also assessed whether stress was related to both MetS and the presence of coronary artery plaque. The relationship between MetS and pulmonary function was also evaluated.
Study design and population
The present study embraced a cohort of police officers from the southern region of Poland who volunteered for coronary heart disease (CHD) screening. Two hundred and thirty-five consecutive, professionally active subjects (216 men, 19 women), aged 27-58 years, were enrolled.
Physical examination, medical and structured interviews were applied to collect personal and clinical information. Anthropometric measurements, i.e. waist and hip circumference, height and body weight, were also taken. Obesity was defined as BMI > 30.0 kg/m² and overweight as BMI > 25.0 kg/m². Cardiovascular disease risk was prospectively evaluated with the Framingham risk score (FRS).
Subjects were subsequently split into two groups, i.e. either with MetS (n = 109) or without it (n = 126), using the criteria proposed by the International Diabetes Federation: waist circumference ≥ 94 cm in men and ≥ 80 cm in women, plus two of the following: fasting glucose ≥ 5.6 mmol/L (100.0 mg/dL) or previously diagnosed type 2 diabetes, hypertriglyceridemia ≥ 1.7 mmol/L (150.0 mg/dL) or specific treatment for this lipid abnormality, HDL-cholesterol < 1.03 mmol/L (40.0 mg/dL) in men and < 1.29 mmol/L (50.0 mg/dL) in women, and systolic/diastolic blood pressure ≥ 130/85 mm Hg or treatment of previously diagnosed hypertension [26,27]. The study was approved by the local Ethics Review Committee of the Jagiellonian University, and informed written consent was granted by all participants.
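A minimal Python sketch of this classification rule (an illustration only; the function and argument names are hypothetical, and the IDF rule of central obesity plus any two of the four remaining criteria is assumed) could look as follows:

def has_mets_idf(sex, waist_cm, glucose_mmol, tg_mmol, hdl_mmol,
                 sbp, dbp, treated_lipids=False, treated_htn=False,
                 diabetes=False):
    # IDF definition: central obesity plus any two of the four other criteria
    central_obesity = waist_cm >= (94 if sex == "M" else 80)
    criteria = [
        glucose_mmol >= 5.6 or diabetes,
        tg_mmol >= 1.7 or treated_lipids,
        hdl_mmol < (1.03 if sex == "M" else 1.29),
        sbp >= 130 or dbp >= 85 or treated_htn,
    ]
    return central_obesity and sum(criteria) >= 2

print(has_mets_idf("M", 98, 5.8, 1.9, 1.1, 128, 82))  # True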
LDL-cholesterol was calculated with the Friedewald formula. Basic biochemical tests (liver enzymes, glucose, total protein, urea, uric acid and creatinine) were carried out with the aid of a Vitros 350 biochemical analyser. Ultrasensitive CRP in the serum was determined by nephelometry, and tumour necrosis factor-α (TNF-α) in the plasma by ELISA.
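For reference, the Friedewald estimate reads $\mathrm{LDL} = \mathrm{TC} - \mathrm{HDL} - \mathrm{TG}/2.2$ in mmol/L (equivalently $\mathrm{TG}/5$ in mg/dL), and it is generally considered unreliable when triglycerides exceed about 4.5 mmol/L (400 mg/dL).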
Treadmill exercise ECG testing
Symptom-limited maximal treadmill exercise tests were performed using the standard Bruce protocol [28] on a treadmill (Marquette Electronics). The end point of the test was usually fatigue, or an individual inability to keep pace with the treadmill, unless another indication for test termination was met first. A 12-lead ECG was obtained every minute during the exercise, at peak exertion, and in the recovery phase. Exercise workload was estimated in metabolic equivalents (METs), where 1 MET = 3.5 mL/kg per minute of oxygen consumption. A positive test result was determined when horizontal ST-segment depression or elevation > 0.1 mV occurred 80 ms after the J point (ST80), relative to the normal baseline, in three consecutive beats. Criteria for an ischemic response also included slow upsloping ST-segment depression > 0.15 mV with an ST-segment slope > 1.0 mV/s, and downsloping ST-segment depression ≥ 0.1 mV with an ST-segment slope of -1.0 mV/s.
Ultrasound imaging
The measurements of the brachial artery diastolic response (flow-mediated dilation; FMD) and of the carotid artery intima-media thickness (IMT) were obtained using an ultrasonograph (Sequoia 512, Mountain View, CA, USA) with a 6 MHz linear transducer.
The measurements of endothelium-dependent FMD of the brachial artery in response to reactive hyperemia were performed non-invasively, in compliance with the ultrasound method described by Celermajer et al. [29]. All measurements were taken on the right brachial artery, 2-3 centimeters above the antecubital fossa, after the patient had stayed in the supine position for 5 min. Reactive hyperemia was induced by inflation of a sphygmomanometer cuff around the forearm to 200 mmHg for 5 min. The endothelium-dependent response was construed as the dilation of the brachial artery induced by the increased flow. The subjects were studied in a fasting state (between 7.00 p.m. and 8.00 a.m.); exposure to caffeine, smoking and exercise was prohibited prior to the imaging study. All FMD measurements were performed on the same apparatus, by the same person, and repeated over a period of 1-2 months. IMT measurements of the distal wall of the carotid artery were taken at three locations: 1. the common carotid artery (2 cm below the bulb), 2. the carotid artery bulb, and 3. the proximal internal carotid artery. The final IMT value was the mean of all measurements on both carotid arteries.
Computed tomography coronary angiography
Out of the entire cohort, 154 subjects (65.53%) underwent computed tomography coronary angiography (CTCA). All scans were performed with a dual-source CT scanner (Somatom Definition; Siemens Medical Solutions) in a 64-slice configuration. Data were acquired in a craniocaudal direction with a detector collimation of 2x32x0.6 mm and a gantry rotation time of 0.33 seconds. Image acquisition was performed during an inspiratory breath hold of ca. 10 s. Image reconstruction was retrospectively gated to the ECG. CT images were reconstructed in mono-segmental mode, using a section thickness of 0.6 mm and a smooth-tissue convolution kernel (B26F). All images were evaluated using a remote workstation with dedicated software (Siemens Leonardo Station). The contrast material was administered into an antecubital vein in the amount of 70-100 mL, at a rate of 5.5 mL/s [30]. The observer then compared the minimal lumen area with the arterial area at an appropriate reference site, in a non-diseased arterial segment, in the closest proximity to the lesion, preferably with no branch vessels in between. Subjects with supraventricular and ventricular arrhythmias, renal insufficiency, or confirmed allergy to contrast media were excluded.
Physical activity
Physical activity was evaluated using the International Physical Activity Questionnaire-Long Form (IPAQ-LF). The items in the IPAQ-LF form were structured to provide separate domain specific scores for walking, moderate-intensity and vigorous-intensity activity within each of the work, transportation, domestic chores and gardening (yard) and leisure-time domains. Domain-specific scores required summation of the scores for walking, moderate-intensity and vigorous-intensity activities within the specific domain, whereas activity-specific scores required summation of the scores for the specific type of activity across domains.
Based on the questionnaire data, a weekly physical activity score was calculated. Using the Compendium of Ainsworth et al. [31], an average Metabolic Equivalent of Task (MET) score was derived for each type of activity. A specific method was also used to correct the MET values for personal variation in sex, body mass, height, and age, so as to provide more accurate estimates of the individual physical activity level [32]. The resulting MET value was referred to as a "corrected MET" value, and expressed as corrected MET-minutes/kg of body weight.
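As an illustration of the underlying scoring step (a sketch only: the standard IPAQ MET values of 3.3 for walking, 4.0 for moderate and 8.0 for vigorous activity are assumed, and the person-specific correction of ref. [32] is not reproduced here), weekly MET-minutes can be computed as follows:

IPAQ_MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(activities):
    # activities: list of (kind, minutes_per_day, days_per_week) tuples
    return sum(IPAQ_MET[kind] * minutes * days
               for kind, minutes, days in activities)

week = [("walking", 30, 5), ("moderate", 40, 3), ("vigorous", 20, 2)]
print(met_minutes_per_week(week))  # 3.3*150 + 4.0*120 + 8.0*40 = 1295.0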
Perceived Stress Scale
In order to measure the self-perception of stress, the Perceived Stress Scale-10 (PSS-10) in its Polish adaptation was applied [33,34]. It is a measure of the extent to which situations in one's life are perceived as stressful. Specifically structured questions were designed to probe how unpredictable, uncontrollable, and overloaded respondents find their lives. Participants are asked to respond to each question on a 5-point Likert scale ranging from 0 (never) to 4 (very often), indicating how often they have felt or thought in a certain way within the past month. The overall score was obtained by reverse-scoring the four positively worded items and summing all item scores. Scale scores range from 0 to 40, with higher scores indicating higher levels of stress.
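A minimal sketch of this scoring procedure (assuming the conventional PSS-10 layout, in which items 4, 5, 7 and 8 are the positively worded, reverse-scored items):

REVERSED = {4, 5, 7, 8}  # positively worded PSS-10 items (conventional layout)

def pss10_score(responses):
    # responses: ten integers in 0..4, in item order 1..10
    assert len(responses) == 10 and all(0 <= r <= 4 for r in responses)
    return sum(4 - r if i in REVERSED else r
               for i, r in enumerate(responses, start=1))

print(pss10_score([2, 3, 2, 1, 2, 3, 1, 2, 3, 2]))  # 25, on the 0-40 scale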
Cronbach's alpha was used to evaluate reliability, while exploratory and confirmatory factor analyses were applied to evaluate the validity of the PSS-10. To confirm the observations and fit the two-factor model, a confirmatory factor analysis was performed. The goodness of fit of the models was assessed using the Goodness of Fit Index (GFI) and the Root Mean Square Error of Approximation (RMSEA). The following statistics were obtained: GFI = 0.916 and RMSEA = 0.077. The PSS-10 demonstrated good reliability, as Cronbach's alpha was 0.85 (0.84-0.86).
Pulmonary function
Standard spirometry was performed (Master Screen MS PFT, Jaeger, Wurzburg, Germany) according to the 2005 American Thoracic Society/European Respiratory Society recommendations [35]. The best of three repeatable manoeuvres was recorded. The measured volumes were adjusted for sex, age, and height using equations from a reference population of non-smoking Caucasians, and were expressed as percentages of the predicted values. The following values were selected for the study: forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), forced expiratory flows (FEF25, FEF50, FEF75), vital capacity (VC), expiratory reserve volume (ERV), the FEV1 to FVC ratio (FEV1%FVC), and the FEV1 to VC ratio (FEV1%VC). A calibration check was performed every morning using a 3-liter syringe. In order to verify that the spirometer remained within the desired calibration limits (±3%), the manoeuvres were repeated 6-8 times, if required.
Statistical analyses
Statistical analysis was completed using STATISTICA 10 PL and IBM SPSS Statistics 21. For comparisons between the two groups, the nonparametric Mann-Whitney test was used. In order to evaluate the relationships between the respective qualitative variables, contingency tables were created and the chi-square (χ2) statistic was calculated. Coronary plaque and MetS were treated as binary variables. To evaluate the relationships between coronary plaque, MetS and its components, CVD risk factors and the perceived stress score, as well as between MetS and stress, separate univariate logistic regression models were applied.
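For illustration, both of these two-group tests are available in SciPy; the numbers below are made-up stand-ins, not study data:

from scipy import stats

# hypothetical PSS-10 scores for MetS vs non-MetS subjects
mets = [22.0, 25.0, 19.0, 28.0, 24.0]
non_mets = [18.0, 20.0, 17.0, 21.0, 16.0]
u_stat, p_u = stats.mannwhitneyu(mets, non_mets, alternative="two-sided")

# hypothetical 2x2 table: plaque (yes/no) by MetS status
table = [[40, 48], [24, 82]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(p_u, p_chi)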
Multivariate logistic regression models were constructed to assess the relations of different variables with the incidence of coronary plaque, i.e.: the number of CVD risk factors, both without and after adjustment for the perceived stress score; MetS per se (one-dimensional model), and after adjusting for age, sex, smoking and the perceived stress score; and the impact of MetS components after adjusting for age, sex, smoking and the perceived stress score. To obtain crude and adjusted odds ratios (ORs) and 95% confidence intervals (CIs), multivariate logistic regression models were used. In order to identify the factors affecting pulmonary function, stepwise multiple regression models were devised.
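A sketch of how such adjusted ORs and 95% CIs can be obtained in Python with statsmodels follows; all variable names and values are hypothetical simulated stand-ins, not the study's dataset:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 235
df = pd.DataFrame({
    "mets": rng.integers(0, 2, n),
    "age": rng.normal(41, 7, n),
    "sex": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "pss": rng.normal(22, 6, n),
})
# simulate a plaque outcome so the example is self-contained
lin = -6 + 0.7 * df["mets"] + 0.1 * df["age"] + 0.05 * df["pss"]
df["plaque"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(df[["mets", "age", "sex", "smoking", "pss"]])
fit = sm.Logit(df["plaque"], X).fit(disp=0)
# exponentiated coefficients give ORs with their 95% CIs
or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "95% CI low", "95% CI high"]
print(or_ci.round(2))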
MetS components were applied as the independent variables. For evaluating the correlations between the respective variables, Spearman's rank correlation coefficient was used. P < 0.05 indicated statistical significance.
Establishing the extent of coronary atherosclerosis and clinical characteristics
Clinical characteristics of the participants (216 M, 19 F) are presented in Table 1, while the characteristics of the study subjects stratified by MetS status are presented in Table 2. The MetS subjects, as compared to the non-MetS subjects, showed a higher (although not significantly so; p = 0.053) prevalence of stable CHD, diagnosed by typical clinical coronary symptoms and positive exercise ECG testing (Table 2). Among the 194 (82.55%) subjects (88 with and 106 without MetS) who underwent exercise stress testing, a positive result was significantly more frequent in the MetS subjects (p = 0.012), whereas exercise workload was markedly lower in this group.

Metabolic syndrome, cardiovascular risk factors, coronary atherosclerosis, carotid IMT and FMD

As shown by CTCA, the MetS subjects had a higher prevalence of any type of coronary artery atherosclerosis (coronary plaque and/or stenosis), as compared to the non-MetS subjects (p < 0.007). Logistic regression demonstrated that MetS (binary variable) significantly increased the chance of coronary artery atherosclerosis, by 2.5-fold (OR = 2.62, 95% CI 1.24-5.52, p = 0.01; Table 3). Age and cigarette smoking were the variables that affected this relationship (models 2, 3, 5). After adjustment for age, we found OR = 2.16 (95% CI 1.02-4.78; p = 0.04), and after further adjustment for smoking the relationship appeared even weaker (OR = 2.09, 95% CI 1.02-4.76; p = 0.04). Out of all MetS components, only hypertension impacted the incidence of coronary plaque in the univariate model (OR = 1.03, 95% CI 1.009-1.054, p = 0.004, and OR = 1.049, 95% CI 1.01-1.08, p = 0.007, for systolic and diastolic blood pressure, respectively), as well as in the multivariate analysis (Table 4).
The odds ratios for coronary plaque presence, relative to the number of CVD risk factors, are shown in Table 5. A trend towards an increased prevalence of coronary atherosclerotic plaques with an increasing number of CVD risk factors (i.e. obesity, dyslipidemia, hypertension, dysglycaemia, cigarette smoking and CV history) was established. Each additional consecutive risk factor (above two) increased the OR for atherosclerotic coronary plaque presence. In the multivariate model, adjusted for age, sex and stress, each successively added risk factor yielded greater odds for coronary atherosclerosis (OR = 5.67, 95% CI 1.07-29.85, p = 0.03 for 3 risk factors, and OR = 9.05, 95% CI 1.24-66.23, p = 0.02 for 6 risk factors), when compared to the subjects with 0-2 CVD risk factors (Table 5).
Apart from the MetS components, significantly higher carotid IMT values and plasma CRP levels (but not TNF-α levels) were found in the MetS subjects, whereas brachial artery FMD did not differ significantly (Table 2). The mean carotid IMT was significantly higher in the subjects with coronary plaque, as compared to the subjects without it (0.61±0.13 mm vs. 0.56±0.10 mm; p = 0.02). The highest mean IMT value was found in the MetS subjects with coronary plaque, as compared to the non-MetS subjects and to the subjects without coronary plaque (p = 0.006).
Carotid IMT values positively correlated with the number of CVD risk factors, all MetS components and plasma CRP levels, and inversely correlated with brachial FMD (S1 Table). The associations between plaque prevalence and MetS (binary variable), depending on the correcting variables, are presented in Table 3. Perceived stress was not associated with the prevalence of coronary plaque in the univariate analysis (Table 4). In the multivariate logistic regression analysis, stress affected the prevalence of coronary plaque (OR = 1.05, 95% CI 1.001-1.10; p = 0.04), and out of the MetS components only hypertension had an effect. Age was also a variable related to the incidence of plaque. In turn, as shown in Table 5, perceived stress appeared to be the variable that actually modified the effect of more than two CVD risk factors on the incidence of coronary lesions. The association of perceived stress with the carotid artery IMT values was weak (r = 0.15, p = 0.052), and altogether non-existent for FMD. No association whatsoever was found between the perceived stress score and the level of individual physical activity. The nature of the relationships between selected study variables is presented in S1 Table.

Physical activity and metabolic syndrome

Out of all physical activity domains, only leisure-time physical activity was significantly lower in the MetS subjects (p = 0.0001; Table 6). Logistic regression showed that leisure-time physical activity reduced the chances of developing MetS (OR = 0.98, 95% CI 0.96-0.99; p = 0.022). On the other hand, the intensity of physical activity associated with transportation and total walking was not significantly lower in the MetS subjects (p = 0.08). There were no differences between the respective study groups with regard to moderate, vigorous and total physical activity (Table 6).
Pulmonary function and metabolic syndrome
The MetS subjects had significantly lower values of FEV1, FVC and VC than the non-MetS subjects (Table 6). The values in both groups remained within normal limits, though. As regards the individual values of pulmonary function parameters, only 2 out of 235 subjects (0.85%) had FEV1, VC and FVC under the 5th percentile, and only 7 subjects (3%) had an FEV1/FVC ratio under the 5th percentile. The FEV1%FVC and FEV1%VC ratios did not differ between the groups. Only ERV was evidently diminished, and the MetS subjects had significantly lower ERV than the non-MetS subjects (p = 0.001). The groups did not differ with respect to FEFs. The plasma levels of CRP negatively correlated with FEV1 (r = -0.16, p = 0.03), ERV (r = -0.21, p = 0.006), FVC (r = -0.20, p = 0.008), and FEF75 (r = -0.21, p = 0.005). There were no associations between TNF-α and pulmonary function parameters.
Discussion
This study is the first to address the cardiovascular risk profiles, including coronary plaque presence and surrogate markers of atherosclerosis in a cohort of police officers. We also assessed the relationship of perceived stress with MetS prevalence, and the coronary atherosclerosis. Since the interrelationship of CVD with the pulmonary function and MetS has recently come to some prominence amongst the investigators, pertinent spirometry variables were also assessed.
Metabolic syndrome, cardiovascular risk factors, and coronary atherosclerosis
A high incidence of CVD risk factors was established in the study group: in particular, obesity affected 43.83% of the study subjects, and MetS 46%. These proportions greatly exceed the incidence of those factors within the general Polish population in the same age range [36,37]. The high incidence of obesity and overweight among policemen has also been highlighted by other investigators [17,38]. In comparison to the general population, police officers are up to 1.7 times more likely to develop CVD [39,40].
Mottillo et al., in their 2010 meta-analysis, demonstrated that MetS was associated with a 2-fold increase in the risk of CVD morbidity and mortality, and a 1.5-fold increase in the risk of all-cause mortality [3]. Even though MetS is deemed a useful, though controversial, construct, for many years there has been no consensus as to its significance in clinical practice [41,42]. Some years back, it attracted substantial criticism from the American Diabetes Association for its modest consistency and rather limited clinical application [43].
In 2009, a consensus statement on the definition of MetS, representing the views of six major organizations and societies, was published [27]. In this document, the authors underscored the need to identify the cardiovascular risk associated with MetS. In contrast to previous clinical definitions, which had differed in the priority given to obesity, the waist measurement was considered a useful preliminary screening tool, while, on a temporary basis, national or regional cut-points for waist circumference may be used. Obesity plays a central role in the development of MetS and should therefore be considered a key component of any clinical definition, especially obesity in isolation, before the actual hallmarks of metabolic dysfunction that typify MetS have developed. Clinical definitions of MetS were designed to identify a population at high lifetime risk of CVD and type 2 diabetes, but in the absence of several major risk factors (age, gender, cigarette smoking, total and LDL cholesterol, CV history) they are not optimal risk prediction devices for either. A follow-up study of over 33 years demonstrated MetS to be a risk factor independent of other established CVD risk factors, indicating the longer-term prognostic value of MetS for CVD, over and above that achieved by short-term global risk calculators [44]. Its presence or absence should therefore be considered an indicator of long-term risk. On the other hand, the short-term (5-10 year) risk is better calculated using the classical algorithms (Framingham), as they include several major CVD risk factors [45]. Despite this, MetS boasts several properties that make it a useful construct, in conjunction with short-term risk prediction algorithms (e.g., the FRS) and sound clinical judgment, for the identification of those at high lifetime risk of CVD and diabetes.
In our study, logistic regression confirmed that each additional CVD risk factor (above 2) increased the chances of the incidence of coronary artery atherosclerotic plaque. The study of Wanahita et al. failed to show any evidence of an increased prevalence of coronary artery disease among police officers, as evaluated by calcium scores only, when compared with the general population [46]. We demonstrated that the relationship of MetS alone with coronary atherosclerosis, as detected by CTCA, was also of some significance, even though less so than the effect of concomitant, classical CVD risk factors.
Our results remain in agreement with the study of Butler et al., whereas Pigna et al. found that atherosclerotic burden was more strongly correlated with the number of individual MetS-related factors than with the clinical diagnosis of MetS itself [15,16]. On the other hand, Sattar et al. showed that MetS and most of its components were associated with the risk of new-onset diabetes only in elderly populations [47]. Some angiographic studies produced inconsistent results with regard to this issue [48-50]. Recently, a prospective, multicenter study provided data showing that MetS patients were significantly more prevalent, and that their CHD prognosis was comparable to that of patients with 1, but not with 2, MetS components [50]. However, none of the above-referenced studies addressed the population of police officers.
Stress and metabolic syndrome
The significant role of stress in the pathogenesis of MetS and of the atherosclerotic process has also recently been acknowledged, and police work is deemed one of the most stressful professions [38,51,52]. In the present study, perceived stress appreciably increased the chance of MetS prevalence. Violanti et al. [51] revealed that three or more MetS components were encountered in the police officers exhibiting the highest levels of posttraumatic stress disorder symptoms, in comparison to the officers allocated to the lowest stress symptom category. The INTERHEART study showed that work stress doubled the risk of CHD, and data from a meta-analysis of published prospective cohort studies suggested that work stress was associated with a 50% excess risk of CHD [8,53]. Accumulated work stress proved to be a risk factor for CHD, especially among the younger, working-age population, and was associated with a higher risk of MetS and incident obesity [54-56]. As reported by Chandola et al., about 16% of the effect of work stress on CHD may well be attributed to its effect on MetS [57].
Even though the PSS-10 scale has been used in various populations, including in Poland, very few studies have assessed perceived stress using this scale in police officers [34,58-60]. In our study, the mean PSS-10 scores in the MetS and non-MetS subjects were higher than the results demonstrated in young female Chinese police officers (mean 15.2; 38.0%), and were also higher than those yielded by the community residents quoted in the original norms [58,61]. Furthermore, in our cohort the perceived stress value was higher than in the study of Ramey et al. [38], where the mean score was 20.0, as the PSS-14 scale was used by those investigators (score range 0-56; 35.71%). In the recently published study of Carson et al., a correlation between the PSS-10 score and BMI was demonstrated [60]. We established that the perceived stress level was associated with MetS, and actually correlated with three of its components (waist circumference, triglycerides, and diastolic/systolic blood pressure). We also found that blood pressure and stress were associated with coronary plaques, and that stress was the variable that actually modified the effect of more than two CVD risk factors on the incidence of coronary lesions.
There is increasing evidence of a potential role of stress in the pathogenesis of premature CHD, even though the actual mechanisms underlying this association remain unclear [52,53,57,62]. Stress may act alone, through the development of the risk factor clustering represented by MetS, or by affecting other CVD risk factors at different stages of life [54]. In our study, though, perceived stress was not associated with the prevalence of coronary plaque. The long latency period between some distant risk factors and manifest CHD, and the fact that CHD is a multi-etiological disease, make it difficult to distinguish single causal risk factors [63]. Consequently, the heterogeneity of MetS poses a problem when assessing CVD risk, as the actual level of risk has been shown to differ depending on the combination of its components [64]. Subject to a combination of certain abnormalities, CVD risk may be higher or lower than the estimates for the syndrome considered as a whole.
In our study, a relationship of stress with MetS and its components (hypertension, waist circumference, triglycerides) was demonstrated. Stress at work may well lead to CHD through direct activation of the neuroendocrine stress pathways, and indirectly through individual health behavior. A higher cortisol level and hypertension are deemed adaptive physiological responses to stress [57,65]. In a recently published study, the relationships between shift work, circadian rhythm, and MetS were consistently documented [66].
In summary, stress may affect the body through direct activation of neuroendocrine responses to stressors, through sustained high blood pressure in relation to long-term stress, or more indirectly through unhealthy habits, such as intensified smoking, reduced leisure-time physical activity, and a poor diet, all of which increase the risk of CHD and the development of obesity and MetS [54,55,57,67–69].
Metabolic syndrome, carotid IMT and brachial FMD
Localized inflammation in adipose tissue propagates an overall systemic inflammation associated with visceral obesity, insulin resistance and sub-clinical vascular inflammation, which modulates and drives atherosclerotic processes [70]. Sub-clinical atherosclerosis, i.e., increased carotid IMT, has been observed in MetS subjects, and spontaneous recovery from MetS was associated with reduced carotid IMT progression [71–73]. Studies addressing the relationship between carotid IMT, CVD risk factors and MetS components have been published by other authors [74,75]. No association between brachial artery FMD and MetS was found in other studies applying the NCEP definition of MetS [76,77]. In contrast, in the Framingham Offspring study a significant inverse relationship between the incidence of MetS and FMD was demonstrated [78].
The FMD value depends on the diameter of the artery before occlusion, and the lack of significant differences in the brachial artery dilatory response between the study groups might result from a larger baseline diameter in MetS subjects. Similar findings were also reported by other investigators [76,78]. The Young Finns Study demonstrated that increased carotid IMT was associated with a number of CVD risk factors only in subjects with impaired FMD, but not in those with preserved FMD [79]. The prevalence of established CVD risk factors does not seem to be the sole determinant of endothelial function, as individuals with normal endothelial function and patients with various stages of endothelial dysfunction may not, in fact, differ in terms of their respective risk factor profiles [80,81]. The variable endothelial susceptibility of individual subjects to CVD risk factors may well reflect other, as yet undetermined, factors, including shear stress and genetic predisposition. Considering the age of the study subjects, the lack of significant differences in mean FMD values in our study may well be attributed to efficient vasculo-protective mechanisms operating despite numerous CVD risk factors.
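FMD is conventionally expressed as the percentage increase in brachial artery diameter over the pre-occlusion baseline, which is why a larger baseline diameter mechanically lowers the value, as discussed above. A minimal sketch with hypothetical diameters:

```python
def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilation as % change from the pre-occlusion baseline."""
    return 100.0 * (peak_mm - baseline_mm) / baseline_mm

# The same absolute dilation (0.20 mm) yields a smaller FMD when the
# baseline artery is larger (all values hypothetical).
print(fmd_percent(3.8, 4.0))  # ~5.3% with a smaller baseline diameter
print(fmd_percent(4.5, 4.7))  # ~4.4% with a larger baseline diameter
```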
Physical activity, CHD and metabolic syndrome
In the present study, the intensity of physical activity in both groups was well above the average for adults. Work-related physical activity accounted for about 40% of total physical activity, and these values exceed the corresponding ones published to date [82]. The physical requirements of police work are essential not only for maintaining good health, but also for allowing individuals to effectively pursue their work duties. We found that leisure-time physical activity reduced the odds of developing MetS. A recent meta-analysis of prospective cohort studies revealed that while a moderate level of work-related physical activity reduced the risk of CHD, high physical activity at work did not add any further protective effect [9]. On the other hand, a high level of leisure-time physical activity reduced the risk of CHD by 20-30%. This is in line with the recommendations of the 2008 Physical Activity Guidelines for Americans: "some physical activity is better than none", and "additional benefits occur with more physical activity" [83].
The high total physical activity of our study population, and its high relative physical activity profile (as revealed by the self-assessment questionnaire), may well be attributed to occupational specifics. Mandatory, formalized physical activity during working time is compounded by leisure-time physical activity, which is necessary to maintain the occupationally required level of individual physical functionality. The pursuit of training regimens in one's spare time, to a large extent organized, making use of the available in-house training facilities and supervised by professional instructors, is both personally motivated and formally imposed and duly verified by the employer.
Metabolic syndrome and pulmonary function
The results of the present study revealed that the MetS subjects had significantly lower FEV1, FVC and VC% predicted than the non-MetS subjects, although the values obtained in both groups were within normal limits. No differences were found for the FEV1/FVC and FEV1/VC ratios. Rogliani et al. reported results quite similar to ours when comparing pulmonary function test results between non-smoking subjects with and without MetS [84]. Several population-based studies documented that restrictive pulmonary function impairment was associated with an increased risk of MetS [21,24,25,84,85]. Another study revealed, though, that FEV1 was an independent predictor of MetS development [23].
Taking into account that only some subjects had values of selected pulmonary function parameters below the lower limits of normal, we may well assume that our cohort had normal pulmonary function, i.e., without any impairment. Only ERV% predicted was evidently lower than normal, and the MetS subjects had a significantly lower value of this parameter than the non-MetS subjects. The lower ERV might have contributed to the lower VC% predicted in this group. Measurements of residual volume and total lung capacity might help resolve this issue.
Lower pulmonary function in the MetS group may result from the obesity which characterizes this group. In our study, BMI and waist circumference correlated negatively with several pulmonary function parameters, although by far the strongest relationship was with ERV. Obesity may reduce chest wall compliance, impede diaphragm movement, increase thoracic pressure during expiration, and lead to the closure of peripheral lung units [86]. In a population-based study, abdominal obesity was the key determinant of the association between MetS and pulmonary function impairment [22].
The most likely mechanism linking impaired pulmonary function with MetS is systemic inflammation. Negative associations between CRP levels and several pulmonary function parameters were established. The higher CRP and glucose levels, greater coronary plaque burden, and lower pulmonary function results in our MetS subjects are corroborated by other studies [23,87–89]. Hsiao et al. found a negative correlation between CRP and FEV1 [23]. Systemic inflammation was inversely linked with the lowest quartile of FVC or FEV1 (% predicted), suggesting its crucial role in the decline of pulmonary function [87]. In an epidemiological study, reduced FEV1 increased the risk of CV mortality irrespective of age, gender and smoking history [88]. Park et al. demonstrated that the prevalence of MetS and the coronary artery calcification score increased significantly as FVC or FEV1 values decreased [89]. The results of the Moli-sani Project, conducted in an adult general population, revealed that pulmonary function decline was associated with the estimated 10-year risk of CVD, as measured by the CUORE risk score [90]. The CUORE risk score was inversely associated with FEV1, FVC and total lung capacity only. The authors concluded that a restrictive pattern, rather than airway obstruction, was related in particular to worse cardiovascular risk. Additional research is required to unequivocally determine which type of pulmonary function abnormality is associated with cardiovascular risk, as well as the potential mechanism linking impaired pulmonary function with MetS.
Limitations
The limitations of our study include the lack of control groups and the fact that only 65.53% of the subjects underwent CTCA. It might well be that the effects of stress, as demonstrated in our study, are smaller due to some repressive mechanisms (crowding stress) in this occupational group. The application of a lie scale and of the self-assessment questionnaire used to evaluate the intensity of individual physical activity might facilitate a more reliable assessment. The fact that we did not perform body plethysmography was yet another limitation of the present study.
Conclusions
In conclusion, our cohort study demonstrated a high prevalence of MetS and CVD risk factors in police officers. The association of MetS with coronary artery atherosclerosis, as detected by CTCA, was weaker than that of concomitant, classical CVD risk factors. Stress increased the odds of MetS, was associated with its components, and was also indirectly associated with the prevalence of coronary plaque. In contrast, leisure-time physical activity reduced the odds of developing MetS. There was no relationship between stress and carotid artery atherosclerosis, nor between stress and endothelial function. The lower, though still normal, values of the pulmonary function test variables in the MetS subjects may indicate the impact of obesity, or of the systemic inflammation often associated with this syndrome. This is the first study demonstrating the relationship of MetS and co-existing CVD risk factors with the prevalence of coronary atherosclerosis, as confirmed by CTCA, in an occupational group exposed to stress. Our findings have application potential both in clinical practice and in public health policy-making. Early primary prevention and comprehensive therapeutic intervention with regard to CVD risk factors may effectively reduce the overall risk of CV events and prevent pulmonary dysfunction in these subjects.
Supporting Information
S1 Table. Spearman correlation coefficients between metabolic components, pro-inflammatory markers, FMD, carotid IMT, blood pressure and perceived stress. (DOC)
S2 Table. Spearman correlation coefficients between the pulmonary function parameters, BMI, metabolic syndrome components and pro-inflammatory markers. (DOC)
"year": 2015,
"sha1": "8c6e0e3695f1e7eb37db30b2f5c334cdfc3493d5",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0133750&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c6e0e3695f1e7eb37db30b2f5c334cdfc3493d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Distribution of Black Carbon in Ponderosa Pine Forest Floor and Soils following the High Park Wildfire
Biomass burning produces black carbon (BC), effectively transferring a fraction of the biomass C from an actively cycling pool to a passive C pool, which may be stored in the soil. Yet the timescales and mechanisms for incorporation of BC into the soil profile are not well understood. The High Park fire (HPF), which occurred in northwestern Colorado in the summer of 2012, provided an opportunity to study the effects of both fire severity and geomorphology on properties of carbon (C), nitrogen (N) and BC in the Cache La Poudre River drainage. We sampled montane ponderosa pine forest floor (litter plus O-horizon) and soils at 0–5 and 5–15 cm depth 4 months post-fire in order to examine the effects of slope and burn severity on %C, C stocks, %N and BC. We used the benzene polycarboxylic acid (BPCA) method for quantifying BC. With regard to slope, we found that steeper slopes had higher C : N than shallow slopes but that there was no difference in BPCA-C content or stocks. BC content was greatest in the forest floor at burned sites (19 g BPCA-C kg⁻¹ C), while BC stocks were greatest in the 5–15 cm subsurface soils (23 g BPCA-C m⁻²). At the time of sampling, unburned and burned soils had equivalent BC content, indicating none of the BC deposited on the land surface post-fire had been incorporated into either the 0–5 or 5–15 cm soil layers. The ratio of B6CA : total BPCAs, an index of the degree of aromatic C condensation, suggested that BC in the 5–15 cm soil layer may have been formed at higher temperatures or experienced selective degradation relative to the forest floor and 0–5 cm soils. Total BC soil stocks were relatively low compared to other fire-prone grassland and boreal forest systems, indicating most of the BC produced in this system is likely lost, either through erosion events, degradation or translocation to deeper soils. Future work examining mechanisms for BC losses from forest soils will be required for understanding the role BC plays in the global carbon cycle.
Introduction
While pyrogenic or black carbon (BC) is now recognized as a ubiquitous soil carbon (C) fraction, it is one of the least understood components of the terrestrial C cycle. Every year, fire burns approximately 10–15 × 10⁶ ha of boreal and temperate forest and more than 500 × 10⁶ ha of tropical and subtropical forests and savannas (Goldammer and Crutzen, 1993; Knicker, 2011), during which 0.12 to 9.5 % of the burned biomass is converted to BC (Forbes et al., 2006). Black C is utilized by soil microbes, but at a slow rate (Santos et al., 2012); it generally resides in the soil for a long time (from centuries to millennia; Singh et al., 2012), acting as a long-term C sink, with a potential negative feedback on climate warming. However, BC stocks in soils are not only related to BC production rate and decomposition but may also be lost through runoff, leaching or burning (Czimczik and Masiello, 2007; Foereid et al., 2011); thus, BC stocks are strongly dependent on surface topography and the soil physical-chemical environment (Bird et al., 2015; Knicker, 2011).
BC persistence and dynamics in soil seem to be controlled by mechanisms similar to those that control soil organic matter dynamics, including inherent chemical recalcitrance and organo-mineral interactions (Knicker, 2011). Persistent BC particles in soils are composed of a refractory, aromatic core and a reactive, oxidized patina (Keiluweit et al., 2010; Lehmann et al., 2005) characterized by carbonyl and carboxyl functionalities (Cheng et al., 2006, 2008). The degree of condensation of the aromatic core has been shown to be quite variable (McBeath and Smernik, 2009; Wiedemeier et al., 2015b) but can be broadly characterized as dominated by C in condensed aromatic rings resistant to decomposition (Baldock and Smernik, 2002). Besides its inherent chemical recalcitrance, BC stabilization in soils likely occurs through bonding to minerals, which is thought to be the most persistent mechanism of SOM (soil organic matter) stabilization (von Lutzow et al., 2006). The presence of carboxyl functionalities on BC surfaces provides "teeth" available to chelate soil aluminum and iron, creating BC-mineral complexes that are highly refractory to microbial decay and have longer mean residence times than non-mineral-associated BC (Christensen, 1996; von Lutzow et al., 2006).
In order to become stabilized in soils, BC must first be transferred from burned surface material to the subsurface, and the process of incorporation will be strongly related to surface topography. The shape of a landscape and its propensity for erosion versus deposition depend on several variables, including bedrock composition, slope, elevational gradients in temperature and precipitation, and disturbance history such as the frequency of wildfires. While the strong relationships between geomorphology, soil erosion and sediment transport are fairly well understood (Ritchie and McCarty, 2003; Slater and Carleton, 1938), the relationship between soil erosion and the fate of the different components of SOM that are eroded, including BC, is relatively unknown (Bird et al., 2015; Rumpel et al., 2006).
The difficulty in measuring BC contributes to our limited understanding of its transport processes and function in the global C cycle. Because BC exists along a continuum of combustion products, from charred biomass to soot, with differing physical and chemical features, no single method can accurately quantify total BC content (Hammes et al., 2007; Masiello, 2004). Visual counts of charcoal, resistance-to-oxidation methods, nuclear magnetic resonance (NMR) spectroscopy or the quantification of BC-specific molecular markers (e.g., benzene polycarboxylic acids, BPCAs) have each been employed for the quantification of BC. While each approach has advantages and disadvantages, the BPCA method has been shown to yield conservative estimates of BC with charred inputs and more consistent results than many other quantification methods (Hammes et al., 2007). Moreover, the BPCA method yields additional information about BC quality related to its degree of aromatic condensation and aromaticity (Schneider et al., 2010, 2013; Wiedemeier et al., 2015a; Ziolkowski et al., 2011).
A few estimates exist of BC production after fire (Santín et al., 2012), as well as of BC stocks in soils for different ecosystems (Bird et al., 1999; Cusack et al., 2012; Schmidt et al., 2002). Yet estimates of BC production and losses are not balanced (Czimczik and Masiello, 2007; Rivas et al., 2012), clearly identifying our lack of understanding and the need for a full, accurate accounting of BC dynamics after fire at the watershed level.
Between 9 and 24 June 2012, the High Park fire (HPF) burned more than 35 000 ha in northern Colorado along the Cache la Poudre (CLP) River in an area dominated by ponderosa pine (Pinus ponderosa; Fig. 1). The aims of this work were to (1) determine the C and BC stocks, and the proportion of C that was BC, in ponderosa pine forest floor and soils following the HPF; (2) examine the effects of burn severity and landscape slope on soil C, N and proportion of BC; and (3) use the distribution of individual BPCAs to understand the degree of condensation of BC through the soil profile. We expected that BC stocks would be greatest at high burn severity sites, followed by moderate and then unburned sites, and that hillslope would have the opposite effect, with the lowest BC stocks on the steepest slopes and the greatest BC stocks on shallow slopes. We anticipated that BC and C stocks would be greater in the forest floor than in soils and that soil C stocks would be diminished in high burn severity surface soils due to combustion during fire. We also expected that the molecular characteristics of BC would change with depth in relation to their degree of condensation.
Experimental design and site identification
The sites were located within the montane forest (elevation 1750 to 2850 m) of the CLP drainage, which is dominated by ponderosa pine (Pinus ponderosa) and Douglas fir (Pseudotsuga menziesii) and also includes aspen (Populus tremuloides), Rocky Mountain juniper (Juniperus scopulorum), lodgepole pine (Pinus contorta) and other species (Veblen and Donnegan, 2005). Soils in the montane forests are Alfisols from the great group Cryoboralfs and Mollisols from the suborder Ustolls (Peet, 1981).
The montane ponderosa pine forest has a variable-severity fire regime, meaning there is a mixture of both high-severity, full- or partial-stand-replacing fires and low-severity, non-lethal surface fires. The mean return interval is approximately 40 to 100 years, and most fire events have both high- and low-severity components and are caused by a combination of human ignition and lightning strikes (Veblen and Donnegan, 2005). A lightning strike started the HPF on 9 June 2012. It burned over 35 000 ha in the mountainous region of the CLP River drainage through early July 2012.
Our study was a fully factorial, randomized block design with four replicate blocks for all treatment plots, including three levels of burn severity (unburned, moderate burn, high burn) and three slopes (0–5°, 5–15° and 15–30°), for a total of 36 plots. We opted to constrain the study by slope rather than landscape position (e.g., hilltop versus valley location of a flat surface) in order to constrain study site criteria to public lands within the patchy distribution of fire-impacted sites of ponderosa pine vegetation on difficult-to-access terrain. Geographic information system (GIS) layers of land ownership, slope, fire intensity and burn severity were obtained prior to site location. Potential sampling areas were chosen on state or federal land in areas of homogeneous vegetation stands where all slope classes and fire classes were present within a close distance (Fig. 1). Ground truthing was subsequently done to locate each specific slope and burn severity sampling treatment plot. Slopes were determined using a clinometer. Areas were classified as high burn when the fire had burned the entire tree and no needles or small branches remained, the litter layer had been consumed in the fire, and there were some small pieces of charcoal throughout the surface layer. Moderate burn areas had ground fire and some crown scorch, but crowns did not burn, at least some needles remained on the trees, and the litter layer remained on the forest floor with some small pieces of charcoal. Unburned areas had no evidence of ground fire and no evidence of burned material on the ground surface.
Forest floor and soil collection
Soil and forest floor samples were collected between October and November of 2012. At each of the 36 experimental plots, forest floor and soils were collected from within a 20 by 20 cm wooden frame, and frame GPS coordinates were recorded. The forest floor layer was sampled first, including any litter plus organic soils down to the mineral layer, and then the soil was excavated with a hand shovel separately for the 0–5 and 5–15 cm depths. Due to the high surface variability, four additional forest floor samples and three surface (0–5 cm) soil samples were collected at each site, positioning the frame orthogonally at a distance of 2.5 m from the original position. All forest floor and surface soil samples were pooled by plot.
Due to the extreme rockiness at all of the sampling locations, soil bulk density was determined using pit excavation separately for each depth layer (Page-Dumroese et al., 1999). The volume of the pit was determined using volume displacement with millet seed (detailed description in the Supplement). Soil and forest floor samples were transported to the lab and stored at 4 °C until processing.
Forest floor samples were weighed field-moist and a subsample of each was dried for dry weight correction. Forest floor samples were then air-dried and another subsample taken and heated in a muffle furnace at 600 °C for 12 h to correct forest floor dry weight for ash content. All remaining air-dried forest floor samples were passed through an 8 mm sieve, and any large pieces of plant material were broken up with clippers prior to the samples being ground with a 0.75 mm mesh-screen-equipped Wiley mill and dried overnight at 60 °C.
Soil samples were weighed field-moist and a subsample of each was dried at 105 °C for 48 h for dry-weight correction. Bulk density of each soil depth was calculated as the weight of oven-dry soil with rock removed (Throop et al., 2012) divided by the volume for the depth determined by millet seed displacement with rock volume removed. Soils were sieved air-dry to 2 mm, and a subsample was oven-dried (105 °C) and finely ground. All the ground, dry forest floor and soil samples were analyzed for total C and N by an elemental analyzer (LECO CHN-1000; LECO Corporation, St. Joseph, MI, USA) and for BC by the BPCA method as described below.
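The rock-corrected bulk density described above reduces to a simple ratio; the sketch below illustrates it, with all sample values hypothetical.

```python
def bulk_density(oven_dry_soil_g, rock_mass_g, pit_volume_cm3, rock_volume_cm3):
    """Rock-corrected bulk density (g cm^-3): oven-dry fine-soil mass
    divided by pit volume (from millet-seed displacement) minus rock volume."""
    fine_soil_g = oven_dry_soil_g - rock_mass_g
    fine_volume_cm3 = pit_volume_cm3 - rock_volume_cm3
    return fine_soil_g / fine_volume_cm3

# Hypothetical example: 1500 g oven-dry excavated soil containing 400 g rock,
# from a 1000 cm^3 pit of which 150 cm^3 was occupied by rock.
print(bulk_density(1500.0, 400.0, 1000.0, 150.0))  # ~1.29 g cm^-3
```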
BPCA analyses
The BPCA method converts condensed aromatic structures to single aromatic rings with variable numbers of carboxylic acid moieties, and a greater degree of condensation (i.e., number of fused rings) correlates with a greater number of carboxylic acid moieties on the individual BPCAs, such that more condensed structures result in a greater relative abundance of B6CAs, while the least condensed BC results in a greater proportion of B3CAs (Glaser et al., 1998; Wiedemeier et al., 2015a; Ziolkowski et al., 2011). Black C was determined on all forest floor and soil samples using high-performance liquid chromatography (HPLC) equipped with a photodiode array detector to quantify benzene polycarboxylic acids (BPCAs) as described by Wiedemeier et al. (2013). The BPCA method was validated with biochar-amended soils from the field site (see the Supplement). Briefly, 50–150 mg of finely ground, oven-dried sample was digested with 70 % nitric acid for 8 h at 170 °C. The solution was filtered with ashless cellulose filters, an internal check standard of phthalic acid was added to the solution, and the filtrate was cleaned by cation exchange resin and freeze-dried. The freeze-dried sample was redissolved in HPLC-grade water. The redissolved solution containing the BPCAs was separated with a reversed stationary phase column (Waters X-Bridge C18, 3.5 µm particle size, 2.1 × 150 mm) using standard gradient conditions. Individual BPCAs were quantified using a five-point calibration from standard solutions of benzenetricarboxylic acids (1,2,3-B3CA, i.e., hemimellitic acid; 1,2,4-B3CA, i.e., trimellitic acid; 1,3,5-B3CA, i.e., trimesic acid), benzenetetracarboxylic acid (1,2,4,5-B4CA, i.e., pyromellitic acid), benzenepentacarboxylic acid (B5CA) and benzenehexacarboxylic acid (B6CA, i.e., mellitic acid). The B4CA standards that are not commercially available (1,2,3,4-B4CA, i.e., prehnitic acid, and 1,2,3,5-B4CA, i.e., mellophanic acid) were identified by their ultraviolet adsorption spectra and quantified using the calibration for 1,2,4,5-B4CA (Yarnes et al., 2011). Previous attempts to calculate a BPCA-C to BC conversion factor have resulted in values that range from 2.27 to 5 and have been difficult to reproduce (Brodowski et al., 2005; Glaser et al., 1998; Ziolkowski et al., 2011). Thus, to simplify empirical comparisons, we report values as BPCA-C, either as a proportion of total C or as a stock.
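To make the reported quantities concrete, the sketch below sums individual BPCA concentrations into total BPCA-C, expresses it per unit organic C and per unit area, and brackets a BC estimate using the published 2.27-5 conversion factors mentioned above; all input values are hypothetical.

```python
def bpca_summary(bpca_g_per_kg_soil, total_c_g_per_kg_soil,
                 bulk_density_g_cm3, depth_cm):
    """Summarize BPCA-C for one soil sample.

    bpca_g_per_kg_soil : dict of individual BPCA-C (g C per kg dry soil)
    total_c_g_per_kg_soil : total organic C (g C per kg dry soil)
    bulk_density_g_cm3, depth_cm : convert the concentration to an area stock
    """
    total_bpca = sum(bpca_g_per_kg_soil.values())            # g kg^-1 soil
    conc_g_per_kg_c = total_bpca / total_c_g_per_kg_soil * 1000.0
    soil_mass_kg_m2 = bulk_density_g_cm3 * depth_cm * 10.0   # g cm^-3 x cm x 10
    stock_g_m2 = total_bpca * soil_mass_kg_m2
    # Published BPCA-C to BC conversion factors span 2.27-5, so a BC
    # estimate can only be bracketed, as noted in the text.
    bc_range_g_m2 = (stock_g_m2 * 2.27, stock_g_m2 * 5.0)
    return conc_g_per_kg_c, stock_g_m2, bc_range_g_m2

# Hypothetical values for a 0-5 cm sample:
sample = {"B3CA": 0.005, "B4CA": 0.020, "B5CA": 0.060, "B6CA": 0.065}
print(bpca_summary(sample, total_c_g_per_kg_soil=15.0,
                   bulk_density_g_cm3=1.3, depth_cm=5.0))
```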
Data analyses
The effects of layer (forest floor, 0–5 and 5–15 cm soil; n = 4 per layer), slope (0–5°, 5–15°, 15–30°; n = 4 per slope), burn severity (unburned, moderate burn, high burn; n = 4 per severity) and all interaction terms on each response variable (soil C, soil N, BPCA-C stock, BPCA-C as a proportion of total C, and relative abundances of B4CA, B5CA, B6CA and the B5CA : B6CA ratio) were compared using the SAS mixed procedure (proc mixed); fixed variables were layer, slope and severity, and block and core were designated as random effects. Post hoc analysis for significant terms was conducted using Tukey's test. When necessary, dependent variable data were log-transformed (%C, %N, C stock, BPCA-C g m⁻²) to meet assumptions of equal variance and normality, which were assessed with Studentized residual diagnostic plots. The null hypothesis, i.e., that the independent factor had no effect or that no linear correlation existed between variables, was evaluated for all tests at α < 0.05. Analyses were run using SAS 9.4.
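The model above was fit in SAS proc mixed; for readers without SAS, a loosely analogous sketch using Python's statsmodels is given below. The file and column names are hypothetical, and using block as the single random grouping factor is a simplification of the block-plus-core random structure described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per experimental unit, with the design
# factors and one response column (here, BPCA-C stock in g per m^2).
df = pd.read_csv("hpf_bpca.csv")  # columns: block, layer, slope, burn, bpca_stock

# Log transform, as applied to the stock variables in the paper.
df["log_bpca"] = np.log(df["bpca_stock"])

# Fixed effects: full factorial of layer, slope and burn severity;
# random intercept for block (MixedLM accepts one grouping factor).
model = smf.mixedlm("log_bpca ~ layer * slope * burn", data=df,
                    groups=df["block"])
result = model.fit()
print(result.summary())
```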
Percent and stocks of C and N in forest floor and soil
Values ranged from 29 % C in the forest floor to 0.9 % C in the 5–15 cm soil layer, from 0.8 % N in the forest floor to 0.08 % N in the 5–15 cm soil layer, and from 40 in the forest floor to 13 in the 5–15 cm soil for C : N. We tested for effects of layer, burn severity, slope and their interactions and found that the main effects were distinct for each response variable (%C, %N and C : N). Effects of burn severity (p = 0.002) and layer (p < 0.001) on %C could not be independently assessed because the burn by layer interaction was also significant (p < 0.001, Table S1 in the Supplement). Only layer had an effect on %N (p < 0.001), while the C : N ratio was affected by slope (p = 0.009), burn intensity (p < 0.001), layer (p < 0.001) and their interaction (p < 0.001).
Post hoc comparisons (Table S2) confirmed expected decreases in %C and %N from forest floor to 5–15 cm soil (p < 0.001 for each successive layer), along with a decreasing C : N from forest floor to 0–5 cm soil (p < 0.001) and no change between 0–5 and 5–15 cm soil (p = 0.703). The burn severity by layer interaction term illustrated that the effects of burn were confined to the forest floor layer for %C and C : N. Within the forest floor layer, unburned sites had greater %C than moderately burned (p = 0.009) or highly burned sites (p < 0.001), and moderately burned sites also had greater %C than highly burned sites (p < 0.001). For the C : N ratio the pattern was the same: C : N was highest at unburned sites and decreased significantly at moderately burned sites (p < 0.001) and further still at highly burned sites (p < 0.001). Interestingly, slope also had an effect on C : N. Post hoc comparisons indicated that C : N on 0–5° slopes was lower than on 5–15° slopes (p = 0.028) and significantly lower than on 15–30° slopes (p = 0.012), while the 5–15° and 15–30° slopes were not different (p = 0.916).
Total C stocks varied considerably between the layers, from 3.8 t C ha⁻¹ in the forest floor to 25.3 t C ha⁻¹ in the 5–15 cm soil layer. The only significant effect on total C stocks was depth (p < 0.001), with the forest floor having a smaller C stock than the 0–5 and 5–15 cm soil layers (p < 0.001 for each). Soil bulk density values were not significantly different among any of the study sites (Table S1).
Benzene polycarboxylic acid C in forest floor and soil
We determined BPCA-C both relative to the amount of carbon and as a stock by volume of soil or forest floor, and found highly variable amounts of BPCA-C for both metrics. For forest floor, concentration values ranged from 0.09 g kg⁻¹ C in unburned forest floor to 40.0 g kg⁻¹ C in highly burned forest floor, and stocks ranged from 0.1 g m⁻² in unburned forest floor to 19.52 g m⁻² in moderately burned forest floor. In soils, concentration ranged from 2.86 g kg⁻¹ C in moderately burned 0–5 cm soil to 33.83 g kg⁻¹ C in highly burned 5–15 cm soils, and stocks ranged from 2.92 g m⁻² in highly burned 0–5 cm soils to 96.66 g m⁻² in unburned 5–15 cm soil. Burn severity and layer were the main effects on the concentration and stock of BPCA-C (Fig. 2, Table S3). Results of a mixed model (slope, burn severity, layer and interactions) indicated that there was no significant effect of slope either independently (p = 0.446) or in interaction (slope × burn p = 0.191, slope × layer p = 0.740) on BC concentration. Mean values for BPCA-C stock did decrease with increasing slope in moderately burned forest floor (0–5°: 18.2 ± 7.1; 5–15°: 14.8 ± 4.7; 15–30°: 11.8 ± 4.3 g m⁻²); however, the trend was not significant due to high variability. The independent effects of burn severity (concentration: p = 0.007; stock: p = 0.012) and layer (concentration: p = 0.610; stock: p < 0.001) could not be interpreted independently as the interaction of burn severity and layer was also significant (concentration and stock: burn × layer p < 0.001).
Post hoc comparisons indicated that within the forest floor layer, highly and moderately burned material contained significantly more BPCA-C, both by concentration and stock, than unburned material (Table S4, p < 0.001 for both). Within the 0–5 and 5–15 cm layers, there was no statistically significant difference in BPCA-C concentration or stock regardless of burn severity. Within unburned layers, 0–5 and 5–15 cm soils had significantly greater amounts of BPCA-C than forest floor, both by concentration (p < 0.001 and p = 0.004, respectively) and stock (p < 0.001 for both). Within high burn severity, forest floor and soil BPCA-C stocks and concentrations yielded distinct results: the amount of C that was BPCA-C was greater in the forest floor than in the 0–5 and 5–15 cm soils (p = 0.023 and p = 0.027, respectively), whereas the stock of BPCA-C was not significantly different among forest floor and soil layers in the high burn. We expected that the layer (forest floor, 0–5 and 5–15 cm soil) and burn severity might contribute to the distribution of BPCAs, with BC formed at different temperatures (B5CA : B6CA) or a higher proportion of more condensed C (B6CA : total BPCAs) with increasing soil depth. Overall, the bulk of the BPCAs were the B5CA and B6CA varieties, together making up approximately 80 % of the total BPCA-C. The B4CAs were the next most abundant (10–20 %), and the B3CAs made up less than 3 %. Results from statistical analyses indicated that layer was the main effect on the distribution of BPCAs (p < 0.001, Table S5). Layer also had a significant effect on the ratio of B5CA to B6CA (p = 0.002; Table S5, Fig. 4). Post hoc comparisons were used to evaluate the relative abundance of each BPCA by layer: the proportion of B6CA was greater in the 5–15 cm soils than in both the 0–5 cm soils and the forest floor layers (p < 0.001); B5CA was greater in the forest floor than in the 0–5 and 5–15 cm soils (p < 0.001 for both) and greater in 0–5 than in 5–15 cm soils (p = 0.037); B4CA was greater in 0–5 cm soils than in forest floor (p < 0.001) and 5–15 cm soils (p = 0.002), with no difference between forest floor and 5–15 cm soils (p = 0.148). The ratio of B5CA : B6CA decreased with depth due to both decreasing amounts of B5CA and increasing amounts of B6CA. The B5CA : B6CA ratio was significantly greater in the forest floor than in the 5–15 cm soils (p = 0.001; Fig. 3, Table S6).
Discussion
Our primary objective was to determine the C stocks, BPCA-C stocks and the proportion of C that was BPCA-C in ponderosa pine forest floor and soils following the HPF. BC can account for 1 to 45 % of the soil organic C, depending upon fire return interval (Czimczik et al., 2005; Saiz et al., 2014), ecosystem type, soil mineralogical properties (Preston and Schmidt, 2006) and other factors that influence OC stabilization (Knicker, 2011), as well as on the method used for quantification. Estimates of BC content based on BPCA measurements are generally lower than those made with photo-, chemical- or thermal-oxidation-based measurements or with NMR (Preston and Schmidt, 2006). Only a few studies have estimated the amount of BC in forest soils using the BPCA method, with values that range from 10 to 60 g kg⁻¹ organic C and 0–80 g m⁻² (Czimczik et al., 2003, 2005; Rodionov et al., 2006). Excluding unburned forest floor samples, we found values within this range, averaging 14 (±7) g BPCA-C kg⁻¹ C and 19 (±5) g BPCA-C m⁻². It is important to note that BPCAs are markers for BC, and their total amount is estimated to be 2–5 times lower than the amount of BC. This should be taken into consideration when comparing BPCA estimates with BC distribution values in systems that have been assessed with different methods (Brodowski et al., 2005; Glaser et al., 1998; Ziolkowski et al., 2011).
We also aimed to determine how the slope of the landscape and burn severity would influence the amount of BC in forest floor and soil layers following a major wildfire. We found that neither slope nor burn severity had an effect on BC concentration in soils. Interestingly, even the soils from unburned sites had an average BC content of 14 g BPCA-C kg⁻¹ C, suggesting a persistent BC pool from past fires. Within the forest floor layer, however, unburned sites contained very little BPCA-C while moderately and highly burned sites contained significantly more, averaging 18 g BPCA-C kg⁻¹ OC, suggesting that the majority of the BC remaining on the landscape after the HPF persisted in the forest floor rather than moving into the surface soil 4 months post-fire.
We expected that during the interval between the HPF (June 2012) and sample collection (October 2012), HPF-derived BC would have begun to move off of steeper slopes during post-fire erosion events, resulting in lower BC deposits on steeper slopes. However, we observed consistent BC content across slopes, with the HPF-derived BC isolated to the forest floor layer in both highly and moderately burned areas on a per unit C and per square meter basis. Although slope did not contribute to the landscape pattern of BC distribution over the time period of our study, the summer of 2012 was particularly dry with very few high-intensity rain events (Wohl, 2013). Thus, slope may only become a contributing variable to landscape-level post-fire BC distribution when there are precipitation events sufficient to produce significant sediment movement. In addition, steeper slopes generally have increased surface roughness in montane systems, constraining overland sediment movement (Wohl, 2013). We qualitatively examined photos of each of the collection sites and noted increased surface roughness in some of the steeper replicates; thus, increased surface roughness is a plausible explanation for similar BPCA-C values on shallow vs. steeper slopes.
The position of our sites in the landscape may have also contributed to the lack of effect of slope on BC distribution. Because our aim was to address slope rather than position, the sites were not oriented in a consistent up- or downslope manner; thus, some 0–5° sites are located on hilltops and others at valley bottoms. In addition, landscape position influences the location of ponderosa pine through elevational temperature and moisture gradients (Peet, 1981). We focused on ponderosa pine because it is the dominant vegetation in the drainage, located on a variety of slopes, whereas consideration of hillslope processes would require accounting for the differences in fire properties and BC inputs that would likely result from grass- or shrub-dominated areas (DeBano, 2000).
The only variable that we found responsive to slope was the C : N ratio, which increased with increasing slope. The constituent %C and %N values were not significantly different by slope, so the pattern was driven by both slight increases in %C and decreases in %N (Table 1). The trend of higher C : N at steeper sites has been noted on the Colorado Plateau (Norton et al., 2003) and was attributed to the accumulation of fresh, plant-derived, high-C : N forest floor on steeper slopes in an N-immobilizing environment and the movement of lower-C : N, partially decomposed material downslope with rain events. Thus, over time, steeper slopes do preferentially move material downslope, but this export mechanism did not apply to the BC that was stabilized in soils over time.
Concentrations of post-fire BC have been shown to be highest in the surface of moderately burned soils due to consumption of relict BC in highly burned areas (Czimczik et al., 2003). However, in our study, on a per unit C basis the amount of BC in surface 0–5 cm soils was not distinguishable across burn intensities (∼14 g BPCA-C kg⁻¹), while on a per square meter basis, moderately burned material had greater BC content (20 g BPCA-C m⁻²) than unburned material (17 g BPCA-C m⁻²). The cumulative difference between unburned and moderately burned material was driven by low BC content in the forest floor layer at unburned sites. While the highly burned material did not contain significantly less BC than the moderately burned material, it was also not significantly different from unburned material, largely driven by cumulative losses from both the forest floor and 0–5 cm soil BC stocks. Essentially, the stocks of BC at unburned and highly burned sites are the same; they are just distributed differently: the highly burned sites have greater BC stocks in forest floor than soil, and the unburned sites have greater BC stocks in soil than forest floor (Fig. 2).
We were initially surprised to find the same amount of BPCA-C in soils from unburned and burned sites. The BC at unburned sites must be from prior fires, making up a relatively small stock twice the size of the BC found in the forest floor from the HPF. These data suggest that eventually a proportion of the BC produced during the HPF will be introduced into the soils and retained in the ecosystem. Given a fire return interval of ∼70 years in ponderosa pine forests and a mean residence time for the BC stock in soils of approximately 300 years (Hammes et al., 2008; Schmidt et al., 2011), using first-order decay we calculated that 2.4 g m⁻², or 17 %, of the HPF-derived BC in the forest floor (∼14 g m⁻²) would need to be transferred to the 0–15 cm soils to maintain a steady-state stock (∼40 g m⁻²). This calculation contains a high degree of uncertainty; a longer residence time of BC would result in decreased incorporation, and the reverse would be the case for a shorter residence time, and fires with different properties will deposit different amounts of BC on the soil.
The estimate for BC incorporation described above is not meant to be used as a characteristic value of this ecosystem, but instead illustrates that the bulk of the BC in this system likely moves off the surface, either through incorporation into deeper soils, biotic or abiotic degradation, or export through erosion. BC incorporation at depth via water flow and biotic infiltration processes stimulated by soil fauna has been suggested to be the prime mechanism by which BC is sequestered in the soil (Czimczik and Masiello, 2007), although we would have expected to see some increase in the BC content of surface soils at burned sites if incorporation into deep soil were the dominant mechanism. An additional alternative is the loss of BC through biotic and abiotic degradation, as a proportion of BC is known to be labile (Zimmerman, 2010); however, that proportion is small (Stewart et al., 2013; Zimmerman and Gao, 2013) and other mechanisms are most likely to contribute to major loss pathways. Erosion rates in montane ecosystems post-fire can increase up to 3 orders of magnitude depending on the severity of the fire and the intensity of precipitation (Wagenbrenner and Robichaud, 2014). Erosion has been shown to be important for BC distribution, as previous work has demonstrated that approximately 50 % of BC may be lost through erosion processes (Major et al., 2010; Rumpel et al., 2009). While each of these loss mechanisms (degradation, downward translocation and erosion) may be important for BC distribution in the CLP drainage, preliminary BC data from sediment fences and river banks (Boot et al., 2014), along with a report on dissolved and particulate BC export (Wagner et al., 2015), suggest that erosion may be a dominant source of BC loss in this system.
Our third objective was to describe the distribution of BPCAs within forest floor and soil layers to determine whether the molecular structure of BC was characteristic by layer or influenced by burn severity. Recently, Wiedemeier and others confirmed that the proportion of B6CAs relative to the total BPCAs measured correlates directly with both the degree of condensation and the aromaticity of chars; thus, we used the relative abundance of B6CA : total BPCA to describe the molecular features of BC (Fig. 3). We found that B6CA relative abundance was greater in the 5–15 cm soils relative to the forest floor, suggesting that more condensed BC is present in deeper soils at these sites; there was no effect of burn severity on BPCA abundances. The relative abundance of B6CA has also been associated with the highest heat treatment temperature (HTT), with increasing HTT correlating with increasing condensation (Schneider et al., 2013). Forest fire temperatures are difficult to determine and can range from approximately 1000 °C in the canopy to a maximum of 850 °C at the surface, averaging approximately 300 °C, and rarely exceed 150 °C at 5 cm in the mineral soil (DeBano, 2000; Wolf et al., 2013). While it is tempting to derive HTT from BC deposited on the soil surface following the fire, it must be noted that surface and soil BC is likely a pool integrated across sources that were pyrolyzed over the range of fire temperatures. The amount of B6CA has been shown to correlate directly with HTT for bark and wood materials, yet no clear relationship exists between B6CA concentrations and the temperature of charring for pine-needle- or leaf-derived chars (Schneider et al., 2010, 2013). Information on HTT from B6CA alone can be bolstered by also using the ratio of B5CA : B6CA, which has a significant inverse linear relationship with combustion temperature. Natural chars range from B5CA : B6CA values of 1.3 to 1.9 for cooler-burning forest fires (∼300 °C), 0.8 to 1.4 for hotter grass and shrub fires (∼500 °C) and < 0.8 for the hottest-burning domestic fires (800 °C; Wolf et al., 2013). In HPF-impacted areas, the forest floor had a B5CA : B6CA ratio of 1.2, which is at the border between grass or shrub fires and forest fires and yields an integrated predicted temperature of around 400 °C, whereas the B5CA : B6CA ratio for the 5–15 cm soils was significantly lower, averaging 0.8 and thus corresponding to a higher combustion temperature of approximately 600 °C, which matches well with the temperatures that would be predicted from the B6CA content alone. Other studies have suggested that the pattern of BPCAs may be informative for determining the amount of processing by microorganisms (Rodionov et al., 2010), although these correlations have not been empirically validated, and abiotic degradation, such as preferential leaching of less condensed forms of BC, would also shift the relative abundance of the BPCA pattern (Abiven et al., 2011). Thus, the greater B6CA content and decreasing B5CA : B6CA ratio in deeper soils from our study may represent either BC derived from greater average HTT in past events, selective removal of less condensed forms of BC through preferential solubilization (Abiven et al., 2011) or other biotic or abiotic degradation of less condensed forms of BC.
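To make the ratio-to-temperature reasoning concrete, the sketch below bins a measured B5CA : B6CA ratio into the combustion-temperature ranges quoted from Wolf et al. (2013); the cut points come straight from the text, while treating them as a lookup is a simplification of ours, since the published ranges overlap.

```python
def htt_classes(b5ca_to_b6ca):
    """Map a B5CA:B6CA ratio onto the combustion-temperature ranges
    quoted in the text (Wolf et al., 2013); overlapping ranges mean a
    borderline ratio can fall into more than one class."""
    r = b5ca_to_b6ca
    classes = []
    if 1.3 <= r <= 1.9:
        classes.append("cooler forest fire (~300 degC)")
    if 0.8 <= r <= 1.4:
        classes.append("hotter grass/shrub fire (~500 degC)")
    if r < 0.8:
        classes.append("hottest domestic fire (~800 degC)")
    return classes or ["outside the published natural-char ranges"]

print(htt_classes(1.2))  # forest floor in this study
print(htt_classes(0.8))  # 5-15 cm soils in this study
```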
Conclusions
The distribution of BC on a landscape will influence how an ecosystem recovers following a wildfire. Although BC is generally considered nearly biologically inert, its impact on soil physical properties may alter biogeochemical cycling. For example, BC amendments in agricultural systems (as biochar) have been shown to change water-holding capacity and nutrient retention (Lehmann, 2007); thus, its persistence in post-fire soils may be beneficial to, or otherwise alter, vegetation recovery dynamics. BC has also been shown to enhance the growth of microorganisms, potentially increasing the accumulation of new SOM (Bird et al., 1999). In addition to altering post-fire recovery dynamics, the movement of BC following wildfire also has implications for water quality, including municipal water treatment techniques as well as reductions in primary productivity in streams and sediments through increased sediment load (Wood and Armitage, 1997). Our results suggest the vast majority of HPF-derived BC deposited on the landscape persisted in the forest floor 4 months post-burn, regardless of slope, and was formed at an average temperature of approximately 400 °C. Stocks of BC in this montane ecosystem were relatively small and were not altered by the HPF; thus, subsequent distribution will be governed by modes of BC loss, likely related to erosion of the forest floor layer, and may also include transport into the soils via dissolution and translocation as well as biotic or abiotic degradation.
The Supplement related to this article is available online at doi:10.5194/bg-12-3029-2015-supplement.
Figure 1. Location and classification (burn severity and slope) of study sites (n = 36) in the dominant ponderosa pine vegetation, highlighted in green, within the High Park fire burn area.
Figure 4. Ratio of B5CA to B6CA from 0.2 to 2.0, illustrating an increasing amount of B6CA and a decreasing amount of B5CA with increasing soil depth (n = 12 per layer) and a significant difference between the forest floor and 5–15 cm ratios (p < 0.001).
Author contributions. M. F. Cotrufo and K. Paustian designed the experiment. M. Haddix coordinated and executed field sampling and site characteristic analyses. C. M. Boot and M. L. Haddix performed the BPCA analyses, and C. M. Boot prepared the manuscript.
Table 1. Site characteristics (%C, %N, C : N, C stock) of forest floor, 0–5 cm soil and 5–15 cm soil classified by burn severity and slope. Mean values are reported with standard errors in parentheses (n = 4).
"year": 2014,
"sha1": "f6f9e6a7265834533fb7172bb7d741f6e5941cf1",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/12/3029/2015/bg-12-3029-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "68daf4ea788163dd52da8bb752b34e41903a0d2c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Sensitivity Analysis of Alisma plantago-aquatica L., Cyperus difformis L. and Schoenoplectus mucronatus (L.) Palla to Penoxsulam
Determining the intra-specific variability of the response to a given herbicide is important for monitoring possible shifts in the sensitivity of weed populations. This study describes the responses of populations of Alisma plantago-aquatica, Cyperus difformis, and Schoenoplectus mucronatus from Italy, Greece, Portugal, and Spain to penoxsulam, an acetolactate synthase (ALS) inhibitor widely used in rice. To evaluate previously evolved resistance to ALS inhibitors, sensitivity to azimsulfuron and bensulfuron-methyl was assessed. Dose-response experiments with penoxsulam were performed in a greenhouse simulating paddy rice field conditions. Log-logistic dose-response curves were used to estimate the ED50, ED80 and ED90, and the GR50, GR80 and GR90. To calculate the average ED and GR values and assess the intra-specific variability, an artificial resampling method was used. Populations ALSPA 0364, 0365, 0469, 0470 and 0471, SCPMU 0371, 0475 and 0267, and CYPDI 0013, 0431, 0432 and 0433 appeared to be resistant to sulfonylureas but showed a higher sensitivity to penoxsulam, while populations ALSPA 0363, CYPDI 0223 and SCPMU 9719 proved to be cross-resistant. Regardless of species, the ED90 values of susceptible populations were below the penoxsulam label dose (40 g ai ha⁻¹), while they exceeded 320 g ai ha⁻¹ for resistant populations. Average GR50 values were generally lower than ED50 values. Sensitivity variability among susceptible populations is relatively low, allowing discrimination between susceptible and resistant populations, and previously evolved resistance to sulfonylureas can influence sensitivity to penoxsulam.
Introduction
Rice is grown on about 500,000 ha in Europe, with Italy being the main producer, cultivating around 240,000 ha. Weeds are the main pest category for rice, especially in paddy conditions, so chemical weed control is an essential component of crop management. Intensive use of herbicides with the same site of action (SoA), mainly inhibitors of acetolactate synthase (ALS), is a common characteristic of weed management in rice. This situation strongly increases the risk of selecting herbicide-resistant biotypes, which have constantly increased over the last 20 years [1,2]. Since the mid-1990s, several rice weed species have evolved resistance to ALS inhibitors in Europe, the United States of America (USA), and Korea [3–7]. More specifically, it is estimated that at least half of the area cultivated with rice in Italy has been infested with ALS-resistant weed biotypes [2].
Each collected sample consisted of a seed batch representative of the intra-population variability. Some known ALS-resistant biotypes, i.e., populations 0363 of A. plantago-aquatica, 0013 and 0223 of C. difformis, and 9719 and 0267 of S. mucronatus, were also included in the experiment as ALS-resistant checks. The samples were collected before penoxsulam was commercialized; however, other ALS inhibitors (SUs) had been used in previous years. Some populations, i.e., 0367, 0368, 0473, 0474, and 0475 of A. plantago-aquatica, 0439 and 0440 of C. difformis, and 0478 of S. mucronatus, were deliberately collected in areas where no ALS inhibitor had been used before, to assess whether previous exposure to a herbicide with this SoA could affect sensitivity to penoxsulam. Information regarding the origin and any previous exposure to ALS inhibitors of each population is reported in Table 1.
Plant Material and Growing Conditions
Different treatments were performed to break dormancy and promote the germination of the three species. Seeds of C. difformis were stratified at 4 °C on moistened filter paper in Petri dishes for 15 days and then immersed for 10 min in a sodium hypochlorite solution (10 mL L⁻¹) to prevent fungal development. Seeds were finally put in Petri dishes filled with an agar substrate (60 mg g⁻¹) containing KNO₃ (20 mg g⁻¹) and incubated at a thermal regime of 25 (light)/15 (dark) °C with a 12/12 h (light/dark) photoperiod. Seedlings were transplanted at the cotyledon stage into Styrofoam containers, each with 24 cavities (diameter 60 mm and 100 mm deep), filled with a mixed substrate (silt loam soil 62%, sand 35%, and peat 3%). A longer chilling treatment (30 days) at 4 °C, as reported by Scarabel et al. [21], was adopted for dormancy-breaking of S. mucronatus seeds, which were then directly sown in Styrofoam containers. Seeds of A. plantago-aquatica were sown in plastic trays with a perforated bottom, filled with a 2–3 cm layer of peat. The trays were left floating on water for 20 days to maintain the substrate wet and achieve good germination. Seedlings were then transplanted into Styrofoam containers. Seedlings of the three species were produced from the collected seed batches and then placed in Styrofoam containers, each with 12 cavities (diameter 60 mm and 100 mm deep), filled with a mixed substrate (silt loam soil 62%, sand 35%, and peat 3%). Two seedlings were maintained in each cavity, so the experimental unit was made up of 24 plants. In order to simulate the environmental conditions of rice fields, the containers were placed in a greenhouse and left floating in plastic trays filled with water. Two days before the herbicide application, the containers were completely submerged and then remained in this condition for the rest of the experiment. Temperatures in the greenhouse were maintained throughout the experiments in a range similar to paddy rice field conditions in the period of herbicide applications. Daily temperature fluctuations ranged from an average minimum of 14.3 °C (range 7.5–23.0 °C) during the night to an average maximum of 33.1 °C (range 12.0–44.0 °C) during the day.
Preliminary Screenings
Given that several populations had previously been exposed to other ALS-inhibiting herbicides, a separate preliminary screening was conducted for each species according to the methodology described by Sattin et al. [22] to assess the sensitivity of the populations to penoxsulam, as well as to two other ALS inhibitors widely used in rice fields in the areas where the studied populations were collected, i.e., azimsulfuron and bensulfuron-methyl. Population 0363 of A. plantago-aquatica was tested only with bensulfuron-methyl due to poor germination and the consequently reduced number of seedlings available for the experiment. Azimsulfuron (Gulliver®, 600 g ai kg⁻¹, Cheminova Agro Italia, Bergamo, Italy) and bensulfuron-methyl (Londax 60DF®, 600 g ai kg⁻¹, Du Pont de Nemours Italiana, Milano, Italy) were applied at the label dose, corresponding to 20 and 60 g ai ha⁻¹, respectively, while penoxsulam (Viper™, 20 g ai L⁻¹, Dow Agrosciences Italia, Bologna, Italy) was applied at half the label dose, i.e., 20 g ai ha⁻¹, to identify populations that could be at an initial phase of developing resistance. A surfactant (Trend 90®, isodecyl alcohol ethoxylate 900 g L⁻¹, Du Pont de Nemours Italiana, Milano, Italy) was added to the azimsulfuron solution at a dose of 1 mL L⁻¹. Herbicide application was performed on plants at the 2–4 true leaf stage, corresponding to stages 12–14 of the extended BBCH scale [23]. Applications were made using a precision bench sprayer with a boom equipped with three flat-fan hydraulic nozzles (TeeJet XR11002-VK, Glendale Heights, IL, USA). Nozzles were 0.5 m apart and delivered a spray volume of 300 L ha⁻¹, applied at a pressure of 215 kPa and a speed of 0.75 m s⁻¹. The experimental layout was a completely randomized design with three replicates of 24 plants each. Plants were counted just before the herbicide application, and plant survival was determined 4–5 weeks after treatment (WAT) and expressed as a percentage of the plants observed before treatment for that specific experimental unit. Plants showing no active growth, i.e., no size increase or new leaf production, regardless of color, were considered to be dead. Fresh weight of above-ground biomass was recorded for each replicate and expressed as a percentage of the average fresh weight biomass of the untreated replicates of the corresponding population. Means and standard errors were calculated for plant survival and fresh weight of above-ground biomass for each population × herbicide combination. Populations with plant survival above 20% when treated with one of the tested herbicides at the label dose were considered resistant [24]. The experiments were repeated twice.
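A minimal sketch of the survival bookkeeping and the 20% resistance threshold described above; the population labels and plant counts are hypothetical.

```python
def survival_pct(alive_at_4wat, counted_at_treatment):
    """Plant survival as % of the plants counted just before treatment."""
    return 100.0 * alive_at_4wat / counted_at_treatment

def is_resistant(survival_percent, threshold=20.0):
    """Resistant if survival at the label dose exceeds the 20% threshold."""
    return survival_percent > threshold

# Hypothetical records: (population, plants alive at 4 WAT, plants treated)
records = [("POP-A", 18, 24), ("POP-B", 2, 24)]
for pop, alive, total in records:
    s = survival_pct(alive, total)
    print(pop, f"{s:.0f}%", "resistant" if is_resistant(s) else "susceptible")
```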
Dose-Response Experiments
Separate dose-response experiments were conducted for each species. All of the populations were included in these experiments to assess their sensitivity to penoxsulam. Following the same procedures used in the preliminary experiment, described above, plants were treated with a range of penoxsulam doses (2.5, 5, 10, 20, 40, 80, and 160 g ai ha−1, covering from 1/16 to 4 times the label dose). Additional doses of 1.25 and 320 g ai ha−1 were included for the populations that showed a very high or very low sensitivity to ALS inhibitors according to the preliminary experiment, respectively. Untreated controls were also included for all populations. The experimental layout was a completely randomized design with three replicates of 24 plants for each treatment. Plant survival and fresh weight were estimated at 4 WAT using the same procedure as in the preliminary experiment. Means and standard errors were estimated for plant survival and fresh weight of each treatment. The fresh weight and survival data were used to estimate ED50, ED80, and ED90, i.e., the effective dose required to kill 50%, 80%, and 90% of the treated plants, and GR50, GR80, and GR90, i.e., the effective dose required to obtain a growth reduction of 50%, 80%, and 90% in comparison with the untreated plants, for each population using a log-logistic model [25] with the following equation (fitted separately to the fresh weight and survival data):

U_ij = D / {1 + exp[b_i (ln z_ij − ln ED50,i)]},

where U_ij denotes the fresh weight or survival at the jth dose of the ith penoxsulam preparation (z_ij), D denotes the upper asymptote, i.e., the average fresh weight/survival of the untreated plants, and b_i is the slope of the dose-response curve at the inflection point. Since the fresh weight for each replicate was expressed as a percentage of the average fresh weight of the untreated plants of the corresponding population, the corresponding upper asymptote was set to 100%. A common D value for all penoxsulam preparations was assumed, while the ED50, ED80, ED90, and b parameters were estimated for each dose-response curve. ED50, ED80, and ED90 doses were compared using a t-test. SAS 9.1 software (SAS Institute Inc., Cary, NC, USA) was used to perform this analysis. Given that the number of susceptible populations of each species was rather low, to calculate the average ED50 and ED80 of fresh biomass and plant survival and to assess the range of intra-specific variability (95% confidence interval) for the three species, an artificial re-sampling method (with 5000 samples) was performed with a bootstrap procedure [26] using a Microsoft Excel 2010 macro (Microsoft, Redmond, WA, USA). Any populations that proved to be resistant or even partially resistant were not included in the analysis.
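As an illustration of the fitting and re-sampling steps (the actual analysis used SAS 9.1 and an Excel macro), the following Python sketch fits the log-logistic model with SciPy and bootstraps a confidence interval. The dose series matches the experimental doses, but the response values, starting guesses, and helper names (log_logistic, gr_level) are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter log-logistic dose-response fit, with D fixed at 100% for
# data expressed as percentage of the untreated control.
def log_logistic(z, b, ed50, D=100.0):
    """U = D / (1 + exp(b * (ln z - ln ED50)))."""
    return D / (1.0 + np.exp(b * (np.log(z) - np.log(ed50))))

doses = np.array([1.25, 2.5, 5, 10, 20, 40, 80, 160])   # g ai ha-1
biomass = np.array([95, 88, 70, 45, 20, 8, 3, 1])       # % of untreated (invented)

(b_hat, ed50_hat), _ = curve_fit(log_logistic, doses, biomass, p0=[1.0, 10.0])

# GR80 is the dose giving an 80% growth reduction (U = 20% of the control);
# the fitted curve can be inverted analytically.
def gr_level(frac_reduction, b, ed50, D=100.0):
    u = D * (1.0 - frac_reduction)
    return ed50 * ((D - u) / u) ** (1.0 / b)

print(f"GR50 = {ed50_hat:.1f}, GR80 = {gr_level(0.8, b_hat, ed50_hat):.1f} g ai ha-1")

# 5000-sample bootstrap for a 95% CI on GR50; here dose-response points are
# re-sampled with replacement as a stand-in for the replicate-level
# re-sampling described in the text.
rng = np.random.default_rng(0)
boot = []
for _ in range(5000):
    idx = rng.integers(0, len(doses), len(doses))
    try:
        (_, e), _ = curve_fit(log_logistic, doses[idx], biomass[idx], p0=[1.0, 10.0])
        boot.append(e)
    except RuntimeError:
        continue
print("GR50 95% CI:", np.percentile(boot, [2.5, 97.5]))
```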
Preliminary Screenings
Bartlett's test revealed that the two runs of the experiment conducted for each species did not differ, so the data were averaged over the six replicates (three for each run). A wide variability of sensitivity to ALS inhibitors was observed among the populations of all three species, regarding both the percentages of plant survival and the fresh biomass, which gave similar results (Spreadsheet S1). Standard errors for A. plantago-aquatica were rather high, especially for the fresh weight data (Figure 1). This was likely a consequence of the extremely small seeds and very slow seedling establishment. The efficacy of the two SUs was very good on the still-susceptible populations of C. difformis and S. mucronatus, whereas none of the A. plantago-aquatica populations were completely controlled by these herbicides, not even those collected in sites where SUs had never or only sporadically been used (see Table 1 and Figures 1-3). As well as the resistant checks (ALSPA 0363, CYPDI 0013 and 0223, and SCPMU 9719 and 0267), other populations that were collected in Italy as well as in Spain and Portugal appeared to be clearly resistant to both sulfonylureas, i.e., ALSPA 0364, 0365, 0469, 0470, and 0471 (Figure 1), CYPDI 0013, 0223, 0431, 0432, and 0433 (Figure 2), and SCPMU 9719, 0371, 0267, and 0475 (Figure 3). In general, a higher sensitivity to penoxsulam in comparison with the two SUs was observed for the three species. The application of penoxsulam at half of the full label dose (i.e., 20 g ai ha−1) achieved a control level higher than or equal to that of the full label dose of azimsulfuron and bensulfuron-methyl, even on most of the ALS-resistant populations (Figures 1-3). This effect is particularly notable for S. mucronatus (Figure 3). However, significant percentages of plants surviving the penoxsulam application were observed for ALSPA populations 0364, 0365, 0469, 0470, and 0471 (Figure 1), CYPDI 0013, 0223, 0431, 0432, and 0433 (Figure 2), and SCPMU 9719 and 0475 (Figure 3). These populations can therefore be considered cross-resistant to penoxsulam, azimsulfuron, and bensulfuron-methyl, or at least as shifting towards this situation.
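The run-pooling step at the start of this section can be illustrated with a minimal sketch; the replicate values are hypothetical, and scipy.stats.bartlett performs the homogeneity-of-variance test named above:

```python
import numpy as np
from scipy.stats import bartlett

# Hypothetical survival data (%) for one population x herbicide combination,
# with three replicates per experimental run, illustrating the homogeneity
# check used before pooling the two runs into six replicates.
run1 = np.array([12.5, 16.7, 8.3])
run2 = np.array([14.0, 10.0, 12.0])

stat, p = bartlett(run1, run2)
if p > 0.05:
    pooled = np.concatenate([run1, run2])
    print(f"runs homogeneous (p = {p:.2f}); pooled mean = {pooled.mean():.1f}%")
else:
    print("runs differ; analyse each run separately")
```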
Dose-Response Experiments
Although the populations that are resistant to SUs were included in the dose-response experiment, and some of them were still adequately controlled by the recommended field dose of penoxsulam, they were not considered for the estimation of intra-specific variability, because most of them appeared to be shifting towards a less susceptible status. Three of the SU-resistant checks (i.e., ALSPA 0363, CYPDI 0223, and SCPMU 9719) were confirmed to be highly cross-resistant to penoxsulam, whereas others (i.e., ALSPA 0365, 0469, 0470, and 0471; CYPDI 0013, 0431, 0432, and 0433; SCPMU 0267 and 0475) were only partially controlled by the triazolopyrimidine sulfonamide (Figures 4-6) (Spreadsheet S1). Statistical analysis of the susceptible populations revealed that, within each species, the curves fitted for both "traits" were not parallel, and therefore ED50 or GR50 alone does not explain the overall variability. ED80 and ED90, as well as GR80 and GR90, were then also calculated; however, ED90 and GR90 were not considered, because of their relatively high variability, in the analysis to calculate an average value for each species and to assess the range of intra-specific variability. Overall, GR values were lower than ED values. The intra-specific variability regarding EDs, GRs, and curve slopes (data not shown) among populations was relatively low in all three weeds, with no apparent effect of the population origin (i.e., country or cultivated/uncultivated sampling site). Regardless of species, all SU-susceptible populations were very well controlled by penoxsulam, and even all ED90 values were well below half of the label dose (Figures 4-6). GRs of C. difformis and S. mucronatus were even lower, with GR80 and GR90 ranging below one-fourth of the recommended field dose of penoxsulam (i.e., 40 g ai ha−1). Variability among and within replicates for A. plantago-aquatica was again rather high, especially for the fresh weight data, so only ED values for plant survival are reported (Figure 4). The average ED50, ED80, GR50, and GR80 values estimated for the three species using a bootstrap procedure, considering only the populations still susceptible to ALS inhibitors, were clearly lower than the recommended field dose of penoxsulam (40 g ai ha−1), ranging from 1.4 to 8.4 g ai ha−1 (Table 2).
Table 2. Values of effective dose for 50% and 80% estimated plant survival (ED50 and ED80) and fresh biomass reduction (GR50 and GR80) of Alisma plantago-aquatica (ALSPA), Cyperus difformis (CYPDI), and Schoenoplectus mucronatus (SCPMU).
Discussion
Overall, the experiments with A. plantago-aquatica were less reliable due to the high variability between replicates. This was because of substantial differences in biomass accumulation and growth rate naturally occurring between individuals of this species (Sattin, personal communication), likely due to the extremely small seed size, with a 1000-seed weight lower than 0.5 g [27]. The efficacy of penoxsulam was affected by the previously evolved herbicide resistance selected by herbicides having the same SoA, i.e., ALS inhibitors; several populations of the three species can indeed be considered to be cross-resistant to penoxsulam, azimsulfuron, and bensulfuron-methyl, or at least as shifting towards this situation. However, penoxsulam generally showed a higher efficacy than the SUs. Good control levels were observed even on some SU-resistant A. plantago-aquatica, C. difformis, and S. mucronatus populations, confirming what was previously reported by Tabacchi et al. [28]. These SU-resistant populations are probably characterized by point mutations in the ALS gene that confer resistance to SUs but not to penoxsulam, due to the partially different binding site of the triazolopyrimidine sulfonamides. Different point mutations, or even different amino acid substitutions, can confer different patterns of cross-resistance to the different groups of ALS inhibitors (e.g., sulfonylureas (SU), imidazolinones (IMI), triazolopyrimidine sulfonamides (TP), pyrimidinylbenzoates (PB), and sulfonylaminocarbonyltriazolinones (SCT)), as widely reported for several weed species by Tranel et al. [29]. Calha et al. [30] described cross-resistance to the SUs azimsulfuron, bensulfuron-methyl, cinosulfuron, and ethoxysulfuron, but not to imazethapyr (IMI), for A. plantago-aquatica populations collected in rice paddy fields in Portugal. Different patterns of cross-resistance to ALS inhibitors were also reported for Italian, North American, and Spanish populations of C. difformis [5,7,31,32], ranging from populations that are resistant only to SUs to others cross-resistant to the SUs bensulfuron-methyl, halosulfuron-methyl, and orthosulfamuron, as well as to imazethapyr (IMI), bispyribac-sodium (PB), penoxsulam (TP), and propoxycarbazone-sodium (SCT) [7]. Similarly, previous studies reported S. mucronatus populations having different patterns of cross-resistance in Italy, the USA, and Chile [8,31,33]. Two point mutations of the ALS gene (Pro197 to His and Trp574 to Leu) are the most frequently reported as endowing resistance to ALS inhibitors, but with different cross-resistance patterns. Pro197 to His is often associated with resistance to SUs but only low or moderate resistance to IMI or TP, while Trp574 to Leu confers broad cross-resistance to the different ALS-inhibitor groups [8,33-36]. However, Pro197 to His has recently been detected in an American C. difformis population that is characterized by cross-resistance to bispyribac-sodium (PB), halosulfuron (SU), imazamox (IMI), and penoxsulam (TP) [32], so the level of homozygosity for this point mutation in a given individual can probably influence the level and pattern of resistance to ALS inhibitors. Since different levels of ploidy (diploid and tetraploid) have been reported for C. difformis [36-38], the existence of multiple copies of ALS genes could also contribute to modifying the pattern and level of resistance conferred by a specific point mutation.
Different point mutations, or different amino acid substitutions at the same mutation site endowing herbicide resistance, can be present within the same population or in a certain cropping area [39,40]; therefore, the resistance level and pattern at the population level depend on the frequency of the various resistant genotypes at that moment. The difference in sensitivity to penoxsulam of the SU-resistant populations included in this study could be related to the diverse presence and frequency of point mutations endowing different resistance patterns, such as Pro197 to His and Trp574 to Leu. Populations presenting relevant cross-resistance to SUs and penoxsulam (i.e., ALSPA 0363, CYPDI 0223, and SCPMU 9719) probably have a high frequency of genotypes with the mutation Trp574 to Leu, and this was reported for population SCPMU 9719 in a previous study [8]. Populations that are highly SU-resistant but susceptible to penoxsulam (i.e., ALSPA 0364, CYPDI 0433, SCPMU 0267 and 0371) are instead mainly composed of genotypes with the mutation Pro197 to His. The presence and frequency within a given population of point mutations endowing ALS resistance is a consequence of the previous exposure to ALS inhibitors, but it is also continuously evolving according to the selection that is caused by current herbicide use and other control tools. Genotypes with point mutations conferring resistance
Figure 1. Sensitivity of populations of Alisma plantago-aquatica (ALSPA) to three ALS inhibitors: azimsulfuron (A) and bensulfuron-methyl (B), applied at the full label dose corresponding to 20 and 60 g ai ha−1, respectively, and penoxsulam (C), applied at half the full label dose, corresponding to 20 g ai ha−1. Population 0363 is an ALS-resistant check. Values of survival (black bars) and biomass (grey bars) are expressed as percentage of the untreated control. Vertical bars represent standard errors. Due to the low germination obtained for population 0363, only the screening with bensulfuron-methyl was conducted.
Figure 2. Sensitivity of populations of Cyperus difformis (CYPDI) to three ALS inhibitors: azimsulfuron (A) and bensulfuron-methyl (B), applied at the full label dose corresponding to 20 and 60 g ai ha−1, respectively, and penoxsulam (C), applied at half the full label dose, corresponding to 20 g ai ha−1. Populations 0013 and 0223 are ALS-resistant checks. Values of survival (black bars) and biomass (grey bars) are expressed as percentage of the untreated control. Vertical bars represent standard errors.
Figure 3. Sensitivity of populations of Schoenoplectus mucronatus (SCPMU) to three ALS inhibitors: azimsulfuron (A) and bensulfuron-methyl (B), applied at the full label dose corresponding to 20 and 60 g ai ha−1, respectively, and penoxsulam (C), applied at half the full label dose, corresponding to 20 g ai ha−1. Populations 9719 and 0267 are ALS-resistant checks. Values of survival (black bars) and biomass (grey bars) are expressed as percentage of the untreated control. Vertical bars represent standard errors.
Figure 4. Estimation of ED50, ED80, and ED90 (ED50 black bars, ED80 white bars, and ED90 grey bars) of penoxsulam for plant survival (S) of populations of Alisma plantago-aquatica (ALSPA). Red bars indicate populations resistant to SUs according to the preliminary screenings. Population 0363 is an ALS-resistant check. Vertical bars represent 95% confidence intervals. The recommended field dose of penoxsulam (40 g ai ha−1) is represented by the horizontal black line.
Figure 6. Estimation of ED50, ED80, and ED90 for plant survival (S) and of GR50, GR80, and GR90 for fresh biomass reduction (B) of penoxsulam (ED50 and GR50 black bars, ED80 and GR80 white bars, ED90 and GR90 grey bars) for populations of Schoenoplectus mucronatus (SCPMU). Red bars indicate populations resistant to SUs according to the preliminary screenings. Populations 9719 and 0267 are ALS-resistant checks. Due to high variability, GR90 of populations 0267, 0472, 0473, 0474, and 0475 are not reported. Vertical bars represent 95% confidence intervals. The recommended field dose of penoxsulam (40 g ai ha−1) is represented by the horizontal black line.
- indicates no precise information available.
"year": 2018,
"sha1": "4da5d23af3aa9cb75dab2c4dc9bb62e8d2d8a763",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/8/10/220/pdf?version=1538991853",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4da5d23af3aa9cb75dab2c4dc9bb62e8d2d8a763",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
A proposal is made for the quantum state of the universe that has an initial state that is macroscopically time symmetric about a homogeneous, isotropic bounce of extremal volume and that at that bounce is microscopically in the ground state for inhomogeneous and/or anisotropic perturbation modes. The coarse-grained entropy is minimum at the bounce and then grows during inflation as the modes become excited away from the bounce and interact (assuming the presence of an inflaton, and in the part of the quantum state in which the inflaton is initially large enough to drive inflation). The part of this pure quantum state that dominates for observations is well approximated by quantum processes occurring within a Lorentzian expanding macroscopic universe. Because this part of the quantum state has no negative Euclidean action, one can avoid the early-time Boltzmann brains and Boltzmann solar systems that appear to dominate observations in the Hartle-Hawking no-boundary wavefunction.
Introduction
Even if physicists succeed in finding a so-called 'Theory of Everything' or TOE that gives the full set of dynamical laws for our universe, it appears that that will be insufficient to explain our past observations and to predict new ones. The reason is that each set of dynamical laws, at least of the kind we are familiar with, permits a wide variety of solutions, most of which would be inconsistent with our observations. We need a set of initial conditions and/or other boundary conditions to restrict the possible solutions to fit what we observe. In a quantum description of the universe with fixed dynamical laws (the analogue of the Schrödinger equation for nonrelativistic quantum mechanics), we need not only these dynamical laws but also the quantum state itself (cf. [1]). (We also need the rules for extracting observational probabilities from the quantum state [2,3,4,5] for solving the measure problem in cosmology, which is another extremely important issue, but I shall not focus on that in this paper.) To put it another way, our observations strongly suggest that our observed portion (or subuniverse [6] or bubble universe [7,8] or pocket universe [9]) of the entire universe (or multiverse [10,11,12,13,14,15,16,17] or metauniverse [18] or omnium [19] or megaverse [20]) is much more special than is implied purely by the known dynamical laws. For example, it is seen to be enormously larger than the Planck scale, with small large-scale curvature, and with approximate homogeneity and isotropy of the matter distribution on the largest scales that we can see today.
It especially seems to have had extraordinarily high order in the early universe to enable its coarse-grained entropy to increase and to give us the observed second law of thermodynamics [21,22,23]. The known dynamical laws do not imply these observed conditions.
However, Leonard Susskind [41] (cf. [42,43,44]) has made the argument, which I have elaborated [45], that in the no-boundary proposal the cosmological constant or quintessence or dark energy that is the source of the present observations of the cosmic acceleration [46,47,48,49,50,51,52] would give a very large Euclidean 4-hemisphere as an extremum of the Hartle-Hawking path integral that would apparently swamp the extremum from rapid early inflation by amplitude factors of the order of e^{10^{122}}. Therefore, to very high probability, the present universe should be very nearly empty de Sitter spacetime, which is certainly not what we observe. Even if we restrict to the very rare cases in which a solar system like ours occurs, the probability in the Hartle-Hawking no-boundary proposal seems to be much, much higher for a single solar system in an otherwise empty universe than for a solar system surrounded by other stars such as what we observe.
The tunneling proposals have also been criticized for various problems [53,40,54,55,56,57]. For example, the main difference from the Hartle-Hawking no-boundary proposal seems to be the sign of the Euclidean action [35,36]. It then seems problematic to take the opposite sign for inhomogeneous and/or anisotropic perturbations without leading to some instabilities, and it is not clear how to give a sharp distinction between the modes that are supposed to have the reversed sign of the action and the modes that are supposed to retain the usual sign of the action. Vilenkin and his collaborators have emphasized [35,39,40] that the instabilities do not seem to apply to his particular tunneling proposal, which does not just reverse the sign of the Euclidean action. However, Vilenkin (with Garriga) admits [40] that "both wavefunctions are far from being rigorous mathematical objects with clearly specified calculational procedures. Except in the simplest models, the actual calculations of ψ T and ψ HH involve additional assumptions which appear reasonable, but are not really well justified." Therefore, at least unless and until any of these proposals can be made rigorous and can be shown conclusively to avoid the problems attributed to them, it is worth searching for and examining other possibilities for the quantum state of the universe or multiverse. In a previous paper [58], I proposed a 'no-bang' quantum state which is the equal mixture of the Giddings-Marolf states [59] that are asymptotically single de Sitter spacetimes in both past and future and are regular on the throat or neck of minimal three-volume. However, it does not appear to work if one adopts my proposal of volume averaging [2] to help solve the late-time aspect of the Boltzmann brain problem.
The Boltzmann brain problem [42,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,59,75,76,77,78,79,80,81,82,83,84] is the problem that many cosmological theories seem to predict that our observations would be highly improbable in comparison with much more disordered observations of Boltzmann brains that these theories predict should enormously dominate over ordinary observers. Boltzmann brains are observers that appear from thermal or vacuum fluctuations. The probability of a Boltzmann brain per four-volume is extremely tiny (say roughly e −10 42 [66,68,70]), but if the universe lasts for an infinite time, and especially if its threevolume grows asymptotically exponentially, and if there are only a finite number of ordinary observers per comoving three-volume, then per comoving volume the Boltzmann brains will dominate and make our ordered observations very atypical and improbable relative to the much more disordered typical Boltzmann brain observations. (The dominance by Boltzmann brains at very late times, which might occur in any universe that lasts forever, I call the late-time Boltzmann brain problem; the Hartle-Hawking no-boundary proposal appears to suffer from what might be called an early-time Boltzmann brain problem, that at all times Boltzmann brains seem to dominate over ordinary observers [42,43,44,41,45].) Originally I proposed a solution to the Boltzmann brain problem in which the universe might be likely to decay before Boltzmann brains would dominate [62,64,66,68,70], but this seemed to require fine-tuning of whatever physics might determine the decay rate (though see [85] for a possible anthropic explanation of this decay rate). Therefore, I turned to another possible solution, that one should go from volume weighting to volume averaging [2] to extract observational probabilities. This would eliminate the effect of the exponentially growing 3-volumes in the asymptotic future, though there still remains a much less rapid divergence on the weighting of Boltzmann brains from an infinite future lifetime of the universe, unless one went beyond 3-volume averaging to 4-volume averaging that would allow a possible anthropic explanation of a decaying universe [85]. However, if one goes from volume weighting to volume averaging to mitigate the late-time Boltzmann brain problem, the no-bang state then appears to suffer qualitatively from the same problem as the no-boundary state of being dominated by thermal perturbations of nearly empty de Sitter spacetime, so that almost all observers would presumably be Boltzmann brains. Since this would almost certainly make our observations very unlikely, the no-bang proposal apparently is observationally excluded if one uses volume averaging rather than volume weighting. (The no-boundary state appears to be excluded if either rule were used for extracting probabilities from the quantum state, since it has both an early-time and a late-time Boltzmann brain problem.) In this paper, instead of the mixed 'no-bang' state, I shall propose a pure quantum state in which the Giddings-Marolf seed state [59] (before group averaging over diffeomorphisms) consists of quantum fluctuations about a uniform superposition of Lorentzian macroscopic components that are each time symmetric about a bounce of extremal 3-volume, with the quantum fluctuations being in their ground state at that moment of time symmetry for the macroscopic 4-geometry. 
With both signs of the Lorentzian time away from this momentarily-static bounce, the 3-volume will expand, typically in an inflationary manner if the matter is dominated by a sufficiently large homogeneous component of a scalar inflaton field. This inflationary expansion will then produce parametric amplification of the inhomogeneous and anisotropic modes in the usual manner to give density fluctuations at the end of inflation that then grow gravitationally to become nonlinear and produce the structure that we observe.
A slight aesthetic disadvantage of the symmetric-bounce quantum state in comparison with the no-boundary state is that in the symmetric-bounce proposal, the inhomogeneous fluctuations are put into their ground state at the bounce by a part of the proposal that is logically separate from the part of the proposal that gives the behavior of the homogeneous modes, whereas in the no-boundary proposal the behavior of both the inhomogeneous and homogeneous modes comes out together from the same part of that proposal, that the histories that contribute to the path integral are regular on a complete complexified Euclidean manifold with no boundary other than the one on which the wavefunction is evaluated. However, this seems to be a small price to pay for avoiding the huge negative Euclidean actions of many nearly-empty de Sitter histories in the no-boundary proposal that make nearly empty spacetime much more probable than a nearly Friedmann-Robertson-Walker spacetime with high densities at early times that would fit our observations much better. To avoid making our observation of distant stars extremely improbable, as it appears to be in the no-boundary proposal, it seems well worth giving up the simple no-boundary unified description of the behavior of both the homogeneous inflationary modes and the inhomogeneous fluctuation modes.
Homogeneous modes with an inflaton and a cosmological constant
First, let us focus on the behavior of the homogeneous, isotropic modes of the symmetric-bounce quantum seed state. That is, take each quasiclassical component of the macroscopic spacetime geometry, without the quantum fluctuations, to be a Friedmann-Robertson-Walker (FRW) model driven by homogeneous matter fields. For concreteness and simplicity, consider the case of a positive cosmological constant Λ = 3/b² and a single inflaton that is a homogeneous free scalar field φ(t) of mass m, and take the FRW model to be k = +1, so that the spatial sections are homogeneous, isotropic 3-spheres of radius a(t). Then the macroscopic spacetime metric can be taken to be

ds² = −N²(t) dt² + a²(t) dΩ₃², (1)

where dΩ₃² is the metric on a unit round 3-sphere. Using units in which ħ = c = 1, but writing G explicitly, one can write the Lorentzian action as (cf. [86])

S = (3π/4)(m_Pl/m)² ∫ dt n r³ [−(ṙ/(nr))² + (ϕ̇/n)² − λ + r^{−2} − ϕ²]
  = ∫ dt [(1/(2n)) (ds̄/dt)² − (n/2) V]
  = ∫ dt [(1/(2ν)) (dŝ/dt)² − ν/2], (2)

where b ≡ √(3/Λ) is the radius of the throat of pure de Sitter spacetime with the same value of the cosmological constant, n ≡ mN is a rescaled lapse function that is dimensionless if t is taken to be dimensionless, λ ≡ Λ/(3m²) ≡ 1/(mb)² is a dimensionless measure of the cosmological constant in units given by the mass of the inflaton, r ≡ e^α ≡ ma and ϕ ≡ √(4πG/3) φ are dimensionless forms of the scale factor and inflaton scalar field (leaving G ≡ m_Pl^{−2} to have the dimensions of inverse mass squared or of area), and an overdot represents a derivative with respect to t. The DeWitt metric [87] on the minisuperspace is

ds̄² = [3π/(2Gm²)] e^{3α} (−dα² + dϕ²), (3)

the 'potential' on the minisuperspace is

V(α, ϕ) = [3π/(2Gm²)] e^{3α} (ϕ² + λ − e^{−2α}), (4)

the rescaled lapse function is ν ≡ nV = mNV, and the conformal minisuperspace metric is

dŝ² = V ds̄². (5)

To get some reasonable numbers for the dimensionless constants in these equations, take Ω_Λ = 0.72 ± 0.04 from the third-year WMAP results of [50] and H₀ = 72 ± 8 km/s/Mpc from the Hubble Space Telescope key project [88], and drop the error uncertainties to get Λ = 3Ω_Λ H₀² ≈ 3.4 × 10^{−122} in Planck units, and take the inflaton mass to be m ≈ 1.5 × 10^{−6} m_Pl [89,90] from the measured fluctuations of the cosmic microwave background, to get that the prefactor of the action is (3π/4)(m_Pl/m)² ≈ 1.0 × 10^{12}, and the dimensionless measure of the cosmological constant is λ ≡ Λ/(3m²) ≡ 1/(mb)² ≈ 5.0 × 10^{−111}. Thus λ may be taken to be extremely tiny, and for histories in which α and/or ϕ are of the order of unity or greater, the action will be very large and so should give essentially classical behavior, at least for the homogeneous, isotropic part of the geometry.
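As a quick consistency check on these numbers, the conversion from the quoted astronomical values to the dimensionless constants can be scripted; in this sketch, m ≈ 1.5 × 10^{−6} m_Pl is the inflaton mass implied by the quoted action prefactor, and the unit conversions are standard:

```python
import math

# Back-of-the-envelope check of the dimensionless constants quoted above,
# in Planck units (hbar = c = G = 1). Omega_Lambda and H0 are the adopted
# observational values; m = 1.5e-6 m_Pl is the inflaton mass implied by the
# quoted action prefactor (3*pi/4)*(m_Pl/m)^2 ~ 1e12.
H0_si = 72.0 * 1000.0 / 3.0857e22   # 72 km/s/Mpc in s^-1
t_planck = 5.391e-44                # Planck time in seconds
H0 = H0_si * t_planck               # H0 in Planck units
omega_lambda = 0.72

Lam = 3.0 * omega_lambda * H0**2    # cosmological constant in Planck units
m = 1.5e-6                          # inflaton mass in Planck masses

lam = Lam / (3.0 * m**2)            # lambda = Lambda/(3 m^2) = 1/(m b)^2
prefactor = (3.0 * math.pi / 4.0) / m**2
print(f"Lambda    ~ {Lam:.2e}")        # ~ 3.4e-122
print(f"lambda    ~ {lam:.2e}")        # ~ 5.0e-111
print(f"prefactor ~ {prefactor:.2e}")  # ~ 1.0e12
```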
The constraint equation and independent equation of motion can now be written, for a general lapse function N, in the familiar Friedmann and scalar-wave forms

[ȧ/(Na)]² = (8πG/3)[φ̇²/(2N²) + (1/2)m²φ²] + Λ/3 − 1/a²,
(1/N) d/dt(φ̇/N) + 3[ȧ/(Na)](φ̇/N) + m²φ = 0, (6)

or, in the dimensionless variables above,

(α̇/n)² = (ϕ̇/n)² + ϕ² + λ − e^{−2α},
(1/n) d/dt(ϕ̇/n) + 3(α̇/n)(ϕ̇/n) + ϕ = 0, (7)

which in the gauge n = 1, which will henceforth be assumed, become

α̇² = ϕ̇² + ϕ² + λ − e^{−2α},
ϕ̈ + 3α̇ϕ̇ + ϕ = 0, (8)

with the first (constraint) equation taking the form (ṙ/r)² = ϕ̇² + ϕ² + λ − r^{−2} in terms of r ≡ e^α ≡ ma.
Although it is a redundant equation, one may readily derive from Eqs. (8) that

α̈ = e^{−2α} − 3ϕ̇² (9)

when n = 1. Then when neither side of the constraint (first) equation of Eqs. (8) vanishes (i.e., when V ≠ 0), and when ϕ̇ ≠ 0, one may define f′ ≡ df/dϕ = ḟ/ϕ̇ and reduce Eqs. (8) to the single second-order differential equation (cf. [86])

α′′ = (α′² − 1) [3 + (ϕα′ + e^{−2α})/(ϕ² + λ − e^{−2α})]. (10)

Alternatively, when V ≠ 0 (or equivalently α̇² ≠ ϕ̇²) but α̇ ≠ 0 instead of ϕ̇ ≠ 0, one can write, with f′ ≡ df/dα instead,

ϕ′′ = −(1 − ϕ′²) [3ϕ′ + (ϕ + ϕ′ e^{−2α})/(ϕ² + λ − e^{−2α})]. (11)

Yet another way to get the equations of motion is to note that the second form of the action in Eq. (2) gives the trajectories of a particle of mass-squared V in the DeWitt minisuperspace metric ds̄² of Eq. (3), and the third form gives timelike geodesics in the conformal minisuperspace metric dŝ² = V ds̄². When one goes to the gauge ν = 1, then (dŝ/dt)² = −1, so that along the classical timelike geodesics of dŝ², the Lorentzian action is S = −∫dt = −∫√(−dŝ²), minus the proper time along the timelike geodesic of dŝ². However, one must note that the conformal metric dŝ² = V ds̄² is singular at V = 0, that is, at

e^{−2α} = ϕ² + λ, (12)

whereas there is no singularity in the DeWitt metric ds̄² or in the spacetime metric along this hypersurface (line) in the two-dimensional minisuperspace (α, ϕ) under consideration. The second-order differential equations (10) and (11) also break down at V = 0 and must be supplemented by the continuity of α̇ and of ϕ̇ (in a gauge in which n ≠ 0 is continuous there) across the V = 0 hypersurface (line).
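Although the numerical results later in this paper were obtained with Maple, the n = 1 system of Eqs. (8) and (9) with symmetric-bounce initial data is straightforward to integrate with any standard ODE solver; the following Python sketch (with illustrative tolerances and time span) also monitors the constraint as an accuracy check:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of a Lorentzian integration of the n = 1 system,
#   alpha'' = exp(-2*alpha) - 3*phi'^2,   phi'' = -3*alpha'*phi' - phi,
# from a symmetric bounce with exp(-2*alpha_b) = phi_b^2 + lam, so that the
# constraint of Eqs. (8) holds exactly at t = 0. Tolerances and the time
# span are illustrative choices, not the values used in the paper.
LAM = 5.0e-111

def rhs(t, y):
    alpha, dalpha, phi, dphi = y
    return [dalpha,
            np.exp(-2.0*alpha) - 3.0*dphi**2,
            dphi,
            -3.0*dalpha*dphi - phi]

def bounce_state(phi_b):
    alpha_b = -0.5*np.log(phi_b**2 + LAM)
    return [alpha_b, 0.0, phi_b, 0.0]

sol = solve_ivp(rhs, (0.0, 40.0), bounce_state(5.0), rtol=1e-10, atol=1e-12)

# The constraint is a useful accuracy monitor along the trajectory:
a, da, p, dp = sol.y
violation = da**2 - (dp**2 + p**2 + LAM - np.exp(-2.0*a))
print("max |constraint violation| =", np.abs(violation).max())
```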
Symmetric-bounce proposal for the homogeneous modes
My symmetric-bounce proposal for the homogeneous modes, which are represented classically by the trajectories in the (α, ϕ) minisuperspace, is that one takes the set of all Lorentzian symmetric-bounce trajectories, those that have α̇ = ϕ̇ = 0 somewhere along the classical trajectory. By the definition Eq. (4) of the potential V(α, ϕ) and by the constraint Eq. (8), this point of the trajectory will have V = 0 and hence, by Eq. (12), will lie on the line

α = α_bounce(ϕ) ≡ −(1/2) ln(ϕ² + λ).

The classical trajectory that has α̇ = ϕ̇ = 0 at (α, ϕ) = (α_bounce(ϕ_b), ϕ_b) for some value of ϕ_b ≡ ϕ_bounce will be time symmetric about this bounce point, so if one sets t = 0 there and uses a time-symmetric lapse function, n(−t) = n(t), then

α(−t) = α(t), ϕ(−t) = ϕ(t).

A generic trajectory in the (α, ϕ) minisuperspace can be labeled by the location at which it crosses some hypersurface (e.g., by its value of ϕ on a hypersurface of fixed α) and by its direction there (e.g., its value of α′ = dα/dϕ), since once the direction is fixed, the constraint equation determines the values of both α̇ and ϕ̇. Thus the generic minisuperspace trajectories form a two-parameter family. However, the symmetric-bounce trajectories may be labeled by the single parameter ϕ_b, the value of ϕ that each has on the hypersurface α = α_bounce(ϕ), since at that point on a symmetric-bounce trajectory, the values of α̇ and ϕ̇ are both determined to be zero. Therefore, in terms of the classical measure [91] on the two-dimensional space of minisuperspace trajectories, the symmetric-bounce trajectories are a set of measure zero. This restriction on the classical phase space of trajectories is precisely analogous to the restriction of the no-boundary state on the set of classical trajectories [34], though the details of the restriction are slightly different (precisely real classical trajectories that have symmetric bounces for the symmetric-bounce state).
However, since I am proposing that the quantum state is a superposition of initially quasiclassical components that give a one-parameter set of classical trajectories, to make the proposal definite I do need to give the coefficients in the quantum superposition, or the measure for the classical trajectories, analogous to the weighting by the exponential of minus the (negative) Euclidean action for the no-boundary proposal and by essentially the exponential of the Euclidean action for the tunneling proposal. I shall propose that the one-parameter set of classical trajectories are uniformly distributed over the symmetric-bounce hypersurface (α_bounce(ϕ_b), ϕ_b), with no weighting by the exponential of either minus or plus the Euclidean action. Thus my symmetric-bounce quantum state has a measure that is basically the geometric mean of those of the no-boundary and tunneling proposals. For such a uniform measure, μ(ϕ_b)dϕ_b, I shall take the magnitude of the metric induced on this hypersurface by the DeWitt minisuperspace metric given by Eq. (3), after dropping the constant factor 3π/(2Gm²). That is, I shall take

μ(ϕ_b) = e^{(3/2)α_bounce(ϕ_b)} |1 − (dα_bounce/dϕ_b)²|^{1/2} = (ϕ_b² + λ)^{−3/4} |1 − ϕ_b²/(ϕ_b² + λ)²|^{1/2}. (13)

The coefficients in the continuum quantum superposition I shall take to be the real positive square roots of this measure. I should like to emphasize that, like all other proposals for the quantum state of the universe, this is just a proposal and is not derived from previously accepted principles.
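The measure of Eq. (13) can be explored numerically; in the sketch below an artificially large λ is used so that the quadrature can resolve the widely separated scales ϕ₁ ≈ λ and ϕ₂ ≈ 1 (the physical λ ≈ 5 × 10^{−111} puts these far beyond what a generic quadrature can handle), and the total is checked against the (2/3)λ^{−3/4} estimate quoted below:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the symmetric-bounce measure of Eq. (13), with the DeWitt
# prefactor dropped. lam here is an artificially large toy value so that
# the quadrature can resolve the scales phi_1 ~ lam and phi_2 ~ 1; the
# physical lam ~ 5e-111 is far beyond generic quadrature resolution.
lam = 1.0e-8

def mu(phi_b):
    s = phi_b**2 + lam
    dalpha_dphi = -phi_b / s            # slope of alpha_bounce(phi_b)
    return s**(-0.75) * np.sqrt(abs(1.0 - dalpha_dphi**2))

# Integrate over phi_b > 0, splitting at phi_b = 2 to help the quadrature
# across the non-smooth points at phi_1 and phi_2.
total = quad(mu, 0.0, 2.0, limit=400)[0] + quad(mu, 2.0, np.inf, limit=200)[0]
print("measure of phi_b > 0:", total)
print("(2/3) lam^(-3/4) estimate:", (2.0/3.0) * lam**(-0.75))
```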
The symmetric-bounce proposal specifies the form of the quantum state at the bounce, but, unlike some other proposals such as the symmetric initial condition [92], it does not impose any requirement that the wavefunction be normalizable over the entire superspace. Indeed, even for the minisuperspace of the homogeneous isotropic modes of the scale factor variable α and the inflaton field variable ϕ, the symmetric-bounce wavefunction propagates unabated to arbitrarily large α and so is not normalizable, that is, it is not square-integrable over the (α, ϕ) space with the area element induced from the DeWitt metric [87].
Because the symmetric-bounce hypersurface (α_bounce(ϕ_b), ϕ_b) becomes asymptotically null sufficiently rapidly with |ϕ_b| for large |ϕ_b|, so that μ(ϕ_b) ∼ |ϕ_b|^{−3/2} for large |ϕ_b|, the total measure ∫μ(ϕ_b)dϕ_b integrated over all ϕ_b from minus infinity to plus infinity is finite. It is dominated by the regions where ϕ_b² ∼ λ and is of the order of (4/3)λ^{−3/4} ≈ 7 × 10^{82} for λ ≈ 5.0 × 10^{−111} as estimated above. Here I shall ignore one-loop quantum corrections [93,94,95], partly because of the fact that if they are important, unknown higher-loop effects are likely also to be important. Such quantum corrections should be unimportant when the energy density is much less than the Planck density, e.g., for ϕ² ≪ G^{−1}m^{−2} ∼ 10^{12}. The energy density at the bounce is less than the Planck value for over 99.9% of the measure of the symmetric-bounce trajectories with ϕ_b² > 1. The symmetric-bounce homogeneous spacetimes, labeled by the value of ϕ_b where each of them has its symmetric bounce on the symmetric-bounce hypersurface (α_bounce(ϕ_b), ϕ_b), may be divided into five classes depending on which spacelike or timelike segment of the symmetric-bounce hypersurface each of them has its symmetric bounce on. These segments are divided by the points at which the symmetric-bounce hypersurface becomes null in the DeWitt metric of Eq. (3) and crosses from being spacelike to timelike or from timelike to spacelike. These points, where |dα_bounce/dϕ_b| = 1, are at ϕ_b = ±ϕ₁ and ϕ_b = ±ϕ₂, where ϕ₁ = [1 − √(1 − 4λ)]/2 ≈ λ and ϕ₂ = [1 + √(1 − 4λ)]/2 ≈ 1 − λ are the two positive roots of ϕ² − ϕ + λ = 0. Then one may define Segment 1 to be the spacelike part of the symmetric-bounce hypersurface with ϕ_b < −ϕ₂, Segment 2 to be the timelike part with −ϕ₂ < ϕ_b < −ϕ₁, Segment 3 to be the spacelike part with −ϕ₁ < ϕ_b < ϕ₁, Segment 4 to be the timelike segment with ϕ₁ < ϕ_b < ϕ₂, and Segment 5 to be the spacelike segment with ϕ₂ < ϕ_b. Under the symmetry ϕ → −ϕ, Segments 1 and 5 are interchanged, Segments 2 and 4 are interchanged, and Segment 3 is interchanged with itself. Therefore, without loss of generality, one may take ϕ_b ≥ 0 and consider only Segments 3, 4, and 5. One may estimate that for λ ≈ 5.0 × 10^{−111}, Segments 1 and 5 each have measure ≈ (1/2)B(1/4, 3/2) ≈ 1.748 (where B is the Euler beta function), Segments 2 and 4 each have measure ≈ (2/3)λ^{−3/4} ≈ 3.5 × 10^{82}, and Segment 3 has measure ≈ (π/2)λ^{1/4} ≈ 4.2 × 10^{−28}.
Symmetric-bounce homogeneous spacetimes that bounce on Segment 3 thereafter move along timelike trajectories ever upward in the (α, ϕ) minisuperspace and hence expand forever. Their dynamics are always dominated by the positive cosmological constant, and they behave very nearly like empty de Sitter universes. In my proposed measure, their measure is only ∼ 10^{−28} that of Segments 1 and 5 and only ∼ 10^{−110} that of Segments 2 and 4, so these nearly empty spacetimes do not seem to contribute much to the measure for observations, unlike their contribution to the Hartle-Hawking no-boundary quantum state [41,42,43,44,45].
Symmetric-bounce spacetimes that bounce on Segment 2 or 4, with λ² ≈ ϕ₁² < ϕ_b² < ϕ₂² ≈ 1, except for ϕ_b² sufficiently close to 1, generally have a period of expansion during which the scalar field oscillates rapidly relative to the expansion. When averaged over each oscillation, the mean value of ϕ̇² is nearly the same as that of ϕ² (in a gauge with n = 1, which I shall assume unless stated otherwise), which is equivalent to saying that the pressure exerted by the scalar inflaton averages to near zero over each oscillation. Then the scalar field acts essentially like pressureless dust, with a total rationalized dimensionless 'mass' that is nearly constant:

M ≡ (ϕ² + ϕ̇²) r³ = (8πG/3) m ρ a³, (14)

where a = r/m is the physical scale factor and

ρ = (m²/2)(φ̇² + φ²) (15)

is the energy density of the scalar field, with our choice of n = mN = 1 to make our time coordinate t dimensionless (and with d/dt being denoted by an overdot). Thus the dimensionless M is 4Gm/(3π) times the integral of the energy density ρ over the volume 2π²a³ of the 3-sphere of physical scale factor a and of dimensionless scale factor r ≡ e^α ≡ ma. The approximate constancy of M during the 'dust' regime results from the fact that the integral of

Ṁ = 3α̇ r³ (ϕ² − ϕ̇²) (16)

is approximately zero over each oscillation of the scalar field. Then during such a 'dust' phase, the dimensionless scale factor r = ma evolves according to

ṙ² = M/r + λr² − 1, (17)

with M very nearly constant. As a function of the dimensionless scale factor r ≡ ma at fixed M, the right hand side has a minimum at r = [M/(2λ)]^{1/3} that is positive if 27λM² > 4, so when this condition holds, the universe will expand forever from any initial r if M stays constant. However, this sufficient (but not necessary) condition for expansion forever does not hold for any ϕ_b² ≪ 1 for which M stays nearly constant after the bounce, at which one has

M_b = r_b (1 − λ r_b²) = (ϕ_b² + λ)^{−1/2} [1 − λ/(ϕ_b² + λ)], (18)

since obviously the right hand side of Eq. (17) is zero at the bounce. That is, although 27λM² > 4 with constant M is sufficient for the universe to expand forever in our simple k = +1 FRW model with a cosmological constant and a massive scalar field that acts like dust, it is not necessary. Conversely, 27λM² < 4 is necessary but not sufficient for recollapse. If 27λM² < 4 does hold, one also needs that r be at an allowed value (one giving ṙ² ≥ 0) less than the location of the minimum of the right hand side of Eq. (17), which is equivalent to 2λr³ < M. Thus this k = +1 FRW Λ-dust model will recollapse (assuming M stays constant) if and only if

2λr³ < M and 27λM² < 4. (19)

Using Eq. (18), which leads to a nearly constant M ≈ M_b ≈ 1/ϕ_b when λ ≪ ϕ_b² ≪ 1, we see that our k = +1 FRW Λ-scalar model with the symmetric-bounce initial condition will recollapse if and only if, approximately,

ϕ_b² > (27/4)λ. (20)

This is the part of Segments 2 and 4 with larger values of ϕ_b², plus a bit into Segments 1 and 5. For λ ≪ ϕ_b² ≪ 1, well into the interior of this open set of values of ϕ_b, the evolution will have λr² ≪ M/r during the evolution, so the dimensionless collapse time Δt with n = 1 will be approximately (π/2)r_b ≈ π/(2ϕ_b). For ϕ_b² large enough to give a density sufficient for nucleosynthesis (e.g., at the density our universe had at an age of a few minutes), the lifetime in proper time would be of the order of minutes, far too short for the evolution of stars and observers that depend upon stars.
Although Segments 2 and 4 dominate the measure given by Eq. (13) by factors of the order of 10^{82}, they do not do so by factors anywhere near the inverses (say ∼ e^{10^{42}}) of the exponentially tiny relative probabilities of forming Boltzmann brains, so the resulting symmetric-bounce universes will presumably have extremely tiny probabilities for observers and should contribute negligibly to observational probabilities. (This is unlike the case of the Hartle-Hawking no-boundary proposal, where factors from the negative Euclidean action, say ∼ e^{10^{122}}, can be much greater than the inverses of the relative probabilities to form Boltzmann brains or even Boltzmann solar systems.) The symmetric-bounce initial conditions that lead to recollapse actually extend past ϕ_b² = ϕ₂² ≈ 1 into Segments 1 and 5, but there the dust approximation that M is nearly constant breaks down. It is difficult to give a good approximate closed-form treatment for ϕ_b² ∼ 1, but for ϕ_b² a few times unity, one enters the slow-roll inflationary regime where M grows greatly during a period of inflation that can be estimated fairly accurately under the approximation that ϕ_b² ≫ 1.
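The recollapse criterion can be packaged as a small predicate applying Eqs. (18) and (19) directly; note that it inherits the constant-M dust approximation discussed above, so it is only a sketch of the regime λ ≪ ϕ_b² ≪ 1:

```python
# Sketch of the Lambda-dust recollapse criterion derived above: with M held
# constant after the bounce, the universe recollapses iff 2*lam*r^3 < M and
# 27*lam*M^2 < 4, evaluated here with the bounce values of Eq. (18).
def recollapses(phi_b, lam=5.0e-111):
    r_b = (phi_b**2 + lam) ** -0.5      # bounce radius from Eq. (12)
    M_b = r_b * (1.0 - lam * r_b**2)    # Eq. (18), dust 'mass' at the bounce
    return 2.0 * lam * r_b**3 < M_b and 27.0 * lam * M_b**2 < 4.0

# In the dust regime (lam << phi_b^2 << 1), M_b ~ 1/phi_b, so recollapse
# requires phi_b^2 > (27/4)*lam, as in Eq. (20); the lifetime is then
# roughly pi/(2*phi_b) in the dimensionless time t.
print(recollapses(1e-3))    # True: recollapses in the dust regime
print(recollapses(1e-56))   # False: below the Eq. (20) threshold, expands forever
```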
Approximate solutions for the inflationary regime
Let us now focus on the regime in which the initial (at the bounce) value of ϕ² ≡ (4πG/3)φ², that is, ϕ_b², is at least somewhat large compared to unity, so that the evolution away from the symmetric bounce starts with a period of slow-roll inflation that includes at least several e-folds of expansion. In this Section, we want to set up some theoretical analysis before turning in the next Section to a numerical calculation of how many e-folds of inflation occur, as a function of ϕ_b, and also of the ϕ_b-dependence of the asymptotic value, in the 'dust' regime that follows the inflationary regime, of the total rationalized dimensionless 'mass' M given by Eq. (14).
Without loss of generality, assume that the value of ϕ at the bounce, ϕ_b, is positive, so when it is greater than ϕ₂ ≈ 1, the FRW spacetime starts on Segment 5 with r_b ≈ 1/ϕ_b. During the slow-roll inflationary regime with ϕ ≫ 1, we have ϕ² ≫ ϕ̇². Since during inflation we have ϕ̇² + ϕ² ≳ 1 ≫ λ, we can neglect the cosmological constant term λ during inflation and take the inflationary equations to be Eqs. (7) or (8) with λ dropped:

(ṙ/r)² = ϕ̇² + ϕ² − r^{−2}, ϕ̈ + 3(ṙ/r)ϕ̇ + ϕ = 0, (21)

in terms of the dimensionless scale factor r ≡ e^α ≡ ma, or

α̇² = ϕ̇² + ϕ² − e^{−2α}, ϕ̈ + 3α̇ϕ̇ + ϕ = 0, (22)

in terms of the logarithm α = ln(ma) of the scale factor and the dimensionless form ϕ ≡ √(4πG/3) φ of the inflaton scalar field φ. From Eqs. (21), one can readily derive, as an alternative to the redundant Eq. (9) when λ is neglected, that

r̈ = r(ϕ² − 2ϕ̇²). (23)
We shall define the inflationary period as the first period immediately after the symmetric bounce when λ is negligible (so as not to count inflation by the cosmological constant) and when r̈ > 0, so that the scale factor of the universe is accelerating with respect to cosmic proper time. This is equivalent, with λ negligible, to the first period during which 2ϕ̇² < ϕ². Let us define N (or N(ϕ_b), since it depends on the initial value ϕ_b at the bounce) to be the number of e-folds of the inflationary period, the change in the logarithm α of the scale factor during the inflationary period, which with λ neglected starts at α = α_b(ϕ_b) ≈ −ln ϕ_b and ends at α = α_e(ϕ_b) where ϕ has first dropped to the then-positive value of −√2 ϕ̇:

N(ϕ_b) ≡ α_e(ϕ_b) − α_b(ϕ_b). (24)

It also is convenient to define a shifted scale-factor logarithm

β ≡ α − α_b(ϕ_b) = ln(r/r_b), (25)

which increases monotonically from β_b = 0 at the bounce to β_e = N at the end of the inflationary period. Then the ϕ_b-dependent number of e-folds of inflation may be defined to be N(ϕ_b) = β_e(ϕ_b). N(ϕ_b) will be large if ϕ_b ≫ 1, which is what we shall assume, though many of the results below turn out to be quite accurate even if ϕ_b is as small as 3. Now I shall give a sequence of increasingly better approximations for the early phase of inflation, followed by numerical calculations of N(ϕ_b) and of the aftermath of inflation, such as the asymptotic value of the total rationalized dimensionless 'mass' M given by Eq. (14).
The simplest approximation is for the period when ϕ remains very nearly the same as its initial value ϕ_b and when ϕ̇ is negligible in comparison. Then the first of Eqs. (21) becomes ṙ² = ϕ_b² r² − 1, with solution

r ≈ ϕ_b^{−1} cosh(ϕ_b t), (26)

which gives de Sitter spacetime at this level of approximation. However, this level of approximation does not remain good indefinitely, since the second of Eqs. (21) implies that ϕ gradually decreases. For ϕ_b t ≫ 1 but still ϕ² ≫ ϕ̇² (so that several e-folds of inflation have occurred but one is not yet near the end of inflation), one is in the flat (e^{−2α} ≪ ϕ² + ϕ̇²) slow-roll (ϕ̇² ≪ ϕ²) regime, where the first of Eqs. (21) or (22) now becomes ṙ ≈ rϕ or α̇ ≈ ϕ, so that the second of Eqs. (21) or (22) becomes ϕ̈ + 3ϕϕ̇ + ϕ ≈ 0, which has the attractor solution [96]

ϕ ≈ ϕ_b − t/3. (27)

Then one gets

α ≈ α_b + ϕ_b t − t²/6. (28)

Since inflation ends when ϕ drops down to −√2 ϕ̇, which by the slow-roll approximation (no longer valid near the end of inflation but giving the right order of magnitude) is √2/3, which is much less than the ϕ_b that we are assuming is much larger than unity, we get as the leading approximation for the number of e-foldings of inflation that N(ϕ_b) ∼ 1.5ϕ_b². However, we shall find below that there is also a term logarithmic in ϕ_b, as well as terms that are inverse powers of ϕ_b², plus a constant term that may be evaluated numerically.
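The leading estimate N(ϕ_b) ∼ 1.5ϕ_b² is easy to check by direct integration of the λ-neglected Eqs. (21); the sketch below stops at the end of inflation, defined by the first downward zero crossing of ϕ² − 2ϕ̇², with solver settings chosen only for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check of N(phi_b) ~ 1.5*phi_b^2: integrate the lambda-neglected
# system from the symmetric bounce (alpha_b = -ln(phi_b), so the constraint
# holds exactly at t = 0) and stop at the end of inflation, the first
# downward zero of phi^2 - 2*phidot^2 (i.e. of rddot).
def efolds(phi_b):
    def rhs(t, y):
        alpha, da, phi, dphi = y
        return [da, np.exp(-2*alpha) - 3*dphi**2, dphi, -3*da*dphi - phi]
    def end_of_inflation(t, y):
        return y[2]**2 - 2.0*y[3]**2
    end_of_inflation.terminal = True
    end_of_inflation.direction = -1
    alpha_b = -np.log(phi_b)
    sol = solve_ivp(rhs, (0.0, 10.0*phi_b), [alpha_b, 0.0, phi_b, 0.0],
                    events=end_of_inflation, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][0] - alpha_b      # N = alpha_e - alpha_b

for phi_b in (3.0, 5.0, 10.0):
    estimate = 1.5*phi_b**2 + np.log(phi_b)/3.0  # constant term omitted
    print(f"phi_b = {phi_b}: N = {efolds(phi_b):.2f}, estimate = {estimate:.2f}")
```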
If one looks at just the flat regime, where r²(ϕ² + ϕ̇²) ≫ 1, but does not impose the slow-roll condition ϕ̇² ≪ ϕ², one can see that Eq. (10) with U ≡ −α′ ≡ −dα/dϕ becomes the autonomous first-order differential equation

dU/dϕ = (1 − U²)(3 − U/ϕ). (29)

During slow-roll inflation with ϕ ≫ 1, the solution will exponentially rapidly approach the attractor solution

U ≈ 3ϕ + 1/(3ϕ). (30)

This then gives

α ≈ const − (3/2)ϕ² − (1/3) ln ϕ,

where the const. term depends upon ϕ_b. One can see that this formula leads to a (1/3) ln ϕ_b term in N(ϕ_b), but the value of the constant term in N(ϕ_b) and of the terms that go as inverse powers of ϕ_b² require the behavior both before the entry into the flat regime and after the exit from the slow-roll regime.
Next, let us go to a better approximation during the first stages of inflation, not assuming one has entered the flat regime where the spatial curvature term e^{−2α} may be neglected. If one inserts the approximate solution for r(t) from Eq. (26) into the second one of Eqs. (21) and solves it to the leading nontrivial order in 1/ϕ_b, one gets the better approximation for the scalar field that is (cf. [97])

ϕ ≈ ϕ_b − (1/(3ϕ_b)) [ln cosh(ϕ_b t) + tanh²(ϕ_b t)]. (31)

Analogously, if one inserts the approximate solution for ϕ(t) from Eq. (27) into the first one of Eqs. (21) and solves it under the slow-roll approximation, one gets the better approximation for the dimensionless scale factor r = ma that is

r ≈ ϕ_b^{−1} cosh(ϕ_b t − t²/6). (32)

Both of these approximations are valid for all t ≪ ϕ_b, both in the regime in which the spatial curvature is not negligible and in the early stages of the slow-roll regime in which ϕ has not rolled down very close to the bottom. One might have thought it would be yet a better improvement to take the argument of the hyperbolic functions in the expression for ϕ to be the same as in the hyperbolic functions in the expression for r, namely ϕ_b t − t²/6, but this would invalidate the fact that during the entire flat slow-roll regime, ϕ̇ stays very close to −1/3. For 1 ≪ ϕ_b t ≪ ϕ_b², so that one is in the early part of the flat slow-roll regime, one has

ϕ ≈ ϕ_b − (1 − ln 2)/(3ϕ_b) − t/3 (33)

and

r ≈ (2ϕ_b)^{−1} e^{ϕ_b t − t²/6}. (34)

For an even better approximation during the early stages of the slow-roll regime, one can use Eq. (11) and the definition R ≡ e^β = r/r_b to solve for ϕ as a function of β. Taking this expression into the flat regime, for which β ≡ ln R ≫ 1, gives an approximation for ϕ(β) in the flat slow-roll regime; when this approximation for ϕ(β) is inverted and matched to Eq. (31), one gets

β ≈ (3/2)(ϕ_b² − ϕ²) + (1/3) ln(ϕ_b/ϕ) + const,

neglecting uncalculated terms going as higher inverse powers of ϕ_b² and both calculated and uncalculated terms going as higher inverse powers of ϕ². From this expression, one can see that at the end of inflation, where ϕ is of order unity,

β_e ≈ (3/2)ϕ_b² + (1/3) ln ϕ_b + const,

and the number of e-folds of inflation is

N(ϕ_b) = β_e(ϕ_b) ≈ 1.5 ϕ_b² + (1/3) ln ϕ_b + const,

but, so far as I can see, the numerical constant in this expression cannot be determined by a closed-form expression but requires numerical integration to the end of inflation at ϕ = −√2 ϕ̇, which is beyond the validity of the slow-roll approximation used above that applies for ϕ ≫ −√2 ϕ̇.
Numerical results for the inflationary regime
Since the closed-form approximate expressions derived above do not apply near the end of the inflationary regime, I used Maple to get fairly precise numerical expressions for how many e-folds N(ϕ_b) of inflation occur (the increase in the logarithmic scale factor α = ln(ma) during the inflationary period, defined as the initial period during which the second time derivative of the scale factor, ä, is positive) and for the asymptotic value M_∞(ϕ_b) of the dimensionless 'mass' M, as functions of the initial value ϕ_b of the dimensionless inflaton scalar field ϕ ≡ √(4πG/3) φ, here written in terms of the physical inflaton scalar field φ. I integrated the equations of evolution from the bounce to the end of inflation for several values of ϕ_b and found that for ϕ_b ≳ 10, N(ϕ_b) is well approximated by 1.5ϕ_b² + (1/3) ln ϕ_b plus a constant. I also found that at the end of inflation for ϕ_b ≳ 3, ϕ_e ≈ 0.4121 and ϕ̇_e ≈ −0.2914, about one-eighth of the way from its slow-roll value of −1/3 to zero. From this one can also deduce that at the end of inflation,

M_e = (ϕ_e² + ϕ̇_e²) r_e³ ≈ 0.2547 r_e³.

The next question is the ϕ_b-dependent value of M_∞(ϕ_b), the asymptotic value of the total rationalized dimensionless 'mass' M = (ϕ² + ϕ̇²)r³ = (8πG/3)mρa³, where ρ is the scalar field energy density. To make the definition precise, one could take M_∞ to be the value of M at infinite time if the cosmological constant is positive and if the solution expands forever, and to be the value of the dimensionless scale factor r = ma (or, more precisely, of r(1 − λr²) if λ were not negligible, as it is in practice) at the first maximum of r if the universe does not expand forever (which will necessarily be the case if the cosmological constant is not positive). However, in practice, the dimensionless cosmological constant λ ≡ Λ/(3m²) ≈ 5.0 × 10^{−111} is so tiny that it is insignificant during the numerical integrations of the inflationary regime, and for large ϕ_b the maximum r ∼ exp(4.5ϕ_b²) before the universe would recollapse in the absence of a positive cosmological constant is so huge that one cannot take the numerical integrations that far. Therefore, I shall approximate M_∞(ϕ_b) by the value M settles down toward in the 'dust' regime after the end of inflation but long before one needs to consider the effects of either λ or the spatial curvature e^{−2α}.
Numerically, it is still a bit tricky to get precise values for $M_\infty(\phi_b)$, because $M(t)$ oscillates along with $\phi$ (at twice the frequency and at the harmonics of that frequency, since $M(t)$ depends only on $\phi^2(t)$ and $\dot\phi^2(t)$), with oscillation magnitudes of the basic frequency and its harmonics that decay only as inverse powers of the scale factor. However, one can derive that the following function eliminates the first several harmonics and, after the end of inflation, rapidly settles down very near its asymptotic value $M_\infty(\phi_b)$:

$$M_{\rm asym}(t) = e^{3\alpha}\Bigl\{(\phi^2+\dot\phi^2) + 3\dot\alpha\,\phi\dot\phi + \tfrac{9}{32}\bigl[9(\phi^2+\dot\phi^2)^2 - 8\dot\phi^4\bigr] + \dot\alpha\,\phi\dot\phi\,(3\phi^2+\dot\phi^2) + \cdots\Bigr\}.$$

My numerical results gave a fitting formula for $M_\infty(\phi_b)$, from which one can see that $M_\infty(\phi_b) \approx 0.7125\,M_e$, 71% of the value of $M$ at the end of inflation, because of the decaying oscillations of $M(t)$ after the end of inflation.
One can now use this formula along with the criterion of the rightmost inequality of Eq. (19) to deduce that for inflationary solutions starting on Segment 5 with $\lambda = 5\times10^{-111}$, one needs $\phi_b \gtrsim 5.4646$, or $\varphi_b \gtrsim 2.6700\,G^{-1/2}$, or $N(\phi_b) \gtrsim 44.28$ e-folds of inflation, to avoid eventual recollapse and instead have expansion forever in an asymptotic de Sitter regime. This assumes that the simple inflaton-$\Lambda$ model applies for all time. In a more realistic model in which the energy of the inflaton field is converted to radiation shortly after the end of inflation, one would need a larger initial inflaton field value $\phi_b$ and more e-folds of inflation to avoid eventual collapse. For example, if all the energy of the inflaton field were converted to radiation right at the end of inflation and the universe evolved thereafter as a radiation-$\Lambda$ model, one would need $16\lambda M_e r_e > 1$, which by using the formula above for $M_\infty(\phi_b)$ and the relation $M_\infty(\phi_b) \approx 0.7125\,M_e$ gives $\phi_b \gtrsim 6.6069$, or $\varphi_b \gtrsim 3.2282\,G^{-1/2}$, or $N(\phi_b) \gtrsim 65.03$, to avoid eventual recollapse. It is rather remarkable that despite the extremely tiny value of $\lambda$, the critical initial values of the inflaton field $\varphi_b$ are within half an order of magnitude of unity, essentially because of the very rapid growth of $M_\infty(\phi_b)$ with $\phi_b$.
Another asymptotic constant late in the 'dust' regime (but before either the cosmological-constant term $\lambda$ or the spatial curvature term $e^{-2\alpha}$ becomes important) is the asymptotic value of a certain phase $\theta$. At late times in the 'dust' regime, ignoring $\lambda$ and $e^{-2\alpha}$, one can write $\phi = \dot\alpha\cos\psi$ and $\dot\phi = -\dot\alpha\sin\psi$ to define an evolving phase angle $\psi$, from which an asymptotically constant phase $\theta$ can be constructed. (There are more complicated formulas that I have derived for the asymptotically constant phase in the de Sitter phase and/or when the spatial curvature is not negligible, but I shall leave them for a later paper.) Preliminary numerical calculations suggest that the asymptotic value of $\theta$, say $\theta_\infty(\phi_b)$, is roughly 1.978 for large $\phi_b$, but I have not had time to confirm this and to investigate the dependence on $\phi_b$.

For solutions of our system of a $k=+1$ Friedmann-Robertson-Walker universe with a minimally coupled massive scalar field and a positive cosmological constant that have a bounce at a minimal value of the scale factor and then expand forever in an asymptotically de Sitter phase, there will be an analytic map (not known explicitly, of course) from the initial values of $\phi$ and $\dot\phi$ at the bounce, say $\phi_b$ and $\dot\phi_b$, to the asymptotic values $M_\infty(\phi_b,\dot\phi_b)$ and $\theta_\infty(\phi_b,\dot\phi_b)$ (or, more precisely, to $M_\infty(\phi_b,\dot\phi_b)$ and the corresponding complex constant, since the phase is actually only defined modulo $2\pi$; but for simplicity I shall continue to refer to $\theta_\infty(\phi_b,\dot\phi_b)$). For the symmetric-bounce solutions, the solution space is just one-dimensional (governed by the one parameter $\phi_b$) rather than two-dimensional, with the restriction $\dot\phi_b = 0$, so both $M_\infty$ and $\theta_\infty$ are then functions of $\phi_b$ alone. Hence for these symmetric-bounce solutions, in principle one gets a particular analytic relation $\theta_\infty = \theta_{\infty,\rm sb}(M_\infty)$.
For the complex solutions of the same minisuperspace system corresponding to the no-boundary proposal [34], one should get a slightly different analytic relation $\theta_\infty = \theta_{\infty,\rm nb}(M_\infty)$, though one would expect these two functions to approach the same values for very large $M_\infty$. In the no-boundary case, in which the one free parameter is the complex initial value of $\phi$, say $\phi(0)$, both $M_\infty(\phi(0))$ and $\theta_\infty(\phi(0))$ would be complex for generic complex $\phi(0)$, but one could choose a one-real-parameter contour in the complex-$\phi(0)$ plane that makes $M_\infty(\phi(0))$, say, real. But it would still be the case that even for real $M_\infty(\phi(0))$, the corresponding $\theta_\infty(\phi(0))$ would not be quite real, so $\theta_{\infty,\rm nb}(M_\infty)$ would not be precisely real for real $M_\infty$, as $\theta_{\infty,\rm sb}(M_\infty)$ is for the symmetric-bounce solutions and as it always is for a real one-parameter set of Lorentzian spacetimes of the FRW form being assumed here. Therefore, it is a bit ambiguous what real Lorentzian solutions correspond to the no-boundary proposal, even asymptotically, since for the complex extrema obeying the no-boundary conditions, one cannot have the two asymptotic constants $M_\infty$ and $\theta_\infty$ both real. One can of course make ad hoc choices, such as taking the real Lorentzian solutions that correspond to real values of $M_\infty$ and then to the real values $\theta_\infty = {\rm Re}(\theta_{\infty,\rm nb}(M_\infty))$ that are the real parts of the complex values given by the no-boundary proposal for those real values of $M_\infty$. However, one does need to make some such ad hoc choice before getting precisely real Lorentzian solutions from the no-boundary proposal.
Inhomogeneous and/or anisotropic perturbations
The symmetric-bounce proposal for the quantum state of the universe is that the universe has inhomogeneous and anisotropic quantum perturbations about the set of classical inflationary solutions described above that are in their ground state at the symmetric bounce hypersurface. In particular, the quantum state of the perturbations on that hypersurface is proposed to be the same as that of the de Sitter-invariant Bunch-Davies vacuum [98] on a de Sitter spacetime with the same radius of the throat as that of the classical background symmetric-bounce inflationary solution at its throat.
Of course, once the massive scalar inflaton field starts to roll down its quadratic potential, the background spacetime will deviate from de Sitter spacetime, so that the quantum perturbations will no longer remain in a de Sitter-invariant state. One would expect the usual inflationary picture of parametric amplification that would result in each inhomogeneous mode leaving its initial vacuum state and becoming excited as the wavelength of that mode is inflated past the Hubble scale given by the expansion rate. In this way one would get the usual inflationary production of density perturbations arising from the initial vacuum fluctuations.
This part of the story is similar to the Hartle-Hawking no-boundary proposal [24,25,26,27,28,29,30,31,32,33,34], which also predicts that the inhomogeneous and anisotropic quantum perturbations start off in the de Sitter-invariant Bunch-Davies vacuum (and admittedly predicts this in a slightly less ad hoc way than it is proposed in my symmetric-bounce proposal). However, the main difference is that the symmetric-bounce proposal has the more uniform weighting given by Eqs. (13) for the different values of $\phi_b$, and hence of the dimensionless bounce radius $r_b = 1/\phi_b$, rather than being weighted by the exponential of twice the negative action of the Euclidean hemisphere, as in the no-boundary proposal. It is this exponential weighting of the no-boundary proposal that apparently leads to the probabilities being enormously dominated by the largest Euclidean hemispheres, those of empty de Sitter spacetime, and hence to observational probabilities dominated by early-time Boltzmann brains (or Boltzmann solar systems, if one excludes the possibility of observers existing without an entire solar system) [41,42,43,44,45]. By not having these Euclidean hemispheres and their enormously negative Euclidean actions, the slightly more ad hoc symmetric-bounce proposal can avoid the huge domination by empty or nearly-empty de Sitter spacetimes that seems very strongly at odds with our observations of significant structure far beyond ourselves, such as stars.
Conclusions
The symmetric-bounce proposal is that the quantum state of the universe is a pure state that consists of a uniform distribution (in a metric induced from the DeWitt metric on the superspace) of components (of different bounce sizes) that each have the quantum fluctuations initially (at the bounce) in their ground state at a moment of time symmetry for a bounce of minimal three-volume. The background spacetimes of this proposal (ignoring the quantum fluctuations) consist of a one-parameter family (at least for one inflaton field; if there are more, there would be as many parameters as bounce values of all the inflaton fields) of time-symmetric inflationary Friedmann-Robertson-Walker universes. For each member of this family, the quantum state of the inhomogeneous and/or anisotropic fluctuations is, at the bounce, the same as the de Sitter-invariant Bunch-Davies vacuum for a de Sitter spacetime with the same curvature as the background FRW universe at its bounce. The entire quantum state is a coherent superposition of all these FRW spacetimes with their quantum fluctuations, with weights given by the DeWitt metric for the bounce configurations.
This symmetric-bounce quantum state reproduces all the usual predictions of inflation but avoids the huge negative Euclidean actions of the Hartle-Hawking no-boundary proposal, which seem to make the probabilities dominated by nearly-empty de Sitter spacetime and to make our observations of distant structures (e.g., stars) extremely improbable.
It is interesting that since the background inflationary FRW cosmologies for each macroscopic component of the symmetric-bounce quantum state are time symmetric about a bounce, there is actually no big bang or other initial singularity in this model. The classical background universes contract down to the bounce without becoming singular, and then they re-expand in a time-symmetric way. However, because the quantum fluctuations are in their ground state at the bounce, that is the moment of minimal coarse-grained entropy, so entropy grows away from the bounce in both directions of time. Any thermodynamic observer would sense that the arrow of time (given by the observer's memories and observations of the increase of entropy) is increasing away from the bounce, so it would regard the bounce as in its past. Thus one would get the observed time asymmetry of the universe without any of the background classical components having this asymmetry in a global sense. In Wheeleresque terms, the universe would have time-asymmetry without time-asymmetry.

Acknowledgments

I have benefited from discussions on this particular issue with Jim Hartle and Thomas Hertog. I am thankful to an anonymous referee for suggesting many improvements and added references, and to Bill Unruh and the University of British Columbia for hospitality while these corrections were made. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada. | 2009-08-14T23:27:50.000Z | 2009-07-10T00:00:00.000 | {
"year": 2009,
"sha1": "1cde621692b93d6286b6da50c65a1296d646b2ac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0907.1893",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1cde621692b93d6286b6da50c65a1296d646b2ac",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
44281970 | pes2o/s2orc | v3-fos-license | Viral proteome size and CD8+ T cell epitope density are correlated: The effect of complexity on selection
Highlights
• We analyze the relation between viral complexity and their adaptation to the host immune system.
• Viruses with few proteins and a low number of nucleotides remove more CD8+ T cell epitopes.
• Within a virus, short proteins (with fewer amino acids) adapt better than long ones.
• The relation between total size and adaptation is host specific.
• Complexity limits genetic adaptation in the high-mutation-rate, strong-selection regime.
Introduction
Evolution is driven by a combination of mutations and selection. The balance between the two is a function of the cost of mutations and the strength of selection. Mutations can have, on the one hand, an environmental advantage and, on the other hand, a fitness cost (Soderholm et al., 2006). A typical example of this dual effect is escape mutations from CD8+ T cells (CTLs) in viruses. While these mutations can lead to a higher survival probability, they often lead to a lower probability of producing functional virions. We here use bioinformatics tools to analyze the relation between the frequency of escape mutations and the organisms' complexity.
CTLs recognize virally infected cells through small (typically 8–10 amino acids long) peptides, denoted epitopes. These epitopes are presented in the binding groove of MHC class I molecules located on the surface of these cells (Williams et al., 2002). When an appropriate CTL encounters a host cell expressing such epitopes, the host cell is rapidly destroyed along with its hosted virus (Aebischer et al., 1991; Bowen and Walker, 2005; McMichael and Phillips, 1997). This leads to an evolutionary pressure on viruses to avoid this detection in order to survive and infect new cells. Peptide binding to the MHC-I groove requires well-defined binding motifs. Only a few percent of the possible peptides have such a motif, limiting the number of possible epitopes in every protein to approximately 1–2% of all possible nine-mers for a given HLA allele (Yewdell, 2006). The HLA polymorphism challenges viruses with a changing environment that may result in back-and-forth (toggling) escape mutations (Delport et al., 2008) and thus limit the fixation of mutations. However, mutations in the cleavage sites of the highly conserved proteasome or in positions that bind to conserved HLA motifs can lead to a removal of epitopes at the population level. Thus, in principle, a very limited number of properly positioned mutations can completely hide a viral protein from CTLs.
Many viruses indeed acquire escape mutations in epitopes presented to CTLs (Bowen and Walker, 2005; McMichael and Phillips, 1997; Agranovich et al., 2011; Alcami, 2003; Poppema et al., 1998; Timm et al., 2004; Yates et al., 2007). These mutations have the obvious advantage of reducing the probability that a CTL would kill the virus. The balance between the fitness cost and the advantage obtained by an escape mutation leads to a non-uniform epitope density distribution among different viral proteins. We have recently shown, for example, that proteins expressed early in the viral life cycle have a lower epitope density than proteins expressed late in the viral life cycle (Agranovich et al., 2011; Vider-Shalit et al., 2009a,b), and that proteins with a low copy number have more epitopes than proteins with a high copy number (Maman et al., 2011a). Here, we study the relation between the accumulation of escape mutations and the viral complexity (as shall be further defined). Specifically, we test whether the (dis)advantage of a given mutation is determined by the mutation itself or whether it is related to the complexity of the entire protein, or perhaps even the entire organism.
The relation between complexity and selection was initially studied in the pioneering work of Fisher in 1930 (Fisher, 1930). Fisher proposed that as the dimensionality of the phenotype increases, the probability of a mutation being beneficial decreases, due to its pleiotropic effects on different dimensions of the phenotype. The phenotype dimensionality is defined by the number of the organism's parts (phenotypic characters, denoted hereafter as n). Kimura and Orr then expanded Fisher's work and showed that Fisher underestimated the cost of complexity by not incorporating the lower fixation probability of mutations with a limited phenotypic effect, following the effect of stochastic drift (Kimura, 1983; Orr, 2000). Orr (2000) further showed that the average fitness increase rate is inversely proportional to n for small and medium n, and falls off even faster for large n. In other words, the adaptation rate of complex organisms is lower than that of simpler ones. Welch and Waxman (2003) examined the robustness of Orr's model by introducing different mechanisms (such as a varying magnitude of mutations, modularity (Wagner, 1996; Wagner and Altenberg, 1996; Baatz and Wagner, 1997) and a constant mean mutational chance per phenotypic character). They showed that the relation between the complexity and the adaptation rate is robust to most variations of the model. Gillespie (1994, 1984, 1983) extended Fisher's model and proposed the mutational landscape model. Orr (2003, 2002) further extended his work and found different patterns that characterize the adaptation of DNA sequences. His model was tested using single-stranded DNA viruses (Rokyta et al., 2005). These studies were done in the regime of weak selection and low mutation rate.
In contrast with the above-mentioned models, viral escape from immune recognition is characterized by strong selection and a high mutation rate. Most viruses budding from a given cell are destroyed, and most infected cells can be cleared extremely fast in the presence of an immune response, since CTLs can induce apoptosis in infected cells within minutes (Macken and Perelson, 1984; for reviews see Yates et al., 2007). The mutation rate of viruses can reach 1e−4 mutations per base pair per replication (Sanjuan et al., 2010). We here show, using bioinformatics measurements, a direct relation between organisms' complexity (as defined by their proteins' length and their number) and their epitope density. The ratio between the epitope density and the expected epitope density based on the amino acid composition of each protein is used to estimate the accumulation of escape mutations. Viruses with a low number of proteins accumulate more escape mutations per protein than large ones, and short proteins accumulate more escape mutations than long ones, even in steady state.
Protein length and number are obviously a simplistic proxy for complexity and "phenotype dimensionality". However, in viruses, where the number of proteins is highly limited, such an approximation is probably reasonable. Note that large viruses may have developed other alternatives, such as specific proteins that down-modulate MHC or MHC loading. In other words, the cost of removing epitopes by mutation may be too high for them, leading them to alternative pathways to modulate the immune response.
Evolution rate assessment using the SIR score
The Size of Immune Repertoire (SIR) score for a given HLA allele (an allele that encodes a human MHC class I molecule) is an estimate of the average normalized CTL epitope density of a given protein for this allele. Specifically, the SIR score of an amino acid sequence for a given HLA allele is the ratio between the predicted CTL epitope density in this sequence and the epitope density expected in a random sequence (see Methods for a detailed description). It is based on multiple bioinformatic algorithms used to compute all stages of epitope processing and presentation, and its precision has been tested in multiple previous studies (Vider-Shalit et al., 2009a,b, 2007; Maman et al., 2011a,b; Vider-Shalit and Louzoun, 2010; Kovjazin et al., 2011).
In order to unify the score over all alleles, the SIR score of a protein sequence in a population is defined as the weighted average SIR score over all HLAs, weighted by the HLA allele frequencies in that population. An average SIR score of less than 1 represents a sequence with fewer epitopes than expected; conversely, an average SIR score of more than 1 represents a sequence with more epitopes than expected. A schematic description of the SIR score is given in Fig. 1.
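As a concrete illustration of this bookkeeping, here is a minimal sketch (ours, not the authors' pipeline). The per-allele observed and expected epitope counts, which the paper obtains from the full cleavage/TAP/MHC prediction chain, are taken here as given inputs, and the allele frequencies are made up.

```python
# Minimal sketch of the SIR-score definition above (not the published
# code). observed[a]/expected[a]: predicted vs. random-sequence epitope
# counts for HLA allele a; freqs: (hypothetical) allele frequencies.
def sir_score(observed, expected, freqs):
    """Allele-frequency-weighted average of observed/expected density."""
    total = sum(freqs.values())
    return sum(f * observed[a] / expected[a] for a, f in freqs.items()) / total

def virus_sir(protein_scores):
    """A virus's SIR score: unweighted mean over its proteins, so large
    and small proteins carry equal weight."""
    return sum(protein_scores) / len(protein_scores)

# Toy usage; cf. the Methods example where 4 predicted epitopes against
# 10 expected gives a per-allele SIR score of 0.4 for HLA A*0201.
obs = {"A*0201": 4, "B*2705": 12}
exp = {"A*0201": 10, "B*2705": 15}
frq = {"A*0201": 0.28, "B*2705": 0.05}
print(sir_score(obs, exp, frq), virus_sir([0.52, 0.90, 1.10]))
```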
The SIR score of a virus is then defined as the average SIR score of all its proteins. Note that large and small proteins have equal weights in this analysis.

Fig. 1. Algorithm for the SIR score computation. Each viral protein is divided into all nine-mers and the appropriate flanking regions (a). For each nine-mer a cleavage score is computed (b). We compute a TAP binding score for all nine-mers with a positive cleavage score and choose only supra-threshold peptides (c). Using the MLVO algorithm, the MHC binding scores of all TAP-binding and cleaved nine-mers are computed (d). Nine-mers passing all these stages are defined as epitopes. We then compute the number of epitopes per protein per HLA allele (e).
Correlation between the number of proteins and the epitope density
The complexity of an organism can be measured by its protein number. Thus, if the fixation probability of escape mutations is lower for complex organisms, we expect a positive correlation between the average SIR score of an entire virus and the number of proteins in the virus. In other words, we expect viruses with fewer proteins to have a lower average epitope density (the ratio between epitope number and protein length).
We have analyzed the proteins of all viruses (human and non-human) with at least 4 proteins that have a RefSeq identifier (Pruitt et al., 2007). A full list of the studied viruses and proteins is given in the Supplementary Material.
A positive correlation was observed between the average SIR score of all proteins in a virus and the number of proteins in the virus, for viruses infecting humans (Spearman R = 0.23, p < 0.0016) (Fig. 2). While in general most of the viruses have average SIR scores lower than 1 (T test p value 9.2e−6), viruses with a small number of proteins (fewer than 40) are more biased toward lower scores (average 0.87) than viruses with a large number of proteins (average 0.97).
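For readers who want to reproduce this style of test, the sketch below shows the rank-correlation and one-sample comparisons used here, with made-up arrays standing in for the actual per-virus data.

```python
# Sketch of the statistical tests quoted above (hypothetical arrays,
# not the study's data).
import numpy as np
from scipy import stats

n_proteins = np.array([9, 4, 73, 12, 160, 7, 10, 84])            # per virus
mean_sir = np.array([0.80, 0.91, 0.98, 0.85, 1.00, 0.70, 0.88, 0.95])

rho, p = stats.spearmanr(n_proteins, mean_sir)   # rank correlation
print(f"Spearman R = {rho:.2f}, p = {p:.3g}")

# One-sample test of whether virus-average SIR scores differ from 1:
t, p1 = stats.ttest_1samp(mean_sir, popmean=1.0)
print(f"T test vs 1: t = {t:.2f}, p = {p1:.3g}")
```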
RNA viruses tend to undergo more mutations and probably generate more escape mutations. Also, their genome size is relatively small compared with DNA viruses. A simple hypothesis could have been that the protein-number effect is induced by the difference between RNA and DNA viruses. We therefore performed a regression analysis on the virus type (DNA vs. RNA) and the protein number (data not shown). We found that RNA viruses indeed had a significantly lower SIR score than DNA viruses (p < 0.001). However, even when the virus type was incorporated, the regression coefficient of the SIR score on the number of proteins remained positive and significant (p < 0.001).
In order to test the assumption that the observed correlation is indeed a result of selection against epitope presentation, we have performed a similar analysis in viruses infecting non-human hosts. Such viruses, which have never met human HLA alleles, are not expected to accumulate escape mutations with respect to these human HLA alleles. Indeed, the Spearman correlation of the protein number and the SIR score in viruses infecting non human hosts is practically null (Fig. S1, R = 0.08, p = 0.44). Note that since there is some similarity between human and non-human MHC molecules, peptides that are epitopes for non-human MHC molecules are sometimes also epitopes for human MHC molecules. Moreover, since the proteasome is highly conserved among species, human and non-human hosts share a similar pool of cleaved peptides that can serve as ligands for MHC-I binding.
Many of the studied viruses are quite similar (e.g. HIV I and HIV II). The significant correlation may be the result of the similarity between viruses (adding more degrees of freedom than there actually are). In order to exclude this possibility, we repeated the same analysis grouping all viruses from the same family (e.g., all HPVs, all Herpesviruses, all Influenza viruses). The result is an even clearer correlation between the SIR score and the log protein number (Spearman R = 0.36, p < 0.001) (Supplementary Material, Fig. S4). Again, performing a similar analysis on non-human viruses yields no significant correlation (p = 0.44).
Correlation between protein length and epitope density
Another measurement that could reflect complexity is the protein length. While the number of proteins represents the complexity of different viruses, the protein length is correlated to the complexity within the virus. Obviously, a viral protein may include several domains and have several functions, and protein length is not completely equivalent to the complexity. However, as a first approximation, we can expect a correlation between protein length and complexity within a virus. We therefore tested the correlation between protein length and the epitope density in a virus specific manner.
If selection is indeed more active in shorter proteins than in longer proteins, we again expect a positive correlation between the proteins' length and their epitope density. If this correlation is indeed due to selection, it should be observed mainly in viruses infecting human hosts, and much less in viruses not infecting human hosts.
We tested the correlation between the SIR score and protein length for each virus and plotted the distribution of the correlation coefficients for human viruses (Fig. 3). In viruses infecting human hosts, this correlation is positive in most cases (about 80%), and the average Spearman correlation is 0.25, significantly higher than 0 (one-sample T test on the average correlation per virus, p = 3e−15).
A similar test on non-human viruses produced a smaller, yet significant, deviation from zero (average R = 0.1, p = 0.025, Fig. S2). When comparing the average correlation in human and non-human viruses, non-human viruses have a much lower average correlation (two-sample T test on the average correlation per virus, p < 0.01). Again, the presence of some correlation between protein length and SIR score in non-human viruses is probably the result of the partial similarity between the MHC binding motifs among mammals. Note that for some viruses, random fluctuations or other elements affecting the epitope density could induce negative correlations. Therefore, although not the ultimate factor, the organism "size" is a major factor affecting the selection against epitope presentation.
In order to ensure that the results are not due to a single family of viruses, we repeated the analysis for each family separately, with similar results (Fig. S3). One can see that for most families, the correlation coefficient is positive.
As we have shown previously, the selection acts mainly on viruses with a limited number of proteins (Fig. 2). Therefore, if the correlation between the protein length and the SIR score is indeed induced by selection, we expect the SIR–length correlation coefficients to be correlated with the number of proteins in the virus.

Fig. 2. Number of proteins in a virus versus its average SIR score. The x axis is the protein number for each virus. The y axis is the average SIR score. Each dot is a virus infecting a human host. Viruses with a low number of proteins have on average a low SIR score, while viruses with a large number of proteins have an SIR score of slightly less than 1. Note that some large viruses show a limited extent of selection. Furthermore, some small viruses have an SIR score higher than 1, as is expected from the low number of proteins in such viruses and the random variability in the epitope density. (Pearson R = 0.17, p < 0.015; Spearman R = 0.23, p < 0.0016). Epitope prediction was done by the MLVO algorithm.
Indeed, viruses with a high number of proteins show no correlation between the protein length and the SIR score, while practically all small viruses do. Specifically, the correlation between the number of proteins in the virus and the SIR–length correlation coefficients is significant in human viruses (R = −0.27, p < 1e−5) (Fig. 4). No such correlation exists in non-human viruses (R = 0.08, p = 0.44). Thus, through multiple measures within and between viruses, one can clearly see a correlation between the SIR score and the protein length.
To summarize, these results suggest that the relation between the organism's complexity and the fixation of advantageous mutations extends from the single protein to the full organism. The difference between viruses infecting humans and viruses infecting other species suggests that these results are indeed due to selection against epitope presentation on human HLA molecules and not to properties of short and long proteins in general or generic features of small and large viruses. One could argue that this effect is due to the similarity at the sequence level between proteins in the group of viruses. However, even when all viruses belonging to the same group are clustered to a single point, the relation between the number of proteins and the average SIR score can be clearly observed.
Given the correlation of the SIR score with both the protein length and the protein number, we hypothesized that their combination would be even more strongly correlated with the SIR score. To test this, we computed the correlation of the SIR score with the total proteome size of each virus (the sum of its protein lengths). As expected, this combination yielded an even higher correlation (R = 0.41, p < 1e−10) (Fig. 5).
Discussion
We have here shown, using bioinformatics tools and large-scale genetic data sets, that selection for escape mutations (specifically, mutations that remove CTL epitopes) in viruses is mainly focused on short proteins and small viruses. Such results are expected, given that in large proteins/viruses the removal of epitopes has a fitness cost but no significant survival advantage. This is an extension of previous models on the incompatibility between the fitness cost and the phenotypic advantage of each mutation in complex organisms (Orr, 2000; Wagner and Altenberg, 1996). As the organism becomes more complex, the probability that a mutation will increase the organism's fitness decreases, while the cost of each mutation stays constant.
A positive correlation was here described between the epitope density (as measured by the SIR score using the MLVO MHC-I binding prediction algorithm) of each protein and the protein length. A similar correlation was observed between the average epitope density in a full virus and the number of proteins in the virus. The results were also validated using the classical, yet less precise, BIMAS algorithm for MHC binding (Fig. S5, Table S1). These correlations were observed in viruses infecting humans, and to a much lesser extent in viruses infecting non-human hosts.
It is important to mention that the proteasome and the TAP channels are highly conserved among species, and human MHC also shows some level of similarity to non-human MHC; hence a correlation was seen in non-human viruses as well. However, the stronger correlation in human viruses shows that the reduction in the epitope number is indeed the result of immune-induced selection against epitope presentation.

Fig. 4. Number of proteins in a virus versus the correlation coefficient of SIR score and protein length. The x axis is the coefficient of the SIR score–protein length correlation and the y axis is the number of proteins. Viruses with a small number of proteins have a higher SIR score–protein length correlation than viruses with a large number of proteins (R = −0.27, p < 1e−5). This can be very clearly seen from the large number of small viruses with high correlation coefficients (lower right part of the distribution), and the absence of a parallel distribution of small viruses with negative correlation coefficients. Epitope prediction was done by the MLVO algorithm.

Fig. 5. Virus proteome size versus its average SIR score. For each virus, the lengths of all of its proteins were summed and tested for correlation with the SIR score. The x axis is the total number of amino acids in a virus (the sum of the protein lengths in the virus), and the y axis is the average SIR score for the same virus. One can see that this correlation is stronger than the correlation of the SIR score with either protein length or number of proteins alone (Spearman R = 0.41, p < 1e−10). Epitope prediction was done by the MLVO algorithm.
In order to avoid the destruction of its host cell, the virus evolves to remove a large fraction of the epitopes. Removing a limited part of the epitopes has a very limited advantage and can have a high fitness cost. Thus, even if the cost per mutation is larger in small viruses (and the more so if it is constant), the increase in the number of total required mutations makes it harder for large viruses to adapt. Thus, the very clear negative correlation between the number of proteins and the accumulation of escape mutations may be a result of the ''all or none'' selection force affecting viruses.
An alternative explanation for the negative correlation between selection against epitopes and the number of proteins is that viruses with a large number of proteins have a higher probability of expressing immune-regulatory proteins and hence are less threatened by CTL recognition. However, it seems that this is not the main factor determining selection against epitopes, since some of the small viruses do express immune-regulatory proteins (HIV: Vigerust et al., 2005; Piguet and Trono, 2001; Piguet et al., 2004; HCV: Zimmermann et al., 2008; Kim et al., 2012), and they, like other small viruses, have a low epitope density.
Among all of the peptides presented on the MHC-I molecule, only a small fraction will eventually induce a T cell response (i.e., immunodominant epitopes) (Yewdell, 2006).
Although most presented peptides will probably not induce a T cell response, the systematic removal of these peptides will lower the probability of appearance of immunodominant epitopes. Therefore a lower number of presented peptides may account for a stronger selection against T cell response.
Note that if indeed a small number of presented peptides are immunodominant, and many of the peptides computed to be presented in the current analysis do not induce a T cell response, then the observed decrease of the average epitope density to 70% of its expected value in small viruses may actually represent a removal of all immunodominant epitopes. In such a case, the effect of the viral complexity may actually be much more significant than we present here.
From a theoretical point of view, immunodominance is also expected to increase the effect of the viral complexity on the accumulation of escape mutations. Assume that only very few presented peptides can induce a strong immune response. In such a case, for a small virus it is enough to mutate a few epitopes and completely escape from the immune system. For large viruses, it may be impossible to remove enough epitopes to prevent an immunodominant response, and they would thus gain nothing from escape mutations.
Although epitope density is used as a measure for selection, the relation between the two is not straightforward. In previous studies we have demonstrated that inherent characteristics of a protein influence its epitope density regardless of the presence of selection (Maman et al., 2011a). For example, hydrophobic proteins naturally have more epitopes than hydrophilic proteins, due to the nature of the MHC-I binding groove. Therefore, viruses that have hydrophobic proteins might have a higher SIR score even though they are under strong selection for epitope removal. One striking example of this is the human coronavirus, which has a low number of proteins but a relatively high average SIR score (1.46) (Fig. 2, Table S1).
More generally, we have shown in the past that viruses infecting humans have fewer epitopes on human HLA alleles than viruses infecting non-human hosts. Within the human viruses, there are multiple factors affecting the epitope density. We have here shown that the complexity of the virus is one of the major elements shaping this density, but not the only one.
The genetic complexity, as represented by the number of proteins and their length, does not completely capture the phenotypic complexity, which is much harder to define and measure.
However, the presence of such a clear correlation suggests that these measures are at least related to the organism's complexity.
Most recent studies on evolution have been done in a regime of weak selection and low mutation rate (Rokyta et al., 2005, 2006; Fudenberg et al., 2006; Lande, 2009) (for a review see Orr, 2005). One could have assumed that in viruses, with the extreme regime of high levels of both selection and mutation rates, the relation between complexity and adaptation would be lost, and viruses would be able to optimally adapt their sequence to avoid detection. We have here shown that even in such extreme cases, a balance between complexity and adaptation exists.
These results have implications far beyond the specific issue of escape mutations. We have shown that beyond 40 proteins, viruses fail to adapt their genome to the host immune system; indeed, beyond 40 proteins there is practically no adaptation. One can therefore ask how much more complex organisms, with a much lower mutation rate (1e−4 vs. 1e−9), a much longer life cycle (hours vs. years), and a much smaller population (thousands to billions per species for most advanced species vs. more than 1e10 in each different host for many viruses), can evolve to adapt to their environment.
The simple answer may be modularity. We have previously shown, in the case of herpesviruses and bacteria (Vider-Shalit et al., 2007; Maman et al., 2011c), that while most proteins do not avoid detection, limited groups of proteins, such as Herpesvirus latent proteins or Type III secretion system effectors of gram-negative bacteria, do accumulate escape mutations. The same may be true for the evolution of advanced species: while the full genome (or even groups of tens to hundreds of genes) is far too complex to adapt, limited gene groups may adapt to their environment.
SIR score
We have analyzed the ratio between the number of epitopes presented in viral proteins and the number of epitopes in random proteins with the same length and a typical viral amino acid composition. This ratio was defined as the Size of Immune Repertoire (SIR) score. The epitope number was computed using three algorithms: a proteasomal cleavage algorithm (Ginodi et al., 2008), a TAP binding algorithm developed by Peters et al., and the MLVO MHC binding algorithm (Vider-Shalit and Louzoun, 2010). The algorithms' quality was systematically validated using epitope databases and was found to yield low FP and FN error rates. Different alleles present different sets of epitopes. Thus, the analysis is first performed at the single-allele level. For instance, if a sequence from a viral protein X has 4 epitopes that can bind the groove of the HLA allele A*0201, and a random sequence with a similar length and a typical viral amino acid distribution is expected to have 10 HLA A*0201 epitopes, then the SIR score of X for HLA A*0201 would be 0.4 (4/10). We have computed epitopes for the 39 most common HLA alleles and weighted the results according to the allele frequency in the Caucasian population (Newell et al., 1996). The computation of the SIR scores can be performed through our web-server at http://peptibase.cs.biu.ac.il/index.html.
Cleavage score
Given a peptide with N- and C-terminal flanking residues FN and FC and residues P1, ..., Pi, ..., Pn, where Pi represents any internal residue and 1 and n represent the N- and C-terminal positions, the following score was defined: S = S1(FN) + S2(P1) + S3(Pi) + S4(Pn) + S5(FC). A peptide with a high score, S, has a high probability of being produced, while a low score corresponds to a low probability of production. The appropriate values for S1 to S5 were learned using a simulated annealing process. The algorithm was validated to give a rate of false positives of less than 16% and a rate of false negatives of less than 10% (Ginodi et al., 2008).
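A toy version of this additive score is given below. It is only an illustration: the residue-level tables S1–S5 are learned by simulated annealing in the original work, the values here are random placeholders, and the assumption that S3 is summed over all internal residues is ours.

```python
# Toy illustration of the additive cleavage score (not the published
# implementation). S1..S5 map residues to contributions; the real
# tables are learned by simulated annealing, these are placeholders.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)
S1, S2, S3, S4, S5 = [{aa: random.uniform(-1, 1) for aa in AMINO_ACIDS}
                      for _ in range(5)]

def cleavage_score(fn, peptide, fc):
    """S1(N-flank) + S2(first) + sum S3(internal) + S4(last) + S5(C-flank);
    summing S3 over the interior is an assumption of this sketch."""
    interior = sum(S3[aa] for aa in peptide[1:-1])
    return S1[fn] + S2[peptide[0]] + interior + S4[peptide[-1]] + S5[fc]

print(cleavage_score("A", "SIINFEKLM", "G"))  # nine-mer with single flanks
```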
MHC binding analysis using multi-label vector optimization (MLVO)
The MLVO algorithm (Vider-Shalit and Louzoun, 2010) for MHC binding prediction finds a classifier (w) using three label types that are combined into a single constrained optimization problem. The method finds the optimal combination of a binary classification of peptides known to bind or not to bind the MHC molecule, a linear regression based on the measured affinities of peptides with known IC50 or EC50 binding concentrations, and a guess (often based on information on similar alleles). Solving this optimization problem results in a Position Weight Matrix for each HLA allele. These matrices estimate the contribution of each amino acid at each position to the total binding strength. The accuracy of MHC binding prediction for the vast majority of MHC-I alleles in the MLVO is over 0.95 (with an AUC of over 0.98). As in all other cases, the SIR results presented are an average, weighted over alleles, of the ratio between the computed epitope density and the one expected in a random sequence. The SIR scores of the viral proteins in this study are presented in Supplementary Material Table S1.
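The end product of the optimization, a position weight matrix per allele, can be applied as in the sketch below (an illustration with a random matrix, not a trained MLVO matrix).

```python
# Scoring nine-mers with a position weight matrix (PWM), the output
# form of the MLVO optimization (random placeholder matrix here).
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)
pwm = rng.normal(size=(9, len(AA)))  # one (untrained) 9 x 20 allele matrix

def binding_score(ninemer):
    """PWM assumption: independent additive per-position contributions."""
    return float(sum(pwm[i, AA.index(aa)] for i, aa in enumerate(ninemer)))

def predicted_epitopes(protein, cutoff):
    """All nine-mers whose score passes the allele-specific cutoff."""
    ninemers = (protein[i:i + 9] for i in range(len(protein) - 8))
    return [p for p in ninemers if binding_score(p) >= cutoff]

print(predicted_epitopes("MSIINFEKLQLEDPASTRW", cutoff=2.0))
```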
Thresholds
The MHC binding prediction algorithm provides a binding score for each nine-mer. In order to produce an epitope list, a cutoff must be applied to these scores for each allele. The cutoff is determined based on the competition for presentation on a limited number of MHC molecules. For example, an allele such as B*2705 is expected to present a very large number of epitopes from self proteins. Thus, a viral protein with a large number of epitopes would have to compete with a similarly high number of epitopes from human proteins. While this approach may lead to the exclusion of some real viral epitopes, it should not affect the ratio between the number of computed epitopes in human and non-human viruses.
Epitope computation server
We have designed a CTL epitope SQL-based library webserver (http://peptibase.cs.biu.ac.il). This website provides detailed CTL epitope libraries for the human and mouse genomes as well as for most fully sequenced viruses. It also allows users to upload a file and produce an epitope library. All viral proteins in this study were analyzed for their epitopes using this webserver.
Statistics
All comparisons were performed using two-sided unequal-variance T tests. The correlation between length and SIR score was computed using a Spearman correlation, since the distribution of the protein lengths is approximately log-normal (Supplementary Material Fig. S6) and not normal. | 2018-04-03T05:44:51.299Z | 2013-08-15T00:00:00.000 | {
"year": 2013,
"sha1": "90cb2ad1e63ce541962d14a94d60a5e045ee0f2d",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.meegid.2013.07.030",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "29292b5b7ab624d09cba00d6c64c7288d68b9b17",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
5798806 | pes2o/s2orc | v3-fos-license | The mass and dynamical state of Abell 2218
Abell 2218 is one of a handful of clusters in which X-ray and lensing analyses of the cluster mass are in strong disagreement. It is also a system for which X-ray data and radio measurements of the Sunyaev-Zel'dovich decrement have been combined in an attempt to constrain the Hubble constant. However, in the absence of reliable information on the temperature structure of the intracluster gas, most analyses have been carried out under the assumption of isothermality. We combine X-ray data from the ROSAT PSPC and the ASCA GIS instruments, enabling us to fit non-isothermal models, and investigate the impact that this has on the X-ray derived mass and the predicted Sunyaev-Zel'dovich effect. We find that a strongly non-isothermal model for the intracluster gas, which implies a central cusp in the cluster mass distribution, is consistent with the available X-ray data and compatible with the lensing results. At r<1 arcmin, there is strong evidence to suggest that the cluster departs from a simple relaxed model. We analyse the dynamics of the galaxies and find that the central galaxy velocity dispersion is too high to allow a physical solution for the galaxy orbits. The quality of the radio and X-ray data do not at present allow very restrictive constraints to be placed on H_0. It is apparent that earlier analyses have under-estimated the uncertainties involved. However, values greater than 50 km/s/Mpc are preferred when lensing constraints are taken into account.
INTRODUCTION
The masses of galaxy clusters can be determined in three main ways: from the velocity dispersion of the galaxies, from the pressure gradient in the hot intracluster gas derived from X-ray imaging and spectroscopy, and by analysing the lensing of background galaxies by the cluster potential. Each of these approaches involves assumptions and is vulnerable to various systematic errors. It is therefore useful to compare the results of the different techniques. The X-ray and lensing approaches are generally considered the most reliable. Results from them have now been compared for a number of clusters (Fort & Mellier 1994). The agreement is often reasonable, but there are a few spectacular exceptions, of which Abell 2218 (hereafter A2218) is the most well studied example.
A2218 is an optically compact (core radius ≈ 1′; Dressler 1978) cluster of galaxies, located at a redshift of 0.171 (Kristian, Sandage & Westphal 1978), and classified as richness class 4 (Abell, Corwin Jr & Olowin 1989). The cluster appears well relaxed, with the majority of the galaxies centred around the sole cD galaxy. However, detailed photometric studies (Pello-Descayre et al. 1988; Pello et al. 1992) suggest the existence of a second, smaller galaxy concentration, displaced from the cD by 67″. A spectroscopic study, performed on the central region (< 4′) of A2218, has provided redshift information for 66 of the objects within the core and shown that the average velocity dispersion is 1370 km s⁻¹.
A succession of X-ray telescopes have allowed the properties of the hot gas within A2218 to be established. With the Einstein IPC & HRI (Perrenod & Henry 1981) and the ROSAT PSPC (Siddiqui 1995), the emission was found to be smooth (on scales ∼ 1′), azimuthally symmetric and centred on the cD galaxy. Fitting a polar profile of the surface brightness with a King model gave a core radius of $58''^{+16}_{-16}$ and a β value of 0.63 (Boynton et al. 1982). Integrated spectral analyses with Ginga gave a gas temperature of $6.72^{+0.5}_{-0.4}$ keV and a metallicity of $0.2^{+0.2}_{-0.2}\,Z_\odot$ (McHardy et al. 1990). By virtue of Ginga's bandwidth, this determination is commonly accepted as the most accurate estimate of the mean gas temperature. Most recently, deep observations with the ROSAT HRI (Markevitch 1997) have shown the presence of significant X-ray substructure within the cluster core, suggesting that the cluster may have undergone a recent merger event. This may account for the absence of any signs of a cooling flow in the cluster (Arnaud 1991; White 1996).
Several previous comparisons (Miralda-Escude & Babul 1995; Kneib et al. 1995; Kneib et al. 1996; Natarajan & Kneib 1996) between strong lensing and X-ray analyses have found a factor of 2 discrepancy in the gravitating masses predicted by the two methods. Suggested explanations have centred upon the assumption of hydrostatic equilibrium for the cluster gas, the possibility that magnetic fields may provide significant pressure support to the gas, and the presence of substructure within the cluster.
The Sunyaev-Zel'dovich decrement associated with A2218 has been extensively studied (Jones et al. 1993; Birkinshaw & Hughes 1994; Saunders 1996). These results have been used, in conjunction with X-ray data, to constrain the Hubble constant (Silk & White 1978; McHardy et al. 1990; Birkinshaw & Hughes 1994). These analyses have been made in the absence of reliable information about temperature variations in the intracluster gas and have therefore been forced to make simplifying assumptions, such as that of isothermality (McHardy et al. 1990; Birkinshaw & Hughes 1994; Kneib et al. 1995). This assumption is without a strong theoretical foundation and conflicts with the results of most cosmological simulations (Navarro, Frenk & White 1995; Navarro, Frenk & White 1996; Tormen, Bouchet & White 1996), which show temperature declining with radius, and mass distributions which have a central cusp. The question then arises as to whether simplifying assumptions have significantly biased the conclusions of previous X-ray analyses. For example, is the apparent discrepancy between the X-ray and lensing masses unavoidable, or does it arise simply from the use of inappropriate assumptions in the X-ray analysis?
Motivated by the desire to avoid such restrictive assumptions, we have carried out an X-ray analysis which combines the capabilities of ROSAT and ASCA. The limited spectral bandwidth and resolution of the ROSAT PSPC is compensated for by the superior spectral properties of ASCA. Conversely, the poor spatial performance of ASCA is complemented by the higher spatial resolution of ROSAT. This approach has never before been applied to A2218.
The central aim of this paper is to improve our understanding of A2218 by comparing the results of our X-ray analysis with lensing, SZ and galaxy velocity studies. It also serves as a case study on the possible dangers of assuming an isothermal gas when one has no information to the contrary. Throughout the paper we assume an Einstein-de Sitter cosmology with Ω = 1, q₀ = 0.5 and H₀ = 50 km s⁻¹ Mpc⁻¹, except where otherwise stated.
X-RAY ANALYSIS
The objective of the analysis is to use spatially and spectrally resolved X-ray data to constrain models of the distribution of gas properties in the cluster. For an in-depth discussion of the procedures covered in this section, see Cannon (1997).
Spectral-image modelling
We work with X-ray spectral images, which constitute blurred records of the spectral properties of the cluster projected along the line of sight. Since information about the disposition of material perpendicular to the plane of the sky is not available, it is necessary to make some assumption about the geometry of the source. We assume that the cluster is spherically symmetric. In practice, A2218 is slightly elliptical, with an axis ratio of 0.8 (Siddiqui 1995). However this modest ellipticity should not introduce any serious errors into our derived masses (Fabricant, Rybicki & Gorenstein 1984).
It is important to allow for the spatial and spectral blurring introduced by the telescope, as described by the instrument point spread function (psf) and energy response matrix. We adopt a forward-fitting approach (Eyles et al. 1991; Watt et al. 1992), in which the properties of the gas are parameterised as analytical functions of cluster radius. The emission from each spherical shell in the cluster is computed using a Raymond & Smith (1977; hereafter RS) hot plasma code. After correcting for the cluster redshift, the spectral emissivity profiles are folded through the instrument spectral response, projected along the line of sight, rebinned into an xy grid and blurred with the psf. This produces a predicted spectral image which can be directly compared to the observed data, using a maximum-likelihood statistic. Iteratively adjusting the model parameters results in a best fit to the data.
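A one-dimensional toy version of this loop is sketched below. It compresses the chain emissivity → line-of-sight projection → psf blur → likelihood into a few functions; the response fold is collapsed into a single normalisation, the psf is a Gaussian stand-in, and all parameter values are invented.

```python
# Toy 1-D forward-fit loop in the spirit of the method described above
# (illustrative only, not the analysis code used in this paper).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def density(r, rho0, rc, alpha):
    return rho0 * (1.0 + (r / rc) ** 2) ** (-alpha)

def surface_brightness(b, rho0, rc, alpha, zmax=50.0, nz=2000):
    """Project emissivity ~ density^2 along the line of sight z."""
    z = np.linspace(0.0, zmax, nz)
    r = np.sqrt(b[:, None] ** 2 + z[None, :] ** 2)
    return 2.0 * np.trapz(density(r, rho0, rc, alpha) ** 2, z, axis=1)

def model_counts(b, params, norm, psf_sigma_bins=3.0):
    return norm * gaussian_filter1d(surface_brightness(b, *params),
                                    psf_sigma_bins)  # crude psf blur

def cash(model, data):
    """Cash (1979) statistic (model-independent constant dropped);
    delta-C behaves like delta-chi-squared for confidence intervals."""
    return 2.0 * np.sum(model - data * np.log(model))

b = np.linspace(0.05, 9.0, 120)          # projected radius (arcmin)
truth = (1.0, 1.0, 0.94)                 # rho0, rc, alpha (made up)
data = np.random.default_rng(1).poisson(model_counts(b, truth, norm=50.0))
print(cash(model_counts(b, truth, norm=50.0), data))
```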
Using analytical forms for the radial distribution of gas properties has the advantage of regularising the solution (i.e. suppressing instabilities in the deprojection and deblurring processes), however one runs the risk that the solution may be dictated by the mathematical function imposed. This can lead to overconfidence in derived results, as acceptable alternatives which might fit the data are ruled out by the limitations of the available models. The commonly employed restriction of isothermality is an extreme example of this. We attempt to avoid this problem by using a range of radial functions. This is particularly important for the temperature and we use not only a number of parametric forms for Tgas(r), but also an alternative approach in which Tgas(r) is determined indirectly, by fitting a model for the mass distribution, as discussed below. The gas density profile is much more readily determined by the X-ray data, so we have fitted only two radial forms.
Assuming that the intracluster gas is in hydrostatic equilibrium in the potential well of the cluster, the total gravitating mass within radius r from the centre of the cluster is related to the gas temperature and density by:

$$M_{\rm grav}(<r) = -\frac{k\,T_{\rm gas}(r)\,r}{G\,\mu m_p}\left(\frac{{\rm d}\ln\rho_{\rm gas}}{{\rm d}\ln r} + \frac{{\rm d}\ln T_{\rm gas}}{{\rm d}\ln r}\right), \qquad (1)$$

where $\rho_{\rm gas}(r)$ is the gas density, $T_{\rm gas}(r)$ the gas temperature, $\mu$ the mean molecular weight and $m_p$ the proton mass.
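In discretised form, Equation 1 takes only a few lines; the sketch below uses toy profiles (not the A2218 fits) and schematic cgs units.

```python
# Numerical form of Equation (1) with toy profiles (not fitted values).
import numpy as np

G, k_B, m_p, mu = 6.674e-8, 1.381e-16, 1.673e-24, 0.6  # cgs; mu ~ 0.6

def hse_mass(r, rho, T):
    """M(<r) = -k T r/(G mu m_p) * (dln rho/dln r + dln T/dln r)."""
    dlnrho = np.gradient(np.log(rho), np.log(r))
    dlnT = np.gradient(np.log(T), np.log(r))
    return -k_B * T * r / (G * mu * m_p) * (dlnrho + dlnT)

r = np.logspace(22.5, 24.8, 200)                  # radius in cm
rho = 1e-26 * (1.0 + (r / 1e24) ** 2) ** -0.94    # toy gas density, g/cm^3
T = 7.0e7 * (1.0 + (r / 1e24) ** 2) ** -0.2       # toy declining T, K
print(f"M inside {r[-1]:.1e} cm: {hse_mass(r, rho, T)[-1]:.2e} g")
```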
Gas density
The gas density is well constrained by the X-ray surface brightness, since the PSPC is largely insensitive to variations in temperature for T > 3 keV. Surface brightness profiles are generally well fitted by core-index type models (King 1962; King 1972):

$$\rho_{\rm gas}(r) = \rho_{\rm gas,0}\left[1 + (r/r_c)^2\right]^{-\alpha_\rho},$$

where $\rho_{\rm gas,0}$ is the central gas density normalisation (amu cm$^{-3}$), $r_c$ the core radius (arcmin) and $\alpha_\rho$ the density index (unitless). The main deviations from this form occur at small radii, where cooling flows give rise to surface brightness cusps in many clusters, though not in A2218. Recent N-body studies (Navarro, Frenk & White 1995; Navarro, Frenk & White 1996; Tormen, Bouchet & White 1996) have achieved good fits to dark matter (DM) and gas profiles in simulated clusters with an alternative description. The profiles are found to steepen progressively, from $\rho_{\rm gas}(r) \propto r^{-1}$ in the core to $r^{-3}$ near the virial radius, following the form

$$\rho_{\rm gas}(r) = \frac{\rho_{\rm gas,0}}{x(1+x)^2},$$

where $x = r/r_s$, $r_s$ being the scale radius (arcmin). Both of the above analytical forms have been fitted to the X-ray data for A2218.
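Both parameterisations are simple to code; the sketch below (with arbitrary normalisations) is the kind of model function that gets fed into the projection step above.

```python
# The two analytical gas density forms used in the fits (illustrative
# parameter values; radii and rc/rs in arcmin as in the text).
import numpy as np

def rho_core_index(r, rho0, rc, alpha):
    """King-type core-index model: rho0 * (1 + (r/rc)^2)^(-alpha)."""
    return rho0 * (1.0 + (r / rc) ** 2) ** (-alpha)

def rho_nfw_like(r, rho0, rs):
    """NFW-style profile, r^-1 in the core steepening to r^-3."""
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

r = np.linspace(0.1, 9.0, 5)
print(rho_core_index(r, 1e-3, 1.0, 0.94))
print(rho_nfw_like(r, 1e-3, 2.0))
```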
Gas temperature
The gas temperature distribution is less well determined, since this requires a combination of spatial and spectral resolution which has not generally been available in the past. We consider a variety of simple models: a linear temperature ramp (LTF),

$$T_{\rm gas}(r) = T_{\rm gas,0} + \beta\,r,$$

where $T_{\rm gas,0}$ is the gas temperature (keV) at the cluster centre, $\beta$ the temperature gradient (keV arcmin$^{-1}$) and $r$ the radius (arcmin); a King-type temperature description (KTF),

$$T_{\rm gas}(r) = T_{\rm gas,0}\left[1 + (r/r_T)^2\right]^{-\beta},$$

where $r_T$ is the temperature core radius (arcmin) and $\beta$ the temperature index (unitless); and a polytropic temperature description (TTF), with $T_{\rm gas}(r) \propto \rho_{\rm gas}(r)^{\gamma-1}$, i.e.

$$T_{\rm gas}(r) = T_{\rm gas,0}\left[1 + (r/r_c)^2\right]^{-\alpha_\rho(\gamma-1)},$$

where $r_c$ is the gas density core radius (arcmin) and $\gamma$, the polytropic index (unitless), is fitted as a free parameter varying between isothermality ($\gamma = 1$) and adiabaticity ($\gamma = 5/3$).
Gravitating mass
An alternative to fitting $\rho_{\rm gas}(r)$ and $T_{\rm gas}(r)$ is to fit $\rho_{\rm gas}(r)$ and $M_{\rm grav}(r)$. The corresponding temperature profile can then be inferred via Equation 1. We use several alternative forms, motivated by the distribution of galaxies in clusters (Rood et al. 1972) and by the results of N-body simulations. These include: a core-index description (DMF),

$$\rho_{\rm DM}(r) = \rho_{\rm DM,0}\left[1 + (r/r_c)^2\right]^{-\alpha_{\rm DM}},$$

where $\rho_{\rm DM,0}$ is the central dark matter density normalisation (amu cm$^{-3}$), $r_c$ the core radius (arcmin) and $\alpha_{\rm DM}$ the density index (unitless); a model based upon the simulations of Navarro, Frenk & White (1995) (DNF),

$$\rho_{\rm DM}(r) = \frac{\rho_{\rm DM,0}}{x(1+x)^2},$$

where $x = r/r_s$ and $r_s$ is the scale radius (arcmin); and a Hernquist profile (DHF),

$$\rho_{\rm DM}(r) = \frac{\rho_{\rm DM,0}}{b(1+b)^3},$$

where $b = r/r_s$ and $r_s$ is the scale radius (arcmin).
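For the mass-model route, the enclosed mass follows from integrating $4\pi r^2\rho(r)$; for the DNF and DHF forms the integral is analytic. The sketch below uses placeholder normalisations, not fitted A2218 values.

```python
# Enclosed masses implied by the three dark-matter density forms above
# (placeholder normalisations; the first two integrals are analytic).
import numpy as np
from scipy.integrate import quad

def m_dnf(r, rho0, rs):
    """M(<r) for rho = rho0 / (x (1+x)^2), x = r/rs (NFW-style DNF)."""
    x = r / rs
    return 4.0 * np.pi * rho0 * rs**3 * (np.log(1.0 + x) - x / (1.0 + x))

def m_dhf(r, rho0, rs):
    """M(<r) for the Hernquist form rho = rho0 / (b (1+b)^3), b = r/rs."""
    b = r / rs
    return 2.0 * np.pi * rho0 * rs**3 * b**2 / (1.0 + b) ** 2

def m_dmf(r, rho0, rc, alpha):
    """M(<r) for the core-index (DMF) form, integrated numerically."""
    f = lambda s: 4.0 * np.pi * s**2 * rho0 * (1.0 + (s / rc) ** 2) ** (-alpha)
    return quad(f, 0.0, r)[0]

print(m_dnf(1.0, 1.0, 0.3), m_dhf(1.0, 1.0, 0.3), m_dmf(1.0, 1.0, 0.3, 1.0))
```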
Fitting the models
Determination of the best-fit parameters for a cluster model proceeds in the way commonly employed for spectral fitting. The fit statistic employed is maximum likelihood, rather than chi-squared, since the data are generally strongly Poissonian. The fit and its local slope are determined at some initial position in the parameter space. This information is used to predict an improved set of model parameters, and the fit statistic is re-determined. The process is iteratively repeated until the statistic slope falls below a pre-determined value. One limitation of this method is that the fitting tends to follow the local gradient in the statistic until it encounters a minimum. Thus the fit can become trapped in a 'valley', which it regards as the best-fit result, even though a more suitable combination of parameters may occur elsewhere. To avoid this, we randomly perturb models during analysis and force them to re-fit (to check whether the same minimum is produced). Confidence regions can be derived for each best-fit parameter by offsetting the parameter of interest from its best-fit value (both above and below the best fit) and re-optimising the other parameters. The resulting increase in the fit statistic from its optimum value is used to determine what offset would need to be applied in order to create a user-defined change in the statistic. This defines the required confidence interval. We use the form of the maximum-likelihood statistic introduced by Cash (1979), such that changes in the statistic have the same significance as changes in chi-squared. Hence, for each parameter, an increase in the Cash statistic of 1 corresponds to a 68% confidence interval, and an increase of 2.71 to 90% confidence. The above process is repeated for each model parameter for which an error estimate is required.
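This confidence-interval procedure is a profile-likelihood scan; the sketch below implements it around a toy objective standing in for the real Cash statistic of the fit.

```python
# Profile scan for a delta-C confidence bound (toy objective in place
# of the real Cash statistic; delta = 1 gives 68%, 2.71 gives 90%).
from scipy.optimize import minimize, brentq

def profile_stat(cash_of, best, i, value):
    """Minimum statistic with parameter i frozen at `value`."""
    free0 = [p for j, p in enumerate(best) if j != i]
    def obj(f):
        full = list(f)
        full.insert(i, value)
        return cash_of(full)
    return minimize(obj, free0, method="Nelder-Mead").fun

def upper_bound(cash_of, best, i, delta=1.0, step=0.1):
    c0 = cash_of(list(best))
    f = lambda v: profile_stat(cash_of, best, i, v) - (c0 + delta)
    hi = best[i] + step
    while f(hi) < 0.0:                    # bracket the delta-C crossing
        hi += step
    return brentq(f, best[i], hi)

# Toy quadratic statistic: sigma(p0) = 0.2, so delta-C = 1 at p0 ~ 1.2.
cash_of = lambda p: (p[0] - 1.0) ** 2 / 0.04 + (p[1] - 2.0) ** 2 / 0.25
print(upper_bound(cash_of, [1.0, 2.0], i=0))
```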
Errors in physical quantities which are functions of radius (such as mass or temperature) are generally affected by several model parameters. We derive error envelopes for such quantities by taking the outer envelope of all of the curves generated by perturbing each free parameter to its upper and lower error bounds. Because these envelopes are derived using every parameter combination, each offset to their error bounds, the result is a conservative estimate of the statistical uncertainty.
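A hedged sketch of the envelope construction just described; `profile_fn` (mapping radii and a parameter vector to, say, a mass or temperature profile) is a placeholder:

```python
import numpy as np
from itertools import product

def error_envelope(radii, profile_fn, best, bounds):
    """Conservative envelope: evaluate the profile for every combination of
    parameters at (lower bound, best fit, upper bound) and take the outer
    bounds of the resulting family of curves (3**n_par evaluations)."""
    choices = [(lo, b, hi) for b, (lo, hi) in zip(best, bounds)]
    curves = np.array([profile_fn(radii, np.array(combo))
                       for combo in product(*choices)])
    return curves.min(axis=0), curves.max(axis=0)
```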
Once the total gravitating mass distribution has been determined (using Equation 1, if the mass has not been modelled directly) the various mass components in the cluster can be separated. The gas mass profile is calculated from the fitted parameters. The galaxy mass profile can be constructed from the observed luminosity profile for the cluster, assuming a constant mass-to-light ratio. Subtraction of these components from the total mass profile then yields the dark matter profile.
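The separation of components then reduces to profile subtraction on a common radial grid; a one-line sketch, with the constant mass-to-light ratio being the stated assumption:

```python
def dark_matter_profile(m_total, m_gas, lum_profile, mass_to_light):
    """M_DM(r) = M_total(r) - M_gas(r) - (M/L) * L(<r), all on one grid."""
    return m_total - m_gas - mass_to_light * lum_profile
```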
ROSAT PSPC reduction
The aim of the ROSAT analysis is to obtain well constrained gas density parameters, which can then be utilised in the ASCA analysis.
The raw data, obtained on May 25th 1991, were reduced using the Starlink ASTERIX X-ray analysis package. Periods of high background were removed from the data, reducing the effective exposure time to 42 ksec but making the background subtraction substantially more reliable.
Subtraction of the X-ray background was accomplished by selecting data from an annulus (27 ′ − 33 ′ ), ignoring pixels covered by the detector support structure or containing point source emission. This background sample was then extrapolated to cover the whole field, using the PSPC energy-dependent vignetting function. Since the X-ray surface brightness profile for A2218 can be traced to a maximum radius of 12 ′ , the chosen background annulus is free of source emission. In the following analysis, the data are restricted to lie within 9 ′ to avoid possible systematic effects from uncertainties in the background subtraction at large radius.
The exposure-corrected, background-subtracted PSPC data were summed to provide an integrated cluster spectrum and split into concentric annuli, centred on the cluster core, within which spectra were extracted. Fitting these spectra gives an indication of the depth of the cluster potential well and the radial structure of the cluster gas density and temperature parameters (see Fig 1). However, any gradients present in these distributions will tend to be underestimated, due to the smoothing effects of the instrument psf and projection along the line-of-sight. Using annuli of width greater than the instrument psf minimises the former effect.
In order to create a spectral image dataset, allowing a full spatial and spectral analysis, the data were formed into images of channel width 10, over the channel range 11-230 (approximately 0.2-2.3 keV). This results in a data cube, from which specific regions can be selected and analysed. Within the cluster emission there is one bright source contaminating the data, located at a radius of 11.1′ from the cluster centre. In the PSPC analysis, the point source can be eliminated by ignoring the data collected in that region. However, this is not possible with ASCA, since the extended psf ensures that it is not discernible as a discrete source. To determine whether the source significantly affects our analysis, we model it using the PSPC data. The cluster emission is first fitted with the point source pixels removed. This model of the cluster emission is then subtracted from the original data, leaving behind just the point source, which is fitted with a power-law spectral model.
The fitted index, 1.25, indicates the softness of the source: the majority of the flux is emitted below 0.5 keV. This is consistent with identification of the source as SAO17151, a bright star with a soft spectrum (Markevitch 1997). If the PSPC-determined model for the point source is subtracted from the ASCA data, the cluster fits are modified to the extent that a 1.5% difference in the total gravitating mass at 2 Mpc results. This effect is negligible compared to other errors, so no attempt has been made to remove the source from the ASCA data.
ASCA GIS reduction
The ASCA analysis aims to constrain the gas temperature and metallicity profiles, using the PSPC-derived gas density profile. A2218 was observed by the ASCA X-ray telescope on April 30th 1993. In this paper we use only data from the two gas imaging spectrometers (GIS2 and GIS3), since these have a wider field of view (∼50′ diameter) and greater high-energy detector efficiency (up to 10 keV) than the CCD detectors (SIS0 and SIS1). An additional reason is that an accurate model for the large, asymmetrical and energy dependent psf is available for the GIS detectors, constructed from Cyg X-1 observations. These calibrations restrict analysis to a maximum radius of 18′ and an energy range of 1.5-11 keV (Takahashi et al. 1994), which is not a significant limitation in the case of A2218.
Standard procedures for ASCA analysis, followed in this paper, are given by Day et al. (1995). The recommended screening criteria are applied to the raw data, removing data taken during times of high background flux. Subtraction of the X-ray background is complicated by the telescope psf. This has the effect of ensuring that no region of the detector is free from source flux. Hence, we extract an "average" background dataset from the publicly distributed set of blank-sky pointings (Day et al. 1995). These datasets suffer from mild point-source contamination, as sources are not completely averaged out, but are currently the best available solution.
The results of a naive annular spectral analysis of the ASCA data are shown in Fig 1. However, it is important to bear in mind the limitations of this approach. Crosstalk between annuli is significant (Takahashi et al. 1994), and the energy-dependent spreading of flux results in a distorted temperature profile, such that analysis of a simulated isothermal cluster would give a temperature which rises with radius. In practice, the temperature appears to decline with radius, indicating that a real gradient is present.
For 3D analysis, spectral-image datasets with contiguous energy bands of width 50 raw channels are created. Since the cluster centre is offset from the detector centre, data beyond a radius of 9′ are not fitted (the offset added to the radial extent of the source is similar to the maximum radius where psf calibrations apply). This restriction minimises the effects of poor calibration and high background near the detector edge. As both GIS instruments behave similarly, the datasets are fitted simultaneously. The Cash statistic has been used to identify best-fit models, but similar results, both for the best-fit parameters and for the comparison between the quality of fit of different models, are obtained using the χ² statistic.
RESULTS
We first compare the results of our analysis with published studies, to investigate whether the fitted models are consistent with earlier work on A2218.
Integrated spectral analyses of clusters produce "mean" quantities which are representative of the entire object. Assuming isothermality, McHardy et al. (1990) derived a gas temperature of 6.72 +0.5 −0.4 keV together with an iron abundance of 0.2 +0.2 −0.2 Z⊙ (using an RS emission code, where all other heavy element abundances were fixed at 0.5 Z⊙). This is in agreement with an earlier, much less well constrained examination (Perrenod & Henry 1981). More recently, Mushotzky & Loewenstein (1997) have derived T = 7.2 keV and Z = 0.18 ± 0.07 Z⊙, from an integrated ASCA spectrum.
Fitting an RS model to our ROSAT data results in a gas temperature of 4.7 +1.1 −0.9 keV and a hydrogen absorption column of 2.6 +0.2 −0.1 ×10^20 cm^−2 (with metallicity fixed at the Ginga value). This absorption agrees with the Stark level of 2.58 +0.18 −0.18 ×10^20 cm^−2 (Stark et al. 1992). The temperature is lower than the Ginga result of McHardy et al. (1990), but the energy range of the PSPC is not very suitable for determining the temperature of such hot gas. It has been found previously (Markevitch & Vikhlinin 1997) that PSPC results tend to be biased low for high temperature clusters. Fitting an integrated spectrum simultaneously to the GIS2 and GIS3 data gives a gas temperature of 6.73 +0.46 −0.44 keV and a metallicity of 0.20 +0.08 −0.08 Z⊙ (with the hydrogen column fixed at the PSPC value), in good agreement with the Ginga and Mushotzky & Loewenstein (1997) results.
The ability to extract and analyse spectra from independent regions of the cluster represents a significant advance over analysis of the integrated emission. Fig 1 shows the derived annular temperature profiles from both ROSAT and simultaneous GIS2/GIS3 analysis. The PSPC results are consistent with Siddiqui (1995), with a possible temperature drop visible in the central bin. However, the evidence for central cooling is statistically rather weak, and the derived central cooling time of ∼1.5×10^10 yr is comparable with the Hubble time, so a strong steady-state cooling flow appears to be ruled out.
If the hydrogen column is fitted, using the PSPC data, it is found to be consistent with the Stark value (Stark et al. 1992) throughout the cluster, apart from a slight rise in the centre. This may be due to matter deposited by an earlier, disrupted cooling flow (as noted by Siddiqui 1995). The ASCA analysis suggests that the metallicity may be slightly lower than the integrated value in the cluster centre with a shallow radial rise. However, all points are consistent with the McHardy et al. (1990) value of 0.2 Z⊙.
Analysis using spectral-image datasets allows extraction of the 3D gas density and temperature distributions within A2218. In the subsequent cluster analysis, both 'temperature models' (fitting for ρgas(r) and Tgas(r)) and 'mass models' (fitting for ρgas(r) and Mtot(r)) are used. Parameters representing the metallicity and cluster position are also fitted.
The best constrained parameters derived from PSPC analysis pertain to the shape of the gas density distribution. Comparable analyses have been carried out using Einstein (Perrenod & Henry 1981;Boynton et al. 1982;Birkinshaw & Hughes 1994), and ROSAT (Siddiqui 1995) data. These studies agree that when a King profile is assumed, A2218 is well modeled with a core radius, rc, of ∼ 1 ′ and an index, αρ, of ∼ 1 (equivalent to a β-value of 0.67). Higher resolution analysis, using the ROSAT HRI, has been performed by Squires et al. (1996) and Markevitch (1997). These studies detect the presence of smaller scale structure, with three surface brightness peaks visible within the central arcminute, none of which coincides with the central cD galaxy. This structure cannot be resolved with either the PSPC or GIS detectors, and indicates that the core of A2218 departs from a fully relaxed state.
Since ASCA is poor at constraining the gas density distribution for such a compact source, the PSPC fit values for core radius and index are carried over into the ASCA spectral-image analysis. To obtain the appropriate density parameters, a linear temperature ramp model is fitted to the entire PSPC data within a radius of 9 arcmin, with the temperature parameters fixed at those derived from an initial fit to the GIS data. This model is then re-fitted to the GIS data, complete to a radius of 9 arcmin, with the gas density core radius and index fixed, allowing a fit of the temperature parameters. In an iterative process, this model is alternately fitted to the PSPC and GIS datasets until no further change is observed in the parameter values. With consistency achieved, the "standard" gas distribution for A2218 is determined to be rc = 0.91 +0.03 −0.03 arcmin, αρ = 0.96 +0.02 −0.01 , in good agreement with the comparable results discussed above. If an NFW profile is assumed, the required scale radius, rs, is 10.27 +0.21 −0.20 arcmin. However, the NFW parameterisation is neither preferred nor disallowed by the PSPC data, so we only use a King parameterisation for the gas density distribution in the following analysis.
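The iteration just described is a simple fixed-point loop; schematically (the two fit callables stand in for the PSPC fit with temperature parameters frozen and the GIS fit with the density shape frozen, so this is a sketch of the procedure rather than the authors' pipeline):

```python
def alternate_fit(fit_pspc, fit_gis, params, tol=1e-3, max_iter=20):
    """Alternately re-fit density (PSPC) and temperature (GIS) parameters,
    each holding the other set fixed, until the values stop changing."""
    for _ in range(max_iter):
        new = fit_gis(fit_pspc(params))
        if max(abs(n - p) for n, p in zip(new, params)) < tol:
            return new
        params = new
    return params
```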
All cluster data
Fixing the gas density shape parameters at these King values, a range of models were fitted to the ASCA spectral-image data. On the basis of their Cash statistic, a set of models best-fitting the observed cluster data within a radius of 9 arcmin is selected (Table 1 lists the main parameters for each model) as representative of A2218. As can be seen from the Table, the isothermal model is a significantly poorer fit to the data. It is included for comparison with the other models. All of the temperature profiles obtained for this set (apart from that for the isothermal model) are plotted in Fig 2. Beyond the central arcminute (∼230 kpc) the profiles are in good agreement and quite non-isothermal. The typical 90% confidence envelope for an individual model (remember that these are conservative envelopes) generally encompasses the spread of these best-fit profiles (a single error envelope is included to illustrate this point). Within the central region, which lies within a single ASCA psf, a greater spread in temperature is allowed by the data. However, despite this divergence in temperature at small radius, all of the best-fit models, bar the isothermal model, have similar Cash statistics (see Table 1).
The abundance of heavy elements has been assumed, in all of the above models, to be constant over the cluster. However, the spectral capabilities of ASCA allow us to test this assumption. Allowing a linear metallicity gradient with the best-fit linear temperature ramp model gives a slope of 2.8 +4.5 −6.5 ×10^−2 Z⊙ arcmin^−1, a value consistent with uniform metallicity. The effect of this best-fit slope upon other model parameters is negligible, hence freezing the metallicity gradient at zero does not bias our analysis.
Central cluster data
We now examine the claim that lensing analyses require a larger cluster mass than is consistent with the X-ray data. The model which provides the greatest gravitating mass at the critical radius (∼85 kpc) is therefore selected for examination. This model, the DHF best-fit, using a Hernquist description for the DM distribution and a King description for the gas density profile (see Table 1), implies a high central gas temperature. In the following comparisons, we refer to this model as the "maximum-mass model" (MMM).
The MMM temperature profile features a factor of 2 rise within the central 1 ′ . This raises two questions: does such a steep temperature gradient raise physical problems (would it be convectively unstable?), and is it consistent with the X-ray spectral data observed within the central region? We will return to the first question in Section 5.
To address the latter question, GIS3 spectra integrated within r = 1′, where the MMM temperature rises steeply, are compared with the predictions of the MMM and isothermal models in Fig 3. For clarity, only the energy range of 5-10 keV is displayed, since this is where the impact of very hot gas will be most apparent. Although the MMM provides a reasonable fit to the spectral imaging data as a whole, it does not follow that the data in the central regions need be consistent with the high model temperature. In practice, for both instruments (while the GIS2 spectra are not shown in Fig 3, they behave similarly to the GIS3 spectra) the MMM is a good match to the data. The isothermal model is, however, also consistent with the restricted dataset. The conclusion from this is that while the data do not rule out a central temperature rise, they do not require one either. The reason for this is that within the ASCA bandpass, plasmas with temperatures of 8 and 18 keV do not have substantially different spectral signatures and are thus difficult to differentiate (this is analogous to the difficulty that the PSPC has in dealing with clusters hotter than 2-3 keV). This problem is compounded by the limited spatial resolution of ASCA.
COMPARISON OF X-RAY AND LENSING MASSES
Deep optical observations of A2218 reveal a number of major arcs and a wealth of minor arclets. These features are the result of gravitational lensing, an effect which is independent of the physical state of the cluster gas. Instead, uncertainty lies in the characteristics of the background galaxies and the possibility of matter sub-clumps along the line-of-sight. Using the models fitted to A2218, we can derive projected gravitating mass profiles, suitable for comparison with the results of lensing analysis. This involves assuming a maximum outer radius for the cluster mass distribution. In the following analysis we take this to be the maximum radius of the data used in cluster fitting, which is 9′ (∼2 Mpc). The choice of projection radius has a minor impact upon the derived projected mass profiles (compared to other uncertainties) so long as the chosen radius is sufficiently large, ≥ 2 Mpc. For example, the difference in projected mass within 2 Mpc, between models with maximum radii of 2.0 and 3.0 Mpc, is 9%.
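The projection itself is a straightforward double integral; a hedged sketch, with the truncation at r_max playing the role of the maximum outer radius discussed above:

```python
import numpy as np
from scipy.integrate import quad

def projected_mass(R, rho_3d, r_max):
    """2D mass within projected radius R for a spherical density profile
    rho_3d(r), truncated at r_max along the line of sight."""
    def sigma(Rp):                          # surface density at Rp
        z_max = np.sqrt(max(r_max ** 2 - Rp ** 2, 0.0))
        return 2.0 * quad(lambda z: rho_3d(np.hypot(Rp, z)), 0.0, z_max)[0]
    return quad(lambda Rp: 2.0 * np.pi * Rp * sigma(Rp), 0.0, R)[0]
```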
Since lensing analysis measures the matter distribution on both small (strong lensing occurs where the surface mass density is high) and large (weak lensing is theoretically observable to the edge of the cluster) scales, comparison with X-ray results is extremely informative. In Fig 4 the X-ray derived projected mass profiles for the MMM and isothermal model are plotted, together with the lensing results of Kneib et al. (1995), Loeb & Mao (1994) and Squires et al. (1996).
The two strong lensing points, plotted in Fig 4 at this radius, are consistent with a projected mass of ∼6x10 13 M⊙, Figure 2. Fitted 3D temperature profiles for the cluster gas. The MMM (solid) is plotted together with its 90% error envelope (dash-3dot) and the remaining best-fit models (dotted). All of the profiles feature gas temperatures which decrease by a factor of ∼ 2 or more within the region of analysis, and are consistent beyond the cluster core.
despite their use of different mass distributions (Kneib et al. 1995 assume a bipolar mass model while Loeb & Mao 1994 use an isothermal sphere distribution). The corresponding projected mass for the MMM is 5.3 +1.2 −4.1 x10 13 M⊙. The uncertainty associated with this mass is large enough to include the lensing derived value, so, within the calculated errors, the X-ray and strong lensing analyses are consistent, when the MMM is used. The corresponding mass from the isothermal model is 2.8 +0.2 −0.2 x10 13 M⊙ (see Fig 4), in agreement with the factor of 2 discrepancy reported by previous analyses (Loeb & Mao 1994;Miralda-Escude & Babul 1995;Kneib et al. 1995).
The bipolar mass model of Kneib et al. (1995) was used by these authors to extract a further two masses (see Fig 4), at the radius of the second mass clump (256 kpc) and at the maximum radius where giant arc constraints can be applied (383 kpc). The projected, enclosed gravitating mass at these radii for both the MMM and the isothermal model are given in Table 2. No error estimates are given by Kneib et al. (1995) for the lensing masses. However, it can be seen that both of these outer lensing points lie just outside the statistical error envelope of the MMM, unlike the mass derived at the critical radius. This is due to the MMM profile flattening at large radius. Note however that both of the outer predicted masses depend upon the bimodal model of Kneib et al. (1995), whereas the inner point can be derived in a model-independent fashion.
The Kneib et al. (1995) bimodal mass model has received additional support from detailed optical observations by Ebbels et al. (1996). In this analysis, the redshift of one of the faint arcs was determined for the first time and found to be in good agreement with the value predicted by Kneib et al. (1995).
The conclusion which can be drawn from this Xray/strong lensing comparison is that it is not clear that the results from the two approaches are inconsistent. We have shown that when a model such as the MMM is used, which includes a central mass cusp, the predicted masses at the critical radius agree within the X-ray statistical error envelope (see Table 2). If the gas is assumed to be isothermal, the previously noted discrepancies can be reproduced (see Fig 4).
Weak lensing
A statistical analysis of the weakly lensed arclets has been carried out by Squires et al. (1996), allowing the slope of the surface mass density to be mapped within a radius of ∼1 Mpc. Fig 4 shows that, within errors, the magnitudes of the weak lensing points are consistent with the MMM. However, the trend indicated by these points is for a steeper profile, providing more gravitating mass at large radii than predicted by the MMM.
[Table 2. Projected gravitating masses from strong lensing and X-ray analysis are compared at the radii used by Kneib et al. (1995). At the critical radius the isothermal mass is a factor of 2 too low, although the discrepancy reduces greatly at larger radii.]

A significant difficulty with the weak lensing analysis, noted by Squires et al. (1996), is the procedure used for normalisation. This is done by defining a reference annulus at large radius within which the cluster mass contribution is assumed to be negligible. Since the annulus used by Squires et al. (1996) has an inner radius of 800 kpc, while the X-ray surface brightness profile can be traced to r > 2 Mpc, some shear signal from the cluster must actually be present within the reference annulus. Hence the recovered normalisation is an underestimate, such that the weak lensing mass profile can only be regarded as a lower bound to the projected mass. The impact of this is examined further in Section 6. The correction for this effect has been estimated by Squires et al. (1996) to be a factor of ∼1.2-1.6 in projected mass. If the weak lensing points are adjusted to take account of this factor, this has two important implications for the MMM comparison. First, the normalisation of all data points increases, bringing the inner points into better agreement with the MMM profile while the outer points move further from consistency. Second, because this adjustment is a DC effect for the reconstructed projected surface mass density (of the cluster), it does not act equally on the radially integrated points. Hence, when the normalisation is raised, the slope of the weak lensing mass profile is also increased. Even when the normalisation adjustment is applied, this is insufficient to ensure full consistency with the outer strong lensing points of Kneib et al. (1995), which (see Table 2) lie above the values of 1.5×10^14 M⊙ and 2.5×10^14 M⊙ predicted at the same radii by weak lensing. There is, however, a well understood effect whereby the weak lensing signal is suppressed at small radii due to contamination by cluster galaxies, reducing the derived weak lensing mass (Kaiser & Squires 1993; Squires et al. 1996). Correcting for this would bring the inner weak lensing points into greater consistency with both the strong lensing and X-ray results.
Overall, then, this comparison indicates that at large radii, > 250 kpc, the weak lensing and X-ray analyses are reasonably consistent, though the weak lensing results tend to give more mass at large radii. Further work (observations of increased numbers of arclets over a wider field) is required to reduce the uncertainty in the weak lensing analysis and to move the reference annulus to larger radii, where the cluster mass contribution is lower.
GAS ENTROPY
In Fig 5 the derived gas entropy profiles for the best-fit cluster models are plotted. These indicate that beyond a radius of ∼120 kpc the entropy increases with radius, as is expected for gas which is convectively stable. However, within this radius three of the best-fit models (including the MMM) exhibit a slight inward rise in entropy, making the gas convectively unstable. For the MMM, the predicted increase in entropy which occurs between a radius of 120 kpc and the cluster centre is ∼30%. The reason for this behaviour is that the temperature increases rapidly at small radii while the gas density is forced to flatten (due to the use of a King model description). However, it is precisely this temperature increase that allows the model to achieve consistency with the gravitational mass derived from strong lensing.
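The entropy definition is not restated here; assuming the usual X-ray convention K = kT n_e^(−2/3), the stability check reduces to asking whether K decreases inward, as in this minimal sketch:

```python
import numpy as np

def entropy(kt_kev, n_e):
    """X-ray 'entropy' K = kT * n_e**(-2/3) (keV cm^2), the usual convention."""
    return kt_kev * n_e ** (-2.0 / 3.0)

def convectively_stable(r, kt_kev, n_e):
    """Schwarzschild criterion: stable where entropy does not fall outward."""
    k = entropy(kt_kev, n_e)
    return np.all(np.diff(k) / np.diff(r) >= 0.0)
```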
As noted in Section 3, the NFW gas parameterisation is neither preferred nor disallowed by the PSPC data. However, if the King gas density parameterisation used for the MMM is replaced by the NFW parameterisation, the central entropy drops by ∼ 75% (See Fig 5). This difference is large enough to ensure that entropy rises continuously at all radii, removing the problem of convective instability.
CENTRAL GALAXY MASS
An alternative method for bringing the strong lensing and X-ray analyses into agreement has been examined by Makino (1996). In this study, a massive central galaxy was embedded within the cluster to increase the predicted core mass without violating observational X-ray constraints. As the cD galaxy envelope can be traced to at least 25′′ (96 kpc), which encompasses the critical radius, this is potentially an important consideration.

[Figure 4 caption, fragment: ... strong lensing points from Kneib et al. (1995) and Loeb & Mao (1994) (boxes) together with the weak lensing points of Squires et al. (1996) (crosses).]

[Figure 5. Derived entropy profiles for the cluster gas. The MMM is plotted for both a King (solid) and an NFW (dashed) gas density distribution. The remaining best-fit models (dotted) are plotted using the King parameterisation. It can be seen that all of the models are consistent with a gas which is convectively stable, except within ∼120 kpc, where the high central gas temperatures of certain models (such as the MMM, see Table 1) lead to an increase in entropy. Where the NFW gas density form is used, the derived entropy increases at all radii.]

Makino (1996) tested this hypothesis by, firstly, constructing a total mass distribution which included both cluster and cD components. These were parameterised by a King mass model (equation 7 with αDM = 3/2) and an isothermal sphere mass model (αDM = 1), respectively. Secondly, the gas temperature distribution was constrained to be isothermal at large radius (beyond the cD galaxy) and to rise or fall linearly with r at small radius. By assuming the gas to be in hydrostatic equilibrium, the gas density distribution corresponding to different temperature models was extracted and compared to the observed Einstein surface-brightness data.
Constraining the temperature to be isothermal at r > 31 ′′ , at the McHardy et al. (1990) derived value, Makino (1996) found that models consistent with both the X-ray surface brightness and with enough central mass to account for the strongly lensed arcs had two notable features. Firstly, a cD component was required to ensure that the critical surface mass density was attained by the model. Secondly, the gas temperature rose sharply within 31 ′′ , reaching a central value of ∼ 11keV. Models which did not include a cD mass contribution, or which assumed the gas to be isothermal at all radii, were unable to provide sufficient mass at small radii to account for the lensing data, whilst remaining consistent with the Einstein data.
The projected gravitating mass within 1.5 Mpc, from the model favoured by Makino (1996), of 1.3×10^15 M⊙ is in reasonable agreement with the MMM value of 9.0 +1.4 −5.3 ×10^14 M⊙. This consistency is also found at smaller radii, r = 100 kpc, where Makino (1996) obtains a mass of 5.2×10^13 M⊙ compared to 6.6 +1.5 −5.1 ×10^13 M⊙ for the MMM. It should be noted that the MMM employs a mass profile with a central mass cusp (ρDM ∝ r^−1) but does not include any discrete component corresponding to the central galaxy. We now explore the effect of adding such an additional central component to our model.
Adding a central galaxy of radius 100 kpc and adjustable mass normalisation to the MMM, we find that the statistical quality of the fit deteriorates. The largest additional mass allowed, at the 95% level, is 1.7×10^12 M⊙ (within 100 kpc). From here on this model is referred to as the MMMC, since it represents the MMM plus a central galaxy mass. Note however that the MMM, which contains no central mass component, is still statistically preferred (see Table 1).
This central mass is substantially less than the Makino (1996) value of 5.2×10^13 M⊙ (recall, however, that our cluster profile contains a central cusp, whilst Makino's has a flat core). Compared to the MMM, it provides a mass increase at the critical radius of only ∼5%. However, whilst the addition of a central mass component has little impact at small radius, it has the rather counter-intuitive effect of providing more mass at large radius. The reason for this is that, in the case of the MMM, the DM distribution has a scale radius of 1.89 +0.06 −1.16 arcmin. When a central mass component is added, the DM profile no longer needs to peak so sharply and its scale radius increases to 6.79 +0.95 −1.49 arcmin. Fig 6 shows the MMM and MMMC mass profiles compared to the strong and weak lensing results. It can be seen that the increased mass of the MMMC at large radius (compared to the MMM) allows it to be consistent, within its 90% statistical confidence envelope, with both the two outer points derived by Kneib et al. (1995) and the weak lensing points of Squires et al. (1996).
We can use our models to estimate the baseline error involved in the weak lensing analysis, as a result of cluster mass residing within the reference annulus employed by Squires et al. (1996). In Fig 6 the result of using the MMMC to correct the weak lensing results for this mass is also plotted. It can be seen that the points come into better agreement with the MMMC profile, particularly at r ≳ 500 kpc. At smaller radii the weak lensing points still lie systematically below the MMMC profile, although this is to be expected (see Section 4.2) due to dilution of the lensing signal by cluster galaxies. In Fig 7 the derived gas entropy profile for the MMMC is compared with those from the best-fit X-ray models within the central 500 kpc region, where the MMM shows a noticeable rise. It can be seen that the MMMC provides a flatter central entropy distribution, such that the gas is less likely to be subject to convective instability.
ANALYSIS OF GALAXY MOTIONS
The dynamics of cluster galaxies provide a further way of investigating the mass distribution in clusters. In the case of A2218, galaxy redshifts are available only within a few core radii of the centre. However this includes the region where the lensing analysis of Kneib et al. (1995), and the high resolution X-ray observations of Markevitch (1997), suggest that the potential may be seriously disturbed.
Galaxy velocity and position data have been obtained from the NASA Extragalactic Database. This consists largely of data from an extensive photometric survey of the cluster core. On the basis of the 3σ clipping technique of Yahil & Vidal (1977), 49 of the 53 galaxies with measured redshifts are identified as cluster members. All of these galaxies lie within 3′ of the X-ray centroid of the cluster, which has been adopted as the centre for this optical analysis. The average line of sight velocity dispersion, calculated using the 49 cluster galaxies, and corrected (Harrison 1974) to the cluster rest frame, is 1354 +176 −118 km s^−1. This galaxy distribution has been studied using the techniques described by Hobbs & Willmore (1997), using the Jeans equation to relate the spatial and velocity distributions of the galaxies to the gravitating mass profile for the cluster. The aim of such an analysis is to determine the radial behaviour of the anisotropy in the galaxy velocity distribution, since no information about galaxy orbits is otherwise available. Since the radial distributions of galaxy density and velocity are projected along the line of sight, the analysis requires an extrapolation to large radius of the X-ray determined mass profile, the galaxy velocity dispersion profile and the galaxy surface density profile. Extrapolating the galaxy profiles is particularly uncertain because data only extend out to ∼2′ (∼500 kpc) and include no information (except in projection) regarding the behaviour of these profiles beyond this region.
[Figure 6. The projected gravitating mass profile for the MMM (dotted line) is compared with that of the MMMC (solid line). Also shown is the 90% error envelope for the MMMC (dashed lines). Overlaid are the strong lensing points from Kneib et al. (1995) and Loeb & Mao (1994) (boxes) together with the weak lensing points of Squires et al. (1996) (crosses). Also shown are the weak lensing points after correction for cluster mass (as predicted by the MMMC) within the lensing reference annulus (diamonds). It can be seen that the MMMC provides a good match to both the strong and weak lensing points (especially after re-normalisation of the latter).]

The anisotropy is studied through β, the anisotropy parameter, which is 1 in the case of purely radial orbits, 0 for an isotropic velocity distribution and increasingly negative as the orbits become predominantly circular, with the limiting case of −∞ for purely circular orbits. It is unlikely, however, that the full range of allowed values of β is covered in any cluster. On the basis of numerical simulations, Yepes & Dominguez-Tenreiro (1992) found that over the lifetime of most clusters, anisotropies corresponding to a β more negative than −1.5 are unlikely to have had time to develop.
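For reference, the standard definitions assumed in this kind of analysis (the text does not restate them) are

β(r) = 1 − σt²(r)/σr²(r),

where σr and σt are the radial and tangential velocity dispersions, together with the spherical Jeans equation,

d(ν σr²)/dr + 2β(r) ν σr²/r = −ν(r) G M(r)/r²,

where ν(r) is the galaxy number density and M(r) the gravitating mass profile.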
For the galaxy surface density profile, a standard modified-Hubble profile, Σ(rp) = Σ0 [1 + (rp/rc)²]^(−1), with a canonical core radius of 250 kpc, has been used. The same parameterisation has been adopted by Natarajan & Kneib (1996), who carried out an analysis rather similar to that presented here. The line of sight velocity distribution has been fitted by a linear ramp model using a maximum likelihood method. The best fit has a central velocity dispersion of 1478 +345 −321 km s^−1, while the gradient, although consistent with zero, is such that the dispersion falls with radius, with a gradient of −114 +257 −240 km s^−1 arcmin^−1. The results of the analysis are shown in Fig 8, in which the calculated anisotropy parameter profiles for both of the most interesting X-ray mass models (the MMM and MMMC) are shown. Although the shape of the MMMC anisotropy profile is in detail different from those presented by Natarajan & Kneib (1996), the conclusion is the same: there is a divergence in the degree of anisotropy in the core of A2218. This is a result of the high central line of sight velocity dispersion being inconsistent with the mass provided by either the MMM or MMMC.
In our anisotropy profiles, the anisotropy plummets to −∞ at a non-zero radius (within the central ∼ 1 ′ in the case of the MMMC model) and within this radius the solution is unphysical, requiring an imaginary velocity dispersion. There are two possible reasons for this. Firstly, in comparison with other clusters, A2218 has an unusually large central velocity dispersion. Numerical simulations (Schindler & Bohringer 1993) have shown that the velocity dispersion can increase by up to a factor of 2 during a merger event. This occurs during the violent relaxation phase of the merger. At this time, the assumptions underlying any analysis based upon the Jeans equation are invalid. This provides one way in which the unphysical behaviour of the anisotropy parameter can be understood.
Secondly, the unphysical values of β may be indicating that the spherically symmetric model and assumptions that our solution uses may be in error. This is supported by several independent lines of evidence which suggest that the cluster core is disturbed, on the scale of ∼1′. Kneib et al. (1995) find that the strong lensing data require a bimodal potential; Markevitch (1997) uses high-resolution X-ray data to show that the cluster strongly deviates from spherical symmetry in the core; and recently Girardi et al. (1997) showed that the combined galaxy spatial and redshift data indicate the presence of two merging galaxy subclumps. Thus, there are good reasons to believe that A2218 has recently undergone a merger event, upsetting the virial equilibrium in the cluster core.
In the case of the MMM model, the solutions are unsatisfactory throughout the region for which galaxy data are available, such that a more widespread upheaval would be required to account for the high velocity dispersion.
SUNYAEV-ZEL'DOVICH EFFECT
The Sunyaev-Zel'dovich microwave decrement (Sunyaev & Zel'dovich 1972) results from inverse-Compton scattering of cosmic background photons by electrons in the cluster gas. The magnitude of the effect depends upon the integral of the gas pressure along the line of sight and hence has a different dependence on gas density than the X-ray surface brightness. By combining an analysis of the X-ray emission with the observed SZ effect, it is possible to determine the distance of the cluster, and hence H0. Recent measurements (Birkinshaw & Hughes 1994;Jones et al. 1993;Saunders 1996) have been used to place constraints upon the Hubble constant in this manner. Birkinshaw & Hughes (1994) measure the decrement in a linear strip across the cluster, deriving the 1D profile. Jones et al. (1993) and Saunders (1996) work with 2D images of the decrement, allowing construction of a parameterised model for the observed microwave decrement.
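The pressure integral itself is simple to write down; a hedged sketch of the Compton y-parameter at projected radius R (cgs lengths, kT in keV; the callables `n_e` and `kt_kev` stand in for whatever model profiles are adopted):

```python
import numpy as np
from scipy.integrate import quad

SIGMA_T = 6.652e-25    # Thomson cross-section, cm^2
ME_C2_KEV = 511.0      # electron rest energy, keV

def compton_y(R, n_e, kt_kev, r_max):
    """y(R) = (sigma_T / m_e c^2) * integral of n_e kT along the line of
    sight, truncated at r_max; Delta-T/T = -2y in the Rayleigh-Jeans limit."""
    z_max = np.sqrt(max(r_max ** 2 - R ** 2, 0.0))
    integrand = lambda z: n_e(np.hypot(R, z)) * kt_kev(np.hypot(R, z))
    return 2.0 * (SIGMA_T / ME_C2_KEV) * quad(integrand, 0.0, z_max)[0]
```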
In Fig 9 the predictions of our X-ray analysis are compared with the measurements of Birkinshaw & Hughes (1994) and the allowed envelope of Jones et al. (1993), assuming H0 = 50 km s^−1 Mpc^−1. The typical statistical uncertainty for a single model is similar to the scatter between the best-fit model predictions; both of these are small compared to the SZ error envelope.
At radii greater than 100 kpc, the observed and predicted decrements are in excellent agreement. However, at smaller radii, the decrement observed by Birkinshaw & Hughes (1994) lies significantly below the X-ray model predicted profiles. Beam-switching single-dish measurements are inherently prone to baseline errors and beam dilution. Thus our analysis is not based upon the Birkinshaw & Hughes (1994) data, but instead makes use of their results only to allow a comparison with earlier studies.
These problems are not shared by the observation of Jones et al. (1993), whose calculated envelope encompasses both the Birkinshaw & Hughes (1994) observations and the majority of the predicted profiles for H0 = 50 km s^−1 Mpc^−1, with the MMM being an important exception. This discrepancy occurs because the MMM requires a steep gas temperature gradient at small radii (see Fig 2), resulting in a high prediction for the central decrement. If the gas is assumed to be isothermal (which is not statistically allowed by our X-ray data) the predicted SZ decrement at large radius is significantly greater than that from the best-fit ASCA models (see Fig 9). However, the isothermal model cannot be discriminated against on this basis as it remains consistent with the large SZ error envelope.
[Figure 8. The anisotropy parameter for the two mass models discussed in the text is plotted as a function of radius. The anisotropy derived using the MMMC mass profile is indicated by the solid line, together with its 90% uncertainty envelope (dashed lines). The MMM (dash-dot line) is shown together with its upper error bound (dash-3dot line) only, since the lower bound is unphysical at all radii. The MMMC is acceptable at r ≳ 1′, while the MMM is only allowed at radii beyond the observed edge of the optical data. Both models predict predominantly radial orbits at large radius. The β = −1.5 limit refers to the constraint from simulations (referred to in the text) that anisotropies more negative than −1.5 should not have had time to develop.]

If an alternative value of H0 is assumed, the predicted decrements obtained from the best-fit X-ray models can be varied to achieve consistency with the SZ observations. The dependency is such that, for a given X-ray surface brightness, the predicted SZ decrement varies as H0^−0.5. Hence, assuming a higher Hubble constant will lead to a lower X-ray predicted decrement. The 2D SZ observation of Jones et al. (1993) and Jones (1995) is ideal for such a comparison because, first, it avoids the uncertainty inherent in the Birkinshaw & Hughes (1994) analysis and, second, upper and lower bounds to the allowed decrement are derived. However, it should be noted that these decrements have been analytically parameterised to allow a fit to the mosaiced SZ image and hence include a degree of model dependence (Jones 1995). With this caveat in mind, it is possible to determine the range of H0 allowed by the SZ and X-ray results.
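One way to see the quoted scaling (a standard argument, not spelled out in the text): the observed X-ray surface brightness fixes ∫ ne² dl, and physical path lengths scale as dl ∝ H0^−1, so holding the surface brightness fixed requires ne ∝ H0^1/2. The decrement then scales as

ΔT ∝ ∫ ne T dl ∝ H0^1/2 × H0^−1 = H0^−1/2.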
Under the requirement that at least one of the ASCA best-fit models must be consistent with the results of Jones et al. (1993), we find that H0 can be limited to the very conservative range 37-230 km s^−1 Mpc^−1. The lower bound is obtained by determining what value of H0 is required to make the SZ prediction from the LTF model (which predicts the lowest central decrement of any of the models) equal to the Jones et al. (1993) upper bound. The upper bound, which is clearly ruled out by other H0 determinations, is obtained by determining the value of H0 which would decrease the SZ prediction from the TTF model (which provides the highest central decrement of any of the models) such that it matches the Jones et al. (1993) lower bound. Using a value of H0 outside these bounds results in all of the ASCA best-fit models becoming inconsistent with the SZ observations.
The main conclusion of this analysis is that only weak constraints on the Hubble constant can be obtained, with even the very low McHardy et al. (1990) result being allowed within its errors.
However, this determination neglects an important additional constraint, namely the gravitating masses extracted from lensing analysis. If we consider only the MMM and MMMC, which have been shown to be the preferred models when strong lensing is taken into account, the allowed range of H0 is more tightly constrained.
The predicted central decrements from the MMM and MMMC (for H0 = 50 km s^−1 Mpc^−1) are 4.8×10^−4 and 4.0×10^−4 respectively. These are compared to the Jones et al. (1993) upper bound of 4.0×10^−4. To achieve consistency between these results, H0 must be > 62 km s^−1 Mpc^−1 for the MMM and > 50 km s^−1 Mpc^−1 for the MMMC. Lower values of the Hubble constant ensure that the predicted decrements do not lie within the Jones et al. (1993) bounds.
If the constraint of isothermality (at the fitted temperature of 7.9 +0.7 −0.6 keV) is applied, the predicted central decrement is 3.7×10^−4. When the value of H0 is allowed to vary, the range allowed by the Jones et al. (1993) bounds is 30-105 km s^−1 Mpc^−1. This is consistent with the value of H0 = 38 +18 −16 km s^−1 Mpc^−1 derived by Jones (1995). In summary, the constraints currently available from SZ observations combined with X-ray measurements are too weak to constrain the Hubble constant to greater accuracy than 37-230 km s^−1 Mpc^−1 (Jones 1995). However, when lensing observations are introduced (which favour the MMMC model) the Hubble constant is required to be greater than 50 km s^−1 Mpc^−1. This result highlights the importance of using both non-isothermal gas models and an approach which incorporates constraints in addition to those provided by the X-ray and SZ observations alone.
DISCUSSION
We have analysed ROSAT PSPC and ASCA GIS X-ray data, fitting spherically symmetric emission models to allow the extraction of cluster properties. The use of ASCA data allows the gas temperature structure to be discerned, a significant advance over earlier instruments. The analysis procedure utilises this information by fitting a variety of parametric forms for gas density and gas temperature or gravitating mass. This avoids restricting the unknown temperature profile to a single form.
Since the analysis presented here removes the constraint of isothermality, it is important to understand the effect that this assumption has. When isothermality is applied to Equation 1, it degenerates to

Mgrav(r) = −(k Tgas r / G μ mH) (d ln ρgas / d ln r).

Hence the shape of the mass profile is constrained by the gas density profile alone. Since the latter is commonly taken to follow a King model, the gravitating mass distribution is forced to take the form of an isothermal sphere, with ρ(r) ∝ r^−2 at large radius and flattening within a region determined by the gas core radius.
The main results are: 1) Comparison with strong lensing indicates that the previously reported discrepancy is not present when the MMM is used. A consequence of the extra mass required within the critical radius is a high central gas temperature, which may be related to the disturbed nature of the core. So, by relaxing the assumption of isothermality and using an appropriate parameterisation for the dark matter distribution (one which includes a mass cusp), the observed X-ray and strong lensing data can become consistent.
2) The suggestion, by Makino (1996), that the cD contributes a significant fraction of the cluster mass at small radius has been tested. The maximum central mass which can be added to the MMM, before the Cash statistic shifts beyond the 95% error level, is 1.7×10^12 M⊙ (within 100 kpc). Thus the massive galaxy required by Makino (1996) is ruled out, if the underlying DM distribution follows the form of the MMM model. The effect of adding the maximum allowed central galaxy mass is to extend the DM distribution. Because of this, the MMMC is found to be more consistent than the MMM with the outer two data points extracted from the Kneib et al. (1995) strong lensing analysis.
3) At larger radii, 200-1000 kpc, the projected mass profiles of the MMM and MMMC are consistent with the weak lensing results of Squires et al. (1996). When the slope of the mass profile is considered, the MMMC becomes the favoured model, as the MMM distribution is considerably flatter than the trend of the weak lensing data. It is important to recognise that the weak lensing analysis provides only a lower limit to the actual mass distribution (since cluster mass is known to reside in the weak lensing control annulus). Hence, although the MMMC provides more mass than the weak lensing results currently allow, this discrepancy may be resolved when statistical lensing of the background galaxy population is observed to greater radii (so that a control annulus beyond the edge of the cluster can be used).
4) The steep central rise in temperature required by the MMM leads to a corresponding increase in entropy. Under these conditions the gas is likely to be convectively unstable within ∼120 kpc. This is either a reflection of the true physical state of the ICM or a model-dependent effect, related to the parametric forms assumed for the gas density and temperature. Evidence for the former is provided by Markevitch (1997), whose HRI analysis indicates the existence of central substructure, perhaps as a result of merging activity. If this is the case, the gas is likely to have been violently shock-heated, such that the assumption of equilibrium within the cluster core is no longer secure. On the other hand, the problem of convective instability is lessened with the MMMC, due to the lower fitted central temperature. In addition, replacing the King gas density profile with an NFW description can resolve the problem entirely. This is a consequence of allowing the density to rise, rather than flatten, in the cluster core.

5) Analysis of galaxy orbits provides another probe of the core region of A2218. Using the derived anisotropy of the galaxy velocities, it is possible to extract information about the gravitational potential and possible presence of substructure. Beyond the central 1′ region, the velocity dispersion data are consistent with the MMMC. At smaller radii, a physically reasonable solution for the anisotropy parameter cannot be attained with any of the derived X-ray mass distributions. This suggests that the central velocity dispersion has been raised to its high observed value during the violent relaxation phase of a merger event. Thus, even though the galaxy data are too poorly sampled spatially to indicate the presence of substructure, the velocity information supports the analyses of Kneib et al. (1995) and Markevitch (1997).

6) Comparison with 2D Sunyaev-Zel'dovich observations indicates that the Hubble constant is only weakly constrained, to a range of 37-230 km s^−1 Mpc^−1. This is due to the high level of uncertainty which occurs when errors in the SZ and X-ray data are combined. More restrictive constraints on the value of H0 can be obtained if additional information is used, such as lensing observations. When the MMM and MMMC are combined with the SZ data, low values of the Hubble constant are ruled out and we find that H0 > 50 km s^−1 Mpc^−1. This result conflicts with the previously determined value of 24 +23 −10 km s^−1 Mpc^−1 (McHardy et al. 1990), but agrees with the later estimate of 65 +25 −25 km s^−1 Mpc^−1 (Birkinshaw & Hughes 1994). The differences between these results, and that derived here, are dominated by several factors. The first, and most important, of these is that an unwarranted assumption has been made about the gas temperature in these earlier studies: that it is isothermal. This biases the analysis and also results in overoptimistic error estimates. Secondly, McHardy et al. (1990) and Birkinshaw & Hughes (1994) both use (different) 1D SZ measurements, which are prone to baseline uncertainties. Third, these studies use the less accurate gas density information from Einstein, rather than ROSAT.
Note, however, that these results are all based upon the commonly made assumption that the X-ray and SZ observations can be made consistent purely by manipulating the value of H0. If the cluster gas, at small radius, is not in hydrostatic equilibrium, this methodology may be in error.
CONCLUSIONS
Combining the above results, it appears that A2218 consists of two distinct regions. At small radii (≤ 1′), the X-ray morphology is extremely disturbed (Markevitch 1997), with both the strong lensing (Kneib et al. 1995) and galaxy data (Girardi et al. 1997) indicating the presence of bimodal structure. An X-ray model (the MMMC) consistent with the strong lensing data requires the gas temperature to rise steeply within the core, possibly at such a high rate that the gas is convectively unstable. When this model is combined with the observed galaxy data, a physical solution cannot be obtained for galaxy orbits within the core. Taken together, these results suggest that A2218 has recently undergone a merger, shock-heating the gas and disturbing the equilibrium of the components within the cluster. This explains the lack of a cooling flow in the system.
Outside the core, our results show that the data can be consistently interpreted on the basis of a cluster in equilibrium. The gas temperature falls to < 10 keV, consistent with the observed galaxy velocity dispersion, and the gas entropy increases with radius. The MMMC provides a mass profile consistent with both the outer two strong lensing points and the weak lensing mass profile (with the proviso that the latter is likely to represent an underestimate of the true mass profile). When the MMMC is combined with the galaxy velocity data, a physical solution for the galaxy orbits is recovered. The SZ data, which are sensitive to the gas pressure at large radius, are consistent with the MMMC when H0 ≥50 km s −1 Mpc −1 .
Thus, while it is premature to regard a model such as the MMMC as a complete description of the physical structure of A2218, it does appear to explain all the data available beyond the disturbed core. To probe further, more detailed X-ray and weak lensing observations are required.
One of the principal conclusions to be drawn from the present analysis is that it is dangerous to assume an isothermal ICM without supporting evidence. This can lead to biased conclusions and underestimated errors.
"year": 1998,
"sha1": "bd315637fed91462f096d4670bd68ff2f99f7433",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/302/1/9/3529212/302-1-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "54c69addab3d52dfd94b0203761e9a4a91ea9161",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Genetic prion disease: D178N with 129MV disease modifying polymorphism—a clinical phenotype
Background: Human prion diseases are a group of rare neurological diseases, with a minority due to genetic mutations in the prion protein (PRNP) gene. The D178N mutation is associated with both Creutzfeldt-Jakob disease and fatal familial insomnia, with the phenotype modified by a polymorphism at codon 129: the methionine/valine (MV) polymorphism is associated with atypical presentations leading to diagnostic difficulty. Case: We present a case of fatal familial insomnia secondary to a PRNP D178N mutation with the 129MV disease modifying polymorphism, who had no family history, normal MRI, electroencephalography (EEG), cerebrospinal fluid (CSF) and positron emission tomography findings and a negative real-time quaking-induced conversion result. Conclusion: Patients with genetic prion disease may have no known family history and normal EEG, MRI brain and CSF findings. PRNP gene testing should be considered for patients with subacute progressive neurological and autonomic dysfunction.
CASE REPORT
A Caucasian, left-handed office worker in her early 40s presented with a 12-month history of progressive symptoms which began with intermittent diplopia and lightheadedness; neurological examination was initially normal. Over the following months, she developed cerebellar and brainstem dysfunction with macrosaccadic oscillations, saccadic smooth pursuit and bilaterally inaccurate saccades with significant leftwards hypermetric saccades (figure 1). Worsening dysarthria and limb and truncal ataxia resulted in numerous falls and communication issues, forcing her to leave her job. Focal seizures began 4 months post disease onset, with speech arrest preceded by an autoscopic phenomenon in which she felt 'to the right' of herself. Focal to bilateral tonic-clonic seizures occurred 7 months post disease onset, with post ictal left hemiparesis and expressive dysphasia.
She was admitted 8 months into the disease course, at which point strength and sensory testing were unremarkable. However, there were upper motor neuron signs of upgoing plantar reflexes, generalised hyperreflexia and lower limb spasticity, in addition to severe cerebellar findings resulting in an inability to mobilise independently. While she was alert and able to hold an appropriate conversation, an Addenbrooke's cognitive assessment revealed diffuse cortical dysfunction, with a total score of 73/100 at 9 months post disease onset. In particular, there was executive dysfunction with deficits in letter more than category fluency and poor planning with visuospatial tasks. Memory was affected with impaired delayed recall, but there was relative sparing of language domains. Autonomic dysfunction with persistent borderline tachycardia of 90-100 bpm and a postural systolic blood pressure drop of 50-60 mm Hg was noted throughout her inpatient stay. Furthermore, she had 6 kg of weight loss over 6 months despite a normal oral intake and no symptoms of malabsorption.
Her medical history was significant for long-standing mild insomnia, which had not worsened in recent months, with a normal sleep-wake cycle in hospital, although formal sleep studies were not performed. She also had a history of depression and anxiety since her teenage years. Family history was incomplete as the patient's father was adopted and she had lost contact with her maternal extended family. However, her parents, her two older brothers and one older maternal half-brother, and her four nieces and nephews had no neurological issues.
Given the presentation with subacute ataxia, dysautonomia, cognitive dysfunction and seizures, autoimmune, atypical infectious and rapidly progressive neurodegenerative conditions were considered. MRI brain, positron emission tomography (PET) brain and cerebrospinal fluid (CSF) studies, including 14-3-3 protein, tau and real-time quaking-induced conversion (RT-QuIC), were unremarkable, as was testing for anti-neuronal, glutamate decarboxylase, tissue transglutaminase and antigliadin antibodies. Interictal electroencephalography (EEG) was normal and post ictal EEG showed non-specific frontal intermittent rhythmic delta activity. A small bowel biopsy did not have evidence of Whipple's disease and syphilis antibody testing was negative. Genetic testing for spinocerebellar ataxias 1, 2, 3, 6, 7 and 15, Friedreich's Ataxia and Huntington's disease was unremarkable. 24-hour urinary copper and organic and amino acid screens for inherited metabolic disorders were also negative.
Given the clinical features and rapidly progressive course, prion protein (PRNP) genetic analysis was performed and revealed a heterozygous pathogenic variant, D178N (aspartic acid to asparagine substitution), in the PRNP gene, combined with a heterozygous MV (methionine/valine) disease-modifying polymorphism at codon 129, a combination associated with Creutzfeldt-Jakob disease (CJD) and fatal familial insomnia (FFI).
DISCUSSION
Prions are pathogenic misfolded proteins that replicate by causing conformational change and misfolding in neighbouring proteins. This leads to an exponential increase in prion formation with consequent neuronal damage. 1 Since their discovery, several rapidly progressive human neurological diseases have been shown to be due to prions; these human prion diseases include CJD, Kuru and various genetic prion diseases. 1 More recently, the pathogenesis of other neurodegenerative diseases, including the α-synucleinopathies and tauopathies, has also been hypothesised to involve prion-like spread. 2 In addition, multiple system atrophy, which can present with rapidly progressive cerebellar and autonomic dysfunction akin to the classic human prion diseases, has been shown to be transmissible following animal inoculation with brain homogenate from deceased patients. 3 While the implications of prions in human disease are broad, this discussion will focus on the classic human prion diseases.
Human prion diseases are rare, with an annual incidence of around 1-1.5 per million people. 4 5 Genetic prion diseases account for around 8%-15% of human prion disease 1 4 5 and are classified into three clinicopathological subtypes: genetic CJD, FFI and Gerstmann-Straussler-Scheinker syndrome (GSS syndrome). Over 30 PRNP gene mutations 1 have been associated with human genetic prion diseases, with significant genotype-phenotype variability and differing age-dependent penetrance.
The D178N mutation in the PRNP gene is associated with either a CJD or an FFI phenotype. The clinical presentation can be modified by a polymorphism at codon 129 of the PRNP gene: methionine homozygosity (129MM) at this position is associated with an FFI phenotype and valine homozygosity (129VV) with a CJD phenotype, although significant clinical phenotypic overlap exists. 6 7 CJD characteristically presents with rapidly progressive cognitive impairment, cerebellar dysfunction, behavioural or psychiatric disturbance and visual changes, with later development of extrapyramidal and pyramidal symptoms and myoclonus. 1 Compared with sporadic CJD, familial cases tend to present earlier and progress more slowly. 8 FFI typically presents with sleep disturbance, dysautonomia and visual deficits, with subsequent development of extrapyramidal signs, hallucinations, disorientation and cerebellar signs, and later myoclonus and pyramidal signs. 9 Our patient presented with an atypical clinical syndrome, with prominent early ataxia and less pronounced cognitive deficit and sleep disturbance, in association with heterozygosity at codon 129. A D178N mutation with codon 129 heterozygosity (129MV) is less commonly reported but can also present clinically as CJD or FFI, typically with a longer disease duration than in codon 129 homozygotes. 6 10 11 In terms of clinical phenotype, a study by Krasnianski et al suggested that 129MV patients with FFI present with visual changes and ataxia earlier than their 129MM counterparts (6 vs 13 weeks and 9 vs 21 weeks, respectively) and tend to have a later onset of sleep disturbance (15 vs 3 weeks), hallucinations (57.5 vs 16 weeks), spatial disorientation (38.5 vs 20 weeks) and myoclonus (32 vs 16 weeks). 10 Montagna et al found similar phenotypic differences but also noted that 129MV patients were more likely to suffer from tonic-clonic seizures. 11 This phenotype of FFI aligns with our case's presentation of early ataxia and visual disturbance with relatively preserved cognition and sleep-wake cycle.
Despite FFI being an autosomal dominant condition, our patient had no known family history. While a family history may be identified in 76%-92% of patients with a D178N mutation, 7 12 not all patients will have a positive family history, likely owing to a combination of misdiagnosis of family members, variable age-dependent penetrance and sporadic mutations in index cases.
While FFI patients with the D178N mutation frequently do not have a positive CSF 14-3-3 protein 7 9 and have only non-specific changes on EEG and MRI brain, 7 9 they often display thalamic and/or cortical hypometabolism, even pre-symptomatically, on [18F]fluorodeoxyglucose PET scan, 11 13 which our patient did not.
RT-QuIC is a relatively new technique that detects PrP Sc (the pathological scrapie isoform of the prion protein) in tissue and CSF 14 and has a reported specificity of 98% and a sensitivity that varies with the genetic mutation. 15 In contrast, CSF 14-3-3 protein and tau have much lower specificities of 63% and 46%, respectively. 15 In FFI cases, only 7%-8.3% of patients demonstrate CSF 14-3-3 protein or tau positivity. 9 14 In comparison, RT-QuIC positivity is significantly higher but varies widely between studies, from 17% in a Chinese study to 57% in a German study and 83% in a Japanese study. 14 16 17 The significant discrepancy in sensitivities between studies may reflect different study populations with different disease modifiers, or different testing protocols. While it is clear that the PRNP codon 129 polymorphism affects the clinical expression of the D178N mutation, it is not certain whether it affects RT-QuIC positivity, and most studies contained only 129MM homozygotes or very few 129MV heterozygotes. 14 16 17 While RT-QuIC is a promising new technique in the diagnosis of prion diseases, it may not be as useful in FFI cases with a PRNP D178N mutation and 129MV polymorphism, as highlighted by our case.
This case is an uncommon presentation of a rare genetic prion disease. The patient presented without a known family history and with normal CSF, MRI and PET brain scans and interictal EEG. Promising new techniques such as RT-QuIC, which is highly sensitive in genetic CJD and GSS syndrome, 14 16 may not be as reliable in the FFI cohort, perhaps especially in patients with the less common codon 129MV disease-modifying polymorphism, although further studies are required.
The variability in presentation and the insensitivity of investigations in detecting genetic prion diseases, as illustrated by this case, emphasise the importance of maintaining a high level of suspicion for these conditions. PRNP genetic testing is crucial to obtaining the correct diagnosis in this situation, which in turn allows for the appropriate counselling and management of patients and their families. | 2020-10-28T18:34:11.286Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "391973c5e8447089547e9d98bc2c43903f167acd",
"oa_license": "CCBYNC",
"oa_url": "https://neurologyopen.bmj.com/content/bmjno/2/2/e000074.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0d3bf74b747ce64f5901fe112ad630413b6d880",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239052202 | pes2o/s2orc | v3-fos-license | Smoking tobacco is associated with renal hyperfiltration
Abstract Tobacco consumption is a renal risk factor, but its effects on the estimated glomerular filtration rate (eGFR) remain unclear. We aimed to evaluate the possible impact of using tobacco products (smoking and snus) on eGFR based on creatinine or cystatin C. We used a first cohort with 949 participants and a second cohort with 995 participants; none had pre-existing renal disease. All subjects donated a blood sample and completed a questionnaire, including questions about tobacco use. To assess the effect on eGFR, hierarchical multiple linear regression models were used. Active smoking associated independently with a higher eGFR creatinine in all subjects (p < 0.001; β = 0.11). Further analyses stratified for sex showed similar findings for men (p < 0.001; β = 0.14) and for women (p = 0.026; β = 0.10). eGFR cystatin C was significantly associated with active smoking in all subjects (p = 0.040; β = −0.05), but no association was seen after stratification for sex. Snus did not associate with eGFR. In conclusion, smoking associated significantly with a higher eGFR creatinine. The mechanism may be renal hyperfiltration of smaller molecules such as creatinine. This is probably caused by substances from smoked tobacco other than nicotine, as no effect was seen for snus.
Introduction
The detrimental effects of tobacco consumption are well known. Smoking is a significant risk factor for multiple diseases, such as atherosclerosis, cancer, and chronic respiratory disease [1,2]. The World Health Organization (WHO) estimated that tobacco products were responsible for about 8 million premature deaths worldwide in 2017 [3]. Furthermore, the use of tobacco products remains one of the leading causes of premature death and global disease burden [4].
There is also evidence suggesting that smoking has a negative impact on renal function. Harmful outcomes such as an increased risk for end-stage renal failure in renal patients, progression of nephropathies, and diabetic nephropathy in people with diabetes are all relatively well-documented effects of smoking [5]. Smoking is thus an established renal risk factor in individuals already diagnosed with renal disease. However, in individuals without pre-existing renal disease, previous studies have been more inconclusive.
Few studies have investigated the effects of both smoking and snus consumption; in the study by Ekberg et al., smoking was associated with a higher GFR, whereas no association was seen for snus [21]. It is thus unknown if tobacco products other than smoking are associated with altered renal function.
Both creatinine and cystatin C are filtered over the glomerular membrane in the kidney with different size-dependent sieving coefficients and both can reliably be used to calculate the estimated glomerular filtration rate (eGFR).
The aim of this study was to evaluate if there was a difference in the association of smoked tobacco and consumption of snus on eGFR in subjects without pre-existing renal disease. Another aim was to see if there was a difference between eGFR creatinine and eGFR cystatin C . To assess this, we used healthy controls from two previous population-based studies [22,23].
Study population
The participants in this study were all part of the Northern Sweden Health and Disease Study (NSHDS), which includes the Västerbotten Intervention Project (VIP), the Mammography screening program (MA), and the Northern Sweden WHO Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. These population-based cohorts have previously been evaluated [24].
VIP is an ongoing project offering systematic screening of risk factors and subsequent health counselling to all residents in Västerbotten county since 1985. The program is offered annually to all residents upon turning 30 (until 1995), 40, 50, and 60 and has previously been described in detail [25]. In MONICA, seven health surveys were performed between 1986 and 2014 with randomly selected individuals in Västerbotten and Norrbotten counties aged 25 to 74 years [26]. The MA cohort includes women aged 40 to 70 who were invited to the cohort while undergoing routine mammography between 1995 and 2006. Taken together, these cohorts included 140,414 participants up to December 2014, with an estimated participation rate of 65-75%.
In all three cohorts, the participants are asked to complete a questionnaire covering tobacco habits and to donate a blood sample to the Northern Sweden Medical biobank for future research. The clinical examinations conducted at inclusion have recently been described in detail [27].
The study was approved by the regional ethical board, Umeå, Sweden (Dnr 03-320, Dnr 07-174 M, Dnr 2014-348-32 M). All participants gave their informed written consent before inclusion in the NSHDS, and the study complied with the Declaration of Helsinki.
Identification of participants
We used the matched controls from two prospective case-control studies nested within the NSHDS cohort [22,23]. Cases in the first study were patients with a first-time myocardial infarction (MI; fatal or nonfatal) or suspected MI occurring prior to 1 January 2000. Two controls were matched for sex, age, geographic area, subcohort, and date of the health survey. Controls were excluded if they had cancer, stroke or myocardial infarction prior to the matched case's time of diagnosis. A total of 1054 controls were selected, of which 77 were excluded because of a missing tobacco status and/or eGFR variable, and 28 because they used both snus and smoked, leaving 949 individuals for analysis. In the second study, cases were patients who underwent valvular surgery due to aortic stenosis (AS) at Umeå University Hospital, Sweden [23]. For each case, two randomly selected controls were matched for sex, age (±2 years), geographic area, subcohort, and date of health survey (±4 months); altogether 1052 controls were identified, of which 57 were excluded because of missing tobacco status, missing eGFR, or use of both snus and smoking tobacco, leaving 995 individuals for analysis. No exclusions were made in the second cohort due to previous cardiovascular disease or cancer. A flowchart for both studies is shown in Supplementary Figure 1.
Diabetes was defined according to WHO guidelines as either self-reported diabetes, a fasting plasma glucose ≥7.0 mmol/L, and/or an oral glucose tolerance test with a 2-h postload plasma glucose ≥11.0 mmol/L (12.2 mmol/L in VIP, as capillary plasma was used). Smoking habits were self-reported and were categorized by whether the participants used tobacco products daily (smoking or snus) or not (including earlier tobacco use and never tobacco-users).
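As a minimal illustration, the classification rule above can be written out directly; this is our sketch, the function and argument names are hypothetical rather than the study's, and the logic simply restates the WHO-style definition given in the text.

```python
def has_diabetes(self_reported, fpg_mmol_l=None, ogtt_2h_mmol_l=None,
                 capillary_plasma=False):
    """Sketch of the WHO-style diabetes definition used in this study."""
    if self_reported:
        return True
    # fasting plasma glucose criterion
    if fpg_mmol_l is not None and fpg_mmol_l >= 7.0:
        return True
    # 2-h postload plasma glucose; VIP used capillary plasma (12.2 mmol/L cut-off)
    threshold = 12.2 if capillary_plasma else 11.0
    return ogtt_2h_mmol_l is not None and ogtt_2h_mmol_l >= threshold
```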
Sample collection and biochemical analysis
Venous blood samples were obtained after at least 4 h of fasting (extended to 8 h after 1992) in the VIP and MONICA projects. In MA, the samples were obtained throughout the day. Blood samples were drawn into evacuated glass tubes and centrifuged for 15 min at 1500 g to obtain heparinized plasma, which was aliquoted and stored at −80 °C before analysis.
Plasma samples were analyzed at the Department of Clinical Chemistry, Umeå University Hospital (Swedac accreditation no. 1397). In the first cohort, creatinine and cystatin C were analyzed with a Hitachi 911 multi-analyzer (Roche/Boehringer Mannheim, Germany). Creatinine was analyzed with kits from Roche (Crea plus, enzymatic method; Creatinine kit Cat. No. 11775669216), and creatinine results were IDMS-corrected. Cystatin C (Cat. No. LX 00210, calibrator X097401) was analyzed with kits from DAKO (Copenhagen, Denmark). In the second cohort, creatinine and cystatin C were analyzed on a Cobas 8000 modular analyzer, c502 module (Roche Diagnostics, Basel, Switzerland). Creatinine was analyzed with an IDMS-traceable enzymatic method, CREP2 (catalogue no. 03263991190), and cystatin C with Tina-quant Cystatin C Gen. 2, traceable to the ERM-DA471/IFCC standard (catalogue no. 06600239190) [23]. Both of these kits were from Roche Diagnostics (Basel, Switzerland).
Statistical analysis
Baseline characteristics are presented as medians (25th to 75th percentiles) for continuous variables and as proportions/percentages for non-continuous variables. We used the Mann-Whitney U-test for continuous variables and a Chi-squared test of independence for categorical variables to compare baseline characteristics between the tobacco groups. P-values < 0.05 were considered significant.
We calculated ln z-scores for eGFR creatinine and eGFR cystatin C separately for the two cohorts and separately for men and women. The two cohorts were then merged into one. For assessment of smoking, a hierarchical multiple regression model was used to predict eGFR creatinine and eGFR cystatin C levels. The predictor variables tobacco status (current smoker or non-tobacco user) and age were entered into the model in step 1. Sex, systolic blood pressure (mm Hg), BMI (body mass index), diabetes and study (first or second) were added to the model in step 2. Participants with missing data points were excluded pairwise. Preliminary analyses were carried out to ensure that the assumptions of sample size, outliers, multicollinearity, linearity, normality, and homoscedasticity were not violated.
For snus consumption, a similar hierarchical multiple regression model was used to predict eGFR creatinine and eGFR cystatin C levels. However, the predictor variable 'tobacco status' was defined as current snus user or non-tobacco user.
All calculations were performed with SPSS version 27 (IBM Corporation, New York, NY, USA).
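For readers who prefer code, the two-step model can be sketched as below. The DataFrame and column names (smoker, age, sex, sbp, bmi, diabetes, cohort) are hypothetical stand-ins for the study's variables; this is an outline of the analysis in Python/statsmodels, not the exact SPSS procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def ln_zscore(series):
    """z-score of the natural log, as applied to eGFR in this study."""
    logged = np.log(series)
    return (logged - logged.mean()) / logged.std(ddof=0)

df = pd.read_csv("cohorts.csv")  # hypothetical merged participant file
# standardize within cohort and sex before pooling, as described above
df["z_egfr_crea"] = (
    df.groupby(["cohort", "sex"])["egfr_creatinine"].transform(ln_zscore)
)

# Step 1: tobacco status (smoker assumed coded 0/1) and age only
step1 = smf.ols("z_egfr_crea ~ smoker + age", data=df).fit()
# Step 2: add sex, systolic blood pressure, BMI, diabetes and cohort
step2 = smf.ols(
    "z_egfr_crea ~ smoker + age + sex + sbp + bmi + diabetes + cohort",
    data=df,
).fit()

print(step1.rsquared, step2.rsquared)  # variance explained at each step
print(step2.params["smoker"])          # smoking coefficient on z-scored ln eGFR
```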
Results
Baseline characteristics for the first (n = 949) and second (n = 995) cohorts are shown in Tables 1 and 2. In the first cohort, both snus users and active smokers had a higher eGFR creatinine than non-tobacco users. Stratified for sex, this finding was also seen in men and in smoking women. eGFR cystatin C was lower in smokers compared to non-tobacco users. Snus users had a higher eGFR cystatin C compared to both smokers and non-tobacco users; these findings were also seen in men. In the second cohort, eGFR creatinine was higher in smokers than in non-tobacco users after stratification for sex. This was also seen for all snus users, although it was not significant for men (p = 0.06). Men using snus had a lower eGFR creatinine compared to smokers. eGFR cystatin C did not differ between smokers and non-tobacco users. In the second cohort, eGFR cystatin C was not higher among snus users.
Hierarchical multiple regression models for smoking and z-scores of ln eGFRs
For all subjects, current smoking (compared to not currently using tobacco products) and age were entered in Step 1, explaining 22% of the variance in eGFR creatinine and 14% of the variance in eGFR cystatin C (Supplementary Table 1).
Smoking associated with an increasing eGFR creatinine (p < 0.001; β = 0.11), whereas no association was seen for cystatin C. In step 2, sex, BMI, diabetes, systolic blood pressure and cohort were added. This model explained 24% of the variance in eGFR creatinine and 17% of the variance in eGFR cystatin C. Smoking still associated with an increasing eGFR creatinine (p < 0.001; β = 0.11). In contrast, smoking associated with a lower eGFR cystatin C, but only in step 2 (p = 0.040; β = −0.05). As sex was a significant predictor for eGFR, further analyses were stratified for sex, showing similar findings. In men, smoking associated significantly with an increasing eGFR creatinine (p < 0.001; β = 0.14) in step 1 (Table 3). After adjustments in step 2, smoking remained associated with an increasing eGFR creatinine (p < 0.001; β = 0.14). This was not seen for eGFR cystatin C. In women, smoking remained associated with an increasing eGFR creatinine after adjustments in step 2 (p = 0.026; β = 0.10). As for men, no association between smoking and eGFR cystatin C was seen for women.
Hierarchical multiple regression models for snus usage and z-scores of ln eGFRs
In total, only six women used snus in the two cohorts combined; thus, the analyses were conducted only for men. Snus usage (compared to not currently using tobacco products) did not associate with eGFR creatinine or eGFR cystatin C (Supplementary Table 2).
Discussion
The main finding in our study was that active smoking independently associated with an increasing eGFR creatinine after adjustments, an association seen in both men and women. In contrast, smoking associated inversely with eGFR cystatin C, although not in the analyses stratified for sex. Snus did not associate with eGFR. Little is known about tobacco consumption and its effect on renal function in individuals without pre-existing renal disease. An increased eGFR [11-15], a decreased eGFR [6,8,10], and a decreased measured GFR (mGFR) [7] have all been reported in previous studies. A higher eGFR has previously been called renal hyperfiltration, since a higher prevalence of proteinuria has been found simultaneously [12-14].
Some previous studies have reported an effect of current smoking on eGFR for both men and women, but with a more pronounced effect in men [12,13], and some studies included only men [14,15]. In our study, active smoking was still associated with a higher eGFR creatinine in both sexes after including factors previously associated with renal hyperfiltration [18-20,31], such as age, BMI, diabetes and systolic blood pressure.
To our knowledge, all previous studies on tobacco consumption reporting an increased eGFR evaluated only eGFR creatinine and not eGFR cystatin C [11-15], or used mGFR based on Cr-EDTA clearance [21]. In our study, we evaluated both eGFR creatinine and eGFR cystatin C. One study reported a decreased GFR and a higher risk for renal function decline based on MAG3 clearance but not for mGFR based on DTPA clearance [7]. Other studies measured urinary albumin excretion [9] or, when using eGFR, followed the participants for five to ten years until they developed CKD [6,8], thus possibly missing the early phase of renal disease. Renal hyperfiltration might be a marker of early glomerular damage, supported by a recent systematic review suggesting that 'glomerular hyperfiltration is thought to play an important role in the initiation of glomerular damage' [32]. Renal hyperfiltration has also been associated with increased cardiovascular risk, carotid plaques, rapid decline in renal function in people with diabetes, obesity and other metabolic parameters [16-20].
A possible mechanism behind the difference between the eGFR creatinine and eGFR cystatin C results is the difference in molecular size between cystatin C and creatinine, which affects their filtration in the renal glomeruli. This has been described as 'shrunken pore syndrome', defined as eGFR cystatin C being less than 60% of eGFR creatinine [33]. Interestingly, this has also been associated with several health consequences similar to those associated with renal hyperfiltration, such as increased mortality [34], aortic stenosis with concomitant atherosclerosis [23], accumulation of atherosclerosis-promoting proteins [35] and an increased risk of a future first-ever myocardial infarction in women [36].
In addition, recent studies have shown that eGFR equations based on plasma levels of cystatin C outperform eGFR equations based on creatinine in predicting outcomes such as end-stage renal failure, all-cause mortality, cardiovascular disease, CKD in the elderly, and glomerular filtration rate in people with diabetes [37-41]. This partly suggests a greater accuracy of cystatin C than creatinine as a filtration marker; on the other hand, it has also been shown that eGFR is associated with many cardiovascular and mortality risk factors independent of mGFR [42-45]. This indicates that eGFRs are not only markers of glomerular filtration but are also biomarkers dependent on cardiovascular risk factors, including smoking. These different factors more often affect eGFR cystatin C than eGFR creatinine. This might also explain the mechanisms behind the atherogenic potential of smoking and the tight relationship between CKD and CVD. Another possible explanation for the increased eGFR creatinine seen in smokers would be that smokers might have a lower muscle mass due to an overall lower level of fitness. This would also explain why eGFR cystatin C is similar or lower in smokers, since cystatin C does not depend on muscle mass. In our study, we did not measure muscle mass. However, BMI did not differ between smokers and non-smokers in the first cohort, but we found a difference in the second. When stratified for sex, no difference in BMI was seen in the first cohort; in the second cohort, a significantly lower BMI was seen for female smokers only (data not shown). However, since BMI does not take body composition into consideration and is not a reliable measurement of muscle mass, we cannot conclude whether the higher eGFR creatinine found in smokers depends on muscle mass.
It is unclear whether tobacco products other than smoking affect GFR, since we could find only one study assessing snus consumption [21]. This study reported no significant effect on mGFR in 13 snus users, which we could confirm in our study assessing 206 male oral snus users. Unfortunately, our cohorts did not include a sufficient number of women consuming snus to determine the effects on eGFRs in women. Some experimental studies have shown reduced glomerular filtration after nicotine administration, indicating that not only smoking but all kinds of nicotine administration influence glomerular filtration [46], but this was only seen in healthy non-smokers and not in chronic smokers. In our study, the null finding for snus users indicates that substances other than nicotine in smoked tobacco influence eGFR, as the nicotine metabolite cotinine has been shown to be higher among snus users than smokers [47]. The main strength of the study is that the participants are from cohorts that have been evaluated as population-based [24]. VIP has also been evaluated for participation trends, showing minimal differences in, e.g., age and education between participants and non-participants, and no declining participation rate [48]. We also used traceable creatinine and cystatin C methods, and creatinine was analyzed with an enzymatic method that is not sensitive to pseudo-creatinines.
Our study has several limitations. Firstly, smoking has been associated with eGFR in a dose-dependent manner [10,11]. However, we could not assess a dose-dependent association between tobacco and eGFR, since we did not have sufficient information about the amount of tobacco consumed by our participants. Secondly, our study did not include a sufficient number of smokers to stratify into current smokers, ex-smokers, and non-smokers and still retain statistical power, making it impossible to draw conclusions on whether the effects of smoking on eGFR are reversible upon smoking cessation. On the other hand, it has been stated that eGFR was significantly higher in current smokers (both light and heavy smokers) compared to both ex-smokers and non-smokers [11,14], indicating a reversible effect. Another limitation was that the analyses of creatinine and cystatin C were performed at different time points and, for cystatin C, with different methods. To minimize the effect of this, we used z-scores derived separately from the first and second cohorts.
In conclusion, we provide evidence that smoking contributes significantly to a higher eGFR creatinine in men and women. The mechanism behind this association might be renal hyperfiltration of smaller molecules, including creatinine, as this was not seen for the larger molecule, cystatin C. This is probably caused by substances from smoked tobacco other than nicotine, as no effect was seen for snus. Further longitudinal studies, including both eGFR creatinine and eGFR cystatin C, are needed to determine the effects of different routes of tobacco administration on eGFR, including possible dose-dependency and reversibility upon smoking discontinuation. | 2021-10-22T06:53:19.010Z | 2021-10-20T00:00:00.000 | {
"year": 2021,
"sha1": "96fe004d44ac55f651c30a359259a08ac617a17b",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/00365513.2021.1989713?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "860b4bd6bf924f86dbc42f3c014e252ba4f170b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
656173 | pes2o/s2orc | v3-fos-license | Theory-Guided Design of Organic Electro-Optic Materials and Devices
Integrated (multi-scale) quantum and statistical mechanical theoretical methods have guided the nano-engineering of controlled intermolecular electrostatic interactions for the dramatic improvement of acentric order and thus electro-optic activity of melt-processable organic polymer and dendrimer electro-optic materials. New measurement techniques have permitted quantitative determination of the molecular order parameters, lattice dimensionality, and nanoscale viscoelasticity properties of these new soft matter materials and have facilitated comparison of theoretically-predicted structures and thermodynamic properties with experimentally-defined structures and properties. New processing protocols have permitted further enhancement of material properties and have facilitated the fabrication of complex device structures. The integration of organic electro-optic materials into silicon photonic, plasmonic, and metamaterial device architectures has led to impressive new performance metrics for a variety of technological applications.
Introduction
A number of factors have motivated interest in organic electro-optic (OEO) materials over the past two-plus decades. The ubiquitous femtosecond phase relaxation times of π-electron chromophores raise the possibility of terahertz bandwidth in device applications [1-4]. This potential has been demonstrated in all-optical switching and optical rectification [1,2], terahertz generation and detection [3], and in femtosecond pulse experiments [4]. However, more frequently, device bandwidth will be defined by the properties of other materials, such as the resistivity of electrode materials (e.g., metals or doped silicon) used in devices such as Mach Zehnder modulators [5,6]. Electrode resistivity commonly limits device bandwidths to 100 GHz or less. Thus, optimization of device bandwidth has largely been an issue of device design (electrode engineering), with shorter electrical/optical interaction lengths leading to higher bandwidth. If a material with large electro-optic activity can be utilized in device fabrication, a shorter device length can still be employed for realization of high bandwidth without device drive voltages becoming excessively high. In other words, drive voltages and bandwidth are related and can be traded off.
A second factor that has motivated interest in OEO materials is the potential for exceptional electro-optic activity, i.e., large electro-optic coefficients. In the late 1980s and 1990s, belief in this potential was a major driving force for synthesizing new materials, although during that period the electro-optic activity of OEO materials essentially remained less than that of lithium niobate (i.e., below 30-32 pm/V). There are two reasons for the early failure to realize large electro-optic activity: (1) difficulties in producing chromophores with large molecular first hyperpolarizability that also exhibited the prerequisite thermal, chemical, and photochemical stability, and (2) difficulties associated with translating large molecular optical nonlinearity into large macroscopic (bulk) material optical nonlinearity (electro-optic activity). A major advance in overcoming the first roadblock was achieved in the late 1990s with the introduction of chromophores based on the tricyanovinylfuran (TCF) acceptor moiety (see Figure 1) [7]. As illustrated in Figure 1, OEO chromophores are charge-transfer (dipolar) molecules that can be described as constituted from donor, bridge, and acceptor modules [7]. Throughout the history of OEO research, the donor moiety has largely been of the amine structure shown. Bridge moieties have consisted of heteroaromatic or polyene structures, including isophorone-protected polyene structures (see Figure 1). Early OEO materials contained acceptor moieties based on nitro, cyano, alkoxy, isoxazolone, tricyanovinyl, 1,3-bis(dicyanomethylidene)indane, and diarylthiobarbituric acid groups [7]. Introduction of the TCF acceptor led to a significant increase in molecular first hyperpolarizabilities (β) and exceptional stability [7], as is illustrated in Figure 2 [8].
Figure 2. The chronological variation of the product of chromophore first hyperpolarizability (β) and dipole moment (μ) is shown in red (dotted line), while the variation of electro-optic activity (r 33) is shown in blue (dashed line). Adapted from reference [8] with permission of the American Chemical Society.
The efficient translation of large molecular nonlinearity into large macroscopic nonlinearity has proven to be more elusive. Noncentrosymmetric (or acentric) order is required for non-zero macroscopic electro-optic activity. Indeed, electro-optic activity is proportional to the product of chromophore number density (N, molecules/cc), molecular first hyperpolarizability (β), and the acentric order parameter (<cos³θ>). A severe problem observed for early chromophore-polymer composite OEO materials prepared by electric field poling was that electro-optic activity went through a maximum as a function of chromophore number density, and this maximum shifted to lower number density with increasing chromophore dipole moment (μ) and molecular first hyperpolarizability (β) [7]. This is because chromophore-chromophore dipolar interactions tend to drive centric pairing of chromophores at high concentrations. Throughout the 1990s, chromophore loading in polymers was limited to about 20% and acentric order parameters to about 0.1, with the net result that only about 2% of the potential EO activity of chromophores was being translated to macroscopic (material) EO activity. In the late 1990s, an important theoretical advance was achieved by noting that there are two components of the chromophore-chromophore dipolar interaction potential, with one favoring centric order and the other favoring acentric order [9-11]. The relative importance of these two components could be shifted by control of chromophore shape. Thus, shape engineering became an important pursuit in optimizing the performance of chromophore-polymer composite materials [12]. Correlated quantum/statistical mechanical investigation of structure/function relationships is relevant to such shape engineering and permitted development of OEO materials that, for the first time, exceeded the electro-optic activity of lithium niobate. Such electro-optic activity was still below the non-interacting particle limit for a three-dimensional (Langevin) lattice (i.e., for non-interacting particles, <cos³θ> ≈ μE_pol/5kT, or about 0.2 for a normalized poling energy, μE_pol/kT, of unity). Simple theoretical calculations suggest that, for a given normalized poling energy, acentric order (and thus electro-optic activity) can be increased by reducing the lattice dimensionality experienced by the OEO chromophore [13,14]. For example, for non-interacting particles and unity normalized poling energy, <cos³θ> ≈ μE_pol/3kT for a 2-D (Bessel) lattice and <cos³θ> ≈ 3μE_pol/4kT for a 1-D (Ising) lattice.
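A minimal numerical check of these independent-particle limits is sketched below. The 2-D and 1-D low-field slopes (1/3 and 3/4) are taken directly from the text; the 3-D case is integrated over the Boltzmann orientational distribution and should reproduce both the μE_pol/5kT low-field slope and the value of ∼0.2 at unity normalized poling energy.

```python
import numpy as np
from scipy.integrate import quad

def acentric_3d(x):
    """<cos^3 theta> for non-interacting dipoles in a 3-D (Langevin) lattice,
    with x = mu*E_pol/kT and Boltzmann weight exp(x cos t) sin t dt."""
    w = lambda t: np.exp(x * np.cos(t)) * np.sin(t)
    num, _ = quad(lambda t: np.cos(t) ** 3 * w(t), 0.0, np.pi)
    den, _ = quad(w, 0.0, np.pi)
    return num / den

x = 1.0  # normalized poling energy mu*E_pol/kT
print("3-D, exact integral:", acentric_3d(x))  # ~0.19, i.e., ~0.2 as quoted
print("3-D low-field x/5  :", x / 5)
print("2-D low-field x/3  :", x / 3)   # slope quoted in the text
print("1-D low-field 3x/4 :", 3 * x / 4)
```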
Since 2005, efforts to improve EO activity have largely focused on the theory-inspired, systematic control of intermolecular electrostatic interactions and lattice dimensionality. This work will be a major focus in the next section.
However, large electro-optic activity is a necessary but not sufficient requirement for a practical advance in OEO device technology. Thermal and photochemical stability are also required. Electric field poling-induced acentric order and the accompanying electro-optic activity are not thermodynamically stable. The rate at which poling-induced order is lost is related to the difference between the material glass transition temperature (T g) and the measurement (or device operational) temperature [7,15,16]. The T g needs to be at least 50 °C above the operational temperature to achieve stability that will satisfy Telcordia standards. For chromophore-polymer composite materials, this required using host polymer materials (such as polyimides or polycarbonates) with glass transition temperatures above 150 °C. Such high glass transition temperatures in turn require high poling temperatures, which can result in sublimation of chromophores and other unwanted effects [7,15,16]. An alternative to a final high glass transition OEO material involves crosslinking subsequent to the introduction of acentric order by electric field poling [7,15,16]. Initial research into lattice hardening by crosslinking involved exploitation of free radical and condensation reactions [7]. More recently, these protocols have been replaced by lattice hardening based on utilization of cycloaddition reactions, including those involving the fluorovinyl ether moiety [16-18] and Diels-Alder/retro-Diels-Alder reactions [16,19-21]. The latter class of reactions has been particularly successful in yielding final material glass transition temperatures as high as 300 °C [16,19-21]. Cycloaddition reactions involve minimal lattice disruption, in contrast to earlier methods, e.g., the lattice disruption associated with evolution of condensation products [7,22]. Electro-optic devices manufactured by Lumera/Gigoptix have met Telcordia standards.
Photochemical stability is essentially defined by the production and reaction of singlet oxygen [16]. The hardness of the lattice dramatically influences the rate of degradation, as does the packaging of OEO materials and devices. Steric protection of reactive positions within the OEO chromophore and the addition of quenchers of singlet oxygen also have major effects. Current materials routinely yield results under accelerated testing that suggest stability of greater than ten years at traditional telecommunication power levels.
Optical loss is also an important consideration. Indeed, a figure-of-merit (FOM) that has been suggested for electro-optic materials is the electro-optic activity (e.g., an operationally utilized element of the electro-optic tensor such as r 33) divided by the response time (τ) and the total material optical loss (MOL). There are three elements of total device-relevant optical loss: (1) absorption loss associated with either electronic or vibrational transitions; (2) processing-induced loss associated with the introduction of light scattering; and (3) coupling losses associated with either index of refraction or mode size mismatch in getting light into and out of device structures. With OEO materials, minimization of optical loss from interband (charge-transfer) electronic absorption requires operating at wavelengths sufficiently far removed from the electronic absorption band and avoiding the introduction of excitonic contributions from chromophore aggregation during material processing. Minimization of vibrational absorption loss requires control of the hydrogen concentration in OEO materials. This is usually achieved by utilization of dendritic structures and/or partial fluorination of OEO materials. With these two absorption minimization protocols effectively implemented, total absorption loss can be reduced to approximately 0.2 dB/cm, which is very comparable to the loss observed for lithium niobate. However, values of 1-2 dB/cm are more common for OEO materials. Scattering loss observed in OEO waveguides arises from a variety of contributions, including phase separation induced in spin casting or poling of materials, material damage associated with electric field poling, and scattering losses arising during the fabrication of waveguide structures (associated with waveguide wall roughness). With care, scattering losses can be kept to insignificant values, although integration of OEO materials into nanoscopic silicon photonic waveguide structures frequently involves dealing with optical losses, associated with the roughness of silicon waveguides, on the order of 2 dB/cm. Coupling losses can easily be the dominant source of total insertion loss; however, the use of special coupling structures can reduce coupling losses to very acceptable values, e.g., total insertion loss (material, processing, and coupling) values of 6 dB or less [23-29].
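To make the bookkeeping concrete, a toy insertion-loss budget for a hypothetical 1 cm device is sketched below; the per-source numbers are illustrative values chosen within the ranges quoted above, not measurements of any particular device.

```python
# Illustrative loss budget (all values assumed, in dB or dB/cm)
ABSORPTION_DB_PER_CM = 1.0   # electronic + vibrational absorption (0.2-2 quoted above)
SCATTERING_DB_PER_CM = 0.5   # processing-induced scattering (can reach ~2 in Si waveguides)
COUPLING_DB_TOTAL = 3.0      # input + output coupling with special coupling structures
LENGTH_CM = 1.0

material_loss = (ABSORPTION_DB_PER_CM + SCATTERING_DB_PER_CM) * LENGTH_CM
total_insertion_loss = material_loss + COUPLING_DB_TOTAL
print(f"total insertion loss: {total_insertion_loss:.1f} dB")  # 4.5 dB, under the 6 dB figure quoted
```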
A relatively unappreciated advantage of OEO materials is the fact that these materials can be tailored to be compatible with a wide variety of materials (e.g., metals, metal oxides, etc.) and processing options (e.g., vapor and solution deposition, nanoimprint lithography, lift-off techniques, etc.). For example, soft and nanoimprint lithography techniques permit rapid and cost-effective production of complex photonic circuitry such as coupled ring resonators [30]. Lift-off techniques permit fabrication of conformal and flexible devices [31]. Three-dimensional (3-D) optical circuitry has been fabricated using a variety of masking techniques, including grayscale lithography [32].
A recently demonstrated advantage of OEO materials is the ease of integration into silicon photonic, plasmonic, and metamaterial device structures [1,2,33-51]. Concentration of optical and electric fields in such devices facilitates more effective utilization of the large electro-optic coefficients and material bandwidth (fast switching speeds) characteristic of OEO materials. Demonstration of sub-1-volt GHz operational voltages has become common, and operational voltages as low as 10 millivolts are not out of the question as loss and bandwidth issues in the fabrication of silicon (and silicon nitride) photonic, plasmonic, and metamaterial architectures are more effectively addressed. Material conductivity has become an issue in the engineering of more complex materials and devices, particularly as it affects poling efficiency in the introduction of OEO material electro-optic activity in specific device structures. Conductivity problems have motivated interface engineering efforts, such as the introduction of nanoscopic metal oxide (e.g., titanium dioxide) layers between drive electrodes and OEO materials [16,52]. Such charge-blocking layers inhibit the injection and withdrawal of charge to and from OEO materials by controlling the work functions of the materials (electrode metal or metal oxide and electro-optic material) and inhibiting quantum mechanical tunneling of charge across interfaces.
Another route to dealing with OEO material conductivity at poling temperatures is to employ lower poling voltages, but this requires materials with exceptional poling efficiencies. This brings us to the topic of the next section; namely, the engineering of intermolecular electrostatic interactions in materials so that such interactions dramatically enhance poling efficiency. As noted above, one route to enhancement of poling efficiency is a reduction in the lattice dimensionality that the OEO chromophore experiences.
Materials Development
A starting point for the development of new materials is the definition of material structure-function relationships to guide that development. Quantum and statistical mechanics have played a critical role in the engineering of simple molecular materials; however, the complexity of OEO materials poses a challenge for the use of such methods. Recently, the computation of linear and nonlinear optical properties of OEO materials, including the dependence of properties on dielectric permittivity [53,54] and optical frequency [55-57], by time-dependent density functional theory, TD-DFT (including real-time time-dependent density functional theory, RT-TD-DFT [55-57]), has been advanced. Improvements have been based on careful correlation of theoretical and experimental data and on the development of new hybrid functionals (e.g., M06-2X gives better reproduction of trends in OEO materials going from heteroaromatic bridges, e.g., YLD156, to polyene bridges, e.g., YLD124, compared to the highly utilized B3LYP functional). However, it should be noted that B3LYP has provided good quantitative simulation of molecular hyperpolarizabilities for a range of materials.
Fully atomistic Monte Carlo (MC) and molecular dynamics (MD) statistical mechanical methods [58-61] are useful for simulation of material properties but are too demanding of computational resources and time to be of general utility. Coarse graining of such methods, as illustrated in Figure 3, has permitted extension of these methods to complex polymer and dendrimer OEO materials [62-65]. Reproduced from reference [65] with permission of the American Chemical Society.
Coarse-grained or pseudo-atomistic MC/MD methods are useful for OEO materials because extended π-electron conjugation inhibits rotation about π-bonds. Thus, application of a "United Atom" approximation to phenyl groups and OEO chromophores is sensible and has been demonstrated to be as accurate as fully atomistic calculations. The coupling of these "new" quantum (new hybrid functional TD-DFT and RT-TD-DFT) and statistical (PAMC/MD) mechanical methods has permitted development of reliable structure-function relationships for all classes of OEO materials. Let us now review these different classes of materials and the relationships critical for the optimization of each class of material.
The first and simplest class of OEO materials to be considered is chromophore-polymer composite materials prepared by dissolving OEO chromophores in a non-NLO-active (typically, commercially available) polymer host such as polymethylmethacrylate (PMMA), polycarbonate (PC), amorphous polycarbonate (APC), polyquinoline (PQ), or polyimide (PI). Both fully atomistic MC/MD and pseudo-atomistic PAMC/MD methods have shown that chromophore-host interactions can be neglected to good approximation and that poling-induced order is dominated by the competition of chromophore dipole-electric poling field interactions with chromophore-chromophore dipolar interactions. Both analytical [9,10,62] and numerical [11,63-66] methods have demonstrated that there are two components to the chromophore-chromophore dipolar interaction potential. One component favors centrosymmetric (centric) chromophore organization while the other favors noncentrosymmetric (acentric) organization, with the weighting of the contributions from these two components determined by chromophore shape (steric or nuclear repulsive potentials). Unfortunately, even the best chromophore shape (spherical) does not permit exceeding the Langevin limit for acentric order, i.e., chromophores behave as if they exist in a 3-D lattice [63]. The variation of electro-optic activity (e.g., r 33) with chromophore number density (N) will always be less than the limit expected for non-interacting dipolar molecules (which is a linear dependence of r 33 on N defined by the molecular first hyperpolarizability and the effective poling field experienced by the chromophores); a toy illustration of this behavior is sketched below. However, there continues to be considerable research on modification of chromophore structure (as first demonstrated in [12]) to achieve improved EO activity. Such modification is now commonly referred to as "site isolation" and is an alternative to site isolation achieved by lowering the chromophore concentration in composite materials [67-71]. An example of such modification is shown in Figure 4 [71]. For this material, the behavior of r 33 vs. N is Langevin (in good agreement with theory), with the maximum in EO activity corresponding approximately to the neat material. Note that doping this material into APC is used to observe the full concentration range, but no aggregation leading to phase separation is observed for the neat material. Although the neat material exhibits good processability and could be used for device fabrication, the phenyl ring of the chromophore interrupts π-conjugation, leading to a low molecular first hyperpolarizability. Thus, the electro-optic activity (the maximum observed EO activity is approximately 23 pm/V) is too low for practical application despite the high number density and reasonable (Langevin-limit) order parameter. For currently available core chromophore architectures, the maximum electro-optic activity achieved for chromophore-polymer composites is typically less than 120 pm/V (50-100 pm/V are the most commonly observed values).
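The sketch below is a deliberately simplified toy model of this maximum-in-N behavior, not the quantum/statistical mechanical treatment of references [9-11]: it assumes only that r 33 ∝ N·β·<cos³θ> and that centric dipole pairing attenuates <cos³θ> as N grows. The quadratic attenuation form and the constant N_star are invented purely for illustration.

```python
import numpy as np

def r33_toy(N, slope=1.0, N_star=5e20):
    """Toy r33(N): linear independent-particle growth times a phenomenological
    (made-up) attenuation of <cos^3 theta> by centric dipole pairing."""
    order = np.maximum(0.0, 1.0 - (N / N_star) ** 2)
    return slope * N * order

N = np.linspace(0.0, 6e20, 7)
print(r33_toy(N))  # rises, peaks near N_star/sqrt(3), then falls
```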
The second and slightly more complex class of materials considered involves chromophores covalently incorporated into non-NLO-active polymer and dendrimer matrices (e.g., such as shown in Figure 3). For such materials, the restrictions placed on chromophore organization by covalent bonds (covalent bond potentials) must be taken into account. In the PAMC/MD approach, atoms involved in sigma bonds are treated by fully atomistic methods and atoms involved in conjugated pi-bond structures are treated within the United Atom Approximation. If the segments connecting chromophores to the core polymer or dendrimer structure are very flexible, the poling-induced organization is essentially the same as found for the chromophore in a composite material (i.e., one can neglect chromophore-host interactions). However, if covalent bond potentials prevent chromophores from pairing centrosymmetrically, then the behavior approaches that expected for independent particles, i.e., a linear dependence of r 33 on N. Such behavior is observed for the three-chromophore-containing dendrimer of Figure 3 and for related dendrimers [65,72]. An advantage of covalent incorporation of chromophores is that high chromophore number densities can be obtained without chromophore phase separation. For example, the OEO material (PSLD_33) shown in Figure 3 has a number density of ∼6.5 × 10²⁰ molecules/cc, i.e., about three times the maximum loading achievable for the core (YLD156-type) chromophore in a chromophore-polymer composite. Decreasing the number density by creating higher-generation versions of the PSLD_33 dendrimer results in an observed (and theoretically predicted) linear variation of r 33 with N, as does decreasing the number density by dissolving the dendrimers in polymer hosts such as APC [65,72]. For such materials, the observed electro-optic activity is typically above 150 pm/V but less than 250 pm/V. Conductivity issues encountered at high concentrations act to attenuate the realizable EO activity.
We now consider materials in which intermolecular electrostatic interactions improve poling efficiency (i.e., r 33 /E_pol, where E_pol is the poling field strength), leading to a linear variation of electro-optic activity with chromophore number density that exceeds the non-interacting particle limit. We refer to such materials as "matrix-assisted poling" (or MAP) materials, and we will demonstrate that the improvement in poling efficiency can be viewed as resulting from a reduction in lattice dimensionality. We will introduce the concept of fractional lattice dimensionality (M), appropriate for non-perfectly-ordered (Boltzmann) materials, and will show how such fractional lattice dimensionality can be quantitatively defined experimentally. In the next section we will introduce experimental techniques for defining lattice dimensionality from the ratio of centric and acentric order parameters and for defining the structure and dynamics (e.g., nanoscale viscoelasticity) associated with reduced dimensionality and molecular cooperativity. Both nanostructural and thermodynamic characterizations of the role of specific spatially-anisotropic intermolecular interactions are presented.
MAP materials involve the introduction of molecular cooperativity associated with specific spatially-anisotropic intermolecular electrostatic interactions. Perhaps the simplest example is that of binary chromophore organic glasses (BCOGs) [8,64,73,74], which are composite materials where both guest and host are chromophore-containing materials, e.g., a chromophore guest dissolved in a chromophore-containing polymer or dendrimer host. A practical example is the YLD124 chromophore (Figure 1) dissolved in the PSLD_33 dendrimer material (Figure 3). Guest and host chromophores can experience dipolar interactions, and the poling field will affect both guest and host. The poling field will reduce the effective lattice dimensionality of both guest and host, and the intermolecular electrostatic interaction between the two can further amplify the effect, leading to enhanced poling efficiency through MAP. The effect on r 33 or r 33 /E_pol is a new linear dependence with an enhanced slope. Of course, the shapes of guest and host chromophores can play an important role in defining the detailed assembly of the chromophores [64]. Very high chromophore concentrations can be achieved without phase separation for BCOGs because one is dissolving a polar guest into a polar host, in contrast to traditional composite materials where a polar guest is being dissolved into a nonpolar host. As expected, because the dielectric environment does not change with the relative concentrations of guest and host materials, solvatochromic shifts are not observed, in contrast to the large shifts observed for conventional composite materials. The absence of solvatochromic shifts and phase separation frequently results in low optical absorption and scattering loss. Plasticization of the material glass transition, observed for traditional composite materials, typically does not occur with BCOGs. MAP intermolecular electrostatic interactions also generally lead to improved thermal and photochemical stability. The high chromophore number densities (e.g., >5 × 10²⁰ molecules/cc) achieved with BCOGs contribute to improved macroscopic electro-optic activity but can also contribute to material conductivity at poling temperatures and to absorption loss unless care is exercised in the design of chromophore structures and poling configurations (i.e., control of processing conditions). Obviously, the critical aspect of MAP materials is the control of intermolecular electrostatic interactions between guest and host chromophores, by control of chromophore dipolar interactions and steric (nuclear repulsive) or shape interactions, so that the reduction in lattice dimensionality effected by the chromophore-poling field interactions can be effectively exploited.
Laser-assisted electric field poling (LAEFP, or LAP for short) [73-75] can also be used to reduce the effective lattice dimensionality and improve electro-optic activity. An early demonstration of this effect involved incorporating a high-μβ chromophore guest (e.g., YLD124 or YLD156) into a DR1-co-PMMA chromophore host (which is commercially available from Aldrich Chemical). The disperse red (DR1) chromophore is known to undergo photo-induced (photochromic) trans-cis-trans isomerization, with the net effect being photo-driven molecular reorientation. If polarized light is used, the DR1 chromophores can be driven into a 2-D lattice structure. If the guest chromophore interacts with the host chromophores, the guest experiences an effective 2-D lattice. Thus, the effective electro-optic activity of both guest and host is increased, with the observed electro-optic activity being the sum of the two.
LAP can also be applied to a single pure chromophore material if intermolecular electrostatic interactions that drive acentric organization exist between chromophores. An example (see Figure 5) is provided by vapor-deposited BNA (which forms acentric single crystals in melt or solution processing) [75].
In this case, LAP can be viewed as producing orientation-selective melting, with the intermolecular electrostatic interactions among the BNA molecules driving crystal growth in the direction of the electric poling field. Because chromophores pointing in the direction of the poling field are not heated (the chromophore charge-transfer band transition moment is ∼zero for that orientation), the net effect is the generation of acentric thin film order in a direction appropriate for device application. Acentric order parameters as high as 0.95 have been obtained, but the graph of Figure 5 indicates how sensitive the generation of optimum EO activity is to poling conditions. This approach produces the desired correctly-oriented thin film materials much more rapidly and avoids the problems of orienting crystals with undesirable growth anisotropy obtained with traditional single crystal growth. The preceding example of BNA involves strong intermolecular electrostatic interactions among small molecules with modest chromophore-chromophore dipolar interactions. The difficulty with strong interactions, including the ionic and hydrogen bonding interactions that are important for the formation of many crystalline materials, is that these strong interactions can elevate material melting (or glass transition) temperatures above material decomposition temperatures. Most OEO materials start to exhibit decomposition above 300 °C. If decomposition occurs before melting, then melt processability is lost. A fruitful avenue for the development of improved OEO materials would thus seem to be the exploration of dipolar and quadrupolar interactions that can be introduced to achieve MAP but which do not inhibit melt processing. Here we discuss dipolar interactions based on the intermolecular electrostatic interactions of coumarin moieties (which are known to be critical in the formation of certain liquid crystalline phases) and quadrupolar interactions operative in arene-perfluoroarene dendritic materials. Representative molecular structures are shown in Figure 6. Such interactions induce molecular cooperativity, for the coumarin-containing or arene-perfluoroarene-containing dendrimers of Figure 6, that is on the order of 100 nm at the poling temperature. Such molecular cooperativity results in effective 2-D lattices and theoretically-predicted enhancements of order parameters by approximate factors of two. Molecular cooperativity also affects nanoscale viscoelasticity and the thermodynamics (energetics) of various phases of matter, including phases of reduced dimensionality. The viscoelastic properties can be characterized by nanoscopic measurement techniques such as shear modulation force microscopy (SM-FM) [76-79] and intrinsic friction analysis (IFA) [80]. They can also be studied by dielectric relaxation spectroscopy (DRS) [81,82] and by differential scanning calorimetry (DSC). As thermal transitions are made between various phases of matter, the corresponding viscoelastic properties change. A well-known transition in polymer and dendrimer materials is the material melting or glass transition temperature, T g. If interactions that promote molecular cooperativity are present, then a lower-temperature transition is observed, associated with changes in effective lattice dimensionality. Molecular organization and lattice dimensionality can be defined by measurement of acentric (<cos³θ>) and centric (<cos²θ>, or <P₂> where 2<P₂> = 3<cos²θ> − 1) order parameters. Theory demonstrates that effective (including fractional) lattice dimensionality,
M, can be defined by the relationship between acentric and centric order parameters; namely, <cos³θ>² = {(9 − 2M)/(2 + M)}[<P₂> − (3 − M)/(2M)]. Experimentally, the acentric order parameter can be defined using techniques such as attenuated total reflection (ATR) [83-85], two-slit interferometry (TSI) [86,87], Fabry-Perot interferometry (FPI) [88], or a Mach-Zehnder interferometric technique (MZI) [89], which permit measurement of both of the two non-zero electro-optic tensor elements, r33 and r13, for poled OEO materials. The acentric order parameter can be extracted either from the ratio r33/r13 or from r33 alone if the elements of the molecular first hyperpolarizability tensor (e.g., βzzz) are correctly estimated from a combination of quantum mechanical calculations and hyper-Rayleigh scattering (HRS) measurements [53,90,91] and/or electric-field-induced second harmonic generation (EFISH) measurements [16]. Figure 5 illustrates the variation of the ratio r33/r13 with LAP optical power (orientation-selective heating), showing how the ratio changes with increasing order. The maximum in the plot of the ratio r33/r13 also corresponds to the maximum in the graph of r33. The ratio of minor to major elements of the β tensor is estimated to be 1/6 for BNA from the data of Figure 5 and single-crystal measurements. Quantum mechanical calculations of β are also consistent with this result.
The centric order parameter, <cos²θ> or <P₂>, can be measured using variable-angle polarization-referenced absorption spectroscopy (VAPRAS) [92] or variable-angle spectroscopic ellipsometry (VASE) [93,94]. VASE has the advantage of permitting simultaneous definition of <P₂> order for both the chromophore and the dendrimer pendant; e.g., for the C1 chromophore of Figure 6, <P₂> ≈ +0.2 while <P₂> for the coumarin moiety is ≈ −0.2. The VASE data (see Figure 7) clearly indicate that the chromophores and pendants lie in orthogonal planes. For the C1 chromophore, <cos³θ> = 0.15 (from ATR and HRS data together with quantum mechanical calculations). The ratios of acentric to centric order parameters suggest that the material lattices observed for the coumarin (C1) and arene-perfluoroarene materials are approximately 2-D (e.g., M = 2.2 for C1), while typical composite materials and polymers containing covalently incorporated chromophores are approximately 3-D (e.g., M = 2.9 for the PSLD_33 dendrimer; <P₂> = 0.019, <cos³θ> = 0.063). Note that for PSLD_33 (and for a variety of chromophore-polymer composite materials) M ≈ 3 was determined with exactly the same measurements and analysis applied to C1 (and the HDFD class of materials). Thus, there is strong internal consistency in the defined lattice dimensionalities. The |TΔS*| associated with transitions to phases of reduced dimensionality is approximately 50 kcal for the C1 and arene-perfluoroarene materials. Optimum poling efficiency is observed (by in situ monitoring of the introduction of electro-optic activity employing the Teng-Man technique [95,96]) at the temperature corresponding to minimum entropy or maximum order (from IFA data). The cooperativity lengths (ξ), defined by combining IFA and DRS data, are on the order of 100 nm. Combining theoretical calculations with experimental data suggests that the pendant (coumarin or arene-perfluoroarene) and chromophore-chromophore interactions work together to define molecular cooperativity and lattice dimensionality. Theory suggests that other van der Waals interactions make relatively minor contributions. The pendant interactions are approximately twice as important as the chromophore dipole interactions in defining ΔS* for C1 and HDFD. Atomistic and pseudo-atomistic MD calculations clearly indicate finite autocorrelation functions for coumarin-coumarin, chromophore-chromophore, and coumarin-chromophore interactions, consistent with experimental observations. A detailed discussion of MD calculations on C1 and other MAP materials is beyond the scope of this review; however, it can be stated that order and viscoelastic measurements together with MD theoretical calculations provide a very self-consistent picture of the role of nano-engineered pendant interactions in enhancing poling efficiency and EO activity. The pendant interactions both improve acentric order (through reduction of lattice dimensionality) and permit the realization of high chromophore number densities. Both of these factors contribute to the observed enhanced electro-optic activity. This research suggests that stronger and better-positioned pendant interactions will be required to reduce lattice dimensionality below 2-D and to dramatically enhance (e.g., by a factor of approximately 3-4) acentric order, since it is practically impossible to go to significantly higher number densities.
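To make the dimensionality analysis concrete, the following minimal sketch numerically inverts the acentric-centric relation quoted above (as reconstructed here) to obtain M, using the C1 and PSLD_33 values from the text; the bracketing interval assumes 1 < M ≤ 3.

```python
# Minimal numerical sketch: solve for effective lattice dimensionality M from
# measured acentric (<cos^3 theta>) and centric (<P2>) order parameters,
# assuming the relation as reconstructed above:
#   <cos^3 theta>^2 = [(9 - 2M)/(2 + M)] * (<P2> - (3 - M)/(2M)).
from scipy.optimize import brentq

def dimensionality_residual(M, cos3, P2):
    """Residual of the acentric-centric relation for a trial dimensionality M."""
    return cos3**2 - ((9 - 2 * M) / (2 + M)) * (P2 - (3 - M) / (2 * M))

def solve_M(cos3, P2, lo=1.01, hi=3.0):
    """Bracketed root find for M, assuming 1 < M <= 3."""
    return brentq(dimensionality_residual, lo, hi, args=(cos3, P2))

# Values quoted in the text for C1 and the PSLD_33 dendrimer:
print(solve_M(cos3=0.15, P2=0.20))    # ~2.2 (quasi-2-D lattice)
print(solve_M(cos3=0.063, P2=0.019))  # ~2.9 (essentially 3-D lattice)
```

Both quoted data points reproduce the dimensionalities stated in the text under this reconstruction, which is a useful consistency check.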
Chromophore guests can be added to the pendant materials discussed above, further enhancing electro-optic activity through the additional interactions operative for BCOGs. In some cases, LAP enhancement is possible. The next logical step with respect to chromophore modification is the highly selective introduction of hydrogen bonding and/or ionic interactions.
It is useful to summarize and generalize the observations of this section. In comparing different classes of materials, it is useful to discuss the poling efficiency, r33/Epol, rather than r33, as a figure of merit. The reason is that r33 will depend on the poling configuration (e.g., parallel plate, coplanar electrode, corona poling) and on factors such as the resistivity of the poling electrodes (e.g., gold, indium tin oxide, doped silicon, etc.). Moreover, the dielectric breakdown of silicon can limit poling voltages to a few tens of volts/micron. The ratio r33/Epol basically indicates how the chromophores of the material respond to a given voltage across the material. Even comparison based on this parameter is not without problems, but it is probably the most reliable of the various alternatives. Values of r33/Epol will depend upon chromophore hyperpolarizability; thus, we must compare across classes of materials using the same chromophore (molecular first hyperpolarizability). Typically, comparisons (for chromophores with optimized shapes) will be made for the core chromophores of Figure 1. For chromophore-polymer composite materials, such as YLD156 in PMMA, poling efficiencies at low chromophore concentrations (in the region where r33 is linear with N) will typically lie in the range 0.4-0.8 nm²/V² (i.e., for a poling field of 100 V/micron, r33 values in the range 40-80 pm/V would be expected if such a poling field can be reached). For chromophore-polymer composite materials, the poling efficiency will decrease at higher concentrations because of centrosymmetric ordering of chromophores. For example, for the YLD156 chromophore in PMMA at a concentration of ~4 × 10²⁰ molecules/cc (the concentration of the chromophore in C1), the poling efficiency decreases to 0.15 nm²/V². For comparison, the poling efficiency of the PSLD_33 dendrimer is 1.4 nm²/V², and covalently incorporated chromophores in site-isolated dendrimers lie in the range 1-1.5 nm²/V². For a typical MAP material such as C1, the poling efficiency is ~2 nm²/V². For this class of materials, values can vary widely depending on the strength of the intermolecular electrostatic interactions among pendant (dendron) moieties. BCOG MAP materials typically exhibit the largest poling efficiencies, e.g., 3-6 nm²/V², yielding maximum obtainable electro-optic coefficients in the range 250-500 pm/V for simple thin films. The high poling efficiencies of MAP and BCOG MAP materials are a combination of high number density and reduced-dimensionality-enhanced acentric order parameters. LAP can also produce poling efficiencies approaching this range, although photo-induced conductivity tends to attenuate efficiency.
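As a unit-keeping illustration of the poling-efficiency figure of merit, the short sketch below converts r33/Epol (in nm²/V²) and a poling field (in V/micron) into an expected r33 (in pm/V); it simply restates the multiplication implied in the text and is not a model of poling physics.

```python
# Minimal sketch: convert a poling efficiency (r33/E_pol, in nm^2/V^2) and a
# poling field (V/um) into the expected electro-optic coefficient r33 (pm/V).
def expected_r33_pm_per_V(poling_efficiency_nm2_per_V2, e_pol_V_per_um):
    efficiency_m2_per_V2 = poling_efficiency_nm2_per_V2 * 1e-18  # nm^2 -> m^2
    e_pol_V_per_m = e_pol_V_per_um * 1e6                          # V/um -> V/m
    r33_m_per_V = efficiency_m2_per_V2 * e_pol_V_per_m
    return r33_m_per_V * 1e12                                     # m/V -> pm/V

# Figures quoted in the text:
print(expected_r33_pm_per_V(0.4, 100))  # 40 pm/V (low-concentration composite)
print(expected_r33_pm_per_V(2.0, 100))  # 200 pm/V (typical MAP material, e.g., C1)
print(expected_r33_pm_per_V(3.0, 100))  # 300 pm/V, within the 250-500 pm/V quoted
                                        # for BCOG MAP thin films
```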
It is also useful to attempt to visualize the effect of MAP interactions on lattice dimensionality. Molecular dynamics simulations provide autocorrelation functions for the orientation and interaction of various components of the material, including relative to the poling field. Simulations also provide snapshot pictures of the distributions of molecules. However, since the order is relatively low and statistical, it is difficult to envision structures. At any rate, a detailed discussion of the pictures provided by statistical mechanical (MD) simulations for the large number of materials studied is beyond the scope of this review. Theory does permit one to turn up (or down) the strengths of interactions artificially and envision limiting structures. For example, for weak interactions, chromophores and pendants will be nearly randomly oriented. In the accompanying Figure 7, we show the "cartoon" (so named because it does not correspond to real material intermolecular interaction strengths) limiting form of the structure of C1 in the strong-interaction limit.
In the next section, we provide more details regarding processing and characterization techniques.
Advances in Materials Processing and Characterization
Individual device applications will define the choice of OEO material used and processing protocols that will be executed in the fabrication of devices.There is a great diversity of applications and device structures developed for those applications and no one material or processing strategy will satisfy all applications.This is a great advantage of OEO materials in that they can be adapted by design to be compatible with a wide range of device structures and are amenable to a wide array of processing protocols.
As already noted, OEO materials can be processed either from solution or the vapor phase, although solution processing (spin casting of a thin film followed by electric field poling of the material near its glass transition temperature) has been the far more heavily utilized approach. Materials can also be prepared by sequential synthesis/self-assembly techniques and by crystal growth, including from solution, the melt, and the vapor phase [97-107]; however, these approaches have not been as widely adopted for the fabrication of prototype devices as have electric field poling methods.
Electric poling has been achieved by parallel plate and coplanar electrode structures and by corona poling. As already noted, electric field poling can be augmented by laser-assisted poling for some materials. A variety of electrode materials have been employed, and the choice of electrode material and poling configuration can create problems with conductivity. One approach to reducing conductivity under poling conditions has been to deposit a thin (25-150 nm) layer of titanium dioxide between the electrode and the OEO material.
For all-organic devices, polymer cladding layers are typically deposited on top of the OEO film to create a triple-stack sandwich consisting of cladding layer-active OEO layer-cladding layer [7,108-114]. The OEO material must be sufficiently hard and inert to the solvent used for cladding deposition that pitting of the OEO layer does not occur. Hardness is also crucial for achieving smooth waveguide walls using reactive ion etching.
An attractive feature of OEO/silicon photonic hybrid devices is their simplicity of fabrication. These frequently consist of a simple OEO layer deposited on top of the silicon waveguide structure. When integrating OEO materials with silicon nitride device structures, care must be taken to control the material's index of refraction, but otherwise OEO materials exhibit good compatibility with silicon nitride as well as silicon.
Both electrode (parallel plate or coplanar) and corona poling have been used to induce acentric order, although electrode poling has been increasingly utilized. Electrode materials have typically consisted of gold (Au) or indium tin oxide (ITO), although doped silicon (Si) has been employed with silicon photonic device structures. The resistivity of electrode materials is an important factor in defining device bandwidth. Modest conductivity of poorly doped Si can also reduce poling efficiency.
Diels-Alder/Retro-Diels-Alder chemistry can be important in controlling material glass transition temperatures, which is relevant to processes such as nanoimprint lithography and to optimizing poling efficiency and the thermal stability of poling-induced electro-optic activity. The choice of processing protocols is frequently defined by device structure and intended application. Thus, the reader is referred to other reviews for more detailed discussion of a number of examples [7,108-114].
A few words need to be said regarding advances in materials characterization. Clearly, if the theory guiding the nano-engineering of OEO materials is to be tested realistically, improved accuracy and reliability in the measurement of order parameters need to be achieved. Most of the characterization improvements of the past several years deal with improved measurement of acentric and centric order parameters. The acentric order parameter cannot be measured directly but can be accessed by measurements of r33 or the ratio r33/r13, making use of theory to estimate the anisotropy of the first molecular hyperpolarizability tensor. For the simplest systems considered here, the anisotropy of β can be approximately defined by a parameter b = {βzxx + βzyy}/βzzz. Defining Θ = <cosθ>/<cos³θ>, where θ is the angle between the poling field and the molecular axis of the dipolar chromophore, the equation for the ratio can be approximately written as Equation (1). This is an over-simplified result but will be useful for certain materials of interest here. Provided that b can be determined from quantum mechanics and/or estimated by applied-electric-field HRS and EFISH measurements, Θ can be obtained from r33/r13 measurements. The greatest uncertainty in extracting the acentric order parameter from electro-optic measurements relates to the uncertainty in b. It is also important to cross-compare various techniques for measurement of the elements of the electro-optic tensor. For a given measurement technique, errors can arise from failing to employ an analysis that takes into account all of the features of a particular measurement. The analysis of the Teng-Man technique by Hermann and coworkers [96] is an example of this problem. Our approach has been to employ the variety of measurement techniques mentioned in the last section and to develop a more sophisticated analysis of the ATR experiment.
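As an illustration of how Θ maps onto the measurable ratio, the following sketch treats the limiting case b → 0 (a purely one-dimensional chromophore), for which the standard oriented-gas result gives r33/r13 = 2/(Θ − 1); this limit reproduces the weak-poling value of 3 noted in the caption of Figure 5 (Θ = 5/3 for weak poling). The general b-dependent form of Equation (1) is not reproduced here.

```python
# Minimal sketch of the b -> 0 (one-dimensional chromophore) limit, where the
# oriented-gas model gives r33 ~ <cos^3 theta> and
# r13 ~ (<cos theta> - <cos^3 theta>)/2, so that
# r33/r13 = 2/(Theta - 1) with Theta = <cos theta>/<cos^3 theta>.
def ratio_from_theta_param(Theta):
    return 2.0 / (Theta - 1.0)

def theta_param_from_ratio(r33_over_r13):
    """Invert the limiting relation to extract Theta from a measured ratio."""
    return 1.0 + 2.0 / r33_over_r13

print(ratio_from_theta_param(5.0 / 3.0))  # 3.0, the weak-poling (low-order) value
print(theta_param_from_ratio(9.0))        # Theta nearer 1 signals higher acentric order
```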
In like manner, characterization of the centric order parameter, <P₂>, requires some improvement in measurement techniques. We have improved on the VAPAS technique of Graf and coworkers [115] by measuring the dependence of the ratio of "p" and "s" absorption on angular variation. We refer to this modification as VAPRAS [92]. As discussed in detail elsewhere [92], utilizing the ratio of p and s absorption removes many of the experimental artifacts associated with a measurement using p-polarized light alone. Because our samples are strongly absorbing, we employ a full Jones matrix analysis of the data [116].
Devices
As with the diversity of materials processing and characterization options, a diverse array of devices and device architectures has been explored. As all-organic devices have been reviewed recently [117], we limit this discussion to two relatively new areas of device focus: (1) slotted silicon photonic devices and (2) plasmonic/photonic crystal/metamaterial devices.
Basically, four classes of hybrid OEO/silicon photonic stripline and ring microresonator devices have been explored. The first simply involves overcoating a silicon waveguide with OEO material. Modulation of light propagating in the silicon waveguide is effected by modulation of the evanescent field. Lipson [33] motivated interest in slotted structures by showing that light could be concentrated in low-index-of-refraction slots cut into silicon waveguides (see accompanying Figure 8). This permitted a dramatic enhancement of optical fields, and because of the potential for reduced electrode dimensions there was also the potential for significant enhancement of low-frequency fields, including the poling field. Both vertical and horizontal slot structures (see Figure 8) have been demonstrated, with the vertical slot structures much easier to fabricate. The main advantage of the horizontal slot structure is the convenient deposition surface that exists for this structure.
Most recently, nanowire or rib structures have been fabricated. This device structure also exploits modulation of the evanescent field in the OEO material, which is spun on top of a silicon rib structure. Sub-1-volt modulation has been realized for slotted structures, and these likely define the state of the art for device performance defined by second-order optical nonlinearity (electro-optic modulation and switching, optical rectification, and difference-frequency generation). The major problems with devices involving incorporation of OEO materials into plasmonic, photonic crystal (slow light), and metamaterial device architectures involve optical loss and bandwidth limitation. As already noted in this review, some intriguing prototype devices have been demonstrated.
For hybrid OEO/plasmonic devices, it has been possible to reduce optical loss by employing nanostructured gold and silver metal films rather than solid films. Utilization of nanostructured metal films permits the systematic variation of contributions made by long-range surface modes (LRSM) and long-range surface plasmon polaritons (LRSPP). Consideration of both solid and nanostructured metal films has been pursued for a variety of simple device structures (e.g., Mach-Zehnder modulators and phase modulators employing insulator-metal-insulator (IMI) and metal-insulator-metal (MIM) architectures) and more sophisticated structures (optical signal processors, optical SSB modulators, 4-channel RF phase shifters, linearized double-channel MZ modulators) [48,49,118]. The dependence of performance on film thickness (i.e., the relative contributions made by LRSMs and LRSPPs) has been investigated. The major advantage of nanostructured metal films is the ability to control optical loss and to achieve short (e.g., ~1 cm or less) device lengths. We have explored hybrid OEO plasmonic devices for both simple composites (e.g., YLD146 in APC) and binary chromophore organic glasses. Some representative results are shown in Figure 9 for simple Mach-Zehnder amplitude and single-channel phase modulators. For a 1.5 cm Mach-Zehnder IMI modulator based on nanostructured gold films of 1 nm thickness, a driving voltage (at a 1,550 nm optical wavelength) of 4 volts was observed for single-arm driving (2 volts for push-pull driving).
For a 0.8 cm interaction-length phase modulator employing nanostructured gold films of 2 nm thickness, a driving voltage of 5.4 volts was observed operating at an optical wavelength of 1,550 nm. Sub-λ concentration of light has also been demonstrated for EO-active plasmonic waveguide structures.
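For comparing modulators of different lengths, the standard voltage-length product is a convenient figure of merit; the minimal sketch below computes it from the device figures quoted above. Treating the quoted driving voltages as half-wave (Vπ) values is an assumption made here for illustration.

```python
# Minimal sketch: voltage-length product (V*cm), a standard modulator figure of
# merit, computed from the device figures quoted in the text. Treating the
# quoted driving voltages as half-wave voltages (V_pi) is an assumption.
def voltage_length_product(v_pi_volts, length_cm):
    return v_pi_volts * length_cm

print(voltage_length_product(4.0, 1.5))  # 6.0 V*cm, IMI Mach-Zehnder, single-arm drive
print(voltage_length_product(2.0, 1.5))  # 3.0 V*cm under push-pull drive
print(voltage_length_product(5.4, 0.8))  # ~4.3 V*cm, nanostructured-gold phase modulator
```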
Conclusions
Nano-engineering of chromophore structures has been shown to permit control of the intermolecular electrostatic interactions that influence electric field poling efficiency. Electro-optic activity greater than that expected for chromophores behaving independently (chromophores experiencing no intermolecular electrostatic interactions) has been demonstrated. Incorporated-by-design interactions (coumarin-coumarin and arene-perfluoroarene) can influence lattice dimensionality and nanoviscoelastic properties. Lattice dimensionality can be defined by investigation of acentric and centric order parameters and their ratio. Record electro-optic activity is observed for binary chromophore organic glasses utilizing nano-engineered chromophore-containing host materials. Correlated quantum and statistical mechanics methods afford quantitative investigation of the effects of a wide range of intermolecular interactions and permit understanding of the factors that define electro-optic activity in all classes of materials studied to the present.
The compatibility of OEO materials with a diverse range of materials and their ease of integration into a diverse range of device structures have motivated the incorporation of these materials into a wide variety of device structures, including slotted and nanowire silicon photonic, plasmonic, and metamaterial devices. Prototype device performance results are very promising, particularly for hybrid OEO/silicon (and silicon nitride) devices.
Because of the enormous number of publications dealing with OEO materials over the past two-plus decades, a comprehensive review of the literature is not possible here. The reader is referred to the cited reviews for more comprehensive coverage.
Figure 1. The YLD156 chromophore, utilizing a heteroaromatic bridge, is shown on the left, and the YLD124 chromophore, utilizing a polyene bridge, is shown on the right.
Figure 3. Chemical and pseudo-atomistic structures for an EO dendrimer are shown. Reproduced from reference [65] with permission of the American Chemical Society.
Figure 4. The CZC7a chromophore modified to approximate a spherical shape is shown.
Figure 5. The structure of BNA is shown at the upper right (inset), and below it the variation of the ratio r33/r13 is shown as a function of the laser power used in LAP. A value of the ratio near 3 indicates low order, while increasing values of the ratio are consistent with a larger acentric order parameter and reduced dimensionality.
Figure 7. On the left is a cartoon simulation illustrating the relative order for strong interactions, and on the right VASE experimental results are shown that support the orthogonal orientation of chromophore and coumarin moieties suggested on the left.
Figure 8. Nanoscopic silicon photonic waveguides, including vertical and horizontal slot waveguides, together with computed mode profiles, are shown.
Figure 9. The electro-optic modulation and optical loss characteristics are shown for a nanostructured gold thin-film (1 nm) IMI structure. The optical propagation loss was observed to be 0.65-0.70 dB/mm at 1,550 nm. The measured insertion loss was 14 dB. | 2014-10-01T00:00:00.000Z | 2011-08-19T00:00:00.000 | {
"year": 2011,
"sha1": "8871427e3f58f3e9637469cccb1db3614b5dc1f4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/3/3/1325/pdf?version=1313750624",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "8871427e3f58f3e9637469cccb1db3614b5dc1f4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
259196659 | pes2o/s2orc | v3-fos-license | Cesarean section rates according to the Robson Classification and its association with adequacy levels of prenatal care: a cross-sectional hospital-based study in Brazil
Background The rate of Cesarean section (CS) deliveries has been increasing worldwide for decades. Brazil exhibits high rates of patient-requested CS deliveries. Prenatal care is essential for reducing and preventing maternal and child morbidity and mortality and for ensuring women's health and well-being. The aim of this study was to verify the association between the level of prenatal care, as measured by the Kotelchuck Adequacy of Prenatal Care Utilization (APNCU) index, and CS rates. Methods We conducted a cross-sectional study based on data from routine hospital digital records and federal public health system databases (2014–2017). We performed descriptive analyses, prepared Robson Classification Report tables, and estimated the CS rate for the relevant Robson groups across distinct levels of prenatal care. Our analysis also considered the payment source for each childbirth – either public healthcare or private health insurers – and maternal sociodemographic data. Results The CS rate by level of access to prenatal care was 80.0% for no care, 45.2% for inadequate, 44.2% for intermediate, 43.0% for adequate, and 50.5% for the adequate plus category. No statistically significant associations were found between the adequacy of prenatal care and the rate of cesarean sections in any of the most relevant Robson groups, across both public (n = 7,359) and private healthcare (n = 1,551) deliveries. Conclusion Access to prenatal care, as measured by the trimester in which prenatal care was initiated and the number of prenatal visits, was not associated with the cesarean section rate, suggesting that factors assessing the quality of prenatal care, not simply adequacy of access, should be investigated.
Several studies show that opting for CS delivery is a global trend, more pronounced in some countries, including Brazil [1-4].
A national hospital-based study of 23,894 pregnant women in Brazil showed an overall CS rate of 51.9% in 2011-12. When stratified according to health system, CS rates were 42.9% in national public healthcare (from a sample of 19,129 parturients) and 87.9% within private healthcare (4,765) [5]. There is no evidence that high rates of non-required CS deliveries, as seen in Brazil, are associated with lower mortality or morbidity for women or infants [6]. CS can also increase the risk of abnormal placentation and uterine rupture in future pregnancies, as well as surgical adhesions, painful menses, endometriosis, and infertility [3].
Due to concerns in the medical community about increasing CS rates, there are ongoing international efforts, supported by the World Health Organization (WHO) [7], to monitor and potentially reduce CS rates. The internationally accepted system for monitoring and comparing CS rates across delivery units is the Robson Ten Group Classification System [8,9]. It categorizes every childbirth into one, and only one, group. The groups are mutually exclusive, totally inclusive, and clinically relevant. In recent studies, two of the original ten groups (Groups 2 and 4) have been split into four (2a and 2b, 4a and 4b) to allow for more granular analyses [10-12]. The parameters used to define each group are parity, previous CS, number of fetuses, fetal presentation, gestational age, and onset of labor (Table 1).
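To illustrate how these six parameters map each birth to a single group, a minimal classification sketch follows. It assumes simplified standard definitions (term = 37 or more completed weeks; onset coded as spontaneous, induced, or pre-labor CS), and the field names are hypothetical rather than taken from the study's records.

```python
# Minimal sketch of Robson Ten Group assignment, assuming simplified standard
# definitions; field names are hypothetical, not the hospital's actual records.
def robson_group(parity, previous_cs, n_fetuses, presentation, gest_weeks, onset):
    """presentation: 'cephalic'|'breech'|'transverse'; onset: 'spontaneous'|'induced'|'prelabor_cs'."""
    if n_fetuses > 1:
        return 8                                   # multiple pregnancy (incl. previous CS)
    if presentation == "transverse":
        return 9                                   # transverse/oblique lie (incl. previous CS)
    if presentation == "breech":
        return 6 if parity == 0 else 7             # breech: nulliparous vs multiparous
    if gest_weeks < 37:
        return 10                                  # single cephalic preterm (incl. previous CS)
    # Single cephalic pregnancies at term:
    if parity == 0:                                # nulliparous
        return 1 if onset == "spontaneous" else 2  # 2 splits into 2a (induced) / 2b (pre-labor CS)
    if previous_cs:
        return 5                                   # multiparous, at least one previous CS
    return 3 if onset == "spontaneous" else 4      # 4 splits into 4a (induced) / 4b (pre-labor CS)

print(robson_group(parity=0, previous_cs=False, n_fetuses=1,
                   presentation="cephalic", gest_weeks=39, onset="induced"))  # 2
```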
There is no "ideal" CS rate for each Robson group. Numbers depend on epidemiological factors, as well as the organizational and cultural context of each delivery unit [9]. Nevertheless, given the growing adoption of the Robson classification [8], it is increasingly possible to contrast and compare experiences between obstetric units [14]. In fact, to help the implementation of Robson classification all over the world, WHO provides a manual with guidelines and some typical ranges of CS rates [15].
Prenatal care could be an important factor influencing CS rates, as prenatal visits are not only essential for monitoring maternal and fetal health but also provide an opportunity for discussing and planning the delivery itself [16]. Given that many dimensions of prenatal visits are not captured in data records, it is challenging to assess prenatal care systematically. The Adequacy of Prenatal Care Utilization (APNCU) index provides relatively simple criteria for assigning prenatal care to one of four "adequacy" levels: inadequate, intermediate, adequate, and adequate plus. The levels are based on the number of visits and the gestational age at the first prenatal visit [17].
Thus, the aim of this study was to evaluate a potential association between the adequacy of prenatal care, as categorized by the APNCU index, and the rate of Cesarean deliveries across Robson groups in our institution. If higher adequacy levels of prenatal care were associated with lower CS rates, improving prenatal care adequacy could be a strategy worth considering in ongoing initiatives to reduce CS deliveries.
Study design, setting, and participants
This was a cross-sectional hospital-based study conducted at the PUC Hospital-Campinas in Campinas, São Paulo, Brazil, a tertiary care facility and teaching hospital that serves both public healthcare and privately insured patients. The study population included all records of women who gave birth at this hospital from January 2014 to December 2017.
Data source and variables
Data was extracted from routine hospital digital records, and supplementary information was obtained from DATASUS/SINASC [18] (a national, publicly available health information system) and, where necessary, patient medical notes. Records of 837 births were excluded due to missing data, and 518 were excluded due to inconsistencies between delivery date, delivery time and/or birth weight that could not be resolved after further investigation of the hospital medical records. Figure 1 exhibits a flowchart with details on how the 8,910-record database was compiled.
The variables considered in this study were: method of delivery, number of prenatal visits and gestational age at first prenatal visit (used for assigning an APNCU category), payment source for each birth (either public healthcare or private insurers), year of delivery, maternal sociodemographic information (age, level of schooling, marital status, and ethnicity/skin color), and Robson group (which considers parity, previous CS, number of fetuses, fetal presentation, gestational age at labor, and onset of labor).
Data processing and analysis
The CS rate for each variable was calculated, and Chi-square tests were performed to verify the association between these variables and type of delivery. Women were then categorized into one of the ten Robson groups, and standard Robson Classification Report tables were prepared. Level of access to prenatal care was computed according to the APNCU index, based on Brazilian Ministry of Health categories: 1 - Inadequate, for pregnant women who began antenatal care after the first trimester of pregnancy and/or attended fewer than three consultations; 2 - Intermediate, for pregnant women who started antenatal care during the first trimester and had three to five consultations; 3 - Adequate, for pregnant women who commenced antenatal care during the first trimester and had six consultations; or 4 - Adequate Plus, for pregnant women who began antenatal care during the first trimester and had at least seven consultations [19].
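A minimal sketch of this categorization, exactly as described above, follows; the function signature is hypothetical.

```python
# Minimal sketch of the APNCU categorization as described above (Brazilian
# Ministry of Health criteria): trimester of the first visit plus visit count.
def apncu_category(first_visit_trimester, n_visits):
    if first_visit_trimester > 1 or n_visits < 3:
        return "Inadequate"      # late start and/or fewer than three consultations
    if n_visits <= 5:
        return "Intermediate"    # first-trimester start, three to five consultations
    if n_visits == 6:
        return "Adequate"        # first-trimester start, six consultations
    return "Adequate Plus"       # first-trimester start, seven or more consultations

print(apncu_category(first_visit_trimester=1, n_visits=8))  # Adequate Plus
print(apncu_category(first_visit_trimester=2, n_visits=9))  # Inadequate (late start)
```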
To investigate the association between CS rates and level of access to prenatal care by relevant Robson group and health system (payment source), adjusted CS rates and their 95% confidence intervals were calculated through a logistic model. Robson Groups 1 to 5 and 10 were deemed the relevant ones for this analysis. Groups 1 and 2 encompass nulliparous women with single cephalic pregnancy at term. Groups 3 and 4 comprise multiparous women with single cephalic pregnancy at term and no previous Cesarean delivery. Group 5 consists of all multiparous women with single cephalic pregnancy at term and at least one previous uterine scar (CS). Group 10 is composed of all women with single cephalic pregnancy before term, including those with previous CS. Robson Groups 6, 7, 8, and 9 were not considered in this analysis because a CS delivery is usually the obstetric recommendation in such cases.
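The paper's adjusted rates were computed in SAS; as an illustrative equivalent, the sketch below computes adjusted CS rates by marginal standardization (g-computation) over a fitted logistic model. Column names are hypothetical, and confidence intervals (obtainable by bootstrapping) are only indicated.

```python
# Minimal sketch of adjusted CS rates via a logistic model and marginal
# standardization. Column names are hypothetical; the paper's exact SAS
# estimation procedure is not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_cs_rates(df):
    """df columns (hypothetical): cs (0/1), apncu, age, schooling, marital, ethnicity."""
    model = smf.logit(
        "cs ~ C(apncu) + age + C(schooling) + C(marital) + C(ethnicity)", data=df
    ).fit(disp=False)
    rates = {}
    for level in df["apncu"].unique():
        counterfactual = df.copy()
        counterfactual["apncu"] = level          # set everyone to this APNCU level
        rates[level] = model.predict(counterfactual).mean()
    return rates  # 95% CIs could be obtained by bootstrapping this function

# Usage (assuming 'births' is a DataFrame with one row per delivery):
# print(adjusted_cs_rates(births))
```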
Analyses were performed using the statistical software SAS on Demand for Academics (SAS Studio version 3.8). The level of significance (α) adopted was 0.05.
Results
Over the four-year period, a total of 8,910 births were evaluated: 2,251 in 2014, 2,177 in 2015, 2,117 in 2016, and 2,365 in 2017. The mean (standard deviation) maternal age at delivery for the whole sample was 25.9 (6.6) years, with a minimum age of 11 years and a maximum of 51 years. There were 7,359 (82.6%) deliveries within the public health system (SUS), and 1,551 (17.4%) were paid for by private health insurance.
The overall CS rate was 48.8%, with large differences between SUS and private health insurance (42.0% and 80.7%, respectively). Higher CS rates were observed among older (≥35 years) women (61.6%), with a clear increase in this rate as age increased; among women with a higher level of schooling (71.7%); among those who were separated, divorced, or widowed (63.3%); among those classified as white (52.5%); and within private healthcare (80.7%). No statistically significant differences in CS rates were found between the different years of delivery, for ethnicity/skin color within SUS, or for access to prenatal care within private health insurance (Table 2). Table 3 shows data for the Robson groups; because of the disparities in CS rates between private and public healthcare, the data are split by healthcare system. Due to missing data, there were 43 deliveries within private healthcare and 12 within public healthcare whose Robson groups could not be determined.
Within private healthcare, women in Group 5 formed the largest group, accounting for 25.1% of all deliveries. This was closely followed by Group 2 (nulliparous women with single cephalic pregnancy at term who either had an induction of labor or a CS before the onset of labor) at 23.6%. Group 1 (nulliparous women with single cephalic pregnancy at term in spontaneous labor) was the third largest group, accounting for 15.2% of deliveries. The largest relative contributors to the overall CS rate were the same groups in the same order: Group 5 (29.6%), Group 2 (26.4%), and Group 1 (12.9%). These three groups accounted for nearly 70% of cesarean deliveries within private care. Women in Groups 2 and 4 were further divided into those who had either (a) induced labor or (b) pre-labor cesarean deliveries. For privately insured parturients, in Group 2, 74 (20.3%) had induced deliveries (2a) and 291 (79.7%) had pre-labor CS (2b). Group 4 had 44 (33.8%) inductions (4a) and 86 (66.2%) pre-labor CS (4b). Group 10 (single cephalic preterm deliveries) accounted for 10.4% of all deliveries and contributed 9.5% of cesarean deliveries (Table 3).
For the public healthcare system, women in Group 5 were also the largest group, accounting for 22.5% of deliveries, followed by Group 3 (multiparous women with single cephalic pregnancy at term in spontaneous labor without previous CS) and Group 2, which accounted for 18.3% and 17.3%, respectively. In terms of relative contribution to the overall CS rate, Group 5 was the largest, accounting for 36.0%. This was followed by Group 2, which accounted for 20.3%. Next were Group 4 (multiparous women with single cephalic pregnancy at term without previous CS who either had an induction of labor or CS before the onset of labor) (9.0%) and Group 1 (7.6%). These four groups accounted for over 75% of all CS in public healthcare. As for the size of the Robson subgroups in public healthcare, of the 1,271 women in Group 2, 289 (22.7%) had pre-labor CS (2b) and the remaining 982 (77.3%) were induced (2a). The 826 women in Group 4 subdivided into 688 (83.3%) who had induction (4a) and 138 (16.7%) who had pre-labor cesarean sections (4b).
Group 10 accounted for 13.1% of all deliveries and 13.5% of C-sections (Table 3).
Robson Groups 6 to 9 accounted for 8.3% of women in private care and 4.4% in public healthcare. Their relative contributions to the overall CS rate were 10.1% and 9.4%, respectively (Table 3). These groups were not included in the following analysis due to non-cephalic presentation or multiple-fetus pregnancy and, consequently, a high (or full) probability of CS indication. Figure 2 shows the Robson groups according to the APNCU index in the public and private health systems. In general, 67.8% and 76.3% of women attended by the public and private health systems, respectively, received antenatal care classified as adequate plus. This percentage is much lower for those in Group 10 (<37 gestational weeks), at 44.9% and 63.8% for the public and private health systems, respectively. Notwithstanding, 27.7% of women attended in the public health system received no or inadequate care. Table 4 shows the estimated CS rates of selected Robson Groups (1, 2, 3, 4, 5, and 10) by level of access to prenatal care (APNCU), adjusted for maternal age, schooling, marital status, and ethnicity/skin color.
Discussion
Our data did not show any statistically significant association between the adequacy of prenatal care (according to the APNCU index) and the rate of CS for any of the selected Robson Groups (1, 2, 3, 4, 5, and 10) in either public or private healthcare. Notably, while the CS rates vary little across APNCU categories, they are markedly different across some Robson Groups and between the types of health system. This suggests that the level of access to prenatal care as measured by the APNCU index is not a relevant factor behind the CS rates observed in our data. The APNCU index evaluates the adequacy of prenatal care through the number of prenatal visits and the gestational age at the first prenatal visit. For assessing a potential relationship between prenatal care and CS rates, this index may not be appropriate for capturing the potentially relevant information. For instance, the APNCU category does not indicate if and how methods of delivery were discussed during prenatal visits.
Moreover, visits are often brief, with insufficient time for elaborating on such topics, especially in the public health system [20].
A recent systematic review evaluated the measurement properties of 12 prenatal care indices [21]. According to this review, both the APNCU index and the Kessner index are supported by moderate evidence regarding their reliability and their predictive and concurrent validity. These two indices were the most utilized among the studies reviewed and presented the strongest evidence regarding their measurement properties. Nevertheless, Rowe et al. reported that there is insufficient research to inform the choice of a single best index [21].
A Brazilian study investigated associations between CS rates and different variables in the state of Rio de Janeiro from 2015-2016. Their results differ from ours in that they reported an association between CS rates and the level of prenatal care based on the APNCU index: as the category of prenatal care improved, the CS rate increased [22]. Beyond recruiting from a different population, there are also a few methodological differences between the two studies. Crucially, we explored the association between CS rates and APNCU categories by splitting the Robson groups and retaining only the ones deemed relevant to this analysis, calculated adjusted CS rates, and analyzed data according to healthcare system. Another Brazilian study also investigated the role of prenatal care as a factor in CS deliveries [20]. Because they used neither the Robson classification nor the APNCU index, comparisons between their work and ours are less direct. Nevertheless, it is worth noting that Fabbro et al. reported that six or more prenatal visits increased the probability of a CS delivery by 47%. In analyzing all such results, we consider that the number of prenatal visits and the gestational age at which they begin might not contribute to decreasing the number of CS deliveries. Thus, the APNCU index, on its own, does not seem to be a parameter worth targeting as part of a strategy to increase vaginal deliveries.
Recent CS rates reported in Brazil, from studies encompassing cities, states, and the whole country, vary between 43.5% and 60.3% [2,5,10,20,22-24]. Our study exhibits an overall CS rate (48.8%) significantly higher than global averages but within this range. The latest available data (2010-2018) from 154 countries, covering 94.5% of world live births, show that 21.1% of women gave birth by CS worldwide, with rates ranging from 5% in sub-Saharan Africa to 42.8% in Latin America and the Caribbean [14].
Analysis of the Robson Classification groups revealed increased cesarean section rates in Group 2 (nulliparous women with induction of labor or pre-labor CS) and Group 5 (multiparous women with a previous cesarean section). It is therefore important to prepare pregnant women for induction of labor to reduce the possibility of cesarean section in this group. Avoiding the first cesarean section would, in turn, reduce the number of multiparous women with previous cesarean sections, creating long-term benefits.
Examination of our Robson Report Table also reveals two general points. First, the rate of CS is much higher in private healthcare. Second, CS rates within public healthcare, despite being lower than in private care, were also elevated across all Robson Groups in comparison with global rates.
Regarding the first point, the very high CS rates in private care agree with previous reports of Brazilian healthcare data [1,5,23]. For Robson Groups 1 and 2 (which encompass most nulliparas), CS rates within private care are more than two times higher than those observed in public healthcare. The relative difference in CS rates between private and public care was even higher within Groups 3 and 4 (most multiparas with no previous CS), reaching almost threefold. A significant portion of such private-public differences stems from scheduled CS deliveries, that is, deliveries arranged in advance of labor. In fact, the proportion of women in Group 2b relative to Groups 1 and 2 was 48.6% in private care versus 12.9% within public healthcare. For Group 4b as a proportion of Groups 3 and 4, the ratio was 31.9% among privately insured parturients, but only 6.4% in public healthcare. Conversely, the proportion of induced vaginal deliveries (Groups 2a and 4a) in public healthcare is much higher than in private care, suggesting that inductions are part of typical conduct at the hospital. In fact, there is a salient difference between deliveries in private and public healthcare at our hospital that may help explain the higher prevalence of scheduled CS in private care. Privately insured parturients usually have their obstetric team of choice (typically the same professionals who followed them throughout prenatal care) and are, to a certain extent, subject to its schedule. In public healthcare, parturients do not usually select the professionals involved; instead, their deliveries are conducted by the on-call team.
The second general point worth highlighting about our findings is that, even within public healthcare alone, CS rates for all groups are higher than global rates and WHO expected values. The rates for most Robson groups (1, 2, 3, 4), even in public healthcare, were approximately two times greater than their respective WHO expected values. CS rates for Groups 5 and 10 were also above expected values, but with minor differences (50-60% vs ~62%, and ~30% vs ~40%, respectively). There are probably several intertwined factors contributing to this scenario. Obstetric teams may be privileging unnecessary CS deliveries to save time. Perhaps induction of labor is not being adequately offered to all pregnant women following Robson criteria. Very importantly, there are the choices that the parturients themselves are making. A recent study conducted in Brazil outlines additional elements that contribute to high rates of cesarean sections in the country. These include the absence of a collaborative approach among healthcare professionals in childbirth care, limited availability of pharmacological pain relief (especially evident in public healthcare), unclear guidelines regarding the necessity of early delivery in cases of suspected fetal health issues, and insufficient financial support for obstetric care [25].
A national survey of 24,000 Brazilian pregnant women evaluated how preferences changed during pregnancy. It showed that while only 27.6% of women in private care stated cesarean as their initial preference (at the beginning of pregnancy), 87.5% ended up delivering via CS. Fear of labor pain was the most cited reason for preferring a CS delivery, especially among nulliparous women [26]. Other studies have further explored this topic with similar findings [27-29].
Policies regarding CS deliveries in Brazil have been moving in the direction of giving women more power to decide their mode of delivery [30]. In conjunction with pro-choice rules, campaigns on the risks and benefits of each mode of delivery and additional counseling, such as Rede Cegonha ("Stork Network") and Parto Adequado ("Adequate Delivery"), have been implemented across both public and private care in recent years. Such initiatives seem fundamental in helping women make well-informed decisions about their deliveries. However, it is not yet possible to prove that they will, in fact, modify CS rates from here on. A Cochrane review of 29 studies evaluated the effectiveness of non-clinical interventions intended to reduce unnecessary CS. The evidence so far, although limited, indicates that prenatal-based programs have made little or no difference in CS rates [31]. There is therefore an opportunity for future work to revisit this situation and investigate whether these initiatives may affect CS rates.
We should mention that the cross-sectional design is a limitation of our study, as it makes it impossible to verify causal associations. As additional limitations, we note, first, that all deliveries considered here took place in the same hospital, PUC-Campinas. Although the data are from a single hospital, all deliveries over a four-year period that contained the information of interest (98.5%) were analyzed, allowing characterization of a census for a tertiary hospital that is a reference for high-risk pregnancy. Second, the use of data recorded during routine hospital work, although not specifically collected for the study, enabled data verification with minimal use of resources. Third, prenatal visits took place in multiple clinics and healthcare facilities across the Campinas metropolitan area; therefore, the analyses considering adequacy of prenatal care (APNCU levels) have limited comparability across public and private healthcare. Finally, no information regarding maternal or fetal risk was considered (e.g., hypertensive disorders, eclampsia, preexisting diabetes, gestational diabetes, severe chronic diseases, infection at hospital admission for birth, placental abruption, placenta previa, intrauterine growth restriction, and major newborn malformation). However, the Robson classification uses epidemiological criteria to structure the ten groups, which take these prevalences into account.
Conclusion
Our study investigated whether there was an association between CS rates and the adequacy level of prenatal care (via the APNCU index), and no statistically significant association was found. This lack of association underscores the importance of adequate and qualified perinatal care, not just adequate access to prenatal care as measured by the APNCU index.
Another finding of our study was the significant difference in private versus public CS rates across the Robson groups, although for both healthcare systems the highest CS rate was in Group 5 (multiparous women with a previous cesarean section), pointing to the importance of efforts to avoid the first CS. Therefore, reducing unnecessary CS deliveries remains an elusive challenge. | 2023-06-20T13:57:22.748Z | 2023-06-20T00:00:00.000 | {
"year": 2023,
"sha1": "a0b782c2bbdfcf5df9591aeaddf1e272cba5693a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "a0b782c2bbdfcf5df9591aeaddf1e272cba5693a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221477704 | pes2o/s2orc | v3-fos-license | When do adolescent mothers return to school? Timing across rural and urban South Africa
In South Africa (SA), ~125 000 girls aged 10-19 years experience a pregnancy every year. [1] Rates of adolescent pregnancy are much higher in populations affected by extreme adversities, which limit their educational attainment and specifically their graduation from high school. [2-5] Previous studies found that only 30-50% of girls return to school after birth or a pregnancy-related dropout. [6,7] Evidence shows that the longer a mother stays out of school after giving birth, the less likely she is to return. [7] While SA's 2007 national policy ('Measures for the prevention and management of learner pregnancy') stipulates that learners are allowed to stay in school while pregnant, the policy also recommends that mothers should not return to school in the same year during which the pregnancy occurred. The policy specifies that young parents may take leave for up to 2 years to exercise full parenting responsibility, and requires the girls to provide a medical report that declares the learner fit for school upon their return. [8] In 2009, the Human Sciences Research Council published a report on behalf of the Department of Education (DoE), which criticised these stipulations and called for revisions. These included suggestions for flexible school policies that enable early re-entry of young mothers into the schooling system. [9] Despite these early suggestions, no official changes to the national policy have been made.
A small number of empirical studies have provided valuable insights into the rates of postpartum schooling of adolescent mothers. Specifically, two longitudinal studies showed that between 35% and 50% of girls who had a child before completing high school were enrolled in school during the year after birth. [6,10] Examining how schooling further unfolds over the years after birth, one study indicated that there is an ongoing decrease in enrolment rates after the first year post partum, showing that an increasing number of girls drop out of school as time passes. [6] By contrast, other studies suggested that the percentage of girls who are enrolled in school slopes upward after the first year post partum. This provides some evidence that a few adolescent mothers who had dropped out do return to school at some point after giving birth. [10,11]
The purpose of our article is to present novel data from two independent projects on the timing of adolescent mothers' school return across two SA provinces. The findings are presented with reference to the recommended timings outlined in SA's current national policy, and their relevance for the development of future policies are discussed.
Informed consent was sought from adolescents who were >18 years old, while caregivers provided consent for underage participants. Adolescent mothers completed two complementary self-report
interviews, which asked a range of questions regarding their health, family, relationships, violence experiences and schooling. Each interview was undertaken by interviewers trained in working with vulnerable youth and lasted ~60 minutes.
There was no fixed time point post partum at which participants completed the interview (i.e. participants' children were aged between 3 months and 9 years). This meant that at the time of the interview, some mothers had had less time to return to school than others. All mothers were interviewed in private spaces in and around their own home, but they were given the option to conduct the interview in a local restaurant if the privacy in their home was compromised. Confidentiality was maintained throughout the study, except where participants requested help or were at risk of significant harm. In these cases, referrals were made to health or counselling services, with follow-up support. There were no monetary incentives, but all participants received a certificate, refreshments and a participant pack containing useful items, e.g. washcloth and soap.
Ethical approval
All study activities were approved by the institutional ethics boards at the University of Oxford (ref. nos R48876/RE001 and R48876/RE002) and the University of Cape Town (ref. no. 226/2017).
Study 2
Study 2 was a pilot intervention study, i.e. mentoring adolescent mothers at school (MAMAS), conducted in a peri-urban area of KwaZulu-Natal Province, SA. MAMAS was a non-equivalent comparison group study designed to support adolescent mothers in their return to school after childbirth. A total of 111 adolescent mothers (aged 14-19 years) from Umlazi township, who were part of the comparison group (i.e. they did not receive the MAMAS intervention), were recruited at a public maternity ward and at public health clinics between July 2017 and April 2018.
Participants completed a survey at ~6 months post partum, during which they answered questions on school experiences during pregnancy, returning to school after childbirth and timing of school return. Voluntary informed consent was obtained from adolescents and their caregivers in cases where adolescent mothers were underage. All interviews were completed on a tablet using audio-assisted computer interviewing. A trained research assistant was available to provide support to complete the surveys, as needed.
Ethical approval
All study activities were approved by the institutional ethics boards at Drexel University (ref. no. IRB:1612005048) and the University of KwaZulu-Natal (ref. no. BFC023/17).
Study 1
This study was completed by 1 003 participants. The mean age of study participants was 18.21 (standard deviation (SD) 1.80) years. Mothers in the sample were affected by several vulnerabilities (Table 1): 26.6% indicated not having enough food in the household in the past week and 92.7% came from families receiving at least one grant (mean 3.4; SD 2.1). The majority of mothers had only 1 child (n=916; 91.3%), whereas 87 mothers had ≥2 children (8.7%). The children were aged between 3 months and 9 years.
The majority of the 1 003 mothers in the sample were enrolled in school when they fell pregnant with their oldest child (n=902). After the birth of their oldest child, 64.7% (n=649) of mothers had returned to school, while 35.3% (n=354) had not returned to school at the time of the interview (Table 2). The median postpartum time to return was 1 (interquartile range (IQR) 0–2) month. Fig. 1(A) shows the return times of the 649 mothers who continued with school after childbirth. Most young mothers who had returned to school reported having returned <1 month after birth (n=301).
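For readers who wish to reproduce this style of summary, the short sketch below computes a median, IQR and share of early returners from a vector of months-to-return values. The values are invented for illustration only; the study microdata are not public.

```python
import numpy as np

# Hypothetical months-to-return values for mothers who re-enrolled after
# childbirth (illustrative only; not the study's actual data).
months_to_return = np.array([0, 0, 0, 1, 1, 1, 2, 2, 3, 5, 12])

median = np.median(months_to_return)
q1, q3 = np.percentile(months_to_return, [25, 75])
share_under_1_month = np.mean(months_to_return < 1)

print(f"Median (IQR): {median:.0f} ({q1:.0f}-{q3:.0f}) months")
print(f"Returned <1 month after birth: {share_under_1_month:.0%}")
```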
Study 2
This study was completed by 111 participants. Their mean age was 17 (SD 1.33) years, and the population was highly vulnerable (Table 1).
Specifically, over half experienced food insecurity in the past month (53.2%) and four-fifths were living in households that received at least one social grant (mean 1.45; SD 1.02). Furthermore, the majority were first-time mothers (n=104; 93.69%).
With regard to schooling, just under half of all participants had returned to school at the time of the interview (n=53), which took place at ~6 months after childbirth (Table 2). Four-fifths of the girls returned to the same school and most reported that someone at home assisted them with the re-enrolment process. Fig. 1(B) shows the return times of the 53 mothers who continued school after childbirth. The median time to return was 1.25 (IQR 0.72–4.11) months. Four girls reported returning to school within 10 days of giving birth.
Discussion
The studies showed that ~65% and ~50% of adolescent mothers from studies 1 and 2, respectively, had returned to school after giving birth. Overall, the presented rates are similar to those of previous studies, which reported that up to 50% of learners returned to school in the first year after giving birth, [6] but higher than results from other studies, where only 30% returned. [7] It is likely that the variability in the proportion of returned mothers across the 2 studies is partially due to the interviews being completed at different times post partum. Specifically, participants in study 2 completed interviews at ~6 months after birth and therefore had less time to return to school than participants in study 1. Therefore, the higher number of returners in study 1 may be due to a number of girls who returned to school during the first few years after giving birth. [10,11]

Our results also indicated a mismatch between recommended timing for readmissions and actual return among SA adolescent mothers. A large proportion of mothers across both studies returned to school within the first 2 months after birth. Interestingly, this was the case across the 2 studies despite their differences in terms of study location, follow-up timing and sample size. The observed patterns indicate that school returns may occur much earlier than advised in SA's national policy on pregnant learners, which does not recommend returning to school in the same year that the pregnancy occurred. In line with previous studies, [7] study 1 also showed that only a very small percentage of mothers returned 1 year after childbirth. This finding supports previous research commissioned by the DoE, which highlighted that the proposed 2-year leave period may hinder mothers' readmission to school. [9]

Developing school policies that flexibly promote mothers' rights to education while simultaneously addressing the child's needs is difficult. SA's national policy may attempt to balance both goals, but our results indicate that the policy does not lead to uniform patterns of school return that follow the outlined recommendations. The high proportion of early returners in the 2 studies raises questions about whether and how different schools are interpreting and implementing the policy in practice. It is possible that the policy is unknown or partly disregarded by some schools that perceive the recommended timings as ambiguous, overly restrictive or hard to implement. It is also possible that some schools follow the DoE-commissioned research and, as such, enable early re-entry for adolescent mothers. [9] However, a government policy draft incorporating this research has not yet been made official. [12]

Our findings point to the particular urgency for policies that are flexible to the needs of adolescents who decide to return to school very soon after childbirth. The high prevalence of early returners in the current sample reinforces previous calls for policy amendments that provide mothers with the flexibility to return to school earlier than recommended in the national policy. [9] Our results may indicate the level of demand for an updated policy that speaks to girls' realities. A refined policy that considers the timings of return identified in this study may contribute towards increased effectiveness and acceptability among adolescent mothers.
Recent research emphasised the importance of aligning educational policies for adolescent mothers with health policies that protect the needs and development of children. [13] SA's Department of Health (DoH) follows the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF) recommendations [14] and advises exclusive breastfeeding during the first 6 months, with gradual weaning, [15] irrespective of HIV status. However, recent qualitative research involving 57 SA adolescent mothers indicated low rates of exclusive breastfeeding, despite awareness of its benefits. Even though mothers reported various reasons for introducing mixed feeding early after birth, they indicated that schooling largely precludes exclusive breastfeeding. [13]

To ensure that policies affecting adolescent mothers are compatible with one another, the DoH and DoE should develop them in unison. For instance, it would be intuitive for school policies to recommend a timeframe for school return that is aligned with the 6 months of recommended exclusive breastfeeding specified in the health policies. Maximising the health and wellbeing of mother-child dyads and acknowledging the nurturing needs of their children require policies that not only address questions on school return timings but also outline ways in which the school context can actively support mothers after their return. Given the health benefits of breastfeeding, [16] policy documents should make recommendations for the way in which schools can facilitate breastfeeding for schoolgoing adolescents who wish to breastfeed. The implementation of similar policies [17] that rely on collaborations between the DoH and DoE has proven somewhat challenging. [18][19][20] Therefore, the success of an updated policy for adolescent mothers hinges on political will and intersectional efforts to formulate clear, realistic and sustainable implementation processes.

For the development of successful school policies, it is also necessary that future research crystallises the additional concrete needs and outcomes of schoolgoing mothers and their children. Building on past research that sheds light on how adolescent mothers navigate schooling, parental responsibilities and nurturing their child, [13,21] further studies should aim to elucidate the motivations and challenges experienced by early returners in particular. To develop social interventions aimed at reintegrating mothers into schools and at increasing breastfeeding uptake among schoolgoing mothers, it is important to identify the factors that contribute to mothers' decision-making in this context. Future research should aim to identify the different routes through which complementary policies from the DoH and DoE could address the factors associated with mixed feeding among adolescent mothers, [13] which may interfere with school efforts to support schoolgoing mothers with exclusive breastfeeding. Lastly, the current findings highlight the need to assess if and how school policies targeting adolescent parents are monitored. Identifying the best ways to monitor the implementation of re-entry policies that promote the right to education is important to achieve educational equity for pregnant teenagers and adolescent mothers.
Study limitations
The 2 studies have several limitations. The cross-sectional nature of both studies precludes insights into the long-term, complex patterns of schooling behaviours. Neither study sought information on the schools' level of implementation of the current learner policy. This means that it is unclear whether or not the decision to re-enter school was guided by the national policy or the recommendations that emerged from research by the Human Sciences Research Council. In addition, the 2 studies did not assess which breastfeeding practices were in place among the early-returning mothers. It is possible that mothers who returned to school very early decided against breastfeeding, were unaware that breastfeeding would provide benefits, were unable to breastfeed, were HIV-positive and advised not to breastfeed, or managed to breastfeed their child by means of various arrangements. Knowledge regarding the circumstances surrounding breastfeeding and other nurturing practices is needed to develop policies and programmes to support the realities of young mothers. Finally, our research was restricted to 2 provinces in SA. Further research with larger samples and in other provinces may increase the generalisability of the findings.
Conclusions
These are the first studies that report specific timings of return to school for SA adolescent mothers. Our findings highlight the complexity of developing school policies targeting adolescent mothers' return to school. School completion confers long-term benefits, [22][23][24] but returning prematurely could potentially impair child development, [16] and prolonged breaks can result in permanent dropout. To maximise the health and wellbeing of adolescent mothers and their children, future policies need to consider these consequences carefully and ensure that recommended timing reflects the best evidence-based practices. | 2020-09-03T09:04:47.411Z | 2020-08-31T00:00:00.000 | {
"year": 2020,
"sha1": "fafde18e8ae3df1314895713f613d054b8ad4395",
"oa_license": "CCBYNC",
"oa_url": "http://www.samj.org.za/index.php/samj/article/download/13069/9472",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d0030b84e983974677b6a9cf2c2c9174ce2cda96",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236573563 | pes2o/s2orc | v3-fos-license | Invited perspectives: Landslide populations – can they be predicted?
Landslides are different from other natural hazards. Unlike volcanoes, they do not threaten human civilization (Papale and Marzocchi, 2019). Unlike tsunamis, they do not affect simultaneously several thousands of kilometres of coastline – although a submarine landslide in Norway caused a tsunami to hit Scotland (Dawson et al., 1988). Unlike floods and earthquakes, they do not cause hundreds of thousands of casualties in a single event – although a landslide killed thousands in Peru (Evans et al., 2009) and debris flows tens of thousands in Colombia (Wieczorek et al., 2001). But the human toll of landslides is high (Froude and Petley, 2018), and their economic and societal consequences are largely undetermined. Compared to other hazards, landslides are subtle, often go unnoticed, and their consequences are underestimated. As with other hazards, the design and implementation of effective risk reduction strategies depend on the ability to predict (forecast, project, anticipate) landslides. I have argued that “our ability to predict landslides and their consequences measures our ability to understand the underlying [. . . ] processes that control or condition landslides, as well as their spatial and temporal occurrence” (Guzzetti, 2021). This assumes that landslide prediction is possible, something that has not been demonstrated (or disproved), theoretically. Yet, there is nothing in the literature that prevents landslide prediction, provided that one clarifies the meaning of “prediction” (Guzzetti, 2021), that the prediction is scientifically based (Guzzetti, 2015), and that we understand the limits of the prediction (Wolpert, 2001). Efforts are needed to determine the limits of landslide predictions, for all landslide types (Hungr et al., 2014) and at all geographic and temporal scales (Fig. 1). Here, I outline what I consider to be the main problems that need to be addressed in order to advance our ability to predict landslide hazards and risk. The field is vast, and I limit my perspective to populations of landslides – that is, the hazards and risk posed by many landslides caused by one triggering event or by multiple events in a short period. In this context, predicting landslide hazard means anticipating where, when, how frequently, how many, and how large populations of landslides are expected (Guzzetti et al., 2005; Lombardo et al., 2020; Guzzetti, 2021). Predicting landslide risk is about anticipating the consequences of landslide populations to different vulnerable elements (Alexander, 2005; Glade et al., 2005; Galli and Guzzetti, 2007; Salvati et al., 2018). Landslides tend to occur where they have previously occurred (Temme et al., 2020). Therefore, one way to assess where they are expected is to map past and new landslides. The technology is mature for regional and even global landslide detection and mapping services based on the automatic or semi-automatic processing of aerial and satellite imagery: optical, SAR and lidar data (Guzzetti et al., 2012; Mondini et al., 2021). An alternative – and complementary – way is through susceptibility modelling, an approach for which there is no shortage of data-driven methods but rather of suitable environmental and landslide data (Reichenbach et al., 2018). 
The increasing availability of satellite imagery, some of which is repeated over time and free of charge (Aschbacher, 2017), opens up unprecedented opportunities to prepare event and multi-temporal inventory maps covering very large areas, which are essential to build space–time prediction models (Lombardo et al., 2020), to investigate the legacy of old landslides on new ones (Samia et al., 2017; Temme et al., 2020), to obtain accurate thematic data for susceptibility modelling, and to validate geographical landslide early warning systems (Guzzetti et al., 2020). However, the literature reveals a systematic lack of standards for constructing, validating, and ranking the quality of landslide maps and prediction models (Mondini et al., 2021). This reduces the credibility of the maps and models – a gap that urgently needs to be bridged.
Predicting when or how frequently landslides will occur can be done for short and for long periods. For short periods – from hours to weeks – the prediction is obtained through process-based models, rainfall thresholds, or their combination. Process-based models rely upon the understanding of the physical laws controlling the slope instability conditions of a landscape forced by a transient trigger, e.g. a rainfall, snow melt, seismic, or volcanic event (Bogaard and Greco, 2016, 2018). The major limitation of physically based models is the scarcity of relevant data, which are hard to obtain for very large areas. New approaches to obtain relevant, spatially distributed data are needed, as well as novel models able to extrapolate what is learned in sample areas to vast territories (Bellugi et al., 2011; Alvioli and Baum, 2016; Alvioli et al., 2018; Mirus et al., 2020).
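At the core of many process-based models is a slope stability computation of this kind. The sketch below implements the classical infinite-slope factor of safety with a transient pressure head, in the spirit of codes such as TRIGRS; all parameter values are illustrative assumptions, not calibrated data.

```python
import numpy as np

def infinite_slope_fs(slope_deg, z, c, phi_deg, psi, gamma_s=20.0e3, gamma_w=9.81e3):
    """Factor of safety for an infinite slope with transient pore pressure.

    slope_deg : slope angle (degrees)
    z         : depth of the potential failure surface (m)
    c         : effective soil cohesion (Pa)
    phi_deg   : effective friction angle (degrees)
    psi       : pressure head at depth z (m); rises during rainfall infiltration
    gamma_s   : soil unit weight (N/m^3)
    gamma_w   : water unit weight (N/m^3)
    """
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    return (np.tan(phi) / np.tan(beta)
            + (c - psi * gamma_w * np.tan(phi)) / (gamma_s * z * np.sin(beta) * np.cos(beta)))

# Dry vs. wet comparison: a rising pressure head pushes the slope towards failure (FS < 1).
for psi in (0.0, 0.5, 1.0):
    print(f"psi = {psi:.1f} m -> FS = {infinite_slope_fs(35.0, 2.0, 4.0e3, 30.0, psi):.2f}")
```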
Thresholds are empirical or statistical models that link physical quantities (e.g. cumulative rainfall, rainfall duration) to the occurrence – or lack of occurrence – of known landslides. Reviews of the literature (Guzzetti et al., 2008; Segoni et al., 2018) have highlighted conceptual problems with the definition and use of rainfall thresholds for operational landslide forecasting and early warning systems (Guzzetti et al., 2020), including the lack of standards for defining the thresholds and their associated uncertainty, and for the validation of the threshold models (Piciullo et al., 2017; Guzzetti et al., 2020). The community needs shared criteria and algorithms coded into open-source software for the objective definition of rainfall events, of the rainfall conditions that can result in landslides, of rainfall thresholds (Melillo et al., 2015), and for the validation of the threshold models (Piciullo et al., 2017). This will not only provide reliable and comparable thresholds, allowing for regional and global studies (Guzzetti et al., 2008; Segoni et al., 2018), but also increase the credibility of early warning systems based on rainfall threshold models.
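A minimal frequentist threshold fit, of the kind implemented in the open-source tools called for above, can be sketched in a few lines; the rainfall duration–amount pairs below are synthetic stand-ins for a real catalogue of triggering conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic rainfall conditions (duration D in hours, cumulated rainfall E in mm)
# associated with known landslides.
D = 10 ** rng.uniform(0.3, 2.5, 200)
E = 8.0 * D ** 0.55 * rng.lognormal(0.0, 0.35, 200)

# Fit E = alpha * D^beta in log-log space, then lower the intercept to the 5th
# percentile of the residuals, as in frequentist threshold definitions (T5).
logD, logE = np.log10(D), np.log10(E)
beta, intercept = np.polyfit(logD, logE, 1)
residuals = logE - (beta * logD + intercept)
intercept5 = intercept + np.percentile(residuals, 5)

print(f"T5 threshold: E = {10 ** intercept5:.1f} * D^{beta:.2f} (mm, h)")
```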
The projection of landslide frequency for long periods – decades to millennia – is much more difficult and uncertain, as it depends on climatic and environmental characteristics that are poorly known and difficult to measure and model (Crozier, 2010; Gariano and Guzzetti, 2016), as well as on the inherent incompleteness of the historical landslide records (Rossi et al., 2010). The literature on the analysis of historical landslide records remains scarce, but the number of studies projecting the future occurrence of landslides is increasing (Peres and Cancelliere, 2018; Schlögl and Matulla, 2018; Patton et al., 2019; Schlögel et al., 2020; Gariano and Guzzetti, 2021). In this field, studies will be relevant if they compare analyses and validation methods in different areas. This requires the exchange of data and information.
Predicting how many and how large landslides are expected means anticipating the size (e.g. area, volume, length, width, depth) and number of landslides in an area – with size and number correlated in a population of landslides. This information is obtained by constructing and modelling probability distributions of landslide sizes derived typically from landslide event inventory maps (Stark and Hovius, 2001; Malamud et al., 2004). The literature on the topic is limited, with differences in the way the distributions are modelled. This hampers comparisons between different areas. Although models have been proposed to explain the probability size distributions (Katz and Aharonov, 2006; Stark and Guzzetti, 2009; Klar et al., 2011; Bellugi et al., 2021), further efforts are needed to explain the observed distributions of landslide sizes and to evaluate their variability and uncertainty.
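As a concrete example of the modelling step, the sketch below estimates the power-law tail exponent of a landslide area distribution by maximum likelihood (the Hill estimator). The areas are synthetic, and the cutoff above which the power law is assumed to hold is a choice of the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic landslide areas (m^2) with a heavy, power-law tail, mimicking the
# size statistics of event inventories.
areas = (rng.pareto(1.4, 5000) + 1.0) * 1.0e3

a_min = 5.0e3                        # assumed cutoff above which the power law holds
tail = areas[areas >= a_min]

# Maximum-likelihood (Hill) estimate of the tail exponent alpha, where the
# probability density behaves as p(A) ~ A^-(alpha + 1) for A >= a_min.
alpha_hat = tail.size / np.sum(np.log(tail / a_min))
se = alpha_hat / np.sqrt(tail.size)

print(f"n_tail = {tail.size}, alpha = {alpha_hat:.2f} +/- {se:.2f}")
```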
By combining probabilistic information on where, when or how frequently, and how many or how large landslides are, one can evaluate landslide hazards for different landslide types. However, the existing models are crude: they work under assumptions that are difficult to prove, and the possibility of exporting them to different areas is limited or untested. Novel efforts are needed to prepare reliable landslide hazard models (Lombardo et al., 2020). Assessing landslide hazard is important, but for social applications what is needed is the estimation of the landslide consequences, which means assessing the vulnerability to landslides of various elements at risk (Alexander, 2005; Galli and Guzzetti, 2007) and evaluating landslide risk (Cruden and Fell, 1997; Glade et al., 2005; Porter and Morgenstern, 2013), including risk to the population (Petley, 2012; Froude and Petley, 2018; Salvati et al., 2018; Rossi et al., 2019). Here, the main limitation is the difficulty in obtaining data on landslide vulnerability and reliable records of landslide events and their consequences (Petley, 2012; Froude and Petley, 2018; Salvati et al., 2018). Where the information is available, comprehensive landslide risk models can be constructed and validated (Rossi et al., 2019). It is important that efforts are made to collect reliable records of landslides and their consequences and that the records are shared to test different risk models.
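The combination of the probabilistic components can be made explicit in a few lines. Under the common independence assumption – itself one of the assumptions that are difficult to prove – the joint hazard for a mapping unit is the product of the spatial, temporal and size probabilities; the numbers below are purely illustrative.

```python
import math

# Hypothetical hazard for one mapping unit, combining the three components
# (assuming, as is common but hard to verify, that they are independent).
p_spatial = 0.65                      # susceptibility: P(the unit can fail)
t, mu = 25.0, 50.0                    # period of interest and mean recurrence (years)
p_temporal = 1.0 - math.exp(-t / mu)  # Poisson probability of >= 1 event in t years
p_size = 0.20                         # P(landslide larger than a reference size)

hazard = p_spatial * p_temporal * p_size
print(f"Joint landslide hazard over {t:.0f} years: {hazard:.3f}")
```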
Of the various factors governing landslide hazard, the most uncertain and the one requiring the most urgent efforts is the time prediction (when, how frequently), followed by the prediction of the size and number of expected failures. For both, multi-temporal inventories and landslide catalogues are essential to build innovative predictive models. To construct the records, systematic efforts are needed for landslide detection and mapping (Mondini et al., 2021). For susceptibility (where), the challenge is to prepare reliable regional, continental, or global assessments (Stanley and Kirschbaum, 2017; Broeckx et al., 2018; Wilde et al., 2018; Mirus et al., 2020). Critical are also novel modelling frameworks combining the hazard factors (Lombardo et al., 2020). But the goal is to reduce risk. For that, vulnerability studies (Galli and Guzzetti, 2007), improved early warning capabilities (Guzzetti et al., 2020), quantification of the benefits of prevention, and better risk communication strategies are crucial. Much work is needed on these largely unexplored subjects.
Ultimately, I note that in medicine – a field of science conceptually close to the field of landslide hazard assessment and risk mitigation – the paradigm of "convergence research" is emerging (Sharp and Hockfield, 2017), where "convergence comes as a result of the sharing of methods and ideas ... It is the integration of insights and approaches from historically distinct scientific and technological disciplines" (Sharp et al., 2016). The community of landslide scientists should embrace the paradigm of "convergence research", exploiting the vast number of data, measurements, and observations that are available and will be collected, expanding the making and use of predictions, assessing the economic and social costs of landslides, designing sustainable mitigation and adaptation strategies, and addressing the ethical issues posed by natural hazards, including landslides (Bohle, 2019). I am convinced that this will contribute to advancing knowledge and building a safer society.
Data availability. No data sets were used in this article.
Competing interests. The author declares that there is no conflict of interest.
Special issue statement. This article is part of the special issue "Perspectives on challenges and step changes for addressing natural hazards". It is not associated with a conference. | 2021-05-08T00:04:46.007Z | 2021-02-08T00:00:00.000 | {
"year": 2021,
"sha1": "5302eb0c7158a20ff81fa46f5781f984f1303543",
"oa_license": "CCBY",
"oa_url": "https://nhess.copernicus.org/articles/21/1467/2021/nhess-21-1467-2021.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1555dd7d233331bfb99ca7b398e8fcbf4a6f89e5",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Geology"
]
} |
49332330 | pes2o/s2orc | v3-fos-license | A Performing Arts Intervention Improves Cognitive Dysfunction in 50 Hospitalized Older Adults
Abstract Background and Objectives Arts in medicine programs have emerged as a patient-centered approach that aims to improve health-related quality of life for patients in U.S. hospitals. Storytelling and poetry/monologue recitation are forms of arts-based experiences designed to enhance healing and are delivered by an artist-in-residence. We evaluated the effect of a storytelling/poetry experience on delirium screening scores and patient satisfaction in hospitalized older adults. Research Design and Methods:
We conducted an observational pre–post study with a control group in the Acute Care for the Elders (ACE) unit at an academic medical center. A convenience sample of 50 participants was recruited to participate and complete two questionnaires measuring pain, anxiety, general well-being, and distress at hospital admission and at hospital discharge. Multivariable regression models were used to compare delirium screening score (primary outcome) between the control and intervention groups and to adjust for the differences in baseline characteristics between groups. Results At baseline participants in the intervention group were younger and had significantly lower cognitive impairment compared with those in the control group. Participants exposed to the storytelling/poetry intervention had a lower delirium screening score at hospital discharge compared with those in the control group. The result remained significant after adjusting for age, baseline cognitive impairment, and general well-being. Participants in the intervention group reported a high level of satisfaction with the interaction with the artist delivering the intervention. Discussion and Implications An artist in residence-delivered storytelling/poetry experience was associated with a lower delirium score at discharge in this pilot study. Further larger studies in diverse inpatient settings are needed to examine whether storytelling/poetry interventions or other types of arts in medicine programs can prevent or reduce delirium in hospitalized older adults.
Background and Objectives
Although advances have been made in the management of delirium (Ely et al., 2004), pain (Puntillo et al., 2001), anxiety, distress, and depression (Wilkinson et al., 2007) in hospitalized older patients, addressing these clinical issues in older adults remains a challenge (Ahlers et al., 2008; Misra & Ganzini, 2003; Puntillo et al., 2001; Rincon et al., 2001; Rotondi et al., 2002; Stolic & Mitchell, 2010). Arts in medicine programs have emerged as an adjunct form of support for patients that promotes a healing environment, facilitates the physical, mental, and emotional recovery of patients, and aims to improve patients' quality of life through the management of pain, stress, anxiety, and depression (Lane & Graham-Pole, 1994; Rollins, Sonke, Cohen, Boles, & Li, 2009). According to a 2007 Joint Commission survey, nearly half of the 1,807 respondent healthcare institutions reported having arts programming (Rollins et al., 2009). Common types of bedside art programming include music, visual arts, drama, dance, literature, creative writing, and storytelling (Lane & Graham-Pole, 1994; Rollins et al., 2009). For example, music therapy has been recognized as a simple and inexpensive adjunct to pharmacologic treatment regimens in managing postoperative pain and anxiety (Allred, Byers, & Sole, 2010; Bonny, 1983) and can inhibit stress by reducing anxiety and pain (Almerud & Petersson, 2003; Chlan, Engeland, Anthony, & Guttormson, 2007; Nilsson, 2008; Twiss, Seaver, & McCaffrey, 2006) in intensive care units.
Arts-based interactions represent creative approaches to healing that can be categorized as either active (involving patient participation) or receptive (patient listens and observes) sessions that provide creative experiences and positive distractions (Warth et al., 2014). Storytelling is a form of arts experience designed to be delivered in both active and receptive sessions by artists in residence. Poetry/monologue recitations have also been included in performing arts programs. The artist in residence can tailor the type of session according to the individual patient's physical and mental state. Despite anecdotal evidence of patients benefiting from storytelling/poetry sessions, few studies have examined whether exposure to this form of art programming influences patient outcomes. For example, in a study involving children with leukemia, a hypnotic trance through use of a child's favorite story was found to be significantly more effective than a behavioral distraction and standard medical practice in alleviating distress, pain, and anxiety during bone marrow aspirations (Kuttner, 1988).
Delirium is common in hospitalized older adults (Ehlenbach et al., 2010) and is associated with poor outcomes including prolonged hospitalization, decreased cognitive and physical functioning, increased placement in long-term health care facilities and increased mortality (Campbell et al., 2009). Cognitive impairment is a major risk factor for delirium, and cognitively stimulating activities have been found efficacious in preventing delirium in hospitalized older patients (Inouye et al., 1999). In hospitalized patients, storytelling and/or poem/monologue recitations could reduce stress levels, increase pain tolerance, improve mood, and hasten recovery times (Rollins et al., 2009). Based on the need for an improved understanding of the effectiveness of performing arts programming in health care settings, the goals of our study were to evaluate the feasibility of a storytelling/poetry intervention among hospitalized older adults and to provide preliminary data on its effect on delirium and patient satisfaction in an inpatient setting. We hypothesized that exposure to our storytelling/poetry intervention would be associated with fewer cases and fewer symptoms of delirium.
Study Design
This pilot study was conducted in the Acute Care for Elders (ACE) unit at the University of Alabama at Birmingham (UAB) between June and August 2016 and was approved by the local institutional review board. We used a pre-post design in which an intervention phase (storytelling intervention in addition to usual inpatient care) was followed by a control phase (usual inpatient care). The intervention phase was conducted between June and mid-July followed by the control phase between mid-July and August. During the study period, no other type of arts experience (e.g., music therapy, dance, textile therapy) was offered in the unit where the participants were hospitalized and recruited for this study. In addition, for the duration of the study, visits by hospital volunteers and pet therapy were not offered in the hospital unit where the study was performed.
Study Participants
Potential study participants were patients admitted to the ACE unit. Patients aged 65 years or older were recruited to be part of the intervention or control group using the same process that had been previously established in the ACE unit for utilization of storytelling by artists in residence as an adjunct modality for usual clinical care. Specifically, the ACE interdisciplinary clinical care team, comprising hospitalists, nurses, geriatricians/geriatric nurse practitioners, a chaplain, a social worker, a clinical pharmacist, and therapists, considered whether the patients might benefit from interacting with artists in residence during their daily interdisciplinary rounds. As per the predefined exclusion criteria, the interdisciplinary ACE team excluded patients admitted to the ACE unit who had severe agitation or delirium (i.e., anyone who required restraints, needed medications for their behaviors, or were so inattentive that they could not participate in the intervention or respond to questionnaires), those who refused to participate, or whose families declined participation. Each participant was given an information sheet describing the study and was asked to give verbal agreement before receiving the study surveys and intervention. The unit clinical nurse coordinator obtained informed consent for participation verbally. Participation in the study was voluntary.
Storytelling/Poetry Intervention
Two artists in residence, who were part of UAB Hospital's Institute for Arts in Medicine (AIM), delivered the bedside storytelling/poetry intervention. The AIM program, initiated in 2013, is a partnership between UAB Medicine and the UAB-affiliated performing arts center and aims to transform the health care environment and enhance healing and wellness for patients, visitors, and staff through creative arts experiences. Both artists in residence have more than 15 years of acting experience and have been trained to facilitate arts experiences in the health care environment. The artists visited the patients at the bedside for 15 minutes once during the hospital stay. At the beginning of their interaction with the patient, the artists in residence introduced themselves and asked if the patient would like to hear a story or poem. If the patient responded positively, the artist in residence asked about the patient's preference regarding the type of story or poem (e.g., religious, humorous, folktale, legend, myth, fairy tale) they would like to hear. Upon completion of the story/poem, the artist in residence asked the patient for feedback. The session was designed to be interactive, with the patient having the opportunity to reflect on the story or poem and share stories from his or her own life. An example of the intervention can be found at http://www.uab.edu/news/arts/item/6304-creative-approaches-to-healing-at-uab-s-institute-for-arts-in-medicine-inspire-patients-and-clinicians.
Baseline Data Collection
Demographic characteristics (age, sex, race/ethnicity), comorbidities, and insurance coverage were collected from the electronic health record for all participating patients. A baseline paper-based questionnaire (Figure 1) evaluated pain, anxiety, general well-being, and distress and was conducted on average 1–3 days after admission and before the participants in the intervention group were exposed to storytelling/poem recitation. Pain (Gallagher, Bijur, Latimer, & Silver, 2002), anxiety (Facco et al., 2011), and general well-being (Warth, Keßler, Hillecke, & Bardenheuer, 2015) were assessed using visual analog scales (range 0–10, lower values are better). The level of distress/anxiety was assessed using the one-item subjective units of disturbance scale (SUDS), scored from 0 (no distress/totally relaxed) to 10 (highest distress/fear/anxiety/discomfort ever; Kim, Hwallip, & Yong, 2008; Wolpe, 1969). Nursing personnel, who were unaware of the patient's participation in the study, assessed patients' level of cognitive impairment and whether delirium was present, and recorded these data in the patient's EHR during the first day of hospitalization and on the day of discharge. Cognitive impairment was measured as part of routine clinical care using the Six-Item Screener (SIS; Callahan, Unverzagt, Hui, Perkins, & Hendrie, 2002). The participants were asked to recall three random words and to state the year, the month, and the day. The number of errors is added together for a score ranging from 0 (no cognitive impairment) to 6 (severe cognitive deficit; Callahan et al., 2002). The presence of delirium was assessed by clinical nurses according to the usual clinical protocol at our institution once per hospital shift using the Nurses Delirium Screening Scale (Nu-DESC, range 0–10; score ≥2 indicates delirium; Gaudreau, Gagnon, Harel, Tremblay, & Roy, 2005), which includes five symptom domains (each scored from 0 to 2): disorientation, inappropriate behavior, inappropriate communication, illusions/hallucinations, and psychomotor retardation. Each domain was scored either 0 (no signs of the item present), 1 (mild to moderate, barely expressed), or 2 (moderate to severe). The scores from the five domains were added together, and a total score ≥2 represents a positive screen for delirium. The first assessment of delirium occurred at admission to the ACE unit. Psychometric properties of the SUDS, SIS, and Nu-DESC have been previously published (Callahan et al., 2002; Kim et al., 2008; van Velthuijsen et al., 2016). Of note, a validation study including hospitalized patients found that, while the Nu-DESC is a specific delirium detection tool, it has lower sensitivity at the usually proposed cut-off point of ≥2 (Hargrave et al., 2017).
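The Nu-DESC arithmetic described above is simple enough to state exactly. The sketch below totals the five domain ratings and applies the ≥2 screening cut-off; it is an illustration of the published scoring rule, not software used in the study.

```python
def nu_desc_total(disorientation, behaviour, communication, hallucinations, retardation):
    """Total Nu-DESC score from the five symptom domains, each rated 0-2."""
    domains = (disorientation, behaviour, communication, hallucinations, retardation)
    if any(d not in (0, 1, 2) for d in domains):
        raise ValueError("each domain must be rated 0, 1 or 2")
    return sum(domains)

score = nu_desc_total(1, 0, 1, 0, 0)
print(score, "-> positive screen" if score >= 2 else "-> negative screen")
```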
Outcomes and Follow-Up
We collected study outcomes for both the intervention and control phases at hospital discharge using paper-based questionnaires (Figure 1). The primary outcome of this study was the delirium score as measured by the Nu-DESC scale (Gaudreau et al., 2005; Neufeld et al., 2013). Secondary outcomes were patient satisfaction with the physician, satisfaction with the nonclinical team and satisfaction with the artist in residence administering bedside story or poetry. Patient satisfaction outcomes were assessed using a 5-point Likert scale scored from 1 (strongly disagree) to 5 (strongly agree).
Similar to the baseline questionnaire, the follow-up questionnaires also included the same items evaluating the patient's level of pain, anxiety, general well-being, and distress.
Statistical Analysis
We used descriptive statistics to compare participant characteristics between the intervention and control groups. Means and standard deviations (SDs) were calculated for continuous variables, and frequencies and proportions were calculated for categorical variables. Differences in sociodemographic and clinical characteristics between the control and intervention groups were examined using t-tests, chi-square tests, or Fisher's exact tests, as appropriate.
In preliminary analyses, logistic regression models were used to compare the proportion of participants screening positive for delirium at discharge between the control and the intervention groups. The small sample size limited power to detect differences between the intervention and control groups in the proportion of participants that met Nu-DESC criteria ≥2. Thus, for this pilot study, we decided to evaluate the Nu-DESC score as a continuous variable to see if we could identify an effect of the intervention that might warrant further study after controlling for important confounders. Multivariable regression models were used to compare delirium screening score (primary outcome) between the control and intervention groups and to adjust for the differences in baseline characteristics between groups. We used generalized linear models to evaluate the association between the exposure to the storytelling intervention and discharge Nu-DESC score or change in Nu-DESC score, respectively. In multivariable regression models, we included as covariates those baseline variables which were found at p <.10 to be associated with both the intervention and the primary outcome. We assessed for the presence of multicollinearity between cognitive impairment and Nu-DESC score at discharge. Paired t-tests were used to compare the pre-post measures of patient satisfaction. A p <.05 was the criterion for statistical significance. No adjustments for multiple comparisons were performed. All analyses were conducted in SAS (v9.3, Enterprise Guide v4.3, Cary, NC).
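To make the adjustment strategy concrete, the sketch below fits a Gaussian generalized linear model of the discharge Nu-DESC score on group membership, age, baseline cognition and well-being, mirroring the description above. The data frame is synthetic; the variable names and effect sizes are assumptions of the example, not the study's data, and the original analyses were run in SAS rather than Python.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50

# Synthetic data shaped like the study sample (hypothetical values only).
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),       # 1 = storytelling/poetry group
    "age": rng.normal(81, 9, n),
    "sis": rng.integers(0, 7, n),                # Six-Item Screener errors (0-6)
    "wellbeing": rng.integers(0, 11, n),         # visual analog scale (0-10)
})
df["nudesc_discharge"] = (0.3 * df["sis"] - 0.6 * df["intervention"]
                          + rng.normal(0, 0.8, n)).clip(lower=0)

# Gaussian GLM of discharge Nu-DESC score on group, adjusted for confounders.
model = smf.glm("nudesc_discharge ~ intervention + age + sis + wellbeing", data=df).fit()
print(model.summary())
```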
Results
A total of 50 patients, mostly women (64%) with a mean (SD) age of 81.2 (9.5) participated in the study. Compared with the control group, participants in the intervention group were slightly younger, 77 (8.7) years versus 85.4 (8.6) and had less cognitive impairment: SIS score of 1.05 (1.9) versus 2.6 (2.2). There was no significant difference between groups (intervention vs control group) in terms of sex, race, anxiety, pain, general well-being, distress, Charlson comorbidity index, and Nu-DESC score at baseline (Table 1). A total of seven (28%) participants in the control group and four (16%) in the intervention group had a Nu-DESC score ≥2 at hospital admission. At hospital discharge, five (20%) participants in the control group and one (4%) participant in the intervention group had a Nu-DESC score ≥2 and thus met criteria for delirium according to the Nu-DESC assessment (p = .18). On the day of discharge, the delirium screening score was significantly lower (less cognitive dysfunction) in the intervention group compared with the control group in univariable analysis (Table 2). However, there were no differences in the length of stay and measurements of anxiety, pain, general wellbeing, and distress at hospital discharge between the intervention and control group (Table 3). After adjustment for level of cognitive impairment, age, general wellbeing, and admission delirium score, exposure to the intervention remained independently associated with a significantly lower discharge delirium screening score (beta = 0.7 [0.17, 1.24], p = .01; Table 4, Model A). Since the duration of hospital stay was longer among those in the intervention group compared with the control group, we further adjusted for length of stay. However, the storytelling/poetry intervention remained independently associated with significantly lower delirium screening score at discharge (beta = 0.7 [0.15, 1.24], p = .01; Table 4, Model B). Similarly, after adjustment for level of cognition, age, and well-being, there was a borderline significant association between exposure to the intervention and decrease in delirium score between hospital admission and discharge (beta = 0.8 [−0.01, 1.6], p = .05).
Patients in the intervention group reported being satisfied with the artist encounter 4.9 (0.37) ( Table 2). Compared with the participants assigned to the control group, there was no significant difference in the patient satisfaction with their physician or nonphysician team (Table 2).
Discussion and Implications
To our knowledge, this is the first study to evaluate the association between a bedside storytelling intervention delivered by artists in residence and changes in measures of cognitive dysfunction in hospitalized older adults. We found that exposure to a storytelling/poetry intervention was associated with improvement in Nu-DESC scores, after controlling for potential confounders including age, baseline cognitive impairment, level of distress, and general well-being. In addition, patient satisfaction with the bedside storytelling/poetry intervention was high.
Many hospitalized patients, especially older adults, are at risk of developing delirium, a risk that is increased by the presence of cognitive, functional, visual and hearing impairment, depression, and other comorbidities. Delirium is precipitated by hospitalization-related factors (e.g., medications, procedures, unfamiliar environment) and is associated with increased morbidity and mortality, longer hospital stays and substantial additional health care costs (Inouye, Westendorp, Saczynski, Kimchi, & Cleinman, 2014). There is a lack of strong evidence for pharmacologic therapies to prevent delirium; thus, nonpharmacologic modalities have the strongest evidence of benefit (Inouye et al., 1999). Such nonpharmacologic interventions have included music therapy, exercise, light, and sensory therapy as well as complementary alternative medicine modalities that have been evaluated with variable success (Inouye et al., 1999, 2014; Levy, Attias, Ben-Arye, Bloch, & Schiff, 2017).
Because stories can be used to discuss personal experiences and/or can provide a fantasy escape for the listeners (Rollins et al., 2009), arts programming using storytelling is increasingly encountered in the health care setting (Hanna, Rollins, & Lewis, 2017). However, the evidence supporting the benefits of storytelling/poetry on improving clinical outcomes is sparse, a gap that our study aimed to fill. Storytelling interventions exposing personal experiences with disease management have been shown to decrease blood pressure (Houston et al., 2011) and improve self-efficacy in adults with diabetes and hypertension (Bertera, 2014; Bokhour et al., 2016), while recounting a favorite story has been employed to help children deal with pain (Heiney, 1995; Kuttner, 1988). Storytelling interventions like the one we employed in the present study can provide hospitalized patients with cognitive stimulation and positive distractions from the monotony and stress associated with the hospital stay. Listening to a story provides an emotional experience which may uplift patients' mood, relieve stress, promote wellness, and assist in the healing process (Buchanan, 2015). However, while storytelling interventions may reduce anxiety and improve pain tolerance in some populations (Hanna et al., 2017; Rollins et al., 2009), we did not replicate these results in our study. This could be due to the different characteristics of the participants enrolled in our study, to the possibility that acutely sick elders respond differently to storytelling than other groups, or to the small sample size of our pilot study.
Our study findings should be interpreted in the light of some limitations. Because our study was a pilot study with a small sample size, we were able to adjust only for some potential confounders, and the precision of our estimates of the association between our intervention and improved delirium scores upon discharge is limited. Participant recruitment in two phases, first in the intervention phase and then in the control phase, as we implemented in this study, raises the concern for temporal selection bias. However, this approach was chosen to prevent contamination bias, where the participants in the control group were inadvertently exposed to the storytelling/poetry intervention, which could minimize the difference in outcomes between the two groups. In addition, this was an observational study, and we did not employ "attention control" procedures in the control group, and thus we were not able to adequately control for the nonspecific effects of the intervention such as the time spent with the patient. The follow-up period for the study was short, and the long-term effects of exposure to the storytelling intervention were not studied. In addition, the timing of the intervention in regard to the day of discharge may have influenced the effect of our intervention on the delirium screening score at hospital discharge. Because we did not collect data on the timing of the daily Nu-DESC assessment in relation to the intervention, we could not evaluate whether our storytelling/poetry intervention influenced the daily Nu-DESC scores. Given the pilot nature of this study and that we did not record information on the participation rate, the generalizability of our findings is limited and larger studies are needed to confirm our results among hospitalized older individuals (Thabane et al., 2010). In summary, in this pilot study, we evaluated the use and feasibility of an artist in residence-delivered storytelling/poetry program in older adults admitted to an Acute Care for Elders unit. We found that the patients participating in the study had positive views about the interaction with the artist. Although our artist in residence-delivered storytelling/poetry experience was associated with a lower delirium screening score at hospital discharge, further larger studies in diverse care settings are needed to examine whether storytelling interventions or other types of arts-based experiences in health care can prevent delirium in older adults. In addition, future research should focus on evaluating whether it is the art experience itself or the patient-artist interaction about the art form that influences health outcomes. | 2018-06-23T01:47:48.374Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "1d94b3ad0c366ec5ba9f10c6345ff3d34aa800bc",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/innovateage/article-pdf/2/2/igy013/25911318/igy013.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d94b3ad0c366ec5ba9f10c6345ff3d34aa800bc",
"s2fieldsofstudy": [
"Medicine",
"Art"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256390418 | pes2o/s2orc | v3-fos-license | Constraining barrow entropy-based cosmology with power-law inflation
We study the inflationary era of the Universe in a modified cosmological scenario based on the gravity-thermodynamics conjecture with Barrow entropy instead of the usual Bekenstein–Hawking one. The former arises from the effort to account for quantum gravitational effects on the horizon surface of black holes and, in a broader sense, of the Universe. First, we extract modified Friedmann equations from the first law of thermodynamics applied to the apparent horizon of a Friedmann–Robertson–Walker Universe. Assuming a power-law behavior for the scalar inflaton field, we then investigate how the inflationary dynamics is affected in the Barrow cosmological setup. We find that the inflationary era may phenomenologically consist of the slow-roll phase, while Barrow entropy is incompatible with kinetic inflation. By demanding observational consistency of the scalar spectral index and tensor-to-scalar ratio with recent Planck data, we finally constrain the Barrow exponent to $\Delta \lesssim \mathcal{O}(10^{-4})$, which is the most stringent bound in the literature so far.
I. INTRODUCTION
The effort to understand the statistical mechanics of black holes [1] has opened up new scenarios in modern theoretical physics, including the study of the AdS/CFT correspondence [2,3] and the investigation of the connection between gravity and thermodynamics. Beyond their intrinsic interest, both of these lines of research might potentially have a deep impact upon the development of quantum gravity, mainly because they are the most successful realizations of the holographic principle [4,5]. While the AdS/CFT correspondence is based on the description of the background geometry in terms of anti-de Sitter vacuum solutions, the interplay between gravity and thermodynamics finds its conceptualization in the so-called gravity-thermodynamics conjecture [6][7][8], which states that Einstein field equations are nothing but the gravitational counterpart of the laws of thermodynamics applied to spacetime [9]. Besides, in the cosmological context such a conjecture allows one to extract Friedmann equations by implementing the first law of thermodynamics on the apparent horizon of the Universe [10][11][12][13].
In the original formulation the gravity-thermodynamics conjecture applies the Bekenstein-Hawking (BH) area law $S_{BH} = A/A_0$ to the Universe apparent horizon of surface area $A = 4\pi r_{hor}^2$ and radius $r_{hor}$. Nevertheless, generalized forms of BH entropy have been discussed in recent literature, motivated by either nonextensive [14,15] or quantum gravity [16] arguments. To the latter class belongs Barrow entropy, which deforms the BH area law to

$$S_\Delta = \left(\frac{A}{A_0}\right)^{1+\Delta/2}, \qquad\qquad (1)$$

where the Barrow exponent $\Delta$ embeds quantum gravitational corrections. In particular, $\Delta = 1$ corresponds to the maximal departure from BH entropy, which is instead recovered for $\Delta = 0$. Though proposed for black holes [16], Eq. (1) is also applied within the cosmological framework, giving rise to modified Friedmann equations that predict a richer phenomenology compared to the standard one [17]. In addition, one can rephrase the holographic principle in terms of Barrow entropy, obtaining Barrow holographic dark energy (BHDE) (see, for instance, [18][19][20][21][22][23][24][25] for recent applications). Comparison of the above constructions with observations sets upper limits on the Barrow exponent [26][27][28][29][30], which slightly deviates from zero, as expected.
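To get a quantitative feel for the deformation in Eq. (1), the following sketch compares Barrow and BH entropies over a range of horizon areas; the area grid and the sample values of Δ are choices of the example.

```python
import numpy as np

def barrow_entropy(A, A0=1.0, delta=0.0):
    """Barrow entropy S = (A/A0)^(1 + delta/2); delta = 0 recovers S_BH = A/A0."""
    return (A / A0) ** (1.0 + delta / 2.0)

A = np.logspace(0, 10, 5)                  # horizon areas in units of A0
for delta in (0.0, 1.0e-4, 1.0):
    ratio = barrow_entropy(A, delta=delta) / barrow_entropy(A)
    print(f"delta = {delta:g}: S/S_BH =", np.round(ratio, 4))
```

Even for the largest areas shown, a Δ of order $10^{-4}$ changes the entropy only at the per-mille level, which anticipates why only very small exponents survive the observational constraints discussed below.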
In physical cosmology, inflation is supposed to be a crucial era in the evolution of the Universe, consisting of a very short-lived, but extremely accelerated expansion phase that occurred right after the Big Bang. Originally proposed in [31][32][33][34], it has been getting increasing attention over the years, becoming one of the two pillars of the present cosmological model along with the late time acceleration [35][36][37]. In spite of this, the origin of inflation has not been well understood yet. The most commonly adopted scenario is that it has been driven by a particular form of dark energy represented by a scalar field with slow rolling assumptions [38]. Alternative models have been recently proposed in [39][40][41][42][43][44][45][46]. The inflationary phase has also been studied in connection with holographic dark energy [47][48][49], motivated by the plausible role of the latter as a mechanism responsible for the late time cosmic acceleration.
Starting from the above premises, in this work we study the evolution and inflation of the Universe in the context of Barrow entropy-based Cosmology. In this sense, our analysis should be regarded as a preliminary attempt to explore the effects of quantum gravity on the dynamics of the Universe. In particular, we apply Barrow formula (1) to the entropy associated with the apparent horizon of a (n+1)-dimensional homogeneous and isotropic (Friedmann Robertson Walker-like) Universe, assuming that the matter inside the horizon is represented by a scalar field with a potential. In this setting, modified Friedmann equations are derived from the first law of thermodynamics and compared with the result of [50] for the specific case of n = 3. Furthermore, we investigate the early inflationary dynamics of Barrow cosmology with the power-law potential function. Contrary to the nonextensive (Tsallis-like) scenario [51], where it has been shown that inflation may consist of both slow-roll and kinetic phases, here we find that only the first stage is allowed, the kinetic energy era being incompatible with the admissible values of the Barrow exponent Δ. After computing the characteristic inflation parameters, we infer an upper bound on Δ in compliance with recent observational constraints on the scalar spectral index and the tensor-to-scalar ratio. We finally comment on the consistency of our results with other approaches in the literature aimed at exploring inflation driven by BHDE.
The remainder of the work is structured as follows: in the next Section, we derive modified Friedmann equations from Barrow entropy. Sec. III is devoted to the study of the inflationary era in BHDE, while conclusions and outlook are summarized in Sec. IV.
II. MODIFIED FRIEDMANN EQUATIONS IN BARROW COSMOLOGY
Let us consider a homogeneous and isotropic Friedmann-Robertson-Walker (FRW) Universe of spatial curvature k. We first set notation by following [21] and focusing on (3 + 1)-dimensions. To be as general as possible, the derivation of the modified Friedmann equations in Barrow Cosmology is then performed for the (n + 1)-dimensional case, with n ≥ 3.
For a (3 + 1)-dimensional FRW Universe, the line element can be written as

$$ds^2 = h_{\mu\nu}\, dx^\mu dx^\nu + \tilde r^2\, d\Omega_2^2, \qquad\qquad (2)$$

where we have denoted the metric of the two-dimensional unit sphere by $d\Omega_2^2$ and set $x^\mu = (t, r)$, $\tilde r = a(t)\, r$ and $h_{\mu\nu} = \mathrm{diag}\left(-1,\, a^2/(1-kr^2)\right)$; here $a(t)$ is the (time-dependent) scale factor and r the comoving radius. Following [52], the dynamical apparent horizon is obtained from the geometric condition

$$h^{\mu\nu}\, \partial_\mu \tilde r\, \partial_\nu \tilde r = 0. \qquad\qquad (3)$$

For the FRW Universe (2), explicit calculations yield

$$\tilde r_A = \frac{1}{\sqrt{H^2 + k/a^2}}, \qquad\qquad (4)$$

where $H = \dot a(t)/a(t)$ is the Hubble parameter and the overhead dot indicates a derivative with respect to the cosmic time t.
The apparent horizon has an associated temperature $T = \kappa/2\pi$, where $\kappa = -\frac{1}{\tilde r_A}\left(1 - \frac{\dot{\tilde r}_A}{2 H \tilde r_A}\right)$ represents the surface gravity. Clearly, for $\dot{\tilde r}_A \le 2 H \tilde r_A$ we have T ≤ 0. To avoid meaningless negative temperatures, one can define $T = |\kappa|/2\pi$. Furthermore, it is possible to assume that $\dot{\tilde r}_A \ll 2 H \tilde r_A$ in an infinitesimal time interval dt, which amounts to keeping the apparent horizon radius fixed. This implies the approximation $T \simeq 1/(2\pi \tilde r_A)$ [11].
We now suppose that the matter content of the Universe is represented by a scalar field φ characterized by a perfect fluid form. The corresponding Lagrangian is given by $\mathcal{L}_\phi = X - V(\phi)$, where $X = -\frac{1}{2} h^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi$ and $V(\phi)$ are the kinetic and (spatially homogeneous) potential terms, respectively. In turn, the stress-energy tensor takes the perfect fluid form $T_{\mu\nu} = \left(\rho_\phi + p_\phi\right)u_\mu u_\nu + p_\phi\, g_{\mu\nu}$, where $u_\mu$ is the four-velocity of the fluid, while $\rho_\phi = \frac{1}{2}\dot\phi^2 + V(\phi)$ and $p_\phi = \frac{1}{2}\dot\phi^2 - V(\phi)$ (7) represent its energy density and pressure, respectively [46]. In turn, the conservation equation $\nabla^\mu T_{\mu\nu} = 0$ gives the continuity equation $\dot\rho_\phi + 3H\left(\rho_\phi + p_\phi\right) = 0$ (8). Combining Eqs. (7) and (8), we obtain the dynamics equation of the canonical scalar field as $\ddot\phi + 3H\dot\phi + V'(\phi) = 0$ (9), where the term containing the Hubble parameter serves as a kind of friction term resulting from the expansion.
A. Modified Friedmann equations in (n + 1) dimensions

The above ingredients provide the basics to derive the modified Friedmann equations in Barrow entropy-based cosmology. Here, we extract such equations from the first law of thermodynamics, $dE = T\,dS + W\,dV$ (10), applied to the apparent horizon of the FRW Universe in (n + 1) dimensions, where $W = \left(\rho_\phi - p_\phi\right)/2$ is the work density associated with the Universe expansion and $S = \gamma\left(A/A_0\right)^{1+\Delta/2}$ (12) is the generalized Barrow entropy. We have denoted the n-dimensional horizon surface by $A = n\,\Omega_n\,\tilde r_A^{\,n-1}$, where $\Omega_n = \pi^{n/2}/\Gamma(n/2+1)$ is the angular part of the n-dimensional sphere and Γ is Euler's function. The dimensionless constant γ is such that γ → 1 for n = 3, so that Eq. (1) is restored in this limit; its explicit expression shall be fixed later. In passing, we mention that an alternative derivation of the modified Friedmann equations can be built upon Padmanabhan's paradigm of emergent gravity [53], which states that the spatial expansion of our Universe can be understood as the consequence of the emergence of space with the progress of cosmic time. Now, we take into account that the total energy of the Universe inside the n-dimensional volume $V = \Omega_n\,\tilde r_A^{\,n}$ is $E = \rho_\phi V$ (13). This relation can be further manipulated by resorting to the generalized continuity equation $\dot\rho_\phi + n H\left(\rho_\phi + p_\phi\right) = 0$ (14) to give Eq. (15). On the other hand, by differentiating the entropy (12) we get Eq. (16). By plugging Eqs. (13)-(16) into (10), we arrive at Eq. (17); with the further use of the continuity equation (14), this becomes Eq. (18). Integrating both sides, we are led to Eq. (19), where the integration constant has been fixed by imposing the boundary condition $8\pi\rho_\phi = \Lambda \simeq 0$. Finally, with the help of the definition (4), we obtain the first modified Friedmann equation (20), written in terms of two auxiliary quantities, among them σ, defined in Eqs. (21)-(22), and of the effective gravitational constant $G_{\rm eff}$ introduced in [21]. Some comments are in order here. First, we notice that for n = 3 we have γ → 1, consistently with the discussion below Eq. (12). The same is true for σ, so that Eq. (20) for n = 3 becomes $\left(H^2 + k/a^2\right)^{1-\Delta/2} = \frac{8\pi G_{\rm eff}}{3}\,\rho_\phi$ (24). This is nothing but the first modified Friedmann equation derived in [21] when $\rho_\phi \equiv \rho$ (normal matter). Furthermore, the limit ∆ → 0 correctly reproduces the standard Friedmann equation $H^2 + k/a^2 = \frac{8\pi G}{3}\,\rho_\phi$. As a final remark, it must be emphasized that, due to the positive definiteness of the energy density, Eqs. (20) and (21) imply an upper bound, which is obviously satisfied for any allowed value of n. Now, from the time derivative of Eq. (24), one can easily obtain the second modified Friedmann equation. By use of the continuity equation (14) and by replacing $\rho_\phi$ through the first Friedmann equation (20), we find after some simplification the second modified Friedmann equation (30) in Barrow cosmology. Again, one can check that n = 3 gives back the result of [21], while the further limit ∆ → 0 reproduces the standard second Friedmann equation, $\dot H - k/a^2 = -4\pi G\left(\rho_\phi + p_\phi\right)$, where we have used the relation $\ddot a/a = \dot H + H^2$.
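To make the n = 3 relation above concrete, the following minimal numerical sketch in Python (the values of $G_{\rm eff}$ and $\rho_\phi$ are illustrative assumptions, not taken from the paper) inverts the modified Friedmann equation for the Hubble rate in a spatially flat Universe and checks that ∆ → 0 recovers the standard result.

import numpy as np

def hubble_barrow(rho, delta, G_eff):
    # Flat (k = 0), n = 3 modified Friedmann equation:
    # (H^2)^(1 - delta/2) = (8 pi G_eff / 3) * rho
    rhs = 8.0 * np.pi * G_eff * rho / 3.0
    return np.sqrt(rhs ** (1.0 / (1.0 - delta / 2.0)))

rho, G_eff = 1.0, 1.0                      # illustrative units (assumption)
for delta in (0.0, 1e-4, 0.5):
    print(delta, hubble_barrow(rho, delta, G_eff))
# delta = 0 reproduces the standard H = sqrt(8 pi G rho / 3) ~ 2.894 here.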
III. INFLATION IN BARROW COSMOLOGY
Let us now move on to the study of the inflationary era of the Universe. Within the scalar theory framework considered above, the characteristic quantities to compute are the inflation slow-roll parameters, defined by $\epsilon = -\dot H/H^2$ (33) and $\eta = -\ddot\phi/(H\dot\phi)$ (34). Slow-roll conditions assert that both of these parameters take very small values during inflation, i.e., ε, η ≪ 1. In the slow-roll theoretical framework, only the requirement ε ≪ 1 is actually needed to ensure the existence of an early inflationary era. Then, by imposing $\dot\phi^2 \ll V(\phi)$ and $|\ddot\phi| \ll 3H|\dot\phi|$ on the equation of motion of the theory, the first Friedmann equation (20) under the slow-roll assumptions becomes $H^{2-\Delta} \simeq \frac{8\pi G_{\rm eff}}{3}\,V(\phi)$ (35) (for negligible spatial curvature), where we have focused on the case n = 3 and we have resorted to Eq. (7a). On the other hand, from the second Friedmann equation (30) we get the analogous slow-roll expression (36) for $\dot H$. Combining Eqs. (35) and (36), the slow-roll parameters (33) and (34) take the form of Eqs. (37) and (38). Let us now remark that the above parameters should be computed at horizon crossing, where the fluctuations of the inflaton field freeze [51].
The scalar spectral index of the primordial curvature perturbations and the tensor-to-scalar ratio are defined by $n_s - 1 \equiv d\ln\Delta_s^2/d\ln k$ (39) and $r \equiv \Delta_t^2/\Delta_s^2$ (40), respectively, where $\Delta_s^2$ ($\Delta_t^2$) is the power spectrum of scalar (tensor) perturbations; both quantities also need to be evaluated at horizon crossing. For later convenience, it is useful to introduce the e-folding number $N = \int_{t_i}^{t_f} H\,dt$ (41), where $t_i$ ($t_f$) represents the initial (final) time of the inflationary era. Consistently with the above discussion, we consider $t_i = t_c$ as the horizon-crossing time, so that Eq. (41) can be rewritten as $N = \int_{\phi_c}^{\phi_f} H\,\dot\phi^{-1}\,d\phi$, where we have used the notation $\phi_c \equiv \phi(t_c)$ and $\phi_f \equiv \phi(t_f)$.
A. Slow-roll inflation with power-law potential
We now examine inflation from the dynamical point of view. Toward this end, we assume a power-law behavior for the scalar potential, $V(\phi) = V_0\,\phi^m$, where m > 0 is the power term. The latest observational data prefer models with m ∼ O(1) or m ∼ O(10^{-1}), while m ≥ 2 is disfavored for a minimally coupled scalar field. Henceforth, we shall focus on such phenomenologically allowed values of m. We also remark that power-law inflation is a very useful model for assessing approximation schemes in the computation of scalar power spectra, since its spectrum is exactly solvable. In order to extract analytical solutions for the inflationary observable indices, we express $\dot\phi$ and $\ddot\phi$ in terms of the scalar field by using the slow-roll conditions. In this regard, let us observe that the evolution equation (9) can be rewritten as $3H\dot\phi \simeq -V'(\phi)$ (43). By plugging this into (35), we get $\dot\phi$ as a function of φ (44). We can now derive the expression of $\phi_f$ by noticing that inflation is supposed to end when $\epsilon(\phi_f) \sim 1$; the result follows by inverting Eq. (37). Similarly, insertion of Eqs. (35) and (44) into (41) allows us to infer the expression for the scalar field at horizon crossing, $\phi_c$. The scalar spectral index (39) and the tensor-to-scalar ratio (40) can then be cast in terms of the power term m and the e-folding number N.

FIG. 1: Plot of $n_s$ (left panel) and r (right panel) versus the power term m and Barrow parameter ∆ for N ∼ 30.
Remarkably, we see that the slow-roll indices depend only on the power term m and the Barrow parameter ∆. A similar result has been exhibited in the context of the Tsallis deformation of the entropy-area law [51].
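The full ∆-dependent expressions for $n_s$ and r are not reproduced in this extraction, but their standard ∆ → 0 limit for $V(\phi) \propto \phi^m$ is the textbook result $n_s = 1 - 2(m+2)/(4N+m)$ and $r = 16m/(4N+m)$, which the following minimal Python check evaluates (the choice N = 60 is an illustrative assumption):

def ns_r_standard(m, N):
    # Textbook slow-roll observables for V ~ phi^m in the Delta -> 0 limit.
    ns = 1.0 - 2.0 * (m + 2.0) / (4.0 * N + m)
    r = 16.0 * m / (4.0 * N + m)
    return ns, r

for m in (0.1, 1.0, 2.0):
    ns, r = ns_r_standard(m, 60)
    print(f"m = {m}: n_s = {ns:.4f}, r = {r:.4f}")
# Larger m yields larger r, disfavoring m >= 2 against Planck bounds,
# in line with the remark above on phenomenologically allowed values of m.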
B. Kinetic inflation with power-law potential
Above we have argued that slow-roll inflation terminates when ε ∼ 1. Two scenarios can then occur: either the scalar field oscillates about the minimum of the potential, leading the Universe into a decelerated expansion phase, or inflation goes on but with different features. Here, we shall examine whether the latter possibility is allowed within Barrow entropy-based cosmology. In particular, a crucial assumption of slow-roll inflation is that the kinetic energy of the scalar field can be neglected. However, if the volume of the Universe is large enough before the field starts to oscillate, then a kinetic term might arise and drive a transition from a vacuum state to quintessence. We assume the kinetic contribution in the form $\dot\phi^2 = m\,V(\phi)$.
The above expression can actually be deduced from the dynamics relation (9) and the modified Friedmann equations (24) and (30). These equations also allow us to express the slow-roll parameters in terms of the scalar field.

FIG. 2: Plot of r versus the power term m and Barrow parameter ∆.
Now, the end of the kinetic inflation is set by the condition $\eta(\phi_f) \simeq 1$ [51], which gives $\phi_f$ (54); the e-folding number (55) then follows from the definition (41). From Eqs. (54) and (55), we then obtain the tensor-to-scalar ratio (see also Fig. 2). Unlike the previous scenario, we now find that observational consistency is obtained only provided that ∆ assumes largely negative values. This occurs for both m ∼ O(1) and m ∼ O(10^{-1}), as can be easily seen from Eq. (58). However, such a condition is at odds with the range 0 ≤ ∆ ≤ 1 assumed in Eq. (1), implying that kinetic inflation cannot be explained within Barrow's framework. This is a remarkable difference with respect to the case of inflation based on Tsallis entropy [51], which allows for a kinetic phase too. Specifically, in that case the kinetic inflation is associated with a regime of decreasing horizon entropy and the ensuing clumping of fluctuations in particular regions of spacetime.
IV. DISCUSSION AND CONCLUSIONS
Inspired by the fractal structure of the Covid-19 virus, the modified entropy-area law (1) was proposed to take into account quantum gravitational effects on the black hole horizon surface [16]. Along the lines of the gravity-thermodynamic conjecture, this paradigm has been applied to the Universe horizon too, the ensuing framework being known as Barrow cosmology. Within this framework, we have studied the evolution of the FRW Universe, assuming the matter content to be represented by a homogeneous scalar field in the form of a perfect fluid. As a first step, by using the first law of thermodynamics applied to the horizon of the FRW Universe, we have derived modified (∆-dependent) Friedmann equations. The obtained result has been used to analyze the inflationary era. Toward this end, we have supposed a power-law behavior for the scalar inflaton potential. We have found that inflation in Barrow cosmology can consist of the slow-roll phase only, kinetic inflation being incompatible with the allowed values of the Barrow deformation parameter. We have finally constrained the Barrow exponent to $\Delta \lesssim 10^{-4}$ by demanding consistency of the scalar spectral index and tensor-to-scalar ratio with recent observational Planck data.
Other aspects deserve further analysis. Besides the background and inflationary evolution, it would be interesting to study the growth rate of matter density perturbations and structure formation. This is an important testing ground to discriminate among existing modified cosmological models. A preliminary investigation in this direction has been proposed in [55] in the context of both Tsallis and Barrow entropies, showing that the entropic deformation parameter significantly influences the growth of perturbations. Moreover, one can attempt to extend the present considerations to cosmology based on Kaniadakis entropy [56], which is a self-consistent relativistic generalization of Boltzmann-Gibbs entropy with non-trivial cosmological implications [57]. In this way, a relationship between the Barrow and Kaniadakis formalisms can be established. Finally, since our model is an effort to include quantum gravity corrections in the analysis of inflation, it is essential to examine the obtained results in connection with predictions from more fundamental theories of quantum gravity [58]. Work along these and other directions is under active consideration and will be presented elsewhere. | 2023-01-31T06:42:55.499Z | 2023-01-29T00:00:00.000 | {
"year": 2023,
"sha1": "8286d254aeaa7a28476a34c118d3064803a24153",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-023-11499-7.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "8286d254aeaa7a28476a34c118d3064803a24153",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249329814 | pes2o/s2orc | v3-fos-license | Palliative Care for Patients with End-Stage, Non-Oncologic Diseases—A Retrospective Study in Three Public Palliative Care Departments in Northern Italy
Patients with irreversible malignant and non-malignant diseases have comparable mortality rates, symptom burdens, and quality of life issues; however, non-cancer patients seldom receive palliative care (PC) or receive it late in their disease trajectory. To explore the characteristics of non-cancer patients receiving PC in northern Italy, as well as the features and outcomes of their care, we retrospectively analyzed the charts of all non-cancer patients initiating PC regimens during 2019 in three publicly funded PC departments in Italy's populous Lombardy region. We recorded the baseline variables (including data collected with the NECPAL CCOMS-ICO-derived questionnaire used since 2018 to evaluate all admissions to the region's PC network), as well as treatment features (setting and duration) and outcomes (including time and setting of death). Of the 2043 patients admitted in 2019, only 12% (243 patients—131 females; mean age 83.5 years) had non-oncological primary diagnoses (mainly dementia [n = 78], heart disease [n = 55], and lung disease [n = 30]). All 243 had Karnofsky performance statuses ≤ 40% (10–20% in 64%); most (82%) were malnourished, 92% had ≥2 comorbidities, and 61% reported 2–3 severe symptoms (pain, dyspnea, and fatigue). Fifteen withdrew or were discharged from the study PCN; the other 228 remained in the PCN and died in hospice (n = 133), at home (n = 94), or after family-requested transfer to an emergency department (n = 1). Most deaths (172/228, 75%) occurred <3 weeks after PC initiation. These findings indicate that the PCN we studied cares for few patients with life-limiting non-malignant diseases. Those admitted have advanced-stage illness, heavy symptom burdens, low performance statuses, and poor survival. Additional efforts are needed to improve PCN accessibility for non-cancer patients.
Introduction
Palliative care (PC), according to the World Health Organization (WHO), should be made available to all patients with special needs resulting from advanced, life-threatening diseases, including, but not limited to, cancer [1]. Each year, throughout the world, an estimated 40 million people (mainly those living in less developed countries) require PC, but fewer than 15% receive it [1]. WHO data from 2014 [2] revealed diagnoses of cancer in only 34% of the adults with documented PC needs. In the vast majority of cases, the primary diagnosis was non-oncological; in most cases, it was chronic cardiovascular disease (38.5%), chronic lung disease (10.3%), HIV/AIDS (5.7%), diabetes (4.5%), chronic kidney disease (CKD) (2%), liver cirrhosis (1.7%), and Alzheimer's disease or other dementias (1.6%). This last category of diseases is already growing in importance: dementias are now expected to be the non-cancer conditions that will have the greatest impact on patients' quality of life over the next 40 years [3].
In light of the above considerations, the European Association for Palliative Care published a white paper in 2009 with recommendations for implementing PC in Europe, not only for patients with malignancies but also those with advanced, chronic, non-oncological diseases [4]. The latter diseases are, in fact, associated with substantial symptom burdens, i.e., physical (including, but by no means limited to, pain), psychological, and spiritual [5], and patients suffering from these conditions have twice as many PC needs as those with terminal cancer [2,6].
However, substantial clinical and epidemiologic differences have been documented between patients receiving PC for non-malignant vs. malignant disease. Cancer patients tend to be younger, male, and to have better functional statuses, and they are generally admitted to PC programs earlier, whereas those with chronic non-oncological diseases (dementia, stroke, and heart failure) have poorer prognoses (<1 month) and low palliative performance statuses [7] (10-20% [8]).
Published evidence on the benefits of PC is still based largely on its use in patients with cancer [5]. However, recent data show that individuals with terminal non-cancer diseases (heart failure and other forms of organ failure, chronic obstructive pulmonary disease (COPD), and CKD) who receive PC during the last six months of their lives have lower frequencies of emergency department visits, hospitalizations, and admissions to intensive care units than their counterparts who do not receive PC [9].
In recent years, several assessment tools have been developed to facilitate the early identification of all patients with PC needs (the Gold Standard Framework, Prognostic Indicator Guidance, Supportive and Palliative Care Indicators tool, and NECPAL CCOMS-ICO tool) [10][11][12]; however, even with these supports, patients with non-oncologic conditions continue to be under-represented among those receiving PC [8]. A NECPAL CCOMS-ICO-derived tool is currently being used in the Lombardy region of Italy to identify patients with actual PC needs [13]. Lombardy is Italy's most populous region, with a total of 10,103,000 residents in 2019. It is also the region with the most highly developed publicly funded PC network, which includes 73 hospices and 131 home-based care units, providing care for 29,900 patients in 2019. In the study described below, we retrospectively investigated a cohort of patients with advanced non-oncological diseases who were cared for through the dedicated PC facilities of three of Lombardy's local health departments, which include four hospices. Our aims were to characterize the baseline profile of this cohort, as well as the features and outcomes of their care.
Materials and Methods
A retrospective cohort study was conducted in three large publicly funded healthcare departments (Azienda Socio Sanitaria Territoriale, ASSTs) serving extra-urban populations in Lombardy: the Rhodense ASST, which has a catchment population of approximately 485,000; the Valle Olona ASST (catchment population: ~430,000); and the ASST of Western Milan Province (catchment population: ~470,000). Each ASST has a dedicated PC department that provides intra-/extra-hospital consultation services and delivers palliative care and pain therapy in diverse settings, including four separate hospices with a total of 44 beds, home-care units, outpatient clinics, and, in the case of the Rhodense ASST and Milano Ovest ASST, day-hospital and day-hospice units. Applications/referrals for care in all three of these PC departments (referred to hereafter as the study PC network or PCN) are assessed by a PC specialist, who interviews the patient and/or family and verifies the patient's actual PC needs with the aid of the NECPAL CCOMS-ICO-derived tool [12,13].
We retrospectively reviewed the charts of all patients consecutively admitted to the PC network in 2019. The baseline data recorded included: patient demographics; origin of the PC referral (primary care physician, hospital staff, or nursing-home staff); primary diagnosis (as established by the referring PC physician); symptoms (presence/absence of dyspnea, pain, fatigue); the Karnofsky performance status (KPS) (scores ranging from 0 to 100, with higher scores indicative of greater functional capacity and better prognosis) [14]; and the clinical indicators of disease severity/progression defined in the NECPAL CCOMS-ICO checklist [12,13]. The latter included both general indicators (hospitalizations during the past 12 months, comorbidities, and presence/absence of malnutrition [15]) and those specific to the primary diagnosis. We also recorded the palliative care characteristics (delivery settings (i.e., home and hospice), unplanned transfers to an acute-care facility) and outcomes (discharge to another healthcare facility, voluntary withdrawal from the study PCN, and in-network mortality (time and setting of death)).
Time of death was classified in accordance with the prognostic classes defined by the Palliative Prognostic Index: <3 weeks, 3-6 weeks, and >6 weeks [16].
The study protocol was approved by the Healthcare Directions of all three ASSTs; these formal approvals are available from the corresponding author.
Baseline Patient Profiles
In the year 2019, a total of 2043 patients were enrolled in the palliative care programs administered by one of the three ASSTs making up the study PCN. Our study cohort comprised the 243 (12%) patients suffering from chronic non-oncological diseases. Table 1 shows their demographic and clinical characteristics at the time of PCN admission. The two most common primary diagnoses were dementia (n = 78, 32%) and chronic heart disease (n = 55, 23%). In roughly two-thirds (64%) of the 243 cases, the PCN referral was made during a hospital admission (general medicine wards [39%] and specialty wards [20%]). In the remaining cases, the referral was made by general practitioners caring for patients in the patients' own homes (35%) or, less commonly, in a nursing home (1%).
As for the general indicators of disease severity/progression defined in the NECPAL tool (Table 1), over half (53%) of the 243 patients had histories of ≥2 unscheduled hospitalizations during the year preceding PCN admission; 199 (82%) patients were malnourished; the majority were suffering from fatigue (63%), pain (55%), and/or dyspnea (53%); and 224 (92%) had two or more comorbidities (detailed in Figure 1). All 243 had KPS scores of ≤40%, and two-thirds of the scores were 10-20%. In terms of the disease-specific indicators of severe/progressive disease listed in the NECPAL-derived tool, all 243 patients met the minimum requirement for PC eligibility.
Palliative Care Characteristics and Outcomes
In 141 (58%) of the 243 cases, the palliative care was delivered entirely in a hospice setting. Eighty-five other patients (35%) were cared for exclusively in their homes (Table 2), and seventeen (7%) were cared for in both settings.
In 15 (6%) of the 243 cases, the care being delivered by the PC network was interrupted, and the patient was discharged. In three of these cases, the decision to terminate PC was made by the patient or their family, and no specific reasons were given. The remaining 12 discharges involved five patients who were referred for non-palliative care at home, two who were referred to the care of their primary-care physicians, one who was referred for re-evaluation by a cardiologist, and four who were transferred to another residential/inpatient healthcare facility, i.e., a nursing home (n = 1), another hospice (n = 1), an acute-care hospital (n = 1), or a rehabilitation facility (n = 1). The other 228 patients (93.8%) died while still enrolled in the study PCN (Table 2). A total of 133 (58%) of the 228 deaths occurred in hospice, 94 (42%) occurred in the patient's home, and in the remaining case, death occurred shortly after the patient had been transferred to the emergency department at the family's request.
A total of 172 patients (75% of decedents) died within 3 weeks of enrollment in the PCN.
Discussion
Our study represents the first attempt to explore the profiles and clinical pathways of patients with advanced chronic non-oncological diseases enrolled in home- and hospice-based PC programs administered by regional public healthcare facilities in the Lombardy region of Italy. In the year 2019, a total of 2043 patients initiated care within the study PCN (total catchment population: 1,385,000). The vast majority of these patients were suffering from cancer: only 12% (the 243 patients we investigated) had primary diagnoses that were non-oncological.
These findings are consistent with findings from a previous study in Italy, which found that patients with non-oncologic diseases who were receiving care through a publicly funded PC network accounted for only 5% of the home-care services delivered [17]. A similar picture emerged from the DEMETRA study, an observational study conducted in five Italian regions in home care and hospice settings. Of the 1013 patients enrolled in this study over the course of 18 months, only 148 (14.6%) had non-oncological diagnoses: 3.5% of the patients in this cohort had cardiovascular disease, 2.6% had dementia, and 2.5% were suffering from chronic lung disease [18].
In contrast, the data reported for 2019 in the United States by the National Hospice and Palliative Care Organization showed that more Medicare hospice patients had a principal diagnosis of Alzheimer's disease/dementia/Parkinson's disease than of any other disease. The principal diagnosis categories of stroke, respiratory disease, and circulatory/heart disease have grown the most since 2014 [19]. In a British primary-care setting, the greatest increase in accesses to palliative care from 2009 to 2014 involved dementia (from 20.9% to 40.7% of all cases), whereas smaller increases were seen in the percentages of accesses by patients with heart failure (from 12.6% to 21.2%) and COPD (from 13.6% to 21.2%). On the whole, however, there was still a clear predominance of cancer patients in this setting (whose share increased from 57.6% to 61.9%) [20].
Our cohort was characterized by advanced age (mean: 83.5 years) at admission, and this feature was particularly striking in the female patients (85.4 years), who accounted for over half of the cohort members. All 243 patients had Karnofsky performance status scores of ≤40%, one-fifth were already experiencing pain, dyspnea, and fatigue, and three-quarters survived less than 3 weeks. Over 82% were also malnourished, a finding that not only reflects severe/progressive disease but also suggests that nutritional issues received insufficient attention in the earlier stages of the disease [21].
These findings indicate that, as in other countries [22][23][24], in the area of Lombardy under study, patients with life-limiting chronic diseases other than cancer are being "intercepted" by the public health system's PC network when their diseases are far advanced, clinically complex, and burdened by multiple symptoms that impact quality of life in the same ways as in cancer patients [5]. This was especially true of patients with chronic heart disease: 98% had >2 comorbidities, 56% had been hospitalized >2 times during the last 6 months, almost 90% reported 2 or 3 symptoms, and three-quarters were experiencing significant pain. Substantial differences between the proportion of heart-failure patients with PC needs and the proportion who actually receive PC are well documented, as is the tendency to postpone the initiation of PC in these patients [24][25][26].
Consistent with the above findings, almost two-thirds of our non-cancer patients (64%) had been referred for PC during a hospital stay, and over half of these referrals came from acute-care wards [27]. This finding suggests that: (1) these patients are likely to receive potentially inappropriate aggressive treatments, even during the most advanced stages of their disease, when cures are extremely unlikely, and (2) PC tends to be reserved exclusively for the end of life, an intervention regarded by some (healthcare providers and patients) as a "grim-reaper service" [28].
Early initiation of palliative care can be hindered by an insufficiently large workforce of physicians specialized in this field. In the United States, the ratio of palliative care specialists to patients enrolled in palliative care programs for the year 2018 was 1:808, and the situation is expected to worsen by 30%, owing to physician burnout and aging/retirement [29]. Other barriers to the early initiation of palliative care are particularly important when patients have chronic diseases other than cancer [30], such as the increased difficulties involved in formulating a short- to medium-term prognosis and identifying the terminal phase of such diseases. Some authors feel that the term "palliative care" itself is also an obstacle [30] because it is identified by many as an intervention reserved solely for the end-of-life phase, when hope has vanished and all efforts to ameliorate the underlying disease will be suspended. The stigma associated with the term "palliative" [31] is encountered not only among patients and their families but also among their physicians, particularly those in non-oncological branches of medicine, who may be less accustomed to discussing prognosis and end-of-life issues with patients and their families than oncologists [32].
The limitations of our study include its retrospective nature and the possibility that the data we collected are incomplete. However, this risk is minimized by the fact that regional regulations require that all patients admitted to the study PCN be evaluated with the same assessment tool [13]; it is therefore unlikely to affect the significance of our results. Another important limitation is the absence of data regarding quality of life and symptom relief in our cohort.
This study was designed in the last months of 2020 and analyzed data from 2019, before the COVID-19 pandemic. The pandemic increased the number of patients admitted to home care and reduced the number admitted to the hospice setting.
In the summer of 2021, the situation normalized, with a stable, higher number of patients admitted to home care.
Conclusions
In conclusion, our study revealed a significant delay in the initiation of palliative care in patients with advanced, life-limiting non-oncologic diseases, despite the WHO recommendations in this regard [1], and a very low survival rate after PCN admission. Further efforts should be made to improve and facilitate access to the PCN for this important patient population.
Informed Consent Statement:
Patient consent was waived due to the retrospective design of the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy reasons.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-06-04T15:16:42.205Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "dd5d1dd750230e24d982c41f9835903e0b701295",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/10/6/1031/pdf?version=1654508632",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e25bbbb37fd4d2f5709f1762ba5b91b55d2e9846",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14362969 | pes2o/s2orc | v3-fos-license | Incorporating Language Level Information into Acoustic Models
This paper proposes a class of novel deep recurrent neural networks that can incorporate language-level information into acoustic models. For simplicity, we name these networks Recurrent Deep Language Networks (RDLNs). Multiple variants of RDLNs are considered, including two kinds of context information, two methods to process the context, and two methods to incorporate the language-level information. RDLNs provide possible methods to fine-tune the whole Automatic Speech Recognition (ASR) system in the acoustic modeling process.
INTRODUCTION
The past few years have witnessed the successful application of Deep Neural Networks (DNNs) to Automatic Speech Recognition (ASR) tasks [1]. The conventional DNN-based ASR system uses a Hidden Markov Model (HMM) [2] to deal with variable temporal transitions, forming a DNN-HMM hybrid model.
Although various Recurrent Neural Network (RNN) based alternatives to DNN-HMM hybrid models have been proposed [3], as far as we know, none of them has prevailed over the generalized DNN-HMM hybrid models. The generalized DNN-HMM hybrid models include all models that replace the DNN with other frameworks such as Recurrent Neural Networks (RNNs) [4], Convolutional Neural Networks (CNNs) [5], and Long Short-Term Memory (LSTM) networks [6].
In the conventional DNN-HMM hybrid model, the HMM is trained by first using a Gaussian Mixture Model (GMM) as the acoustic model; force-aligned phoneme-level labels are obtained during this process. After that, we train the DNN acoustic model using the frame-wise inputs and the force-aligned labels. At the test stage, we pass the features extracted from speech frames through the DNN and obtain, from its output, the probability of each frame being a given phoneme. The outputs of the DNN on different frames are then fed into a Viterbi decoder to get the words.
Note that in the process above, the HMM model does not evolve during either DNN training or testing. In addition, there is a mismatch between the training objective, which is to minimize the differences between DNN outputs and the phoneme-level labels, and the model evaluation criterion, which is to reduce the Word Error Rate (WER).
We view both Connectionist Temporal Classification (CTC) [7] and RNN Encoder-Decoders [3] as beneficial attempts to solve the second problem above. And in this paper, we propose an alternative method using Recurrent Deep Language Networks (RDLNs). We will explain the idea and variants of RDLNs in the next section. Then we will show the experimental results and make a conclusion.
Language-Level Information
As mentioned in the previous section, a long-standing problem in DNN-HMM based ASR systems is the mismatch between the training objective and the evaluation criterion for the DNN acoustic model. To solve this problem, a natural question is: "Where is the language-level information?" We quickly find that language-level information is used only in the Viterbi decoding process at the test stage.
In the Viterbi decoding process, the probability of HMM state j at frame i is calculated by the following equation: $P_{i,j} = \max_k \left(P_{i-1,k}\, T_{k,j}\right) O_{i,j}$. (1)
The $T_{k,j}$ in the above equation denotes the transition probability from HMM state k to HMM state j, which may vary across frames in a context-dependent decoder like Kaldi [8]. The $O_{i,j}$ corresponds to the output of the DNN. The Viterbi decoder then uses a backtracking procedure over the probabilities calculated in (1) to find the path that has the largest probability.
If we set all values of $O_{i,j}$ in (1) to the same constant, for example 1, we notice that the remaining part of the equation can be viewed as a prediction of the probabilities of the HMM states at frame i. This indicates that we can use the decoder in a different way to get the language-level information we want: $\hat P_{i,j} = \frac{1}{Z_i} \max_k \left(P_{i-1,k}\, T_{k,j}\right)$, (2) where $Z_i$ in (2) is a normalization value, which is the same for all j's.
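As a toy illustration of the prediction step in Eq. (2) (the three-state transition matrix and state probabilities below are assumptions for demonstration only), the language-level prediction at frame i can be computed as:

import numpy as np

T = np.array([[0.7, 0.2, 0.1],      # toy HMM transition matrix T[k, j]
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
p_prev = np.array([0.6, 0.3, 0.1])  # state probabilities at frame i-1

scores = np.max(p_prev[:, None] * T, axis=0)   # max_k P_{i-1,k} * T[k, j]
p_hat = scores / scores.sum()                  # divide by the normalizer Z_i
print(p_hat)                                   # predicted state probabilities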
After getting the language-level information, we can choose to build the RDLN model from at least two kinds of context information, two methods to process the context, and two methods to incorporate the language-level information.
Context Information Selection
The context information in the RDLN model corresponds to how we obtain the $P_{i-1,k}$'s in (2). We can either use the DNN outputs of the previous frames or use the labels of the previous frames. Using the labels usually reduces the computational complexity and makes the additional information purely language-level. The advantage of using the real outputs, however, is that they incorporate phoneme-level information in addition to the language-level information. We may choose one kind of context information according to our needs.
Process the Context
We can use either context-dependent or context-independent methods to process the context information in the previous section.
To process the context information in a context-dependent way, we may simply use the Viterbi decoder and set the outputs of the current frame to a constant value, such as 1. We then take the HMM state probabilities generated by the decoder as the processed context. Note that after this decoding process, the processed context contains, to some extent, the language-level information.
Another way is to process the context information in a context-independent manner. We can take the transition matrix T and use it to transform the context information obtained in the previous section. This method is comparable to the first method, as indicated in the Kaldi documentation.
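A rough sketch of this context-independent variant (reusing the toy stochastic matrix from the previous sketch; all values are assumptions for demonstration) propagates the previous frame's posterior through T and normalizes, i.e., Eq. (2) with the max replaced by a sum:

import numpy as np

T = np.array([[0.7, 0.2, 0.1],     # toy transition matrix, rows sum to 1
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
p_prev = np.array([0.6, 0.3, 0.1])  # DNN posterior (or one-hot label) at frame i-1

p_pred = p_prev @ T                 # sum_k P_{i-1,k} * T[k, j]
p_pred /= p_pred.sum()              # normalization (already 1 here, T is stochastic)
print(p_pred)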
Incorporate Language-Level Information
There are at least two ways to incorporate the language-level information into acoustic models.
The first way is through a modified objective function. We may use the label-based language-level information as an additional target, and the real-output-based language-level information as an additional estimate of that target.
The second way is to stack the information onto the input vectors. In this way, the outputs may be estimated more accurately and robustly by the DNN model.
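A minimal sketch of this second option, assuming hypothetical arrays acoustic_feats (the usual frame features) and p_pred (the processed context from the previous frame): the two are simply concatenated to form the DNN input.

import numpy as np

acoustic_feats = np.random.randn(40)    # e.g., 40-dim filterbank features (assumption)
p_pred = np.array([0.47, 0.39, 0.14])   # processed language-level context

# Incorporation by stacking: the DNN input concatenates both parts.
dnn_input = np.concatenate([acoustic_feats, p_pred])
print(dnn_input.shape)                  # (43,)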
EXPERIMENTS
We conducted experiments on all single-channel utterances of the CHiME-4 dataset. We used the real outputs as the context information, processed the context information in a context-independent manner, and incorporated the information as additional inputs to the DNN model. In the experiments, we used only the context information of the single frame previous to the current one for simplicity, but RDLNs can take as much context information as needed.
We obtained the transition matrix T in Kaldi as follows. The outputs of the DNN are denoted as pdf-ids, and since we only used one previous frame, we have $P_{0,k} = O_{0,k}$, so the problem is to find the transition matrix from $O_{0,k}$ to $P_{1,j}$. Since there is no direct relation between pdf-ids and the triphone states, we first need to convert pdf-ids to transition-ids in Kaldi. Then we use the transition relation represented in transition-ids to transform the state. After that, we convert back from transition-ids to pdf-ids.
Note that after the steps above, the additional input had a dimension of 3161, so we compressed it into a 42-dimensional vector using the relationship between pdf-ids and monophone states. We used the 8th epoch of the baseline model as the initial model for the RDLN.
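The compression step can be sketched as pooling pdf-id scores by their monophone, assuming a hypothetical lookup array pdf_to_monophone (the actual mapping comes from the Kaldi model and is not reproduced here):

import numpy as np

num_pdfs, num_monophones = 3161, 42
pdf_scores = np.random.rand(num_pdfs)                    # 3161-dim context vector
pdf_to_monophone = np.random.randint(0, num_monophones,  # hypothetical pdf-id ->
                                     size=num_pdfs)      # monophone lookup table

# Pool pdf-level scores into a 42-dim monophone-level vector by summation.
mono_scores = np.zeros(num_monophones)
np.add.at(mono_scores, pdf_to_monophone, pdf_scores)
mono_scores /= mono_scores.sum()                         # renormalize

print(mono_scores.shape)                                 # (42,)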
Some preliminary results are shown in Figure 1, from which we can see that the cross-entropy values of the RDLN are consistently lower than those of the baseline system after epoch 19, with only one outlier. In addition, the improvement grows larger with each training epoch. Note that in the experiment above, we incorporated only one previous frame; it is quite possible that better results would be obtained using more context information.
CONCLUSION
This paper proposed a method to incorporate language-level information into acoustic models for a DNN-HMM hybrid ASR system. With our method, the whole ASR system can be fine-tuned, instead of only the acoustic modeling part.
We then discussed two kinds of context information, two methods to process the context, and two methods to incorporate the language-level information. Experiments showed that adding language-level information into acoustic models consistently improved performance over the baseline system. We foresee many possible contributions along this technical path, including the comparison of the eight variants of RDLN and the substitution of the conventional DNN by RNNs and CNNs. We also want to extend this method so that it can be used in non-HMM based frameworks. | 2016-12-14T17:40:02.000Z | 2016-12-14T00:00:00.000 | {
"year": 2016,
"sha1": "5a76a5f382ad5dd6c8f5b6257b176ca41286d2b8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5a76a5f382ad5dd6c8f5b6257b176ca41286d2b8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
56343425 | pes2o/s2orc | v3-fos-license | Reliable Data Collection Algorithm Based on AUVs Online Prediction
Underwater acoustic sensor networks (UASNs) support oceanographic data collection, water pollution monitoring, disaster prediction, marine navigation, and underwater military surveillance, but sensor node energy is limited and quickly exhausted, and multi-hop routing drains relay nodes fastest. This paper proposes a reliable data collection algorithm in which an autonomous underwater vehicle (AUV) predicts, by online learning over the time series of previously collected data, how much data each interactive node will accumulate, and plans its polling trajectory accordingly: regions where target events cluster are polled more often, while sparse regions are polled less. The model can be applied to existing horizontal and vertical polling architectures to improve data collection efficiency and reduce delay.
Introduction
Underwater acoustic sensor networks (UASNs) are a new kind of marine measurement and control technology that combines autonomous data acquisition, data fusion, and transmission, with potential applications in oceanographic data collection, water pollution monitoring, earthquake and tsunami prediction, marine navigation, underwater military surveillance, enemy target tracking, and other areas. Military and scientific research departments therefore attach great importance to them. In recent years, China has put forward the strategic requirement of building a strong marine power, and the key technologies of underwater acoustic sensor networks are listed among the key research directions. Related research mainly focuses on MAC protocols, routing protocols, clock synchronization and localization, target recognition and tracking, and so on. However, the main problem affecting the performance of underwater acoustic networks is that the energy of underwater nodes is limited and quickly exhausted. Although researchers have adopted a variety of routing protocols to minimize node energy consumption, such as careful choice of the next-hop node or rotation of cluster heads to balance energy, communication between network nodes is still multi-hop, and relay nodes always consume energy faster and fail sooner, which limits how much energy can be saved through multi-hop routing alone. Some scholars have proposed using AUVs to carry energy and periodically replenish the sensor nodes; although feasible, the energy replenishment process is difficult to control and cannot guarantee that sensor nodes work normally over the long term. In addition, the harsh and unpredictable underwater environment poses a significant challenge to routing tasks. [1][2][3] To collect data effectively and reliably while moving away from network-wide multi-hop transmission, researchers have proposed collecting data with a mobile gateway. 4,5 In this approach, an AUV acts as the mobile gateway and periodically polls the corresponding sensor nodes to collect their sensed data. This effectively reduces the energy consumption of the sensor nodes; moreover, the AUV interacts with a sensor node only at close range, where communication is reliable, which reduces the packet loss rate and improves the network life cycle. In these schemes, multiple types of polling paths have been designed to improve the efficiency of data collection. Some researchers discuss different polling objects, such as polling the cluster heads or random polling, while others discuss different AUV polling structures, which are divided into horizontal polling and vertical polling. The objective of this paper is to propose a predictive online-learning polling model. Because underwater target events occur randomly, the AUV continuously learns and predicts during polling and plans the polling process accordingly, increasing polling in regions where target events cluster and reducing it in regions where target events are sparse and little data is sensed, so that data collection efficiency is improved. The model can be applied to a variety of existing polling architectures, with good adaptability and extensibility.
The rest of this paper is organized as follows: Section 2 discusses related work and existing problems; Section 3 describes the online learning prediction model; Section 4 validates the effectiveness and rationality of the proposed model through comparative experiments and evaluates its performance; Section 5 summarizes the paper and outlines future work.
Related work
For complex underwater tasks, fixed nodes are needed to monitor the target area in real time, and mobile nodes are required to dynamically capture abnormal states. Therefore, the three-dimensional heterogeneous dynamic model has become the mainstream model for underwater network operation and maintenance. Taking the cost of AUVs into account, only a small number of AUVs are deployed in a network; most nodes are ordinary sensor nodes. Because AUVs have strong capabilities and large energy reserves, they are very effective for reliable data transfer. To this end, researchers have put forward a series of AUV-assisted reliable data collection algorithms for underwater acoustic sensor networks, in which only the mobility of the AUV is used to poll ordinary nodes and collect their data. The network architectures can be roughly divided into horizontal polling and vertical polling, as shown in Figure 1.
Figure 1. Reliable data collection architectures based on AUVs in underwater acoustic sensor networks. 6,13

Initially, the horizontal polling architecture was aimed at a two-dimensional network of sensor nodes on the sea bottom. AEERP 6 uses a single AUV to interact with the bottom gateway nodes. The bottom gateway node is rotated by random selection subject to an energy-consumption threshold, and the other nodes connect to the nearest gateway through a shortest-path-tree construction to generate the network topology. This method effectively reduces the number of hops over which underwater nodes transmit data, reduces bit errors caused by attenuation, and ensures the integrity and reliability of the data. Building on horizontal polling, AURP 7 constructs, for the first time, a polling architecture with several AUVs, designs elliptical trajectories, and uses heterogeneous acoustic communication channels. Three data transmission methods are designed according to distance, which reduces co-channel interference among the AUVs. Khan 8 proposed a hierarchical clustering structure in which the bottom nodes are divided into three categories: underwater gateway nodes, path nodes, and ordinary nodes. The underwater gateway nodes are the cluster heads; the path nodes lie on the AUV polling path and interact with the AUV; and ordinary nodes serve as alternates to replace nodes whose energy consumption grows too large. By interacting with multiple path nodes, the AUV greatly increases the packet delivery rate and reduces overall energy consumption. However, the algorithm needs to partition the monitored sea area according to the number of underwater gateway nodes, a cumbersome preprocessing step, and the selection of the multiple path nodes is controlled by the underwater gateway nodes, which therefore must have high performance.
The TCM algorithm 9 is suitable for dynamic 2D underwater environments. Particle swarm optimization is used to cluster the bottom nodes, and the AUV uses horizontal polling to interactively access the dynamic cluster heads. Although well suited to dynamic underwater environments, TCM still has disadvantages: for example, cluster heads change frequently, so the AUV must constantly be notified of the new cluster-head IDs, which increases network energy consumption. In Shah, 10 the horizontal architecture can only be deployed hierarchically in a 3D environment, with a separate AUV polling each layer to achieve reliable data collection and forwarding. To address this, a vertical polling architecture is proposed in Umar. 11 Based on the defined node depths, the vertical motion of the AUV is used to transfer data from high-depth regions to low-depth regions. For three-dimensional dynamic underwater environments, the LVRP algorithm of Shi 12 selects gateways according to the Voronoi regions formed among the nodes; combined with AUV vertical polling, it can effectively improve network performance.
The RE-AEDG algorithm Liaqat 13 compares horizontal and vertical polling and combines them. In RE-AEDG, the randomly deployed underwater nodes are divided into five layers; the nodes at the second and fourth layers are gateway nodes, and nodes in the same layer do not communicate with each other. Nodes at the first, third, and fifth layers deliver data by selecting the nearest gateway according to distance, and the AUV polls the second and fourth layers along a vertical elliptical trajectory to achieve reliable data collection. However, the method requires the surface nodes to send their data down to an underwater gateway, from which it is carried back to the surface by the AUV, which wastes energy. Since horizontal polling in two-dimensional networks is more mature, more optimizations of it exist. AAEERP Ilyas 14 improves on AEERP by arguing that the AUV's dwell time at each gateway should differ: the shortest-path trees generated by different underwater gateway nodes are not the same, and the more member nodes a gateway has, the more data should be collected there, so the AUV's dwell time should be proportional to the number of the gateway's member nodes. Compared with AEERP, AAEERP has lower power consumption and higher data collection capability. AEDG Javaid 15 studies the elliptical trajectory of AUV horizontal polling and discusses the ellipse radius parameter according to the selection area of the underwater gateway nodes, which can be used to optimize the AUV polling trajectory as the gateways change. Kartha 16 discusses AUV polling trajectories in different scenarios from the perspective of delay tolerance, including square, helical, and elliptical polling, effectively establishing a data collection framework for different situations that allows different service strategies to be implemented more flexibly. Khan 17 extends the clustering structure of Khan 8 with four AUVs that share state information; this method not only realizes data collection and delivery to the surface gateway, but the cooperative communication among the four AUVs also helps relay data between nodes within a two-dimensional layer. In Dalal, 18 a scalable data encryption and decryption algorithm based on AURP is designed for underwater security challenges: different AUVs are assigned to different monitored waters, and each AUV collects data using a key matched with the gateway nodes of its own waters. This method guarantees network security while optimizing network performance.
To sum up, existing polling methods mainly have the following problems:
(1) The randomness of target events is not considered. Existing polling architectures collect data across the entire network, yet networks deployed in most waters are task-oriented. To extend network lifetime and reduce node energy consumption, it suffices to collect the monitoring data of the target events; collecting whole-network information instead not only increases node energy consumption but also complicates subsequent data processing.
(2) The energy consumption of the AUV is not considered. Most articles assume that AUV nodes have infinite energy and ignore their energy consumption during network operation.
Most of the algorithms are designed to sacrifice AUV energy in exchange for longer lifetimes of ordinary sensor nodes. Although the energy of an AUV is several orders of magnitude larger than that of an ordinary sensor node, it is still limited, and assuming infinite energy is unrealistic.
In view of the above problems, this paper analyzes the shortcomings of current polling architectures and proposes an online-learning AUV polling trajectory prediction model, which can be applied to any of the above structures to improve network data collection performance.
RCAP model based on time series analysis

Preparation work
First, we introduce the role of each kind of node in the network. Interactive nodes: the nodes in the network are clustered, and the cluster-head nodes serve as the interactive nodes; they collect the sensed data of the other nodes in their cluster and deliver it to the AUV.
Ordinary nodes: they monitor the target events in the network and deliver their sensed data to the cluster-head interactive node of their own cluster.
AUV: it polls the interactive nodes in the network, periodically collecting their data and delivering it to the gateway nodes.
This section introduces the online learning trajectory prediction model. To facilitate the analysis and solution of the model, the following assumptions, consistent with actual application scenarios, are made. Assumption 1: each node knows its own location after deployment, and the AUV has an autonomous positioning capability.
Assumption 2: the AUV's energy is much larger than that of a sensor node but not unlimited; AUV failure during operation is not considered.
Assumption 3: each sensor node has a certain storage capacity.
(Figure 2) The key to collecting data by AUV polling lies in two points: which nodes to poll, and how to poll them. For the first question, the existing literature generally fixes the AUV trajectory in advance and designates the nodes near the trajectory as the polled nodes. In a small-scale network, the remaining nodes can transmit their sensed data to nearby polled nodes over short-range links; in a large-scale network, the nodes are clustered around the polled nodes, and the remaining nodes transmit their sensed data to the cluster heads, so that the AUV collects data for the whole network. For the second question, most existing works use repeated traversal: the AUV polls repeatedly along the established trajectory until the acquisition task ends and no more data accumulate. Because AUV polling occurs at fixed intervals, the data collected by the AUV from the same node over successive rounds form a time series. Owing to the uncertainty of the underwater environment, the target events monitored in each region differ, and so does the number of packets sensed and collected: some interactive nodes have many packets for the AUV to pick up, while others have few. Uniformly traversing every interactive node therefore not only reduces the efficiency of AUV polling but also increases packet delay. Accordingly, this paper designs an online-learning prediction model based on time-series analysis: the AUV learns from the historical data of its earlier traversals, predicts the amount of data each interactive node will accumulate in subsequent rounds, and decides which interactive nodes to poll according to the predicted amounts.
In this paper, prediction-based polling is divided into two parts: the generation of the prediction model and the planning of the AUV polling trajectory. The number and locations of the interactive nodes are determined using the method specified in the literature. 11 After clustering, there are N interactive nodes in total, and the set of all interactive nodes is denoted S = {s_1, s_2, ..., s_N}. The data collection interval is set to T; that is, the AUV performs one polling round every T time units and collects the data of the interactive nodes. The j-th interval is tagged T_j. Let x_ij denote the amount of data collected from node s_i during interval T_j; for example, the number of packets generated by node s_1 in interval T_1 is x_11.
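To make the notation concrete, the following minimal sketch (in Python) builds the N × J data matrix X with entries x_ij and extracts the two sliding windows used in the next subsection; the Poisson-distributed packet counts are purely illustrative and not data from the paper.

import numpy as np

N, J = 20, 10                          # interactive nodes, observed polling intervals
rng = np.random.default_rng(0)
X = rng.poisson(lam=8, size=(N, J))    # x_ij: packets collected from s_i in interval T_j

W = 5                                  # sliding-window size used in the text
train = X[:, :W]                       # intervals T_1..T_5: generate the prediction model
valid = X[:, W:2 * W]                  # intervals T_6..T_10: calibrate/verify the model
print(train.shape, valid.shape)        # (20, 5) (20, 5)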
Generation of prediction model
Since the model requires historical data for prediction, it is assumed that the AUV polls the whole network during the first ten intervals. A sliding window is used to select the historical data, with an initial window size of 5: the historical data of intervals 1 to 5 are used to generate the prediction model, and the window is then slid so that intervals 6 to 10 are used to calibrate the model, adjusting the influence of each component of the historical data on later predictions. The amount of data at node s_i in the sixth interval can be predicted by the estimation function of the prediction model, whose general expression is

h_θ(x_i) = θ_0 + θ_1 x_i1 + θ_2 x_i2 + ... + θ_5 x_i5 = θ^T x_i,

where x_i = (1, x_i1, ..., x_i5)^T is the vector of historical data of node s_i and θ = (θ_0, θ_1, ..., θ_5)^T is the prediction vector. Since the initial prediction vector θ is generated randomly, a gradient descent model is used to calibrate the predictive variables so that later predictions become more accurate. The actual number of packets collected from node s_i in interval T_6 is x_i6, so the prediction error is h_θ(x_i) − x_i6. The error function J(θ), which describes the quality of the estimation function h_θ(x), is

J(θ) = (1/2N) Σ_{i=1}^{N} (h_θ(x_i) − x_i6)².

To minimize the error function and obtain min_θ J(θ), θ is moved in the direction in which the function decreases fastest, i.e., along the negative of the partial derivatives

∂J(θ)/∂θ_k = (1/N) Σ_{i=1}^{N} (h_θ(x_i) − x_i6) x_ik.

The prediction vector θ is then updated by reducing it in the direction of steepest descent; the updated vector is

θ′ = θ − α ∇J(θ),

where α is the step size, i.e., the amount by which the vector changes along the descent direction in each iteration, and ∇ denotes the gradient. Since the gradient is directional, a descent direction can be obtained for each component of θ and hence for the whole vector; following this direction drives the function toward a minimum point, which guarantees a minimal error. When j runs from 6 to 10, five separate calibrations of h_θ(x) can be performed to obtain a more effective predictor, which is then used to forecast the data packets likely to be generated by the interactive nodes later on, so as to plan a reasonable AUV polling trajectory, maximize data collection efficiency, and minimize delay.
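A minimal sketch of this calibration step follows, assuming the least-squares error function and update rule reconstructed above; the learning rate, iteration count, and random packet counts are illustrative choices rather than values from the paper.

import numpy as np

rng = np.random.default_rng(1)
N, W = 20, 5
X_hist = rng.poisson(lam=8, size=(N, W)).astype(float)   # windows from intervals T_1..T_5
x_next = rng.poisson(lam=8, size=N).astype(float)        # observed packets in T_6

A = np.hstack([np.ones((N, 1)), X_hist])   # rows are (1, x_i1, ..., x_i5)
theta = rng.normal(size=W + 1)             # randomly initialized prediction vector
alpha = 1e-3                               # step size

for _ in range(5000):
    pred = A @ theta                       # h_theta(x_i) for every node
    grad = A.T @ (pred - x_next) / N       # gradient of J(theta) = (1/2N) * sum of squared errors
    theta -= alpha * grad                  # update theta' = theta - alpha * grad

window_T7 = np.hstack([np.ones((N, 1)), X_hist[:, 1:], x_next[:, None]])
print("calibrated theta:", np.round(theta, 3))
print("predicted packets in T_7:", np.round(window_T7 @ theta, 1)[:5])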
Planning of AUV polling trajectory
Through the prediction model of Section 3.2, the number of packets aggregated at each interactive node in the next period T can be estimated effectively. According to Assumption 3, each sensor node has a certain storage capacity, whose threshold is denoted C_N. When the amount of data generated at node s_i in the current period T is small, polling this node yields little for the AUV and merely increases network delay.
The steps of the AUV polling trajectory planning strategy proposed in this section are as follows (a minimal sketch of the selection logic follows the list). Step 1: the prediction model is used to estimate the number of packets generated at each interactive node over two consecutive periods T, and it is determined whether the node storage threshold C_N is exceeded.
Step 2: if the predicted total number of packets generated at interactive node s_i during the two intervals T exceeds the storage threshold, the AUV must poll the node, otherwise packets will be lost; node s_i is therefore included in the path planning.
Step 3: if the predicted total number of packets generated at interactive node s_i during the two intervals T does not exceed the storage threshold, it is not necessary to traverse the node, and it is excluded from the path planning.
Step 4: the nodes to be polled are collected, and a reasonable route is planned according to the location of each node.
Step 5: after the selected nodes are traversed, the estimation function is recalibrated with the newly collected round of data.
Step 6: if node s_i has not been included in the path plan after two consecutive predictions, it is traversed directly in the third round without prediction (Figure 4).
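The selection logic of Steps 1-3 and 6 can be sketched as follows; the threshold value and the two-interval predictions are hypothetical stand-ins for the quantities defined above.

import numpy as np

def select_polled_nodes(pred_two_T, skipped, C_N):
    """pred_two_T[i]: predicted packets at s_i over the next two intervals T.
    skipped[i]: consecutive rounds in which s_i was not polled.
    Returns indices of nodes to poll and the updated skip counters."""
    over = pred_two_T > C_N              # Steps 1-3: poll only if storage would overflow
    forced = skipped >= 2                # Step 6: force traversal after two skipped rounds
    poll = over | forced
    return np.where(poll)[0], np.where(poll, 0, skipped + 1)

pred = np.array([25.0, 4.0, 18.0, 3.0])  # illustrative predictions for s_1..s_4
skips = np.array([0, 2, 0, 1])           # s_2 has already been skipped twice
polled, skips = select_polled_nodes(pred, skips, C_N=15.0)
print(polled)                            # [0 1 2]: s_1, s_3 exceed C_N; s_2 is forced by Step 6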
Simulation experiment
In order to validate the versatility and validity of the model, it is applied to the representative AAEERP 14 horizontal polling architecture, and four groups of experiments are designed to compare it with the original model in terms of sensor node energy consumption, network throughput, packet delivery ratio, and end-to-end delay: a) Energy consumption: the total amount of energy consumed by all sensor nodes during network monitoring, mainly for data transmission and reception; the unit of energy consumption is the joule (J). b) Throughput: the amount of data transferred from sender to receiver; network throughput is directly affected by the number of nodes transmitting data in the network and the duration of that transmission; the unit of throughput is the bit.
c) Packet delivery ratio: the ratio of the data packets successfully delivered by the AUV to the water-surface gateway to the total number of data packets generated in the network. d) End-to-end delay: the total time for a data packet to be delivered to the water-surface gateway; the unit is the second (s). (A short bookkeeping sketch for these metrics follows.)
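These metrics can be computed directly from a simulation trace; the sketch below shows the delivery-ratio and delay bookkeeping under the definitions above, using a hypothetical per-packet record format.

# Hypothetical per-packet records: (generated_at, delivered_at or None), times in seconds
trace = [(0.0, 12.5), (1.0, None), (2.0, 30.0), (3.5, 41.0)]

delivered = [(g, d) for g, d in trace if d is not None]
pdr = len(delivered) / len(trace)                          # packet delivery ratio
delay = sum(d - g for g, d in delivered) / len(delivered)  # mean end-to-end delay (s)
print(f"PDR = {pdr:.2f}, mean delay = {delay:.1f} s")      # PDR = 0.75, mean delay = 26.0 s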
According to the experimental parameters of the horizontal polling architecture AAEERP, 14 the simulation parameters are set as follows: the underwater sensor nodes and the AUV are deployed in a 1500 m × 2000 m area. Different numbers of sensor nodes are deployed to demonstrate the utility of the data collection algorithm in underwater acoustic sensor networks of different sizes; the UWSNs are assumed to have 18, 30, 42, 54, and 64 sensor nodes. The transmission range of a sensor node is 250 m, the initial energy is 70 joules, and the size of each packet is 70 bytes. It is assumed that no collisions occur on the underwater communication channels, and interference between channels is ignored. The specific simulation parameters are shown in Table 1. Considering the randomness of underwater target monitoring events, the feasibility of RCAP is verified first.
Experiment 1: Feasibility verification of reliable collection algorithm based on AUV with prediction
The error rate of the data collection algorithm is evaluated under different distributions of target events, such as a linear distribution, a normal distribution, and a Poisson distribution. The experimental results are shown in Figure 5, which plots the prediction error rate of data transmission for the various target events as the network size increases. For target events with a linear distribution (event distribution following the AR model), the error rate of the prediction algorithm stays at a low level, and the larger the network, the lower the prediction error rate. This is because the time series analysis used in this paper performs iterative prediction based on historical data, which captures the trend of a linear distribution well. When the events in the network follow a normal distribution, the error rate of the algorithm is higher, but it stabilizes as the network scale increases. Although the prediction is not as accurate as for linearly distributed events, on the whole the prediction error for events in large-scale networks can be kept below 30%; the trend is convergent, and the method is operable. For Poisson-distributed events, the prediction performance fluctuates strongly, the average error rate is about 50%, the trend diverges, and the method is not very operable. For this reason, the simulation experiments of the data collection algorithm are carried out under the premise of linearly and normally distributed target events. Experiments 2 to 5 compare energy consumption, throughput, delivery ratio, and delay against the AAEERP horizontal polling architecture.
Experiment 2: Comparison of network energy consumption
The AAEERP algorithm and the algorithm proposed in this paper are used for data transmission under different event distributions, and the total energy consumption of the entire network is calculated. The results are shown in Figure 6. It can be seen from Figure 6 that for linearly distributed target events, the RCAP (AR) prediction is more accurate and the energy consumption is lower. Compared with the AAEERP algorithm, which follows an elliptical motion, the AUV trajectory designed in this paper is more flexible, and the number of unnecessary polls is reduced, effectively improving the efficiency of data collection. For normally distributed target events, since the RCAP (normal) prediction is relatively poor compared with the linear prediction, the energy consumption is higher.
Experiment 3: Throughput comparison
The AAEERP algorithm and the algorithm proposed in this paper are used for data transmission under different event distributions, and the total amount of data transmitted from source nodes to destination nodes is calculated. The results are shown in Figure 7, which presents the throughput of each algorithm. It can be seen that for linearly distributed target events, the RCAP (AR) prediction is accurate, the data transmission efficiency is high, and the throughput increases with the network scale with a high gain. For the AAEERP algorithm, the network throughput is at a middle level and tends to be stable, with little room to rise. For normally distributed target events, the RCAP (normal) prediction yields a smaller increase in throughput, but the network throughput still rises, with an obvious upward trend later on.
Experiment 4: Comparison of data packet delivery ratio
The AAEERP algorithm and the algorithm proposed in this paper are used for data delivery under different event distributions, and the ratio of the data volume successfully delivered to the water-surface gateway to the data volume actually generated in the network is calculated. The results are shown in Figure 8. It can be seen from Figure 8 that for linearly distributed target events, the delivery ratio obtained by the RCAP (AR) prediction algorithm is consistently higher and decreases more slowly. This is because when the interactive nodes to be polled change, the proposed algorithm can adjust the polling trajectory in time, while the AAEERP algorithm still moves along its elliptical trajectory, increasing the packet loss rate. The larger the network scale, the more easily the interactive nodes change, and the larger the difference in delivery efficiency between the AAEERP algorithm and the proposed algorithm, which further demonstrates the usability of the proposed algorithm in large-scale networks.
Experiment 5: Transmission delay comparison
The AAEERP algorithm and the algorithm proposed in this paper are used for data delivery under different event distributions, and the total time for packets to travel from source nodes to the water-surface gateway is calculated. The results are shown in Figure 9, which presents the end-to-end delay of the AAEERP and RCAP algorithms. In AUV-assisted data collection, the end-to-end delay depends primarily on the round-trip time and speed of the AUV. The AUV polling trajectory of the AAEERP algorithm is fixed, so the round-trip time and speed are relatively fixed and the overall delay is at a moderate level. The RCAP (AR) prediction algorithm, in contrast, lets the AUV trajectory change with the amount of data in the network, so the number of AUV polls of a node increases with the number of packets generated at that node. Conversely, for nodes that generate few packets, the efficiency of AUV data collection is greatly improved because they are visited less often, but the data they store must wait longer to be collected and delivered by the AUV, which increases the network end-to-end delay to a certain extent.
Conclusion and outlook
In current underwater acoustic sensor networks, using AUV polling to collect data has become an effective way to provide reliable data transmission and extend the network lifetime. In this paper, a novel underwater polling method is proposed, namely RCAP (Reliable Collection based on AUV with Prediction). The AUV trajectory and its interaction time with each cluster head are planned systematically, maximizing the amount of delivered data and minimizing the network delay. The contributions of this paper are as follows: a. An online prediction model for AUV polling is designed.
b. The effectiveness of RCAP is discussed under different distributions of underwater target events.
c. Compared with AAEERP, it is verified that RCAP has clear advantages in network energy consumption, throughput, and packet delivery ratio, and thus has practical promotion value.
"year": 2017,
"sha1": "e22c5726d4805303da46f1fca8545faa54bb60cd",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/IRATJ/IRATJ-02-00030.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5afdb6461d7d7c125442849efd4d9fe003476083",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
On the danger of detecting network states in white noise
The general idea of nonstationarity of brain activity, or of dependence of the dynamics on some potentially unobserved, temporally changing or fluctuating parameter, has long been familiar in the neuroscience community in contexts such as sleep dynamics or epileptology. Recently, however, it has been attracting increasing attention in the context of functional brain network analysis. This seems a natural development of the field: once functional connectivity computed under the simplifying stationarity assumption has been well established, it is only logical to try to detect changes in brain functional connectivity over time. In general, detecting such nonstationarities in a reliable fashion is a methodologically challenging task, as changes in estimates of functional connectivity over time may also be due to random fluctuations rather than genuine changes of the process. There is a wide array of approaches to studying such nonstationarities documented in the literature (Hutchison et al., 2013), and an important but often neglected general methodological step is assessing the results against an appropriate null model corresponding to a stationary process.
In the following, we give an illustrative example of how a typical nonstationarity analysis can generate spurious signs of nonstationary dynamics even when applied to a stationary process. To show that this is not a purely theoretical issue, we closely follow the analysis procedure used in a recently published study by Betzel et al. (2012). We note that this particular paper caught our attention by coincidence, while we believe the issue is pertinent to a substantial fraction of the literature.
In their paper, Betzel et al. (2012) deal with characterizing the dynamics of brain activity measured by EEG. In particular, Betzel et al. report the detection of rapid transitions between intermittently stable states, explicitly saying that "As predicted, fast (~100 ms) dynamics of whole-brain synchronization were observed during resting-state EEG," documenting the typical fast (~100 ms) time scale of these states in Figure 6B of their paper (see also their Figures 4, 5). Their argument is based on the following data-processing scheme: first, for each time point of filtered EEG data, a functional connectivity matrix is computed using pairwise synchronization likelihood values, and the time points are clustered based on the similarity of the corresponding functional connectivity matrices. Next, contiguous stretches of time points that are members of the same cluster are interpreted as corresponding to the duration of an atomic brain state. Finally, the brain-state-representing functional connectivity matrices are pooled across subjects and clustered based on their similarity to define higher-order states.
Notably, the procedure applied by Betzel et al. is principally data-driven, rather than relying on model testing or assumptions, and it includes band-pass filtering and sliding-window-like analysis. We therefore conjectured that the temporal structure of the observed functional connectivity dynamics might have been crucially affected by the procedure itself (as the authors tentatively admitted in their discussion, albeit unfortunately without testing the results against stationary model data). To explore the viability of this alternative explanation, we applied a processing pipeline built according to the description given in the original manuscript to model data consisting of 100 samples (each of length T = 2500 time points, representing a mock 5 s epoch of EEG data) of a multivariate (N = 20) white noise process. The applied processing steps included frequency filtering (using elliptic filters corresponding to the four specified frequency bands; we applied zero-phase digital filtering by processing the input data in both the forward and reverse directions) and subsequent computation of the synchronization likelihood (Stam and van Dijk, 2002). The parameters of the synchronization likelihood l, m, w1, w2, and nrec were set for each frequency band as in Betzel et al. (2012). The resulting functional connectivity matrices were clustered using the standard k-means clustering method (Lloyd, 1982). Figure 1 shows that the typical duration of detected states closely corresponds to the distributions observed in the original paper (compare with Figures 6B, 4A,B in Betzel et al., 2012). In particular, the typical timescale is in the order of tens to hundreds of ms. Also, this time scale depends on the selected filtering in the same way as in the original work, with the time scales of the beta and theta bands markedly shorter and longer, respectively, than those of the broadband and alpha bands, the latter two being relatively close to each other.
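A minimal sketch of such a null-model pipeline is given below. It follows the steps above (band-pass-filtered multivariate white noise, per-window connectivity, k-means clustering, state-duration extraction) but substitutes sliding-window correlation for the considerably more involved synchronization likelihood; all parameter values are illustrative, not those of the original study.

import numpy as np
from scipy.signal import ellip, filtfilt
from sklearn.cluster import KMeans

fs, T, N = 500, 2500, 20
rng = np.random.default_rng(0)
x = rng.standard_normal((N, T))                   # multivariate white-noise "EEG"

b, a = ellip(4, 0.5, 40, [8, 12], btype="bandpass", fs=fs)
xf = filtfilt(b, a, x, axis=1)                    # zero-phase band-pass (alpha band here)

win, step = 50, 10                                # sliding-window "connectivity"
mats = np.array([np.corrcoef(xf[:, t0:t0 + win])[np.triu_indices(N, k=1)]
                 for t0 in range(0, T - win, step)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(mats)
changes = np.flatnonzero(np.diff(labels)) + 1     # boundaries between "states"
runs = np.diff(np.r_[0, changes, len(labels)])    # lengths of contiguous same-label stretches
print("apparent state durations (ms):", runs * step / fs * 1000)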
Figure 1 | Temporal dynamics of synchronization likelihood (SL) networks generated from realizations of stationary processes: white noise (A,C,E) and correlated noise [linear stationary (FFT) surrogates from EEG data] (B,D,F).
Even though the spatially and temporally independent (white) noise model used here is clearly not a realistic model for EEG data, such a simplistic stationary model reproduces the clustering time scales of the original paper with surprising accuracy. Of course, due to the spatial independence of the processes, it does not reproduce the spatiotemporal patterns corresponding to Figures 4A,B in Betzel et al. (2012). We have further repeated the procedure using multivariate Fourier transform surrogates generated from a single segment of EEG data (for more details on the data see Horacek et al., 2010; such surrogates correspond to realizations of a linear stationary process with conserved auto- and cross-correlation structure, see Prichard and Theiler, 1994). The results are shown in the right column of Figure 1. Moving from white noise to EEG surrogates, the time scales of the observed clustering hardly changed. However, as expected due to the introduced spatial dependence, the EEG surrogates now show a patchy spatiotemporal pattern (Figures 1D,F) corresponding more closely to those in the original paper. The similarity of the spatiotemporal patterns is of course only qualitative; a range of differences may have arisen from a combination of different acquisition parameters as well as intra- and inter-individual variability.
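A sketch of the multivariate Fourier transform surrogate construction (after Prichard and Theiler, 1994) is shown below; the essential point is that the same random phases are added to every channel, which preserves the cross-correlation structure along with the individual power spectra. The toy mixed-noise data stand in for the EEG segment.

import numpy as np

def multivariate_fft_surrogate(x, rng):
    """x: (channels, time) array. Returns one phase-randomized surrogate
    with the same auto- and cross-correlation structure as x."""
    n = x.shape[1]
    spec = np.fft.rfft(x, axis=1)
    phases = rng.uniform(0, 2 * np.pi, size=spec.shape[1])
    phases[0] = 0.0                   # keep the DC component real
    if n % 2 == 0:
        phases[-1] = 0.0              # keep the Nyquist component real
    # identical phase shifts across channels preserve the cross-spectra
    return np.fft.irfft(spec * np.exp(1j * phases), n=n, axis=1)

rng = np.random.default_rng(0)
mix = rng.standard_normal((4, 4))
eeg = mix @ rng.standard_normal((4, 1000))        # toy spatially correlated signals
surrogate = multivariate_fft_surrogate(eeg, rng)
print(surrogate.shape)                            # (4, 1000)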
Note that we applied the basic k-means clustering method instead of the evolutionary-clustering algorithm from the original paper; insufficient detail in the description of the procedure in the original paper made reproducing it prohibitively difficult. The value k = 3 was chosen for display of the clustering results; however, the results proved quite insensitive to the choice of k.
Our numerical simulation above focused particularly on the observed time scales of the network states as obtained with the described analysis approach. One could indeed ask further what evidence regarding a "repertoire of states" can be provided by the detection of clusters per se, and whether the detection of (some) clusters could be merely a consequence of running a clustering algorithm. For k-means clustering, the answer is obvious. Even for more complex approaches without a fixed number of clusters, such as the approach of Betzel et al. (2012), we conjecture that a repertoire could be observed even for a stationary process; however, this depends on the details of the applied analysis approach.
In summary, we aimed to illustrate the proposition that spurious nonstationarity, manifesting itself as alternation of network states, may appear due to methodological issues even in stationary processes such as white noise. In our example, we showed that the observation of clustering of time points (more precisely, temporal windows) into consecutive clusters ("states") of duration on the order of several hundred milliseconds (the time scale of putative brain microstates) can be reproduced by white noise in remarkable detail. Of course, this does not disprove the existence of such states; it just suggests the evidence may not be sufficient.
From a wider perspective, one could see a parallel here with other examples of data analysis approaches that may lead to spurious observation of intriguing structures due to intrinsic bias of the methods, such as apparent signs of chaos in stochastic processes with power-law spectra (Osborne and Provenzale, 1989) or small-world properties of functional connectivity graphs (Hlinka et al., 2012). Or, from an experimental point of view, with the role that measurement artifacts, such as those due to head motion, might play in observed network properties (Hlinka et al., 2010; van Dijk et al., 2012).
Estradiol Removal by Adsorptive Coating of a Microfiltration Membrane
This work demonstrates the enhancement of the adsorption properties of polyethersulfone (PES) microfiltration membranes for 17β-estradiol (E2) from water. This compound represents a highly potent endocrine-disrupting chemical (EDC). The PES membranes were modified with a hydrophilic coating functionalized by amide groups. The modification was performed by the interfacial reaction between hexamethylenediamine (HMD) or piperazine (PIP) as the amine monomer and trimesoyl chloride (TMC) or adipoyl chloride (ADC) as the acid monomer on the surface of the membrane using electron beam irradiation. The modified membranes and the untreated PES membrane were characterized by scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), water permeance measurements, water contact angle measurements, and adsorption experiments. Furthermore, the effects of simultaneous changes in four modification parameters: amine monomer types (HMD or PIP), acid monomer types (TMC or ADC), irradiation dosage (150 or 200 kGy), and the addition of toluene as a swelling agent, on the E2 adsorption capacity were investigated. The results showed that the adsorption capacities of the modified PES membranes toward E2 are above 60%, while the unmodified PES membrane had an adsorption capacity of up to 30% for E2 under similar experimental conditions, i.e., an enhancement by a factor of 2. In addition to the superior adsorption properties, the modified PES membranes maintain high water permeability, and no pore blockage was observed. The highlighted results pave the way to developing efficient, low-cost, stable, high-performance adsorber membranes.
Introduction
Humans and aquatic species are frequently exposed through the water to substances that cause disruption of the endocrine system, called endocrine-disrupting chemicals (EDCs). This exposure has become a serious environmental and health problem worldwide [1,2]. Among these EDCs, natural estrogens (e.g., estrone (E1) and 17β-estradiol (E2)), as well as synthetic estrogen (17α-ethinylestradiol (EE2)), have been receiving increased attention as a class of emerging contaminants due to their high occurrence and persistence in sewage treatment plant (STP) effluents and receiving natural waters [3][4][5]. Previous research studies have established a possible link between human exposure to estrogenic EDCs and decreasing male sperm counts and increases in several types of cancer [6][7][8].
The decline of fertilization rate and the alteration of the development and reproductive performances of fish and aquatic invertebrates have also been reported [9]. With the concerns regarding the high spread of estrogenic EDCs in water, the European Union has recently introduced a watch list mechanism to monitor the hormones E2 and EE2 amongst other substances to establish future standards for STP effluents discharge of estrogenic EDCs and pharmaceuticals as a part of the European Priority Substances Directive [10].
Nevertheless, concentrations of estrogenic hormones between 0.1 and 10 ng/L have been reported in domestic wastewater effluents and receiving natural water bodies in various cases around the world [3,[11][12][13][14]. Hence, developing an effective method for extracting EDCs from water is of vital importance.
Various methods such as catalytic degradation, photocatalytic degradation, biodegradation, advanced oxidation, liquid-liquid extraction (LLE), and ozone reactive LLE have been explored for the removal of estrogenic EDCs from water [15][16][17][18][19][20][21]. In comparison to these techniques, adsorption has been identified as a more efficient and effective method; it is environmentally friendly, efficient, and easily accessible. Various adsorbents, e.g., granular activated carbon, chitin, chitosan, ion exchange resins, and carbon-based adsorbents made of industrial and agricultural waste, are able to remove E2 from wastewater [22]. Yoon et al. [23] applied different kinds of powdered activated carbon for the removal of E2. Tagliavini et al. [24] studied the adsorption of steroid micropollutants on polymer-based spherical activated carbons. Other sorbents, including single-walled and multi-walled carbon nanotubes, have also shown good performance in removing E2 from aqueous systems [25][26][27][28]. However, high production and regeneration costs make these methods inefficient in water treatment and purification processes. It is therefore apparent that new adsorbents for removing estrogenic EDCs from water are needed.
Membrane technologies, including microfiltration (MF), nanofiltration (NF), and reverse osmosis (RO), are considered viable for the removal of EDCs, including natural hormones, from water. Past studies suggested that rejection is largely controlled by adsorption of hormones to the membrane [29][30][31][32][33][34]; adsorption removes hormones at a much higher level than would be expected based on the hormone's molecular size. NF/RO membranes are predominantly prepared as thin-film composite (TFC) membranes, which consist of a thin polyamide (PA) active layer on the top surface, a polyester (PET) backing layer on the bottom, and a polysulfone (PSf) or polyethersulfone (PES) support layer in between. Generally, the PET backing layer contributes only slightly to the adsorption of EDCs. In contrast, it is still debated which of the other two layers plays the predominant role in the adsorption of EDCs. Various studies have argued that the main mechanism in the adsorption of hormones such as E2 (containing both hydroxyl and carbonyl groups) is the formation of hydrogen bonds with the active polyamide layer of TFC membranes [31,33,35,36]. Others have postulated that the adsorption might be due to hydrophobic interactions between the hormones (log KOW of E2 = 4.1) and the membrane surface [32].
Steinle-Darling et al. [37] showed that adsorption of fluoxetine on the PA-PSf layer is higher than on a commercial PSf membrane, indicating that the PA layer has a high affinity toward the hormone. In addition, Semiao and Schaefer [38] conducted diffusion cell experiments in which the PA-PSf layer separated two identical hormone solutions, with the PA and PSf sides facing opposite directions. The authors reported that the PA layer showed a higher adsorption capacity than the PSf layer in the adsorption of hormones from water. PA-modified membranes were also studied by Han et al. [34], who showed that the strong compound-binding affinity originates from hydrogen bonding between PA amide groups and proton-donating groups on the target compound molecules.
On the other hand, Liu et al. [39] studied the adsorption kinetics of the PA layer by isolating the active PA layer of NF and RO membranes. They peeled off the PET backing layers, dissolved the PSf support layers, and argued that the presence of the PSf layer has important impacts on the adsorption capacities and on the time necessary to reach adsorption equilibrium. The authors suggested that EDCs with different physicochemical properties showed distinct adsorbed amounts on the two membranes, in almost the same order, which mainly resulted from electrostatic attraction/repulsion and hydrophobic interactions.
McCallum et al. [27] applied intermediate-stage products, such as a membrane without the polyamide coating layer, to carry out batch and filtration experiments for the removal of E2. They observed that the adsorption and desorption of E2 took place at the polysulfone support layer rather than at the polyamide active layer, and attributed this behavior to hydrophobic interactions.
The published works reviewed above do not agree on the adsorption mechanism of EDCs on membranes. It can be said that the adsorption of various EDCs probably takes place at different locations of the membranes (both the membrane surface and inside the pores) [40,41]. Semiao and Schaefer previously proposed that the surface properties of the membrane PA layer and the pore size of the membrane have an important influence on the adsorption of the hormones [38]. Accordingly, it has been argued that a membrane with larger porosity provides easier access to the adsorption sites inside the membrane, i.e., a higher adsorption capacity compared with a membrane with smaller pore sizes [29,42].
However, the low water permeation of TFC and PA membranes makes them undesirable for water treatment processes at low pressure. Additionally, the dense PA layer on the membrane surface leads to blockage of the membrane and reduces the access of the hormones to the adsorption sites, which has to be improved.
In this work, we report a novel and efficient method to prepare an adsorber membrane by creating an amide functional coating on the surface of a porous microfiltration (0.45 µm) PES membrane using the concept of the interfacial polymerization reaction [43]. The porous support membrane provides the mechanical stability required for operating under high permeation rates. The amide coating was fabricated by means of the interfacial reaction between hexamethylenediamine (HMD) or piperazine (PIP) as the amine monomer and trimesoyl chloride (TMC) or adipoyl chloride (ADC) as the acid monomer on the surface of the PES membrane. Electron beam (EB) irradiation was used to immobilize the amine monomers on the surface of the PES membrane via a grafting-to reaction. A subsequent reaction with an acid monomer resulted in the amide coating.
Toluene was added as a swelling agent to increase the surface area. The modification with the amide functionalities did not block the PES membrane pores and increased the E2 uptake without creating any defects or agglomerates.
E2 was chosen as the target molecule due to its high estrogenic potency and common presence in STP effluents [44]. The prepared modified membranes were characterized by means of scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR), water contact angle measurement, water permeance, and E2 adsorption studies.
The enhanced adsorption performance of the modified membranes toward E2 was attributed to the modifications with the functional groups of the membrane surface.
We found an interesting combined effect on the E2 adsorption capacity of the membranes after simultaneous changes of the modification parameters. Four important modification parameters, namely amine monomer type, acid monomer type, EB irradiation dosage, and addition of toluene, were discussed as influential factors.
Membrane Modification
In this work, different types of amide modifications were created on the surface of the PES membrane. Figure 1 illustrates the functionalization of the PES membrane by interfacial reaction. In brief, a PES membrane disk (47 mm diameter) was soaked in an aqueous solution containing the amine monomer (HMD or PIP, 2 wt.%) for 30 min, followed by EB irradiation with a dosage of 150 or 200 kGy. The irradiation was performed by means of a home-made electron accelerator (10 mA, 160 kV) under N2 atmosphere with O2 quantities less than 10 ppm. Afterward, the amine-immobilized membranes were rinsed with deionized water three times for 30 min and subsequently dried at room temperature for 60 min. Toluene was added to half of the pre-modified membranes at this stage to investigate the swelling effect. The amine-immobilized membranes were immersed in TMC or ADC in n-hexane solution (0.2 wt.%) for 2 min, where the interfacial reaction took place. All modified membranes were dried for 30 min to remove the n-hexane. Then, the membranes were rinsed three times with deionized water for 30 min. Finally, all membranes were dried at room temperature overnight.
The concentration of monomers, the respective irradiation dosage, and the amount of toluene are listed in Table 1. The modified membranes will hereafter be referred to as PA-1 to PA-16.
Water Permeance
A stainless steel filtration cell (16249, Sartorius Stedim Biotech, Göttingen, Germany) was used for the filtration tests, from whose results the water permeance was calculated. The permeation time for 50 mL of deionized water was recorded at a pressure of 1 bar. The permeation time was measured for five individual samples, and the average of the trials was calculated. The water permeance J was calculated by Equation (1):
J = V / (A · t · p), (1)

where V is the volume of water passing through the membrane, t denotes the permeation time of the water through the membrane, A is the active surface area, and p is the applied pressure. The bubble point of wet membranes was determined by continuously increasing the pressure up to the point at which the first stream of bubbles emerges.
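As a quick numerical check of Equation (1), the snippet below computes J for hypothetical values; the active area and permeation time are assumptions for illustration, not measured values from this work.

V = 50.0     # mL of deionized water
A = 13.8     # cm^2, assumed active area of a 47 mm membrane disk in the holder
t = 0.091    # min, hypothetical permeation time
p = 1.0      # bar, applied pressure
J = V / (A * t * p)
print(f"J = {J:.1f} mL/(min*cm^2*bar)")   # ~39.8, of the order of the reported ~40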
X-ray Photoelectron Spectroscopy
The chemical compositions of the untreated and modified membranes were investigated by X-ray photoelectron spectroscopy (XPS, Kratos Axis Ultra, Kratos Analytical Ltd., Manchester, UK).
Scanning Electron Microscopy
The morphologies of the modified and untreated PES membranes were studied by scanning electron microscopy (SEM, Ultra 55, Carl Zeiss Microscopy GmbH, Oberkochen, Germany). Magnification ranged from 300- to 25,000-fold. The samples were cut manually and coated with a thin (30 nm) chromium layer by means of the Z400 sputtering system (Leybold, Hanau, Germany).
Water Contact Angle
The surface wettability of the modified and untreated membranes with water was investigated by a static contact angle measurement system (DSA 30E, Krüss, Hamburg, Germany) and the sessile drop method. An average of at least five different sample points was reported.
Adsorption of E2 on Modified Membranes
The adsorption capacities of the modified and untreated PES membranes were measured in a series of batch experiments. In brief, a stock solution of estradiol in ethanol with a concentration of 10 mg·mL⁻¹ was prepared by adding 100 mg of estradiol to a 10 mL volumetric flask and making up to 10 mL with absolute ethanol. The stock solution was sonicated for 15 min. Fifty microliters of this solution was transferred to a 100 mL volumetric flask using an Eppendorf pipette, diluted to 100 mL with an aqueous ethanol solution (10% by volume), and sonicated for 15 min. Finally, an estradiol stock solution with a concentration of 5 mg·L⁻¹ was obtained.
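The dilution arithmetic behind the 5 mg·L⁻¹ solution can be verified in a few lines:

mass_mg = 0.050 * (100 / 10)      # 50 µL of the 10 mg/mL ethanol stock -> 0.5 mg estradiol
conc_mg_per_L = mass_mg / 0.100   # made up to 100 mL (0.1 L) -> mg/L
print(conc_mg_per_L)              # 5.0, the stated working concentration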
Ten-millimeter pieces of modified and untreated membrane disk samples were placed in a 48-well microtiter plate. To each sample, 200 µL of the aqueous E2 solution with an initial concentration of 5 mg·L⁻¹ were added. The plates were shaken for 30 min at ambient temperature. The supernatant solution was collected and transferred to a new microtiter plate. The final concentration of E2 was measured by fluorescence detection (Infinite M200, Tecan, Germany) at an excitation wavelength of 273 nm and an emission wavelength of 305 nm. The adsorption capacities of the modified membranes were calculated by Equation (2),

Adsorbed E2 (%) = (C0 − Cf) / C0 × 100, (2)

where C0 is the initial concentration of E2 and Cf is the final concentration after reaching equilibrium. The average of 5 individual experiments was calculated and reported.
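For illustration, applying Equation (2) to hypothetical concentrations reproduces the reported magnitudes (the final concentration here is an example, not measured data):

C0, Cf = 5.0, 1.9                        # mg/L: initial and hypothetical final E2 concentration
adsorbed = (C0 - Cf) / C0 * 100
print(f"Adsorbed E2 = {adsorbed:.0f}%")  # 62%, within the >60% range reported for PA-6/PA-10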
Results and Discussion
The main purpose of this work is to enhance the E2 adsorption on the modified PES membranes while improving the surface hydrophilicity of the membranes. At the same time, pore blocking during the modification reaction of the membranes needs to be prevented. The modified membranes were then characterized by various techniques to determine their hydrophilicity, pore structure, and chemical composition. Finally, E2 removal experiments were carried out to evaluate the adsorption performance of the modified membranes for the removal of E2 from water.
Water Contact Angle
Water contact angle (WCA) analysis was performed to investigate the surface wettability of the polymer membranes. The water contact angle values for the different modifications and the untreated PES membrane (referred to as REF) are presented in Figure 2. The untreated PES membrane exhibits a hydrophilic surface with a water contact angle of 44°. The modification with the thin PA coating resulted in a moderate decrease of the water contact angles, to the range of 37–43°. The lowest water contact angle was observed after modification with PIP and TMC and adding toluene (PA-4). The decrease in the contact angles reveals the enhanced wettability of the PES membrane after modification. This finding can be attributed to increased hydrophilicity due to the presence of hydrophilic amide units in the coating. It is assumed that the decrease in the contact angles discloses the successful formation of the thin amide coating on the surface of the PES membrane. Figure S1 in the Supplementary Information shows the experiment for determining the surface wettability of PA-4. The effect of toluene on wettability was also investigated; no significant effect of adding toluene on the wettability of the untreated PES membrane was observed. The water contact angles are listed in Table S1.
Water Permeance
Membrane performance in terms of permeance was determined by measuring the pure water permeability. The permeance values for the untreated PES membrane and the different modifications are summarized in Figure 3. The untreated PES membrane is already hydrophilic and has a permeance value of 40.1 mL·min⁻¹·cm⁻²·bar⁻¹. All the PA modifications showed a slight increase in performance, with an average permeance value of 41 mL·min⁻¹·cm⁻²·bar⁻¹. PA-5, with modification parameters of PIP-ADC-150 kGy and without the addition of toluene, showed the highest enhancement in permeance, with a value of 42.5 mL·min⁻¹·cm⁻²·bar⁻¹. The slight increase in water permeability can be attributed to the enhanced wettability of the membrane surface. It is assumed that the enhanced wettability of the surface results in the formation of a thin water film on top of the polymer membrane. This water film helps to prevent hydrophobic interactions and can increase water permeability. Khorshidi et al. [45] reported an average water flux of 7–68 L·m⁻²·h⁻¹ at a trans-membrane pressure of 1.52 MPa (equivalent to 0.001–0.01 mL·min⁻¹·cm⁻²·bar⁻¹) for a thin-film composite polyamide coated on a PES (0.2 µm) microfiltration membrane. A comparison between the permeance values obtained here and those reported by Khorshidi et al. discloses that immobilizing the amine component by electron beam and the subsequent reaction with the acid reagent could be a better approach to maintain the high water permeability of the PES support. The values from the water permeance experiments are presented in Table S2.
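The unit conversion behind the quoted equivalence can be checked directly (1.52 MPa = 15.2 bar):

for flux in (7.0, 68.0):                   # L·m^-2·h^-1 from Khorshidi et al. [45]
    ml_min_cm2 = flux * 1000 / 10000 / 60  # -> mL·min^-1·cm^-2
    per_bar = ml_min_cm2 / 15.2            # normalize by 15.2 bar
    print(f"{flux:5.1f} L/(m2*h) -> {per_bar:.4f} mL/(min*cm2*bar)")
# prints ~0.0008 and ~0.0075, i.e. roughly the quoted 0.001-0.01 range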
Membrane Pore Structure
The morphology and pore structure of the untreated and modified PES membranes were investigated by SEM. A comparison of SEM images from the surface of the untreated PES and some selected modified membranes can be found in Figure 4. Please note that SEM images of the top surface and cross-section of the modified and reference PES membrane are illustrated in Figure S2 in the Supplementary Information. As it could be expected from the water permeation experiments, no pore blockage was observed upon modification with the amide layer. It is observed that the modification does not adversely affect the morphology and no defects could be detected. Thus, the stability of the base membrane is not affected. This means that the modification is an appropriate approach to functionalize the PES membrane with the amide coating without altering the physical structure of the supporting membrane. The SEM results also revealed that this amide modification on the PES membrane is very thin and cannot be detected by SEM.
Membranes 2021, 11, 99 8 of 13 modification with the amide layer. It is observed that the modification does not adversely affect the morphology and no defects could be detected. Thus, the stability of the base membrane is not affected. This means that the modification is an appropriate approach to functionalize the PES membrane with the amide coating without altering the physical structure of the supporting membrane. The SEM results also revealed that this amide modification on the PES membrane is very thin and cannot be detected by SEM.
Membrane Chemical Composition
XPS analysis was carried out to prove the presence of the amide functionalities. Table 2 summarizes the chemical composition of the reference PES and the modified membranes. The untreated PES membrane is composed of 71.6% carbon, 24.4% oxygen, and 3.9% sulfur.
The application of the thin amide layer changed the composition measured at the membrane surface, and a significant increase in nitrogen on the surface of the membrane was detected. Since the reference PES membrane does not contain any nitrogen, this effect indicates that amide functionalities were formed on the membrane surface, i.e., the modification was successful. The formation of the amide coating can be further proved by the C1s spectra (Figure 5). Three signals were observed for the reference and modified PES membranes: a major peak at 285 eV that corresponds to carbon atoms without adjacent electron-withdrawing atoms (C−C and C−H), an intermediate peak at 286.5 eV that is assignable to carbon bonded to weakly electron-withdrawing atoms (C−O−C), and a minor peak at 288.5 eV that is associated with carbons attached to strongly electron-withdrawing atoms (carboxylic O=C−O and amide O=C−N) [46].
E2 Adsorption
The adsorption properties of the untreated and the modified PES membranes were examined by conducting batch adsorption tests for removal of E2 from aqueous solution. The effects of various synthesis parameters, including the type of monomers (HMD or PIP and TMC or ADC), the irradiation dosage (150 or 200 kGy), and toluene as the swelling agent, on the adsorption performance of the membranes were studied. Membrane disks (10 mm) were placed in a 48-well microtiter plate. Two hundred microliters of the E2 stock solution with an initial concentration of 5 mg·L−1 was added to each membrane disk. The depletion of the E2 concentration was evaluated after 30 min. Figure 6 shows the adsorption capacities calculated by Equation (2).

PA-6 and PA-10 exhibit the highest adsorption capacities toward E2. Please note that in the case of PA-6 and PA-10 the amide functionalities were formed on the surface of the PES membranes by the interfacial polymerization reaction between either PIP and ADC or HMD and TMC, respectively. In 30 min, both PA-6 and PA-10 removed more than 60% of the E2 present in the solution, while only slightly more than 30% was removed with the untreated PES membrane. These strong enhancements in the adsorption capacities of PA-6 and PA-10 for E2 may indicate that hydrogen bonds formed between the hydroxyl group of E2 and the amide functional groups on the modified membranes. In addition, the comparable enhancements in the adsorption capacities of PA-6 and PA-10 probably indicate that the formation of hydrogen bonds is independent of the aromatic or aliphatic character of the amide functional group created on PA-6 and PA-10. On the other hand, a 20% difference in E2 adsorption capacity is observed when comparing the adsorption performances of PA-10 and PA-9. No toluene was added during the modification of PA-9. These results reveal the key role of toluene in the high adsorption performance of the modified membranes.
The same behavior is observed for all the modified membranes, confirming the aforementioned finding on the important effect of toluene on the adsorption capacities of the modified membranes for E2. The higher adsorption capacity upon adding toluene may be attributed to the swelling of the membrane. In fact, a swelling-driven effect of toluene can result in an increase in the surface area of the membranes, i.e., a higher concentration of amide groups is accessible. It is worth noting that merely soaking an untreated PES membrane in toluene was not sufficient to increase the adsorption performance of the membrane for E2. Therefore, toluene plays an important role in the amide modification by swelling the membrane. The E2 adsorption results also revealed that the lower electron beam irradiation dosage is more effective at immobilizing the amine monomer on the surface of the PES membrane. The average E2 adsorption capacity measured for all the modified membranes was 0.58 µg·cm−2 (mass adsorbed per unit membrane area), which is nearly a two-fold increase compared to Koyuncu et al. [47], who reported an E2 adsorption capacity of 0.34 µg·cm−2 on a polyamide thin-film composite nanofiltration membrane (NF200). PA-6 and PA-10, with an adsorption capacity of 0.82 µg·cm−2, had the highest E2 adsorption capacity. This value is slightly higher than the maximum adsorption capacity (0.78 µg·cm−2) of an ultrafiltration PES membrane for E2 reported by Jermann et al. [48]. The results here are comparable with the work of Han et al. [34], who demonstrated that the high adsorption capacity originates from the hydrogen bonding between PA amide groups and proton-donating moieties on E2 molecules. Values of E2 adsorption [%] on the reference and modified PES membranes are presented in Table S3.
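As a rough consistency check on these capacities, the Python sketch below applies the mass-balance form of Equation (2), q = (C0 − Ct)·V/A, to the batch test described above (200 µL of 5 mg·L−1 E2 per 10 mm membrane disk). This is an illustrative calculation, not code from the study; the helper name is our own, and normalising by the disk's projected area (π·r²) is our assumption about how the per-area capacity was obtained.

import math

def adsorption_capacity_ug_cm2(c0_mg_l, removal_frac, volume_ml, disk_diameter_mm):
    # Mass of E2 adsorbed: (mg/L removed) x (volume in L), converted to µg
    adsorbed_ug = c0_mg_l * removal_frac * (volume_ml / 1000.0) * 1000.0
    # Projected disk area in cm^2 (diameter in mm -> radius in cm)
    area_cm2 = math.pi * (disk_diameter_mm / 20.0) ** 2
    return adsorbed_ug / area_cm2

for label, frac in (("PA-6 / PA-10, >60% removal", 0.60), ("untreated PES, ~30% removal", 0.30)):
    print(label, "->", round(adsorption_capacity_ug_cm2(5.0, frac, 0.2, 10.0), 2), "µg·cm^-2")

With 60% removal this yields about 0.76 µg·cm−2, in good agreement with the 0.82 µg·cm−2 reported for PA-6 and PA-10, which supports reading Equation (2) as a simple depletion mass balance.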
Conclusions
This work demonstrates the efficient removal of E2 from water by PES microfiltration membranes modified with amide functional groups. In fact, the microfiltration PES membrane surfaces were successfully modified with an amide functional coating. The modified membranes showed a high E2 adsorption capacity. Interestingly, membrane surface modification by both alkyl and aromatic amide functionalities resulted in comparable E2 adsorption properties. We therefore conclude that hydrophobic interactions were not significantly involved in the adsorption process. Rather, the successful formation of hydrogen bonds between E2 and the amide coating is likely responsible for the high adsorption capacities of the modified membranes toward E2. The modified membranes also had a slightly higher water wettability and water permeance compared to those of the untreated PES membranes. The pore structure, on the other hand, was not changed, which indicates a very thin, or even monomolecular, layer of the amide modification.
The effects of the synthesis parameters on the modified membranes were also studied and compared. Adding toluene was found to have the strongest effect on the creation of amide functional groups on the surface of the PES membrane, yielding an adsorption capacity of 0.82 µg·cm−2, probably by swelling the membrane.
The present study clarifies that surface modification by amide functionalities is an efficient and inexpensive method to generate stable and high-performance adsorber membranes. In contrast to traditional PA thin-film composite membranes, the amide-coated membranes retain and, in some cases, even improve their original microfiltration permeation performance.
Supplementary Materials:
The following are available online at https://www.mdpi.com/2077-0375/11/2/99/s1, Figure S1: Water contact angle test on a modified membrane (PA-4), Figure S2: SEM images of top surface (top) and cross-section (bottom) of modified and pristine PES membrane at magnifications of 10,000 (top) and 500 (bottom), Table S1: Values of water contact angle for reference and modified PES membrane, Table S2: Values of water permeation tests of modified and reference PES membranes, Table S3: Adsorption (%) and adsorption capacity (µg·cm−2) values for modified and pristine PES membrane. | 2021-02-13T06:16:37.920Z | 2021-01-30T00:00:00.000 | {
"year": 2021,
"sha1": "1b91a482b90156ee9876b26676f9327005f83082",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/11/2/99/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d4ddb4193c11d253a10838ed7b4df8f41a53574",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54815772 | pes2o/s2orc | v3-fos-license | Considering the Significance of Food Insecurity and Nutrition in New Brunswick Communities
Introduction
Recent data gathered by the New Brunswick Health Council [1] speaks to a significant prevalence of food insecurity (often 7% prevalence) and poor nutritional practices within many communities in New Brunswick. In terms of food insecurity, in addition to the experience of hunger, community members have reported that affected families are also forced to consume food of poor nutritional value. The high cost of nutritious, locally available food represents a barrier to healthy eating for many community members and families. Local consultation with the food banks also speaks to a much less than optimal food quality being distributed to meet the basic needs of the families serviced. This paper speaks to the likelihood of poor nutrition and food insecurity creating significant current and downstream negative health impacts. From the perspective of children and youth, developmental delays are argued to be a direct consequence of poor nutrition, leading to a negative cascade of poor learning and cognitive and emotional deficits, all pointing to more difficult and less fulfilling adult lives. Three common issues encountered with poor nutrition are diabetes mellitus [2] and iron and Vitamin D deficiencies; in addition to contributing to chronic disease, daily levels of functioning and productivity are argued to be impaired among children and adults alike.
Evidence strongly supporting interventions currently at work within New Brunswick communities, such as prenatal programs and school breakfast programs, is also given in this paper. An indication of the significant benefits, given the relatively low cost of such programs, is provided to inform continued and improved support for these programs.
Discussion
The World Health Organization (WHO) in 2005 [3] predicted that chronic diseases would account for 80% of all deaths in Canada. The WHO speaks to this further from a prevention perspective:
"At least 80% of premature heart disease, stroke and type 2 diabetes, and 40% of cancer could be prevented through healthy diet, regular physical activity and avoidance of tobacco products. Cost-effective interventions exist: the most successful strategies have employed a range of populationwide approaches combined with interventions for individuals. "
It has been reported for nearly 20 years that brain development (including in utero) is clearly linked with nutrition. It has also been known since the mid-nineties that school feeding programs not only increase attendance; academic achievement levels are also higher. School feeding programs erase the disadvantages that young children experience when they grow up in marginalized neighbourhoods characterized by poverty, hunger and malnutrition, broken families and crime [4].
Ten years ago, another study found that better-nourished children perform significantly better in school, mostly because of greater learning productivity per year of schooling [5]. A cost-benefit analysis suggests that a dollar invested in an early childhood nutrition program could potentially return at least three dollars' worth of gains in academic achievement, and perhaps much more. Similar long-term studies have demonstrated comparable findings, with cost-benefit ratios of 1:1.5 to 1:6 [6].
Prenatal nutrition programs that targeted low-income, high-risk pregnancies in Montreal have been shown to improve long-term health outcomes in children, saving at least $8 for each dollar invested [7]. This program works with individual pregnant mothers through a dietician risk assessment and the development of a rehabilitative nutritional diet. The Canadian Prenatal Nutrition Program (CPNP) has shown considerable success nationally in reducing the incidence of low birth weight (LBW) infants and in increasing the proportion of mothers who breastfeed their infants. This, in turn, has been shown to decrease the incidence of failure to thrive (FTT) in infants and respiratory illnesses, conditions endemic to and the cause of considerable disability among low-income Canadians [7,8].
Colon cancer is one of the most common forms of cancer seen among adult males. Prevention of this cancer has been demonstrated to be most amenable to nutritional and dietary therapy. Across the aggregate of cancer cases, a reduction of one third can be achieved through healthy diet and lifestyle choices [8].
Population health promotion strategies that address healthy eating and active lifestyles have been shown to be of benefit in reducing the incidence and severity of chronic disease. Studies of primary healthcare interventions that involved nutrition demonstrated positive results and support individual interventions involving skilled nutrition educators. Risks for cardiovascular disease, including dyslipidemia and hypertension, have been demonstrated to be substantially reduced through dietary intervention [9][10][11][12][13].
Diabetes poses a serious health concern for a large proportion of the Canadian population. Lifestyle interventions by dietitians have been shown to reduce the risk of developing diabetes by 58%, and benefits remain following completion of specific care. Additionally, educational care given by dietitians to diabetics has been shown to be more cost-effective than management by medications [13].
Also highly relevant to New Brunswick, dietary intervention has been shown to substantially improve mental health outcomes [14,15]. Poverty and food insecurity among New Brunswickers are recognized to be an endemic problem for which ongoing strategies and initiatives are being designed and implemented. This will require a long-term effort given the complexities and costs of nutritious food availability. The consequences of food insecurity are further intensified by the relatively low costs associated with low-nutritional-value, high-fat and high-carbohydrate foods (junk food). Good evidence exists regarding nutrition support, both at the policy and client level, to increase food intake in food-insecure households and communities [16,17]. In simple terms, nutritionists and health promotion specialists may act as effective advocates in addressing food insecurity to decision makers, create population-based solutions to mitigate the nutrition impacts, and work individually with clients and other caregivers to optimize nutrition intake in the context of food insecurity.
Obesity remains a growing epidemic among Canadians and New Brunswickers. The correlation with other chronic diseases and the socioeconomic impact is well documented [14,17]. For this problem, medical interventions have not yet been shown to be effective. Rather, for the treatment of overweight and obese people, a multidisciplinary approach employing nutritional therapy delivered by dieticians is recognized as a best practice [17].
A prevalence of Vitamin D and iron deficiency associated with food insecurity may well be indicative of ominous, chronic health issues adversely impacting child development, healthy pregnancies, population well-being and life expectancy of New Brunswickers.
Regarding Vitamin D hypovitaminosis, this condition is associated with numerous adverse health outcomes [6,7]. On a national basis, it has been estimated that the death rate could fall by 37 000 deaths (22 300-52 300 deaths), representing 16.1% (9.7-22.7%) of annual deaths, and the economic burden by 6.9% (3.8-10.0%) or $14.4 billion ($8.0 billion-$20.1 billion), less the cost of the program [6]. European studies have also shown positive cost-benefits for Vitamin D therapeutic interventions [7]. Given the significant prevalence of food insecurity and poor nutritional practices in New Brunswick, the positive economic benefits of addressing this issue may be expected to be highly significant.
Regarding iron deficiency, this condition is associated with a range of adverse outcomes [18,19]. In a study on the economic benefits of iron fortification in 10 countries, the median cost-benefit ratio was 6:1 for the 10 countries examined, rising to 36:1 when the discounted future benefits attributable to cognitive improvements are included. These benefits were calculated solely on the basis of the economic implications of motor and mental impairment in children and low work productivity in adults. Other implications, such as costs associated with complications in pregnant and postpartum mothers, were not available [18].
Additional literature suggests the most cost-effective approach to alleviating iron deficiency is through food-based approaches [19]. Given the likely high levels of iron deficiency and associated conditions among New Brunswickers, the benefits of interventions aimed at remedying this condition could be expected to be highly significant.
The importance of good nutrition and the role of dietitians in institutionalized care are well documented. Nutrition intervention in malnourished patients has demonstrated a twofold beneficial effect, on the health of the patient and on the healthcare system budget. Firstly, oral nutrition supplement (ONS) interventions allow malnourished patients to gain weight [20] and to improve their functional status, such as activities of daily living (ADL) or muscle strength [21]. Secondly, in hospitalized malnourished patients, nutritional intervention has been shown to reduce mortality by 24%, the complication rate by 56%, and the length of hospital stay by 2 days in surgical patients and by up to 33 days in orthopaedic patients [22]. In addition, high-protein ONS given to acutely ill older people lowers the number of patients readmitted to hospital at six months [23].
In 1997, Smith and Smith [24], in a survey of hospital practices on nutrition screening and intervention, demonstrated that every $1 USD spent on the provision of high-quality nutritional care resulted in $5 USD of savings for the facility.
Conclusion
"Food security exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life" [25].Information gathered to date through the Community Health Needs Assessments (CHNA) in New Brunswick indicates that the prevalence of food security across and within communities is far from uniformly assured.Given the health outcome data gathered for New Brunswick, there are few conditions that cannot be linked in some form or manner to the quality, quantity and nutritional value of food consumed within this province.Food security is considered one of the most predictive social determinants of health and is directly linked to other determinants including income.Information provided in this brief review, while far from comprehensive, may assist decision makers and community members in their prioritization process to address community health needs. | 2018-12-05T11:19:50.775Z | 2014-12-25T00:00:00.000 | {
"year": 2014,
"sha1": "a44143d13383c77dd23340b8851169265ca855dd",
"oa_license": "CCBY",
"oa_url": "https://www.graphyonline.com/archives/IJDCD/2014/IJDCD-108/article.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a44143d13383c77dd23340b8851169265ca855dd",
"s2fieldsofstudy": [
"Medicine",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
128111892 | pes2o/s2orc | v3-fos-license | The Monte Alen–Monts De Cristal Landscape
Introduction
The Monte Alén-Monts de Cristal (MA-MC) landscape covers 26 747 km2, nearly half of which is in the northwest of Gabon, with the remainder in the southeast of Equatorial Guinea. It is made up of three ecoregions: the Atlantic Congolese forest ecoregion, the central West equatorial coastal ecoregion and the southwest equatorial coastal ecoregion.
The landscape occupies a plateau and mountain ranges. Annual rainfall ranges from 2000 mm in the east to 2800 mm in the west. There is a period of drought between July and September that is greatly softened by the presence of low cloud cover over a substantial area.
Overall, the forest concessions cover 65% of the area and the protected areas cover 18%, with 27% of these protected areas in Equatorial Guinea; only 3% of the area is used for agriculture. On the Gabonese side, two hydroelectric dams have been built in the Mbé Valley to supply the capital city, Libreville (Devers and Vande weghe 2007).
Populations
The two main ethnic groups in the landscape are the Fang, who live primarily in mountainous areas, and the Ndowe, in the coastal basin in Equatorial Guinea. The Beyele pygmies, who used to live in the Altos de Nsork area, moved to southern Cameroon and other areas of the forest two decades ago. According to the most recent available statistics (2006), the average population density is 16-18 inhabitants/km2 on the Equatoguinean side and 0.6 inhabitants/km2 on the Gabonese side.
Forest coverage
Forest covers 26 101 km2 of the MA-MC landscape (de Wasseige and Devers 2009), in which the vegetation is very diverse and rich. The main types of landscape are humid lowland forest, degraded humid lowland forest, mountainous forest, degraded mountainous forest, secondary forest and savannah, as well as a small extent of abandoned gaboon forest.
Deforestation
Deforestation is a major threat to the landscape. Between 1990 and 2000, 128 km2 of forest, or 0.49% of the total, was lost (de Wasseige and Devers 2009). Although some of this deforestation is attributable to agricultural activities, mainly subsistence agriculture, the main driver is industrial exploitation of timber by forest companies and small-scale exploitation of wood for handicrafts and energy needs.
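The decadal loss figure quoted above can be reproduced from the two numbers given in this briefing; the short Python sketch below does the arithmetic. It is an illustrative check only, using the 26 101 km2 forest cover and the 128 km2 loss reported here.

forest_km2 = 26101.0  # forest cover of the MA-MC landscape (de Wasseige and Devers 2009)
lost_km2 = 128.0      # forest lost between 1990 and 2000
decade_loss_pct = 100.0 * lost_km2 / forest_km2
print(round(decade_loss_pct, 2), "% over the decade")   # ~0.49%, matching the text
print(round(decade_loss_pct / 10.0, 3), "% per year")   # ~0.049% mean annual loss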
Biodiversity
The MA-MC landscape is characterised by exceptionally rich biodiversity, featuring:
Threats to biodiversity conservation
The main threats to biodiversity in the landscape are:
Land administration and land use
The following land use types are present in the landscape.
Institutional partners
The main institutional partners working in the MA-MC landscape are:
Opportunities, adaptation and REDD+
The following characteristics of the landscape are of relevance to adaptation and REDD+ activities.
The following are the main threats to the MA-MC landscape:
• Local people have little involvement in landscape management and conservation, particularly in protected areas, making it difficult to engage them in community-based adaptation and mitigation activities based on natural resources management.
• Multiple organisations and projects are operating in the landscape on an individual basis. A major challenge is to coordinate the various activities and organisations effectively, in order to achieve synergistic results and consistent interventions, and hence improve adaptive capacity.
• A cause of conflict is the lack of clear and fixed national borders in the MA-MC landscape. Having clear borders between the two countries will help avoid misunderstandings and lead to better management of the landscape and its reserves and parks.
• Local people's lack of awareness of existing laws and regulations, and companies' failure to comply with these laws, lead to abusive exploitation of resources by both groups, thus hampering the success of adaptation and mitigation interventions.
• Monts de Cristal National Park in Gabon, which spans 120 000 ha, is largely inaccessible to humans because of its dense river forests, widely variable topography (elevations range from 200 to 900 metres) and constant mists and cloud cover; consequently, it is almost virgin territory;
• Community-based natural resource management areas, namely the Kougouleu-Medouneu-Mbé area, Equatorial Guinea National Forest and the Abanga River area; and
• Three areas for natural resource extraction, namely Lonmin, the Société équatoriale d'exploitation forestière (SEEF) and Rougier.
Devers, D. and J.P. Vande weghe (eds). 2007. The forests of the Congo Basin - state of the forest 2006. Publications Office of the European Union, Luxembourg.
de Wasseige, C. and D. Devers (eds). 2009. The forests of the Congo Basin - state of the forest 2008. Publications Office of the European Union, Luxembourg.
de Wasseige, C. and D. Devers (eds). 2011. The forests of the Congo Basin - state of the forest 2010. Publications Office of the European Union, Luxembourg. | 2018-12-18T00:10:48.798Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "25c28c13a4a87ee2901ab756917396b80ddfa8f2",
"oa_license": "CCBY",
"oa_url": "https://www.cifor.org/publications/pdf_files/brief/4089-brief.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "25c28c13a4a87ee2901ab756917396b80ddfa8f2",
"s2fieldsofstudy": [
"Geography",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
261759217 | pes2o/s2orc | v3-fos-license | The coloniality of Italian fascist architecture
This article traces the modern history of the Piazza di Porta Capena in Rome. It begins with the design of a modernist building for the square by the architects Ridolfi and Cafiero in 1938, created to celebrate the empire of fascist Italy. The building was intended to host the Ministry of the Colonies and to be flanked by an ancient stele looted from Aksum in Ethiopia. With the shift to a new world order after 1945, the building was completed to serve as the headquarters of the United Nations Food and Agriculture Organization. As part of Italy's belated commitment to reparations for colonial crimes, the stele was dismantled and reinstalled in Aksum in 2008. The void thus created in the piazza was filled by a new monument, this time a memorial to the victims of the 9/11 attack in New York in 2001. Through the lens of the 'coloniality of architecture', this article explores the changing aesthetics of the square, uncovering a history of shared rationalities between Italian fascist colonialism and its afterlife, as forms of government of 'others'. By investigating the piazza's architectural configurations, it assists the re-orientation of narratives around the history of Italian fascist architecture.
Introduction
In the 1930s, Mussolini designated Rome's Piazza di Porta Capena as the stage for celebrating Italy's fascist empire. There, the ancient vestiges of the Roman imperial past would unite with the new symbols of Italy's colonial power: a stele (an ancient funerary monument) looted from Aksum at the time of the fascist occupation of Ethiopia was to be flanked by the headquarters of Italy's Ministry of the Colonies (also known as the Ministry of Italian Africa). Designed by modernist architects Mario Ridolfi and Vittorio Cafiero, construction began in 1938, only to be suspended once the Second World War broke out. After Italy's defeat and the end of fascism, Italy lost control of its colonies in Libya, the Horn of Africa, and the Mediterranean (Albania and Greece). However, these events did not impede the construction of the building. Work resumed in 1950-1951, but instead of housing the Ministry of the Colonies it became the home of the United Nations Food and Agriculture Organization (FAO), thus symbolising the rise of the United Nations and a newly established world order under American hegemony. Today, what is left of the fascist imperial cityscape is a clear space, with the FAO headquarters facing a memorial to the victims of the 9/11 attacks in New York. The stele is not to be seen, having been returned in stages to Ethiopia between 2003 and 2005 (and finally reinstalled in situ in 2008) as part of reparations for colonial crimes.
This article illustrates how, since the fall of Italian fascism and the end of empire, the square has become a symbol of shifting world orders, under the influence of global multilateralism. From the FAO architecture to the 9/11 iconography, the square represents the conversion of fascist colonial relations of power into contemporary multi-scalar apparatuses of world governance.
The article explains the ways in which the shifts from colonial practices of government to post-colonial modes of development, humanitarianism, and pre-emptive military operations have been aesthetically and architecturally played out. Today, while the issue of governing and control over the former colonial world remains, it revolves around the creation of new topical 'emergencies' within the ex-colonised hemisphere. These include the ongoing global War on Terror and the protection of 'human security' and 'food security'. The changing aesthetics of the square offers a symbolic representation of today's world governance, one in which military and civilian powers converge and are used interchangeably to justify current interventions. These transformations, however, do not simply bear witness to geopolitical shifts and historical (dis)continuities. For the spatial transformation of the square symbolises the general process of cultural forgetfulness and Italy's weak historical consciousness around its colonial and fascist past.
In this article I read these entangled histories through the lens of the 'coloniality of architecture'. This refers to a method of inquiry that addresses the afterlife of colonialism by investigating its material configurations to uncover the rationalities of colonial rule that have survived in the absence of formal colonialism. It builds on Aníbal Quijano's theorisation of coloniality, which addresses European modernity as a construct and a blueprint of differentiations from other cultures, as a system that relies on the structures of colonial domination over the rest of the world. 1 This relation has been shown by Walter Mignolo to be the specific core of the modernity complex that makes coloniality the darker side of modernity and modernity an unfinished project that ripples across global inequalities and injustices to this very day. 2 As a way to avoid 'the myth of the "postcolonial" as the notion that the elimination of colonial administrations amounted to the decolonisation of the world', 3 such continuity can, indeed, be read through the notion of 'coloniality'. Nelson Maldonado-Torres has argued that coloniality, unlike colonialism, refers to long-standing patterns of power that emerged as a result of colonialism, but that define culture, labour, intersubjective relations, and knowledge production well beyond the strict limits of colonial administrations. 4 By discussing the case of the square, I intend to stretch this argument further and extend it to architectural knowledge and heritage. Architecture (before and beyond design, practice, and construction) is a dynamic epistemic field, a mode of interpretation of reality, and a documentary form. Hence, this article understands the built environment and architecture as a medium for the establishment of power relations across different temporalities, as a mode of registering and containing the effects of historical transformations in material and spatial configurations. In the debates around architectural histories and theories, Samia Henni has correctly pointed out that acknowledging such relations 'is to expose the coloniality of history-writing and policy-making and simultaneously urge for an intersectional analysis of architecture and the political'. 5 Critical discussion of this precise tension and intersection in the space of Piazza di Porta Capena is the core theme of this intervention.
To do so, the article is divided into four sections. I first introduce the debate around the afterlife of fascist architecture in relation to the question of decolonisation and defascistisation in Italy. Then I situate the looting and return of the stele of Aksum to Ethiopia within the Italian postcolonial and postfascist context, and show how this event has stirred a growing debate around the unresolved question of fascist colonial legacies. I then move to the story of the square and unfold it in two ways: firstly, by situating the work of Ridolfi and Cafiero, who as modern architects retained their roles as public servants in post-fascist Italy, at the intersection of changing world orders; and secondly, by considering the square as the stage on which the symbolic transfer of power from fascist colonialism to postwar global multilateralism has been enacted. The conclusion reiterates the need to imagine possible ways of re-thinking the presence of the past by fostering critical approaches that re-orient narratives around fascist colonial architectural heritage.
The question of the afterlife of Italian fascist architecture
By 1943 Italy had 'lost' all its colonies as the result of a series of military defeats in the Second World War. In the immediate aftermath of the war, the 1946 general amnesty for fascist crimes (in Italy and abroad) saved from prosecution many war criminals and perpetrators of massacres in Libya and Ethiopia. Although the 1947 Paris Peace Treaties forced Italy to fully renounce claims to its colonies, decolonisation was not a closed chapter. While the Ministry of the Colonies was formally abolished in 1950, the UN partition of the Somali territories gave Italy a ten-year mandate of trusteeship in Somalia, commonly known as AFIS (Amministrazione Fiduciaria Italiana or 'Italian Trusteeship'), to 'lead' the country to independence and support development. This kept the old fascist colonial administration in power until 1960. Meanwhile, in Libya, Italian settlers were finally expelled only after Muammar al-Qaddafi overthrew the monarchy in 1969. Overall, despite Italy's obligation to commit to reparations for its crimes, it took more than seventy years for Italy to meet its obligations towards Libya and Ethiopia, marked by the controversial Treaty on Friendship, Partnership and Cooperation with Qaddafi in 2009 and a reparations process built around the restitution of the Aksum stele to Ethiopia. 6 As many scholars, journalists, and writers have discussed, since the end of the Second World War the Italian debate around decolonisation has too often been reduced to questions of a geopolitical transition, which has relegated colonialism to a historical footnote, as merely a minor event in Italian history. Simultaneously, Italy's disenchantment with the past could be presented alongside the idea that fascism had to be understood as a mere caesura in the national history, a pause which, as famously theorised by the Italian philosopher Benedetto Croce, was simply a 'parenthesis' within liberalism and a linear path of European civilisation and progress.
Alongside this dual metabolisation of the past, architectural and urban heritage has stood still as a silent background. Since 1945, architecture built under the fascist regime has often kept a functional role and has been re-used by post-fascist republican governments to host new national and global institutions. In this regard, historian Ruth Ben-Ghiat reopened a long-lasting debate on the issue that in Italy it is very common to find fascist buildings, monuments, and memorials that have been left untouched or normalised. Why, unlike Germany, she asks, have Italian urban spaces been permitted to preserve so many traces of the fascist past? 7 The lack of a real historicisation of this heritage and the a-critical celebration of its aesthetic 'beauty', argues Ben-Ghiat, have failed to clearly recognise that these remnants are, after all, living monuments glorifying violence and nationalism. Not surprisingly, her intervention has stirred fierce opposition from the Italian public, as well as from architectural historians and critics. 8 The argument of functionality has commonly been invoked to explain that the continued use of many buildings is possible because this architecture does not reflect fascist ideology and is no longer connected to ethnonational mythologies. Among many critics, architect and theorist Paolo Portoghesi has strongly claimed that Ben-Ghiat's 'simplistic polarizations between good and evil' risk deliberately erasing the 'cultural depth' of the architecture built under the regime and of its interpreters, and the influence that this had in making Italy a modern country. 9 What becomes clear from these exchanges is that the contestations have proven the sensitivity of this topic in the Italian public debate over its colonial and fascist past, as well as the pertinence of posing difficult questions. Since then, critical scholarship on fascist architecture has grown and has taken on the challenge seriously by asking more questions: is this architecture retaining the essence of fascism? Can the immanent presence of fascist colonial architectural and monumental remnants make the case for 'continuities' between colonial/fascist and postcolonial/postfascist histories? Is there such a thing as a perpetual design? 10 As an attempt to formulate answers to these questions, my contribution looks at the story, vicissitudes, aesthetics, and today's layout of the architecture of Piazza di Porta Capena to make the argument for political-historic convergences between colonialism, fascism, and their afterlives. It reiterates the warning and denunciation made, already in 1936, by W. E. B. Du Bois and C. L. R. James in the immediate aftermath of Mussolini's invasion of Ethiopia: Italian fascism was not to be considered a deviation from the European march of progress. On the contrary, it was the logical development of Western civilisation, the consequence of slavery and imperialism, and the drive of a capitalist economy and racist ideology, all of which constitute the very premises of European and Western modernity and its civilisational ethos. 11
In his essay 'Discourse on Colonialism' (1950), Aimé Césaire also claimed that fascism could not be detached from other Western ideological traditions, such as liberalism, and their significance in the expansion of Europe's modern project. In line with this tradition, I seek to show that the entire space of Piazza di Porta Capena, with its architecture, design, and aesthetics, rather than visualising a radical political divorce between Italian colonial fascism and its aftermath, offers the possibility to trace a history of shared rationalities between different forms of government of the 'others' and civilising missions that are intimately connected to each other. 12 Therefore, I argue that architectural heritage can serve as a tool not only to understand the historic territorial and expansionist dimensions of Italy's colonial project but also the ways in which certain systems of world governance have ramified after colonialism and fascism ended.
The looting and restitution of the stele of Aksum
By 1936, after Italy had occupied Ethiopia, fascism had reached the peak of popularity and consensus in Italy. Slogans such as the 'civilising mission' and the wish for a 'place in the sun' fostered the idea that, with the conquest of Ethiopia, Italy could now reach the maximum expression of its power. In 1937, Italy announced the creation of the Africa Orientale Italiana [Italian East Africa] through the unification of Ethiopia with its previously occupied colonies, Eritrea and Somalia, and further expanded the Italian settler colonial project in the Horn of Africa. David Rifkind points out that the fascist desire to emulate the Roman Empire inspired Mussolini to deploy state power for the realisation of communication infrastructure and, most importantly of all, the looting of antiquities. 13 Once Italy completed the military occupation of Ethiopia, a large stele from the city of Aksum was plundered by Italian colonial troops and transported to Rome. In antiquity, the Kingdom of Aksum (100 BC-700 AD) represented the earliest form of Ethiopian civilisation. 14 The city hosted (and still does so in its role as a UNESCO heritage site) an impressive number of stelae, originally carved and erected to mark the location of underground burial chambers. The theft of the stele and its re-erection in Rome in 1937 allowed the fascist regime to proclaim itself the successor of the Ancient Roman conquerors. 15 The placement of the stele at the ancient heart of the city was meant to create vistas and axes that would connect the Circus Maximus to the Colosseum and the pyramid of Cestius, as well as creating an artery that would link the city to the newly built EUR42 imperial quarter and the nearby port of Ostia, permitting access to the Mediterranean Sea. 16 The construction, facing the Aksum stele, of a new building to host the Ministry of Italian Africa was therefore meant to celebrate Italy's imperial geography. After two rounds of architectural competition, Mussolini assigned the commission to a team of architects led by Mario Ridolfi and Vittorio Cafiero. The works were inaugurated in 1938, but the war caused their suspension in 1943. When the fascist government collapsed, the four buildings (A, B, C, and D) that had been designed to compose the Ministry's complex were far from complete. In the immediate aftermath of the conflict, part of the complex was first occupied by Rome's central post office. In the early 1950s, the works were re-started and the site was co-shared with the United Nations Food and Agriculture Organization (FAO). Construction of buildings C and D was completed in the 1960s, and by 1980, FAO became the only institution occupying the site.
As the building took shape, the 1947 Paris Peace Treaties set the terms for the return of the stele to Ethiopia. Article 37 stipulated the restitution of looted works of art and objects of religious and historical value to their legitimate owners. But, despite these obligations, Italy kept the restitution on hold until 2002. 17 By then new bilateral agreements had been signed between Italy and Ethiopia, focussing on business, infrastructural projects, and development aid. These agreements, together with UNESCO mediation, eventually led to the complete restitution of the stele in Aksum in 2008, thus leaving the square empty and the FAO headquarters in Rome standing alone.
However, after the removal, the apparent sense of emptiness led to a new mutation. On 11 September 2009, the Mayor of Rome, the right-winger Gianni Alemanno, inaugurated a memorial to the victims of the terrorist attacks on the World Trade Centre in New York eight years earlier. The memorial, placed near the original location of the stele, consists of a plaque placed between two columns taken from the fountain of Curia Innocenziana in the Piazza di Montecitorio in Rome. This new spatial intervention clearly reproduces the profile of New York's Twin Towers. Paradoxically, on the plaque are carved the words of the philosopher George Santayana: 'Those who cannot remember the past are condemned to repeat it'. 18 In recent decades, scholarly work in the fields of Italian studies and postcolonial and memory studies has focused on the case of Piazza di Porta Capena and the restitution of the stele as a way to draw new lines for mapping the geography of 'amnesia' of postcolonial Italy. 19 While, as many scholars have claimed, the return of the stele has provided an important precedent in the current process of deriving new principles of international law in the field of cultural heritage and restitution, the erasure of the traces and symbols of colonial violence from public spaces has also reinforced Italy's 'postcolonial politics of disappearance'. 20 In the postwar era, the piazza has progressively seen the imperial aura fading away. The area where the stele stood vertically became a small pedestrian passage while FAO was under construction. Surrounded by busy traffic of cars and public transport, the stele was swiftly anonymised and estranged from its recent history. Rather than an imperial marker, for years it mostly served the everyday needs of Rome's oblivious dwellers and commuters as a recognisable point of reference. After the restitution of the stele in the 2000s, additional public works changed the whole ecology of Piazza di Porta Capena. Today's 'piazza', while nominally keeping the title of a 'square', does not hold any effective public function. As if it were a total 'outsider', a forcibly exiled space from Italy's imperial, colonial, and fascist history, it functions as a huge junction for vehicles in between the Terme di Caracalla and Circus Maximus. What was originally designed as a large space between the Via d'Africa (what is now the Viale Aventino) and the nearby Roman ruins is now a constellation of noisy traffic islands, scattered in between multi-lane service roads, that seems spatially (and conceptually) detached from the FAO headquarters.
However, beyond the scholarly debate about such 'urban voids' as the embodiment of Italy's amnesia around its colonial and fascist past, people's everyday lived experiences have also offered new lenses of interpretation. In this direction, the most significant work has come from the novelist and academic Igiaba Scego with her Roma negata. Percorsi postcoloniali nella città (2014). Offering an alternative sightseeing experience of Rome, narrating and displaying 'forgotten' fascist and colonial spaces, monuments, squares and buildings, Scego starts her tour from Piazza Capena, exactly where, instead of the Aksum stele, the 9/11 memorial is laid; it is a place where everything can be present and represented (from FAO to the victims of global terror) while now seemingly separate from being associated with Africa and the victims of Italian colonialism. 21 This work importantly brings into an Italian context some of the key questions raised by the postcolonial critique, which revolve around the sense and stigma of 'disappearance'. Striking back against such absence, Scego produces a visual storytelling that juxtaposes Italy's spaces of oblivion with the images and portraits of the heirs of colonialism: the migrants and refugees now inhabiting the city. By illustrating a city (and a country) with a weak memory of and responsibility towards the victims of colonial violence and brutality, readers are taken on a visual journey where the spaces designed by fascism become animated, marked, and re-claimed by those migrants, refugees, second-generation Italians and postcolonial diasporas that have been stigmatised as the 'invisible others'. 22 But, while most of the critique has focussed on the departure and restitution of the stele as a way to discuss the Italian postcolonial condition of 'absence', 'oblivion', and 'amnesia', this article argues that the presence of the fascist colonial building still standing in the piazza, under the guise of a supranational and global modernising ethos, continues to shape how we understand international relations and politics in the present. In that sense, the building cannot simply be considered a trace and a remnant of the past. Instead, it functions as an 'optical device' through which it is possible to observe the manifold ways in which power structures and relations of power evolve, change, and (re)consolidate across time. Beyond the scholarly debate around Italian Rationalism and the architectural styles that differentiated modernist architecture under fascism, 23 my analysis reveals the piazza, with its architecture and spatial layout, as a mediator and generator of power relations at the interchange of world orders. To do so, the next sections will first explore the ways in which the two modern architects Ridolfi and Cafiero interpreted architecture during and after fascism, and then the institutions they gave a home to in Piazza di Porta Capena (Figs. 1 and 2).
From fascism to democracy: architects across world orders
The architecture of Mario Ridolfi and Vittorio Cafiero, as with that of many other architects who served under the fascist regime, survived the end of fascism in different ways and lived on into a new epoch. Under the regime, Ridolfi was a representative of Rationalism, the Italian modernist avant-garde, while Cafiero came to fame as an envoy to Eritrea, supervising the enlargement of the city plan of Asmara after Mussolini announced the creation of the empire in 1938.
Under the leadership and guidance of such prominent representatives of Rationalism as Adalberto Libera, Ridolfi believed in functional architecture and in the correspondence between the structure and the purpose of a building. He combined this with scenic effects, curvilinear aesthetics, and an emphasis on a nationalist heroic ethos and monumentality. 24 In 1934, Ridolfi started his collaboration with Cafiero, submitting a project to the architectural competition for the Palazzo del Littorio of the Mostra della Rivoluzione Fascista in Rome. 25 One year later, he successfully completed the construction of a post office in Nomentana in Rome, perceived to this day as an iconic symbol of Italian rationalist architecture.
In the postwar years, Ridolfi took on new roles as a public architect and an innovator in architectural pedagogy. Together with Pier Luigi Nervi, Luigi Piccinato, Aldo della Rocca, and Bruno Zevi (upon his return to Italy from the US after escaping the fascist racial laws), he was a founding member of the Scuola. For architects who had previously served under the regime, the years of reconstruction were an occasion to claim their disassociation (both cultural and architectural) from fascism, and to lend their skills to finding solutions to the social, industrial, and environmental destruction brought by the war. At a time when American hegemony played an important role in the reconstruction of Italy, Ridolfi was commissioned in 1946 by the American Information Service Institute (the overseas arm of the Office of War Information during the Second World War) to head a research team and write the Manuale dell'architetto. 26 This Manuale was written to teach architects how to deploy 'a systematic approach, through modular dimensional systems and prefabricated elements, to the urgent need for housing and other civic structures'. 27 Most importantly, it represented a political need to dis-enfranchise modern and modular building techniques from a totalitarian logic. 28 In this spirit, Ridolfi designed new social housing projects in 1949 in Nomentana in Rome. Known as the Tiburtino houses, these projects resembled traditional rural construction to promote the relocation to urban areas of those fleeing poverty in rural areas. 29 This urban model was the neighbourhood unit first developed experimentally by the Tennessee Valley Authority during the American New Deal, which became a model for other social housing projects, inaugurating the so-called 'neorealist' architecture of post-war Italy. 30 This urbanism, which reflected a distortion of the previous monumentalism and rationalism, represented the new tendency among architects to break ties with the past and align themselves and their designs with the new democratic governance.
However, unlike Ridolfi, who in fact abandoned the monumental project of the FAO building, Vittorio Cafiero never gave up on monumentality during or after fascism. He started his career as a young architect working as a scenographer for the regime's cinema industry. In 1926, he set up Gli ultimi giorni di Pompei, a propaganda movie that aimed at creating a fascist aesthetics grounded in the myth of a continuum between the glory days of the Roman empire and the fascist present. 31 Under this influence, his architecture developed and grew as an attempt to merge classicism with modernist styles, combining baroque decoration with futurist traits. In addition to the collaborations with Ridolfi, Cafiero retained his monumentalist style after fascism, typified by his design of the Olympic Village in Rome.
Cafiero played a key role in delivering the last master plan of Asmara, the capital of occupied Eritrea. In 1936 and 1937, after the chief architect of the regime, Marcello Piacentini, developed the general plan for the empire, fascist modern architecture sought to experiment in the colonies with the method of zoning promoted by the CIAM [Congrès internationaux d'architecture moderne] and to implement urban segregations similar to those undertaken by the French architect Henri Prost in colonised Morocco. 32 In 1937, an architectural and planning competition was announced to update Odoardo Cavagnari's old plan of Asmara (1914-1918). Cafiero arrived in Eritrea in 1938 on a mission to modernise, to make Asmara 'look more fascist', to test modern functionalism and zoning, and to implement a stricter racial separation of Eritreans from Italians and other Europeans. 33 To do so, the master plan focused on racialised allocations of private and public spaces in accordance with Italy's Racial Laws (1938) and the Penal Sanctions for the Defence of Racial Prestige against the Natives of Italian Africa (1939). 34 Cafiero was a demolition architect, charged with expanding segregation and developing public spaces accessible only to white settlers. He intended to completely remove the native quarters in the northeast of Asmara, build perimeter walls and barracks around the 'White City', and transform the hilly area of Abba Shawl into a green barrier to serve as a buffer zone to prevent rebellions. 35 This plan was eventually ditched by the then Governor Diodace, who feared it would further antagonise the colonised population.

[Figure: FAO headquarters, Rome, 1937-1939, then 1947-1951, designed by Mario Ridolfi, with Vittorio Cafiero, Giulio Rinaldi, Ettore Rossi (first and second degree competition), Volfango Frankl, Alberto Legnani, and Armando Sabatini; courtesy of Accademia Nazionale di San Luca, Archivio contemporaneo, Fondo Ridolfi-Frankl-Malagricci, Roma <www.fondoridolfi.org>]
From 1937 to 1939, Cafiero and Ridolfi worked on finalising the design of the Ministry of Italian Africa. The works started in 1938 but were suspended in 1943. The building site was re-opened in 1947 as the war ended, and the building was completed in the early 1950s, with a fifteen-year delay. This time, because of Ridolfi's move away from the project, the building was realised under the sole supervision of Cafiero and his team. Still under the aegis of a modernist monumentality (with minimal changes from the original drawings), but within an inverted world order, the completion of the FAO building can be read as part of a novel architectural trajectory. In the decade immediately following the end of the war, a new liaison between power and architecture took shape: Oscar Niemeyer (with Le Corbusier) completed the United Nations building in New York in 1950. The design of Mario Ridolfi and Vittorio Cafiero followed in 1952, giving a 'home' to the FAO. Marcel Breuer, Bernard Zehrfuss, and Pier Luigi Nervi made the UNESCO headquarters in Paris in 1958. New claims and values were invoked to legitimise an historic geopolitical transition. Architects themselves played a key role in celebrating and popularising the symbols of democracy, the end of nationalisms, and the rise of multilateralism. Indeed, Niemeyer in 1947 stated that it was the duty of an architect to make 'something representing the true spirit of our age, of comprehension and solidarity' to reflect an organisation that 'set the nations of the world in a common direction and gives to the world security'.36 Similarly, the project of the UNESCO headquarters in Paris envisioned a larger Y-shaped structure that could host important and universal art collections as a way 'to evoke the peace that the institution has sought to establish and preserve throughout the world'.37 These projects bear evidence of the new challenges that modernists had to face in a post-fascist world. As Lucia Allais explains, these consisted of keeping modernism as an international style, protecting the monumental aesthetics, and making sure that the UN headquarters would keep tight the analogy between design and diplomacy.38 Against this model, the case of the FAO headquarters represents an anomaly. With the realisation in a post-fascist world of a design that was originally imagined for the celebration of fascist supremacy, the dilemma remains unanswered: what values does this building stand for, and what image of society does it aim to represent? Assuming that we should think of any architectural project in its broader physical framework and not exclusively through its design and material form,39 I will elaborate an answer to this question by discussing the building as a secular cathedral under which different powers constantly flow and shift. To do so, we now explore the interrelations of these powers in the wider space of Piazza di Porta Capena (Figs. 3, 4, and 5).
Genealogies of power in the square of the empire
The specific case of the square and its chameleonic architecture offers an interesting spatial configuration of Italy's political and cultural 'repression' of this heritage. By raising the question of 'repression', I do not intend to reiterate the debate on Italy's psycho-social amnesia around its colonial and fascist past. Instead, I refer to the theorisation offered by Giorgio Agamben in opposition to the notion of 'profanation'. Agamben borrows the term 'profanation' from the lexicon of theology and transforms it into a political operation that 'deactivates the apparatuses of power and returns to common use the spaces that power had seized'. As the counterpart of profanation, Agamben introduces 'secularisation' as a form of repression which traces some linearity between relations of power in transition: It leaves intact the forces it deals with by simply moving them from one place to another. Thus, the political secularisation of theological concepts (the transcendence of God as a paradigm of sovereign power) does nothing but displace the heavenly monarchy onto an earthly monarchy, leaving its power intact.40 In that sense, the coloniality of architecture is traceable through the ways the building itself and its surroundings embody an 'earthly' transition of power structures. After the war, the aura of the vertical power of the Ministry gave way to the new power of the UN, in particular FAO, embodying the transition from fascist and colonial verticality to the multi-scalar developmental technologies of world governance. Since the San Francisco conference in 1945, the UN has symbolised the foundation of a new order grounded on the end of nationalism, the promotion of economic growth and capitalism, the spread of liberal democracy, and the creation of new institutions and norms. FAO was established between 1943 and 1946 to tackle hunger, food crises, and poverty on a global scale, and it became central at the dawn of the Cold War in the fight against communism. With the end of empires, the FAO embarked on missions around the globe training 'Third World counterparts in American expertise', contributing to creating solid foundations for the contemporary development industry, at the core of which are an elite of supposedly 'apolitical technical experts'.41 In the 1960s, FAO and the UN General Assembly established the World Food Programme, combining the fight against world hunger with the reality that food is part of foreign policy and diplomacy. Since then, it has progressively grown into the symbol of a sort of liberal internationalism that merges humanitarian principles with development. From the 'Universal Declaration of Human Rights' (1948) to the '2030 Agenda for Sustainable Development' (2015), the right to food and the elimination of hunger stand at the foreground of FAO's agenda. Since the 1980s, the concept of 'food security' has become hegemonic in global liberal discourse. Additionally, in 1994, FAO adopted the concept of 'human security' as a way to complete its set of technologies of security and containment. According to Mark Duffield, human security depends on the optimisation, calculation, and pre-emption of those potential dangers brought about by the uncontrolled circulatory effects of crisis and emergencies, whether humanitarian disasters, famines, poverty, or mass migration.42
Hence, the invention of 'human security' moved the value of 'security' from its traditional reference point, represented by nation states, to one of populations. This shift sought to address issues such as the prevention of potential risks that could transform human societies into ones in distress. Duffield continues to argue that human security brought to completion the theorisation of development as a biopolitical practice of government.43 The invention of statistical expertise, professions, and sciences, together with studies on human reproduction, health care, epidemics, and the nutritional status of a population, became the core around which political rule and government were coalescing to promote the goal of protecting human life as the conditio sine qua non for the safety of the sovereign.44 In this way, human security expands our understanding of Foucault's biopolitics through a series of practices that work against the proliferation of international security threats for a system of governance that has FAO as one of its significant icons.
By epitomising a move from fascist colonialism to liberal world governance, I argue that we can look at the establishment of the FAO headquarters as an important step in the transition from historic colonialism to a coloniality of power. Angelo Del Boca and Giorgio Rochat, among many other scholars of Italian colonialism, have documented how Italy is accountable for many atrocities: including mass deportation from Eritrea, Libya, and Ethiopia; internment in concentration camps in Cyrenaica in Libya as well as in Somalia; the use of chemical weapons and poison gas to exterminate Ethiopian resistance fighters between 1935 and 1936; and the indiscriminate killing of civilians as a form of retaliation used by the army and armed settlers.45 These 'warfare practices' inscribe Italian colonialism within the sphere of a modern necropower. Achille Mbembe, in his critique of Foucault's notion of biopolitics, argues that, unlike in Europe, where the government of people operates through sets of technologies which protect and foster 'life' for the creation of spaces of security, the example of the colonised world is emblematic of the creation of 'lawless' spaces as a way to justify the fabrication of death.46 At this intersection, the building keeps a functional role, mustering evidence of the historical and political transitions and, with Agamben, the 'secularisation' between two forms of rule: from the necropolitical 'right over life and death' to postcolonial expressions of biopower that meet and converge in the modernist shape of today's FAO headquarters.
Furthermore, the substitution in the late 2000s of the Aksum stele with a 9/11 memorial has reinforced the coloniality of power, making room for other memorialisations and new civilisational messages. This latest intervention has triggered the attempted making of new global historical memories that are not necessarily bound to a sense of belonging to the nation or an ethnos but, on the contrary, inform the identity of communities who did not directly experience specific traumatic historical events.47 The experience of catastrophe following the Holocaust and the Second World War came to create a global political and moral space where collective trauma is held hostage by an exclusive Western interpretation, providing inspiration and justification for military and non-military interventions to prevent outbreaks of major threats to the global hegemonic order. Within this context, 9/11 acted as a historical turning point for the West, out of which a transnational sense of 'vulnerability' started spreading. This has generated, and continues to generate, justifications of military aggression and war on a global scale as a way to protect Western interests and concerns.
Through the FAO headquarters and the new memorial, the 'coloniality of architecture' materialises through an enduring civilisational aesthetics that marks a convergence between the colonial and postcolonial forms of government of non-European others. Under a new guise, I have shown how the modernity-coloniality complex survives the end of fascist colonialism by perpetuating slogans such as the 'civilising mission'. In Italy, the civilising mission originally relied on the fascist equation made between civilisation, architecture, and archaeology, the supposed line of separation between 'moderns and barbarians'. As Mia Fuller explains, the civilisation-and-architecture equation of the Italian nationalist discourse in the 1930s aimed to trace a continuous line through Italian history, tying Roman ruins to the great expectations of a future fascist empire. Ancient Roman architecture and vestiges were mobilised by fascist propaganda to symbolise a modern outpost in the African colonies.48 So, while the looting of the ancient funerary stele gave the fascist regime the illusion of following in the footsteps of Ancient Roman civilisation, the replacement of the stele epitomises a new Western civilisational spirit, one that has garnered international consent to launch transnational acts of violent retribution, most notably in Afghanistan, Iraq, the Levant, and the Horn of Africa, and to justify the War on Terror as a global crusade 'against barbarity'.
In so doing, I argue that the square bears witness to the coloniality of architecture in the ways it weds fascist-colonial spatial transformations to postcolonial architecture and monumentality, connecting historic colonialism to the postcolonial government of Western extraterritorial crises and emergencies. This introduces ideological substance to new relations of power, representing the nexus between the War on Terror and the right to food as evidence of post-9/11 global security concerns, the call for international intervention along the axis of security and development, and the connection between military, developmental, and humanitarian concerns. This has given the international public a pragmatic interpretation of poverty as dangerous, as the perilous path leading to invisible threats, risks, and fear; victims can turn into enemies, and today's victims can morph into tomorrow's perpetrators, becoming the object of concern and the potential target of both civilian and military technologies. The square, despite the transition from fascism to global liberalism, preserves traces of 'convergences' among Western civilisational narratives.
Conclusion
The case of the Piazza di Porta Capena has produced many narrative threads, which importantly show the possibility of reading more than a single history of heritage. On the one hand, it gives us the tools to critically engage with Italy's flattering self-representation as a nation guilty of a less evil form of colonialism. On the other hand, the FAO headquarters bears testimony to the links between past and present, embodied by an architecture and designs that 'perpetuate' across different epochs, complementing civilisational narratives and practices.
As I have shown, spatial analysis and interpretation are combined with biographical evidence to reinterpret the histories of Italian fascist colonial architecture. Too often this architecture and heritage have been deemed superfluous and redundant to understanding the political complexities of the (post)colonial present. The case of the FAO building and its surroundings demonstrates the exact opposite. Despite Ridolfi and Cafiero's limited collaborations, the FAO headquarters in Rome inscribes them in the exclusive elite of modern architects who, within the same historical period, designed and built the new centres of global power of the United Nations and UNESCO, respectively in New York and Paris. By flagging the secularisation process overtaking their architecture, this article points to the need to develop the terms, concepts, and content of the conversations around Italian colonial and fascist architecture, beyond debates on formalism and architectural styles. It argues for an epistemic shift that enables this architecture to be understood as the basis for developing entangled analyses of modernity, colonialism, fascism, and their aftermaths. By using architecture as both the object and the method of inquiry, I develop the concept of the 'coloniality of architecture' as a way to discuss, through architecture and heritage, the long-lasting legacies of colonialism, especially 'after' its formal end. In this way, the space of Piazza di Porta Capena offers multiple ways to read the inextricable combination of the civilisational rhetoric of modernity (civilisation and development) and the logic of fascism and colonialism.
Disclosure statement
No potential conflict of interest was reported by the author.
Figure 4.
Figure 4. The 9/11 memorial in Piazza di Porta Capena, photographed by the author, 2021. Figure 5. The FAO building, photographed by the author, 2021. | 2023-09-14T15:19:52.476Z | 2023-05-19T00:00:00.000 | {
"year": 2023,
"sha1": "7f4a00678b33bef23c5975f617ea2db8d904bc74",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/13602365.2023.2238284",
"oa_status": "CLOSED",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "80a332a5ad50045f0700a8821e8c25510c9dfc15",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
18162481 | pes2o/s2orc | v3-fos-license | Simulating High-Redshift Disk Galaxies: Applications to Long Duration Gamma-Ray Burst Hosts
The efficiency of star formation governs many observable properties of the cosmological galaxy population, yet many current models of galaxy formation largely ignore the important physics of star formation and the interstellar medium (ISM). Using hydrodynamical simulations of disk galaxies that include a treatment of the molecular ISM and star formation in molecular clouds (Robertson & Kravtsov 2008), we study the influence of star formation efficiency and molecular hydrogen abundance on the properties of high-redshift galaxy populations. In this work, we focus on a model of low-mass, star forming galaxies at 1 ≲ z ≲ 2 that may host long duration gamma-ray bursts (GRBs). Observations of GRB hosts have revealed a population of faint systems with star formation properties that often differ from Lyman-break galaxies (LBGs) and more luminous high-redshift field galaxies. Observed GRB sightlines are deficient in molecular hydrogen, but it is unclear to what degree this deficiency owes to intrinsic properties of the galaxy or to the impact the GRB has on its environment. We find that hydrodynamical simulations of low-stellar mass systems at high redshift can reproduce the observed star formation rates and efficiencies of GRB host galaxies at redshifts 1 ≲ z ≲ 2. We show that the compact structure of low-mass high-redshift GRB hosts may lead to a molecular ISM fraction of a few tenths, well above that observed in individual GRB sightlines. However, the star formation rates of observed GRB host galaxies imply molecular gas masses of 10^8 - 10^9 M_sun similar to those produced in the simulations, and may therefore imply fairly large average H_2 fractions in their ISM.
Introduction
To improve the physical description of star formation in hydrodynamical simulations of galaxies, Robertson & Kravtsov (2008) implemented a new model for the ISM that includes low-temperature (T < 10^4 K) cooling, directly ties the star formation rate to the molecular gas density, and accounts for the destruction of molecular hydrogen by an interstellar radiation field (ISRF) from young stars. They used simulations to study the relation between star formation and the ISM in galaxies and demonstrated that, for the first time, their new model simultaneously reproduces the molecular gas and total gas Kennicutt-Schmidt (KS) relations, the connection between star formation and disk rotation, and the relation between interstellar pressure and the fraction of gas in molecular form (e.g. Wong & Blitz 2002, Blitz & Rosolowsky 2006). The capability of this model to reproduce both the star formation efficiency and molecular abundance of nearby systems makes it useful for simulating low-mass galaxies that have suppressed H2 abundances (and whose star formation rates would be overestimated in common treatments of star formation based on the KS relation) and high-redshift galaxies whose structural properties may vary substantially from local systems (and may therefore not have the same KS relation normalization). The model should be especially useful for studying low-mass galaxies at high redshift, such as long duration gamma-ray burst (GRB) host galaxies at 1 ≲ z ≲ 2, which is the focus of this work.
The highly-energetic phenomena known as GRBs were discovered over forty years ago (Klebesadel et al. 1973), but their extragalactic origin was confirmed only in the last decade (e.g., Metzger et al. 1997). Since then, the properties of the cosmological population of galaxies that host GRBs have been increasingly well-studied (e.g., Bloom et al. 2002, Le Floc'h et al. 2006, Prochaska et al. 2006, Berger et al. 2007a,b). Recently, interest in long duration GRB galaxy hosts as possible tracers of the global star formation history of the universe has motivated systematic studies of their star formation efficiencies and stellar masses (Castro Cerón et al. 2008, Savaglio et al. 2008). These studies have found that high-redshift GRB hosts have small stellar masses (log M⋆ ∼ 9.3) and moderate star formation rates (SFR ∼ 2.5 M⊙ yr⁻¹). Compared with other high-redshift galaxy populations, GRB hosts tend to have lower star formation rates at fixed stellar mass compared with Lyman-break galaxies and lower stellar masses at fixed star formation rate compared with field galaxies (for details, see Savaglio et al. 2008).
Spectroscopic studies of GRB sightlines have provided additional information about the post-explosion character of the host galaxy ISM. Tumlinson et al. (2007) failed to detect H2 in five GRB sightlines and suggested that low metallicity and large far-ultraviolet ISRF strengths (10 − 100× the Milky Way value) were responsible for destroying molecular hydrogen in GRB hosts. They interpreted the lack of vibrationally excited H2 lines as evidence against the GRB destroying its parent molecular cloud, but noted various caveats to this conclusion, such as the parent cloud size or cloud photodissociation prior to the GRB. Whalen et al. (2008) used one-dimensional radiative hydrodynamical calculations to show that GRBs can ionize nearby neutral hydrogen, but suggested that an additional ISRF is necessary to remove molecular hydrogen from the nearby ISM. Prochaska et al. (2008) studied NV absorption in GRB sightlines, and argued that if nitrogen ionization by GRB afterglows leads to NV absorption, then the observations support a scenario where dense, molecular cloud-like environments serve as the sites of GRBs.
Given the increasingly detailed studies of GRB hosts, their interesting ISM and star formation properties, and their low stellar masses, a theoretical study of GRB host galaxy analogues using hydrodynamical simulations that include a treatment of the molecular ISM is warranted. Below, we present simulations of a model GRB host galaxy that include a prescription for the molecular ISM and star formation in molecular clouds (Robertson & Kravtsov 2008). We use the simulations to examine the star formation efficiency and molecular hydrogen content of galaxies with structural properties similar to those expected for low-mass galaxies at 1 ≲ z ≲ 2. Below, we discuss our methodology and present some initial results.
Methodology
To study the properties of long duration GRB host galaxies, we simulate a numerical model of an isolated galaxy using a version of the N-body/Smoothed Particle Hydrodynamics code GADGET (Springel et al. 2001, Springel 2005b) that incorporates a model for the molecular ISM (Robertson & Kravtsov 2008). For details regarding the numerical galaxy models, simulation methodology, and ISM model, we refer the reader to Springel et al. (2005a), Robertson et al. (2006a,b), and Robertson & Kravtsov (2008), but a brief summary follows. The numerical galaxy model is designed to approximate the properties of 1 ≲ z ≲ 2 GRB host galaxies as determined by Savaglio et al. (2008). The stellar disk mass of the system is set to log M⋆ = 9.3, with a gas fraction of f_gas = 0.5 (appropriate for high redshift, see Erb et al. 2006), which implies a total virial mass of log M_vir = 10.9 for a typical disk baryon fraction of f_b = 0.05. The virial radius is set appropriately for a halo with virial mass M_vir at z ∼ 2. The exponential disk scale length was fixed according to the Mo et al. (1998) formalism, including the adjustment for an effective Navarro et al. (1996) dark matter halo concentration of c_NFW = 6 (also appropriate for the chosen virial mass and redshift, see Bullock et al. 2001) and a spin of λ = 0.05. The density field of the dark matter halo follows the Hernquist (1990) profile, while the velocity fields of the dark matter halo and the exponential stellar disk are set using the Hernquist (1990) distribution function and the epicyclical approximation, respectively. The numerical realizations of the stellar disk, gaseous disk, and dark matter halo are initialized with N_disk,⋆ = 4 × 10^5, N_disk,gas = 4 × 10^5, and N_DM = 4 × 10^5 particles, and are evolved with a gravitational softening of ε = 70 pc. The simulation is calculated for a duration of t ∼ 1 Gyr, or about the time between redshift z ∼ 2 and z ∼ 1.5.
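As an illustrative cross-check of the quoted structural parameters (not the actual initialization code of the simulations), the sketch below estimates the z = 2 virial velocity and radius for log M_vir = 10.9 and the leading-order Mo et al. (1998) disk scale length R_d ≈ (λ/√2) R_vir, omitting the c_NFW and disk-gravity correction factors; the cosmological parameter values here are assumptions, not taken from the paper.

import numpy as np

# Assumed flat LCDM cosmology (illustrative values)
H0 = 70.0                  # Hubble constant [km/s/Mpc]
Om, OL = 0.3, 0.7
G = 4.30e-6                # gravitational constant [kpc (km/s)^2 / Msun]

z = 2.0
M_vir = 10**10.9           # virial mass [Msun]
lam = 0.05                 # halo spin parameter

Hz = H0 * np.sqrt(Om * (1.0 + z)**3 + OL) / 1000.0   # H(z) [km/s/kpc]
# For a halo defined at ~200 rho_crit(z): M_vir = v_vir^3 / (10 G H(z))
v_vir = (10.0 * G * Hz * M_vir)**(1.0 / 3.0)         # virial velocity [km/s]
R_vir = v_vir / (10.0 * Hz)                          # virial radius [kpc]
R_d = (lam / np.sqrt(2.0)) * R_vir                   # disk scale length, leading order [kpc]

print(f"v_vir ~ {v_vir:.0f} km/s, R_vir ~ {R_vir:.0f} kpc, R_d ~ {R_d:.1f} kpc")
# Gives roughly v_vir ~ 90 km/s, R_vir ~ 43 kpc, and R_d ~ 1.5 kpc,
# close to the rotation speed and disk scale length quoted in the Results.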
The simulation includes a treatment of the physics of the ISM and star formation following the model presented by Robertson & Kravtsov (2008), and interested readers should examine that work for details.The photoionization code CLOUDY (Ferland et al. 1998) is used to tabulate the cooling rate, heating rate, molecular abundance, and related properties of gas as a function of density, temperature, metallicity, and local interstellar radiation field (ISRF) strength.The star formation rate is calculated by converting the molecular gas density to stars on a timescale that scales with the local dynamical time, with an efficiency set to match the star formation efficiency per free fall time in local molecular clouds (e.g., Krumholz & McKee 2005, Krumholz & Tan 2007).The local ISRF spectral shape is fixed to the local Milky Way ISRF inferred by Mathis et al. (1983), but the ISRF strength scales with the local star formation rate density (i.e., young, massive stars supply the local ultraviolet radiation field).The abundance of molecular gas tracked using CLOUDY includes the photodissociative and heating effects of this ISRF, and thereby includes a coarse accounting of the regulatory impact of the ISRF on star formation in molecular clouds.
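To make the star formation prescription concrete, a minimal sketch of the rate it implies is given below. The efficiency per free-fall time eps_ff ≈ 0.02 follows Krumholz & McKee (2005); the exact coupling and normalization used in the simulations may differ in detail, so this is a schematic, not the production code.

import numpy as np

G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]

def sfr_density(rho_gas, f_H2, eps_ff=0.02):
    """Star formation rate density [g cm^-3 s^-1]: molecular gas is
    converted to stars at an efficiency eps_ff per local free-fall time.

    rho_gas : total gas density [g cm^-3]
    f_H2    : local molecular mass fraction (as tabulated with CLOUDY)
    """
    rho_H2 = f_H2 * rho_gas                                  # molecular gas density
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho_gas))   # local free-fall time [s]
    return eps_ff * rho_H2 / t_ff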
Results
Figure 1 shows the gaseous structure of the GRB host galaxy model at z ∼ 1.5 (after 900 Myr of evolution). The figure shows the gas surface density of the system (image intensity) and the median temperature of the local ISM (purple regions have temperatures T ≲ 10^3 K, while blue regions have T ≳ 10^4 K). The system has a rotational velocity of v_rot ≈ 100 km s⁻¹ and a disk scale length of R_d ≈ 1.5 kpc. During the simulation the star formation rate of the system varies in the range SFR ≈ 0.5 − 2.5 M⊙ yr⁻¹, while the specific star formation rate is SFR/M⋆ ≈ 0.17 − 1.1 Gyr⁻¹. These properties are consistent with the properties of high-redshift GRB host galaxies determined by Savaglio et al. (2008), who find star formation rates of SFR ∼ 0.1 − 10 M⊙ yr⁻¹ and stellar mass-to-SFR ratios of M⋆/SFR ∼ 0.1 − 10 Gyr.
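As a quick arithmetic check (illustrative only): for log M⋆ = 9.3, i.e. M⋆ ≈ 2 × 10^9 M⊙, a star formation rate of SFR ≈ 0.5 − 2.5 M⊙ yr⁻¹ gives SFR/M⋆ ≈ 0.25 − 1.25 Gyr⁻¹, matching the quoted range once the modest growth of M⋆ over the run is allowed for.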
The compactness of the system leads to a dense ISM and a considerable molecular gas fraction. The global, mass-weighted molecular abundance declines from f_H2 ∼ 0.5 at z ∼ 2 to f_H2 ∼ 0.3 at z ∼ 1.5. As a function of radius, the molecular fraction declines from f_H2 ∼ 1 near the center of the galaxy to f_H2 ∼ 0.1 − 0.3 beyond a disk scale radius. The typical star formation rate-weighted radius of the system is r_SFR ∼ 0.8 kpc, where the ISM molecular fraction is f_H2 ∼ 0.6. Hence, if observed high-redshift GRB host galaxies are similar in nature to this simulated system, GRBs will likely occur in molecular-rich regions. While these molecular abundances are consistent with the spectroscopic studies by Prochaska et al. (2008), they are well above the observed H2 abundance along GRB sightlines (e.g. Tumlinson et al. 2007). In this model, the compact and dense structure of the high-redshift GRB host prevents the diffuse ISRF from suppressing the H2 to the levels observed in GRB sightlines. If an ISRF is responsible for suppressing H2 to observed levels in GRB sightlines, it may be generated by discrete point sources near the GRB in a manner not captured by the diffuse ISRF included in these simulations.
We note that in order to supply the star formation efficiency for GRB hosts determined by Savaglio et al. (2008), GRB hosts may need to be fairly molecule-rich if their structure is similar to the simulated high-redshift galaxy analogues presented here. Figure 2 shows the total gas Kennicutt-Schmidt (KS) relation for the GRB host galaxy analogue, measured in annuli with a width of Δr = 100 pc. Plotted is the star formation rate density Σ_SFR as a function of the total gas surface density Σ_gas (blue points), compared with the mean disk-averaged trend determined by Kennicutt (1998; dashed line). The central concentration of molecular gas causes the total gas KS relation of the simulated GRB host galaxy analogue to be steeper than the disk-averaged relation. In order to supply the observed star formation rate of SFR ∼ 1 − 10 M⊙ yr⁻¹, as this simulated galaxy does, the typical consumption timescales of ∼ 100 Myr for molecular gas imply a reservoir of roughly M_H2 ∼ 0.1 − 1 × 10^9 M⊙ (the simulated system has M_H2 ∼ 2 − 7 × 10^8 M⊙ during its evolution). Since observed GRB hosts have stellar masses of only log M⋆ ∼ 9.3 (Savaglio et al. 2008), the inferred molecular fraction of the ISM should be large even for very gas rich systems.
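The arithmetic behind the quoted reservoir is simply M_H2 ≈ SFR × t_dep ≈ (1 − 10 M⊙ yr⁻¹) × 10^8 yr ≈ 10^8 − 10^9 M⊙, where t_dep ∼ 100 Myr is the molecular gas consumption timescale; these illustrative numbers bracket the simulated value of M_H2 ∼ 2 − 7 × 10^8 M⊙.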
Overall, we find that, under standard assumptions about the mass and redshift scalings of galaxy structure, hydrodynamical simulations of disk galaxies with stellar masses of log M⋆ ∼ 9 that utilize a model for the molecular ISM and star formation in molecular clouds (Robertson & Kravtsov 2008) can reproduce the observed star formation rates and efficiencies of GRB hosts (e.g., Castro Cerón et al. 2008, Savaglio et al. 2008). The star formation in both observed GRB hosts and the simulated GRB host analogue presented here is efficient for their low stellar masses, and to supply the observed range of star formation rates (SFR ∼ 0.1 − 10 M⊙ yr⁻¹) the molecular gas content of such systems may need to be considerable (f_H2 ≳ 0.1). While this result is consistent with observations that suggest GRBs occur in dense, potentially molecular-rich regions of the ISM (e.g. Prochaska et al. 2008), more work is needed to reconcile such results with the low molecular abundance observed in GRB sightlines (e.g. Tumlinson et al. 2007) if GRBs cannot efficiently destroy H2 in the ISM (Whalen et al. 2008).
Figure 1.
Figure 1. Simulated long duration Gamma-Ray Burst (GRB) host galaxy analogue at z ∼ 1.5. Shown is the gas surface density of the GRB host (image intensity), color coded by the median interstellar medium temperature (purple regions have T < 10^3 K, while blue regions have T ≳ 10^4 K). The simulated galaxy has a stellar mass log M⋆ ≈ 9.3 and a star formation rate SFR ≈ 1.2 M⊙ yr⁻¹, similar to high-redshift GRB host galaxies (e.g., Castro Cerón et al. 2008, Savaglio et al. 2008). The simulations include the Robertson & Kravtsov (2008) model of the molecular ISM, enabling a study of the connection between star formation rate, galaxy properties, and H2 abundance in GRB hosts.
Figure 2.
Figure 2. Kennicutt-Schmidt relation for a simulated GRB host galaxy at z ∼ 1.5. Shown is the star formation rate surface density Σ_SFR as a function of total gas surface density Σ_gas, measured in annuli (blue dots). The average Kennicutt-Schmidt relation of the GRB host has a steeper power-law index (α ∼ 3.0, green line) than the disk-averaged relation measured by Kennicutt (1998; α ∼ 1.4, dashed line), owing to the suppression of H2 in the galaxy exterior by the interstellar radiation field and the low ISM metallicity (for a detailed discussion, see Robertson & Kravtsov 2008). | 2008-08-07T20:07:25.000Z | 2008-06-01T00:00:00.000 | {
"year": 2008,
"sha1": "cbd0f0699920b3af572d651eda152ac0e9d60421",
"oa_license": null,
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/CC241CDBAB264715C11DEF944E6CB319/S1743921308027361a.pdf/div-class-title-simulating-high-redshift-disk-galaxies-applications-to-long-duration-gamma-ray-burst-hosts-div.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "cbd0f0699920b3af572d651eda152ac0e9d60421",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
213181196 | pes2o/s2orc | v3-fos-license | Impact of instruction with concept cartoons on students’ academic achievement in science lessons
In this study, the impact of concept cartoons on students' academic achievement in science lessons was investigated. The research was carried out in the spring term of the 2018-2019 academic year. The study group consisted of 49 4th grade students at Zonguldak Devrek Çaydeğirmeni TOKİ Primary School; 23 of the students were in the experimental group, and 26 were in the control group. A quasi-experimental design with pretest and posttest control groups was employed in the study. The unit "The Earth Crust and Movements of The Earth" was taught with concept cartoons to the experimental group students, and with the conventional method (the current instructional program) to the control group students. The research lasted for 4 weeks. The students in the experimental and control groups received 12 h of instruction (3 h per week). An achievement test and concept cartoons were used as data collection tools. Arithmetic mean, standard deviation, normality tests, the KMO test and independent groups t-tests were used for data analysis. A statistically significant difference was found between the academic achievement of the experimental group students taught with concept cartoons and that of the control group students taught with the conventional method. The difference was in favor of the experimental group students.
INTRODUCTION
Studies in the field of education and training have always tried to answer this question: How can people learn better and more easily? While searching for an answer, new instructional theories, approaches, methods and techniques have been developed. Scientific knowledge in the field of education is a result of these studies. However, a definitive answer to how people learn best has not yet been found. Recent changes in the curriculum have encouraged students to be active in their education. Reaching knowledge and constructing it in the mind have become the student's mission, whereas teachers have taken on the role of guiding students.
A constructivist approach was adopted while making changes to the science curriculum in Turkey (MEB, 2005). According to the constructivist approach, learning is shaped by an individual's prior knowledge, personal characteristics and learning environment (Özmen, 2004). The constructivist approach argues that
learning happens as a result of an active learning process which is constructed by an individual through interpersonal variations and by interaction with physical phenomena (Watts, 1997; Spigner-Littles and Anderson, 1999). The constructivist approach suggests that learning is a process that involves associating an individual's prior and new knowledge (Liang and Gabel, 2005). Activities in which students are required to be active have gained importance with the changes in the curriculum (Gürol, 2003). In this context, learning environments that include students, together with the instructional methods and techniques used, are quite important in increasing quality (Hançer et al., 2003).
Although a constructivist approach has been adopted in the science curriculum, lessons are still taught using traditional methods. This situation adversely affects students' active participation in the lesson. However, the constructivist approach requires methods and techniques that ensure the active participation of students in the lesson and support individual differences. From this perspective, methods and techniques are needed in the science lesson to ensure students' active participation. The teaching methods and techniques used in the teaching process are of great importance for students' active participation in classes, for focusing their attention on the lesson, for producing original ideas, for improving their creativity skills, for assessing course content and, in brief, for enabling permanent learning. In this regard, it can be suggested that instruction with concept cartoons is effective in the teaching process. A concept is a general name for an object or thought in the mind; it refers to the word based on information obtained about an object or a topic. Learning concepts accurately facilitates reaching information about them; however, learning them inaccurately may cause misconceptions.
Misconceptions are pieces of information, formed as a result of individual experiences, that conflict with scientifically verified knowledge and hinder the teaching of concepts (Çakır and Yürük, 1999). Another description identifies them as behaviors that occur as a consequence of students' false beliefs and experiences (Baki, 1999). Students' accurate learning of the concepts in science lessons is important in terms of course learning outcomes. A concept which is learnt inaccurately or incompletely can lead to misconceptions.
True learning becomes quite difficult after mislearning. Therefore, when teachers teach a new concept, they should arrange the teaching process efficiently (Yürümezoğlu et al., 2009). When misconceptions are analyzed, it can be seen that the meanings attached to these concepts are quite different from their real meanings. Such mislearned concepts affect students' true learning negatively and decrease their academic achievement (Driver and Easley, 1978; as cited by Yağbasan and Gülçiçek, 2003). Students' active participation in classes is crucial for true and sustainable learning.
Learning approaches that enable students' active participation should be used in science teaching (Köseoğlu and Kavak, 2001). Instruction aided with concept cartoons improves students' active participation in the teaching process. Concept cartoons were developed by Brenda Keogh and Stuart Naylor in 1992. They were created to meet in-service teachers' need for new instructional methods in science education (Van der Mark, 2011). Concept cartoons are visual tools which present a scientific event in cartoon form and offer different points of view (Coll, 2005; Stephenson and Warwick, 2002; Naylor et al., 2001; Keogh and Naylor, 2000). Concept cartoons are drawings which consist of written texts in visual or oral forms and express daily life events in cartoon form (Keogh et al., 1998; Keogh and Naylor, 1999). Each concept cartoon shows a group of children with speech bubbles based on daily life, in which the children express different opinions on a topic. The alternatives shown in the speech bubbles are based on real events, classroom scenarios, common thoughts or misconceptions (Samkova and Hospesova, 2016). Concept cartoons are really effective for the visualization of topics, the active participation of students and the justification of ideas (Morris et al., 2007). Concept cartoons encourage students to investigate and help them see scientific truths while doing so (Kabapınar, 2009; Keogh and Naylor, 2000). Different ways of thinking are conveyed to students through these visual tools; the misconceptions of students who have similar ideas are revealed, and the reasons for these misconceptions are discussed in the classroom. The fact that concept cartoons include visual elements related to the subject to be taught raises students' interest in the subject and makes learning fun (Balım et al., 2008). The concept cartoon teaching strategy has the potential to increase creativity and innovation as well as students' interest in understanding concepts. It is considered a method that encourages students to continue exploring the issues raised and seeking solutions (Jamal et al., 2019). Concept cartoons have a positive effect on students' critical thinking skills (Demirci and Özyurek, 2017; Yin and Fitzgerald, 2017). Concept cartoons are suggested as teaching materials to be used in science education because they create learning environments suitable for the constructivist approach and overcome problems that may be experienced in the teaching process (Keogh and Naylor, 1997; Keogh et al., 1998; Naylor and McMudro, 1990). Using concept cartoons in classroom settings helps students discuss their opinions, question their knowledge and make arrangements in their cognitive structures (Evrekli, 2010). Concept cartoons can be used for improving students' conceptual understanding and for revealing their misconceptions (Stephenson and Warwick, 2002). Concept cartoons arouse curiosity in young students and develop their investigation and questioning skills (Long and Marson, 2003). Additionally, concept cartoons are assistant tools used to attract students' attention to classes and improve their interest in them (Roesky and Kennepohl, 2008). The concept cartoon has shown its importance in modern teaching and learning strategies (Koutnikova, 2017). Hence, the impact of concept cartoons on the academic achievement of primary school 4th grade students in the unit "The Earth Crust and Movements of The Earth" in the science lesson was investigated.
Aim of the research
The aim of the research was to analyze the effects of instruction with concept cartoons on students' academic achievement in the unit "The Earth Crust and Movements of The Earth" in the primary school 4th grade science lesson. Answers to the following questions were sought to achieve this aim: 1. Are there any significant differences between the pretest scores of the experimental group students taught with concept cartoons and the control group students taught with the conventional method (instruction based on the current curriculum) in the unit "The Earth Crust and Movements of The Earth" in the primary school 4th grade science lesson? 2. Are there any significant differences between the posttest scores of the experimental group students and the control group students in the unit "The Earth Crust and Movements of The Earth" in the primary school 4th grade science lesson?
Model of the research
A quasi-experimental design with pretest and posttest control groups was employed in the study. This design provides the researcher with strong statistical power for testing the effect of the intervention on the dependent variable, and helps the interpretation of the findings in terms of cause and effect (Büyüköztürk, 2011).
Study group
The study group of the research was created via convenience sampling in line with the aim of the study. Convenience sampling is described as a suitable method for speeding up and easing research when there are problems related to time and expense, and it is a sampling method in which people close and accessible to the researcher are selected (Yıldırım and Şimşek, 2003).
The study group consisted of 4th grade students studying at Zonguldak Devrek Çaydeğirmeni TOKİ Primary School in the second semester of the 2018-2019 academic year. 49 students were included in the study, 23 of whom were assigned to the experimental group and 26 to the control group.
Data collection tools
An achievement test and concept cartoons were used as data collection tools in the research. These data collection instruments were developed by the researcher. Information regarding the development of these tools is presented under the headings below.
Development of test questions
The test developed by the researcher consisted of 33 items about the unit "The Earth Crust and Movements of The Earth" in the science lesson. Subject area experts were consulted to confirm the items' suitability to the students' level, their clarity and understandability, and the test's content validity. This 33-item test was piloted on 100 4th grade students in a different school from the one where the research was conducted. The reason for choosing 4th graders was that they had already studied this subject. Afterwards, the validity and reliability analyses of the test were carried out, and factor analysis was performed. Before the factor analysis, the appropriateness of the data for factor analysis was tested via the Kaiser-Meyer-Olkin (KMO) test. The KMO value for the 33 items was found to be 0.75. The minimum KMO value required for factor analysis is suggested as 0.50 (Sharma, 1996; as cited by Eroğlu, 2008). The KMO value obtained was higher than the suggested value, which showed that the data were suitable for factor analysis. Thirteen items were removed from the test since their factor loadings were below 0.45. The remaining 20 items were included in the final form of the test. The Cronbach's alpha reliability coefficient of the 20-item test was found to be 0.84. The final form of the achievement test was applied to the 49 4th grade students before and after the intervention.
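For reference, the Cronbach's alpha reported above can be computed for a binary-scored item matrix as sketched below; this is a generic illustration (the variable names and the 49 × 20 matrix shape are assumptions for the example, not the study's actual data).

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_students, n_items) 0/1 score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)        # variance of students' total scores
    return k / (k - 1.0) * (1.0 - item_var_sum / total_var)

# e.g. scores: a 49 x 20 array of 0/1 answers for the final test form
# alpha = cronbach_alpha(scores)   # the study reports alpha = 0.84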
Creation of concept cartoons
The concept cartoons were prepared in relation to the unit "The Earth Crust and Movements of The Earth" in the 4th grade science lesson. The concept cartoons were developed with regard to students' misconceptions about this unit. With this aim, the misconceptions that students most commonly hold were determined by reviewing the studies carried out on this topic. Twelve concept cartoons were developed by considering these misconceptions, using the ToonDoo cartoon tool. To ensure the suitability of the concept cartoons to the students' level, the opinions of academic staff working in this area and of science, classroom, visual arts and information technology teachers were sought. The students in the experimental group were taught with the developed concept cartoons for 12 h (3 h a week) over four weeks.
Data collection
The data were obtained from the scores the students received on the pretest and posttest. The data collection was performed in three steps:
Obtaining pretest scores
At the beginning of the study, the 20-item test developed by the researcher on the unit "The Earth Crust and Movements of The Earth" in the primary school 4th grade science lesson was administered to the students in the experimental and control groups. Pretest scores were determined from the students' answers.
Implementation of the research
Two weeks of the research were spent on the assessment of the pretest and posttest results, and 4 weeks were spent on implementation. The students in the experimental group were taught with concept cartoons, while the students in the control group were taught in accordance with the current instructional program. The study lasted for 4 weeks, 3 h per week; the total implementation period was 12 h.
In the classes with the experimental group, the concept cartoons related to the lesson were shown to the students. The students talked about the cartoons and discussed them together. At the end of the classes, the students were given the printed concept cartoons and asked to answer the activity questions below them. Students who responded incorrectly were corrected and given corrective feedback.
Obtaining posttest scores
The 20-item test used at the beginning of the research was applied once more to the experimental and control groups as a posttest. The students' posttest scores were determined from their answers to the posttest questions. Then, the pretest and posttest scores of the students in the experimental and control groups were compared. The sample pretest and posttest questions used in the study were as follows: (1) Which of the following ideas about the shape of the earth is proved wrong by the fact that an airplane flying continuously in the same direction arrives back at its point of departure after a period of time?
(i) It is round (ii) It is spherical (iii) It looks like a ball (iv) It is flat
(2) Which of the following is a sign that the earth is similar to a sphere?
(i) The funnel of a distant ship is seen first (ii) The moon revolves around the earth (iii) The earth is surrounded by seas (iv) The earth revolves around the sun
(3) Which of the following are correct?
I. Day and night occur because the earth rotates on its axis.
II. Seasons happen because the earth revolves around the sun.
III. When we see the sunlight it is day, and when we do not it is night.
(a) I and II (b) I, II and III (c) II and III (d) I and III
(4) Which of the following causes the formation of day and night?
(i) That the earth revolves around the sun (ii) That the moon revolves around the earth (iii) That the earth rotates on its axis (iv) That the moon rotates on its axis
(5) Why do we see the sun as if it is moving across the sky during the day?
(i) That the earth revolves around the sun (ii) That the earth is immobile (iii) That the sun revolves around itself (iv) That the earth rotates on its axis
In "The Earth Crust and Movements of the Earth" unit of the science course, each incorrect answer was scored 0 points and each correct answer 1 point when evaluating the students' academic achievement (Table 1).
Analysis of data
Arithmetic mean, standard deviation, normality tests, the KMO test and independent groups t-tests were employed in the data analysis process. A normality test was applied to determine whether the pretest scores of the students in the experimental and control groups showed normal distribution. For the experimental group pretest, the skewness value was 0.846 and the kurtosis value was -0.290, while for the control group the skewness value was 0.472 and the kurtosis value was -0.628. The data were regarded as normally distributed since the pretest skewness and kurtosis values were between -1 and +1.
A normality test was also applied to determine whether the posttest scores of the students in the experimental and control groups showed normal distribution. For the experimental group posttest, the skewness value was -0.802 and the kurtosis value was 0.450, while for the control group posttest the skewness value was -0.459 and the kurtosis value was -0.829. The data were considered normally distributed since the posttest skewness and kurtosis values were between -1 and +1.
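The checks described above map directly onto standard routines; a minimal sketch using SciPy is given below (the array names are illustrative). Note that scipy.stats.kurtosis returns excess (Fisher) kurtosis by default, which is consistent with the -1 to +1 rule used here.

from scipy import stats

def roughly_normal(x):
    # Normality rule used in the study: skewness and (excess) kurtosis
    # both within [-1, +1].
    return -1 <= stats.skew(x) <= 1 and -1 <= stats.kurtosis(x) <= 1

# experimental, control: arrays of achievement test scores
# if roughly_normal(experimental) and roughly_normal(control):
#     t, p = stats.ttest_ind(experimental, control)  # independent groups t-test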
RESULTS
In this section, the findings related to the impact of instruction with concept cartoons on the students' academic achievement are presented.
Results and interpretations related to the first sub-problem
The findings related to the first sub-problem, "Are there any significant differences between the pretest scores of the experimental group students taught with concept cartoons and the control group students taught with the conventional method (instruction based on the current curriculum) in the unit 'The Earth Crust and Movements of The Earth' in the primary school 4th grade science lesson?", are presented in Table 2.
As shown in Table 2, there was no significant difference between the pretest scores of the experimental group students taught with concept cartoons and the control group students taught with the conventional method (instruction based on the current curriculum) (t(47) = -0.91; p = 0.36). Thus, it can be stated that the experimental and control group students were equivalent before the intervention.
Results and interpretations related to the second sub-problem
The findings related to the second sub-problem, "Are there any significant differences between the posttest scores of the experimental group students and the control group students in the unit 'The Earth Crust and Movements of The Earth' in the primary school 4th grade science lesson?", are shown in Table 3.
As shown in Table 3, there was a significant difference between the posttest mean scores of the experimental group and the control group, in favor of the experimental group (t(47) = -2.74; p = 0.00). Therefore, it can be claimed that the experimental group was more successful than the control group. Additionally, it can be stated that the academic achievement of the students in the experimental group, taught with concept cartoons, was higher than that of the students in the control group, taught with the conventional instructional program (Appendix 1).
DISCUSSION
As a result of the research, it was found that instruction with concept cartoons was effective in increasing the academic achievement of primary school 4th graders in the science lesson. When the scores received from the achievement test on "The Earth Crust and Movements of The Earth" by the students in the experimental and control groups were compared, a significant difference was observed in favor of the experimental group. Thus, it is possible to suggest that instruction with concept cartoons was efficient in increasing the students' academic achievement in the science lesson.
Several studies have revealed that instruction with concept cartoons is efficient (Foley et al., 2011; Rule and Auge, 2005; Chen et al., 2009; Balım et al., 2015). These conclusions are similar to those of the current study. It can be stated that instruction with concept cartoons is effective because it is fun: it instructs while entertaining, encourages students to participate actively in classes, and keeps their attention alive.
Consequently, both the findings of the current study and those of previous studies suggest that instruction with concept cartoons in science lessons improves students' achievement. In this respect, the conclusion of the present study is similar to the conclusions of previous studies. Concept cartoons make topics visual, increase students' motivation towards lessons, make them active in lessons and make lessons more enjoyable. Thus, instruction supported with concept cartoons is recommended in science lessons to promote students' permanent learning. | 2020-03-19T08:10:39.545Z | 2020-03-31T00:00:00.000 | {
"year": 2020,
"sha1": "905a633d684994dd6db40099bc4659cfa0a472e4",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/ERR/article-full-text-pdf/ACD69CA63219.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "905a633d684994dd6db40099bc4659cfa0a472e4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
5045369 | pes2o/s2orc | v3-fos-license | A Stronger Foundation for Computer Science and P=NP
This article constructs a Turing Machine which can solve for $\beta'$, which is RE-complete. Such a machine is only possible if there is something wrong with the foundations of computer science and mathematics. We therefore check our work by looking very closely at Cantor's diagonalization and construct a novel formal language as an Abelian group which allows us, through equivalence relations, to provide a non-trivial counterexample to Cantor's argument. As if that wasn't enough, we then discover that the impredicative nature of Gödel's diagonalization lemma leads to logical tautology, invalidating any meaning behind the method, leaving no doubt that diagonalization is flawed. Our discovery in regard to these foundational arguments opens the door to solving the P vs NP problem.
Turing's Proof on the Entscheidungsproblem has a Fatal Flaw
1.1 Overview
1.1.1 Context
Turing's monumental 1936 paper set out the mechanistic description of computation which directly led to the development of programmable computers. His motivation was the logic problem known as the Entscheidungsproblem, which asks if there exists an algorithm which can determine if any input of first order logic is valid or invalid. After defining automated computing, he described a program called an H Machine which can validate or invalidate its inputs based on reading a description number of that program. However, he did not provide an actual construction of such a machine; he only presumed its existence to be possible. His proof then depended upon this H Machine not being able to validate itself. He gives a detailed description as to why it cannot validate itself.
However, a close reading of his paper shows an added assumption by Turing when he constructs his H Machine. While this assumption does not affect the construction or effectiveness of a Universal Turing Machine, it does have an effect on the overall results regarding the Halting problem and its sister problem, the Entscheidungsproblem, as well as any related results having to do with computational complexity.
The significance of this discovery of a fatal assumption in Turing's work should not be taken lightly. It affects more than just the proof on the Entscheidungsproblem, which should remain open after this discovery; it also affects existing, accepted results on computability and the very foundations of key concepts in computer science such as complexity. This means we must write new textbooks.
In this article, we point out Turing's assumption and then construct a description of a machine which does exactly what the assumption assumes is impossible. Because we have found an erroneous assumption, we must disregard Turing's final results and any results derived from his method, including the Space Hierarchy Theorem, which derives from Turing's method using the same kind of assumption.
The construction of such a machine may have applications in fault tolerance for run-time self-correcting code validation in artificial intelligence implementations. It may also lead to a better understanding of relationships between complexity classes, as several results will have to be modified in response.
Furthermore, in a following section, we review methods which are constructed differently from Turing's method but are considered to be reducible to it, such as Cantor's Diagonalization Argument (CDA) and Gödel's Diagonalization Lemma (GDL). By addressing all three methods of proof in one article, we have effectively deconstructed the current foundations of computer science when it comes to the limits of computability. If we only deconstructed one method, then there might be reason to ignore our results on the grounds that they do not agree with the other two methods. However, we can and will address all three, and invalidate them all by providing non-trivial counterexamples to the false assumptions in the methods.
Preliminary Considerations
The terms Circular Machine and Circle-free Machine are suitable for our description, and we will use Turing's own definition of a computing machine. It is convenient to note here that a Circular Machine is deemed by Turing to be unsatisfactory because it loops forever, redundantly, over a repeating pattern, while a Circle-free Machine is deemed satisfactory because of its ability to continue deciding indefinitely without entering an infinite loop. We have chosen to keep Turing's original terminology for the sake of clarity when comparing the work of this article to that of Turing's original paper. We also choose his terminology because Turing's description of the Halting problem is completely mechanical, while many modern descriptions rely on an oracle, Cantor's diagonalization, or logic similar to Gödel's Diagonalization Lemma. Turing's description is independent of these reductions in significant ways as a mechanical process. This helps the reader directly compare this article with the original proof, without intermediary interpretations or simplifications. [7]

A Standard Description, or S.D., is the rule set for any given Turing Machine M in a standard form. By creating a standard, the rule sets themselves can be used to create a Description Number, or D.N., which itself may be readable by a Universal Turing Machine, U, as an instruction set. [7] From Turing's paper: "Let D be the Turing Machine which when supplied with the Standard Description (S.D.) of any computing machine M will test this S.D. and if M is circular will mark the S.D. with the symbol 'u' and if it is circle free, will mark it with 's' for 'unsatisfactory' and 'satisfactory' respectively. By combining machines D and U, we could construct a machine H to compute the sequence of β′." [7]

1.1.3 Turing's Claim

Turing claims that while H is circle free by construction, when H is given the description number for H, it becomes circular. [7] In the eighth section of Turing's paper on the Entscheidungsproblem, Turing claims that β′ cannot be determined for the following reason: "The instructions for calculating the R(K)-th [figure] would amount to 'calculate the first R(K)-th figures computed by H and write down the R(K)-th'. This R(K)-th would never be found. I.e. H is circular..." [7] This is because, since H relies on certain subroutines to make its determination, when it reaches and tries to evaluate K, it must call itself, which provides instructions on reading inputs from 1 to K−1 in order to call the R(K)-th figure, but it can never get there, because it keeps repeating its own instruction loop. [7]
Turing's False Assumption
Turing assumed that there is no program that exists which can recognize itself arbitrarily and move to a circle-free state upon this recognition. He assumed that any program would have to be programmed in such a way that when it reads itself and calls its own instructions, it must be circular when trying to determine whether it is circular or not, as described in the previous subsection. However, this is not necessarily the case, and if we can provide an example of a program which does recognize itself arbitrarily so that it can switch to a circle-free state, then we have discovered a means to write an H machine in such a way that it may solve for β′.
The question, then, is whether there is a Turing Machine which can recognize itself arbitrarily¹ when it reaches its own Description Number (D.N.), such that some H machine configuration prints β′.
We will, in the next subsection, construct a Supermachine that can recognize itself as its own input and is then instructed to change to a circle-free state upon this recognition. Because such a construction exists, and because it is arbitrary for any construction of this class of Turing Machines, we may solve for β′ non-trivially.

¹ By recognizing itself arbitrarily, we mean that it can recognize its own program or description number even if K is not fixed. Also, there may not exist an initializer that feeds a fixed K to be recognized by a single read instruction that skips K and simply rubber-stamps approval. Such "rubber stamping" is considered a trivial case and is not of concern to this article.
Supermachine
Let us consider that H is a controller machine with a D.N. of K. It controls two different H machines: H_0 and H_1. H_0 and H_1 each have the ability to determine "u" or "s" on a D.N. input, except that H_0 tests as Turing describes, from D.N. 1 counting upwards (each D.N. is a natural number), and H_1 tests from a certain two's complement of whatever number is being tested by H_0, as a simultaneous parallel input, such that its subsequent D.N. is one less than the previously tested D.N. Let us represent each D.N. by some integer i. H_0 and H_1 have unique D.N.s of K_0 and K_1 respectively.² Upon input of i_0 to be read by H_0, let H store the value pair (i_0, z) until i_0 is determined to be satisfactory or unsatisfactory. When the output is determined, let H replace (i_0, z) with the respective (i_0, s) or (i_0, u) in the data store, such that there is no longer a data store of (i_0, z). Let the same process occur for i_1, such that H also initially stores each D.N. input as (i_1, z) and H_1 reads i_1 to determine satisfactory or unsatisfactory, subsequently replacing the initial value pair with the respective value pair (i_1, s) or (i_1, u) depending on the output of H_1. A redundancy occurs when some i_0 = i_1.
Let H have the ability to compare value pairs such that the machine may recognize a redundancy when it occurs, and may also recognize when a value pair contains a z value under the condition of such a redundancy. Let us call this a z-check ability.
Let H_s be the supermachine that is the configuration of all three H machines as described above, and let K_s be the D.N. for the supermachine.
Initialize the identifier strings such that K_1 < K_0. Let the number of bits in K_0 be n. Let the two's complement of the first D.N. input to H_0, which is 1, be determined by n such that it satisfies the equation c = 2^n − 1.
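To make the bound concrete, the following is a minimal sketch (my illustration, not the paper's; the value n = 8 is assumed purely for the example) of how the two scans paired by c = 2^n − 1 sweep the range of Description Numbers from opposite ends:

```python
# Toy illustration (not from the paper) of the bound c = 2**n - 1 and the
# paired scans: H_0 reads D.N.s upward from 1 while H_1 reads downward from c.
n = 8                      # assumed bit-width of K_0, for illustration only
c = 2**n - 1               # c = 255

h0_inputs = range(1, c + 1)        # 1, 2, 3, ...
h1_inputs = range(c, 0, -1)        # c, c-1, c-2, ...

# On step t, H_0 reads t and H_1 reads c - t + 1; together the two scans
# cover the whole range 1..c after roughly c/2 steps.
for t, (i0, i1) in enumerate(zip(h0_inputs, h1_inputs), start=1):
    if i0 >= i1:                   # the scans have met or crossed
        print(f"scans meet after {t} steps at D.N. {i0}")
        break
```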
Lemma. H_s proceeds circle free until it reads K_s. If c − K_0 > K_1, then re-initialize the D.N.³ in either K_0 or K_1 such that c − K_0 < K_1. This guarantees that H_0 will read K_1 before H_1 reads K_1, and also guarantees that H_1 will read K_0 before H_0 reads K_0. Let the controller H contain a memory command which stores the decision value pairs of H_0 and H_1. The controller may routinely check for a redundancy on the next input. Now consider when H_0 reads K_1, and K_1 calls the D.N. for H_0: H_0 will call H_1, which will call H_0, which will result in a z-check, recognizing that the value pair (K_0, z) is already stored in memory. Therefore, since K_0 < c, we know that K_0 is the description number for itself, which is impossible to call by construction without calling H_1 first, which means it must be checking the description number for a machine which calls itself, namely H_1, which allows us to correctly store the value pair (K_1, s). This same reasoning can be applied for when H_1 reads K_0, correctly storing the value pair (K_0, s).
If, however, the machine has determined that a redundancy occurred on a value pair whose value is either (i, s) or (i, u) (i.e., a negative evaluation on the z-check, but a positive redundancy check), then we have already evaluated this D.N. from the other H machine at the top level, and we no longer have to continue within the range 1 to c, since all of those D.N.s will have been decided. The supermachine at this point proceeds to utilize machine H_0 from D.N. input value c + 1, and continues through the rest of all Description Numbers, c + 2, c + 3, etc., at least until it reaches its own D.N., K_s, for no other D.N. should be problematic in determining the output decision. Thus, H_s proceeds circle free at least until it reaches K_s, which is easily constructed to be larger than c.
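The bookkeeping the lemma leans on, namely the pending-value store, the redundancy check, and the z-check, can be pictured in ordinary code. The sketch below is a toy model under strong assumptions: the call graph and all names are mine, and nothing here implements a real u/s decision procedure; it only shows how a pending ('z') entry lets a re-entrant evaluation be detected and stamped 's':

```python
# Toy model of the controller's data store and z-check; 'calls' maps a D.N.
# to the D.N. its machine reads next, simulating one description calling
# another. All names and the call graph are illustrative assumptions, and
# recursion here stands in for the hypothetical u/s test.
store = {}   # D.N. -> 'z' (pending), 's' (satisfactory), 'u' (unsatisfactory)

def read(dn, calls):
    if dn in store:
        if store[dn] == 'z':
            # z-check fires: we re-entered a pending evaluation, i.e. the
            # input describes (a machine that calls) ourselves; stamp 's'.
            store[dn] = 's'
            return 's'
        return store[dn]           # plain redundancy: already decided
    store[dn] = 'z'                # mark pending before descending
    nxt = calls.get(dn)
    store[dn] = read(nxt, calls) if nxt is not None else 's'
    return store[dn]

# K_0 describes H_0 and K_1 describes H_1; evaluating either calls the
# other, producing the loop that the z-check breaks.
calls = {'K1': 'K0', 'K0': 'K1'}
print(read('K1', calls), store)    # -> s {'K1': 's', 'K0': 's'}
```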
β′ is Decidable
Proof. β′ is decidable. At the point K is received as an input, it is determined satisfactory by either H_0 or H_1. Neither H_0 nor H_1 is called during this phase of the process.
By the lemma, K_0 is decided by H_1, K_1 is decided by H_0, and H_s continues indefinitely until we reach K_s, which describes H_s. K_s is read by H and, as before, its Description Number is stored along with its temporary pair value of z until H_0 or H_1 returns a value for β′ at that location. K_s is sent to be verified by H_0; when H calls K_s for a second time, under the given recursive property of K_s, which will eventually call itself, the z-check for the value pair (K_s, z) is recognized as both redundant and carrying a z value, stored by H in the data store. Because the associated value is z, the z-check ability tells us this process has already occurred and sends K_s to H_1, which self-verifies repeated z-check values. By construction, the only value K_i which can provide these multiple z-check values where K_i > c is K_s, so H_s now self-verifies the input K_s as its own D.N., provides a value of "s" for satisfactory, and changes state to evaluate K_s + 1, continuing indefinitely as a Circle-free Turing Machine.
Therefore, given some Universal Turing Machine which can emulate the H_s Machine, β′ is decidable over the set of all Description Numbers. ∎
Consequences
Solving for β′ is RE-complete. If Turing were correct, it would be impossible to solve for this output. Since we have solved for β′, we must consider the possibility that there is a problem with our configuration which makes it impossible, or that there is something fundamentally incorrect with our current foundations of mathematics. For this reason, we continue in the next section to examine Cantor's Diagonal Argument, to which Turing's method is equivalent, as each can be reduced to the other. If there is nothing fundamentally wrong with the foundations of mathematics and computer science, we should expect that any counterexamples to CDA will not exist, or will be trivial. However, if we can find counterexamples for arbitrary solutions, then such solutions will not be trivial and will reinforce the findings of this first section on the Entscheidungsproblem.

[Figure 1: A supermachine configuration appears to exist]
2 A Non-Trivial Counterexample to Cantor's Diagonalization Argument

2.1 Overview
Method and Foundations
Conventional thought on CDA is that we would need an entirely new axiom schema to find a counterexample to the method [2]. However, no new axioms are needed in this article to find a counterexample to CDA. We present in this section a new grammar to generate a formal language for the representation of an ω-regular language, through accepted foundations in set theory and the principles of formal languages.
For logical consistency, in order to examine CDA, we cannot assume outright that CDA is a method which produces theorems, nor can we rely on any theorems derived from CDA or theorems that reduce to it. This includes Turing's proof on the Entscheidungsproblem, as discussed in a previous section, as well as the downward Löwenheim-Skolem Theorem and complexity results which rely on diagonalization, such as the Space Hierarchy Theorem, et al. We must also exclude consequences of Cantor's First Proof of Uncountability (CFPU) because of its similarity to the method of CDA. This requires that we assume the cardinality of the Real numbers as it relates to the Natural numbers is unknown, since CDA and CFPU are the very foundations that prove the Reals are strictly larger than the Natural numbers. This is for logical consistency only; our proof does not directly concern itself with the cardinality of the Real numbers. We are currently concerned only with the foundations that lead to these prior results, and the counterexamples which call them into sincere question.
We will re-explore CDA by creating an Abelian group which can be used as a representation of an ω-regular language and its set of well-formed strings, explore this representation's properties and expressive power as a language, and create a class of constructions of CDA which lead to non-trivial counterexamples to CDA. The resulting counterexamples to CDA are found through a transformation of equivalent statements over the construction.
Notation and Preliminaries
Definition. Let the Natural numbers be denoted by N.
Definition. Let ω be the rank of N, from the von Neumann hierarchy.
Definition. Let ℵ_0 be the cardinality of N and the cardinality of any set bijective to N.
Definition. An ω-expansion is an unbounded expansion of symbols on a string, lim → ω, and is denoted, for some symbol z, as z^ω, such that the cardinality of the set of symbols represented by the ω-expansion z^ω is ℵ_0.
Definition. An iteration of symbols has gone through ω-completion when an unbounded set of recursions of symbols in a string has a symbol, or the empty string, at the rank of the set of iterations.
Definition. Let exhaustion be a necessary change in the output of a recursion after ω-completion of such a recursion of symbols, where s at exhaustion is the symbol at the rank of the unbounded iteration and s ≠ i when i is the first symbol of the set of symbols in the recursion.
For example, let x be defined as an infinite word having both 0s and 1s written on an infinite tape; however, we know that the starting symbol is 1 and that there are an ω number of 1s on the tape prior to any 0 appearing. We can prove by exhaustion that the ω-th symbol of the infinite word is 0, since ω is the rank of the total number of 1s on the infinite tape and the symbol must necessarily change at that point, as we know the word contains a 0.
Definition. Let a special order of some set S be some ordering of S such that we may determine the value of any distinct element of S by some operation on S which guarantees the result is in S; i.e., an order which is arbitrary under closure.
Definition. Let Σ be a set of terminal symbols, which is its alphabet.
Definition. Let V be a set of non-terminal starting symbols and R be a set of rules, each in the form xAx → xw for some strings w ∈ Σ*, x ∈ V, when A ∈ V.
Definition. Let Σ be an alphabet with at least 6 elements and the empty string ε.
Definition. The alphabet Σ_4 must also have a subset of at least three elements not a member of Σ_2, with at least one of these symbols to distinguish context between other symbols of value.
Remark. Note that the total cardinality of the set of strings in Σ^ω_3 is ℵ_0² and fully representable by a Σ*_4 mapping via Cantor pairing between the sets.
Definition. Let Σ*_2 be the set of finite-length strings over Σ_2, and, for any language L in the class T, let Σ* be the set of strings over L.
A Grammar for a Language in T
Chomsky, Backus and Naur laid the foundation for work in generative grammars and our understanding of computer syntax with Backus-Naur form and the Chomsky hierarchy, among other means of representing and categorizing generative grammar structures. Here we utilize formal language theory to produce the following grammar, which yields a recursively enumerable, context-sensitive language. [3]
Definition. Let the grammar L_T(G) be a tuple {V, Σ, R, S}, with S ∈ V, Σ_2 ⊂ Σ, Σ_3 ⊂ Σ, Σ_4 ⊂ Σ, such that:
Remark. It is easy to allow addition to be associative in ρ* such that the equation (a + b) + c = a + (b + c) holds for all a, b and c in ρ*.
Remark. The identity element exists as 0: for all w, PLUS(w, 0) = w.
Definition. Let String Equality be the condition where expanded strings from a formal language, whether by Kleene star expansions or ω-expansions, have equality with the strings from which they expanded.
Definition. Let P^ω be all the strings in ρ* union all ω-expansions and Kleene star expansions of symbols in P*. Because of String Equality, for w_1 in ρ* and for w_2 in P^ω, PLUS(w_1, 0) := PLUS(w_2, 0).
Definition. Let P′ be all strings in P^ω which are not in ρ*.
Proposition. Addition in ρ* is closed. The function PLUS( ) will only yield answers in ρ* or P′. For any strings w_1 ∈ ρ*, w_2 ∈ ρ*, PLUS(w_1, w_2) := w_a; if w_a ∈ P′, there exists a String Equality in ρ* such that w_a = w_b, where w_b is in ρ* and PLUS(w_1, w_2) := w_b through reflexivity. Since w_b ∉ P′, it must be in ρ*. Therefore, addition is closed in ρ*.
Proposition. There is an additive inverse for each w ∈ ρ*. For every w_x, there exists some w_y, through iteration in PLUS(w_x, w_y), such that the output string will be some combination of the terminals 0* and $\mathbf{0}$*, which through String Equality equals 0.
Lemma. ρ* is an Abelian group under addition. From the previous two propositions, and because our definition of PLUS( ) includes commutativity, by the definition of an Abelian group as having closure, associativity, an identity element, an inverse element and commutativity, ρ* is an Abelian group under addition.
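Since the production rules of the grammar do not survive in the text above, the group claims can only be illustrated abstractly. In the minimal sketch below, strings of ρ* are modelled by small integers and PLUS( ) by integer addition; this encoding is an assumption made for illustration, not the paper's construction:

```python
# Minimal abstract model: identify each well-formed string of rho* with an
# integer value and PLUS with integer addition, so String Equality collapses
# to numeric equality. The encoding is assumed for illustration only.
from itertools import product

def PLUS(a, b):
    return a + b

values = range(-3, 4)   # a small sample window of "string values"

assert all(PLUS(a, 0) == a for a in values)                    # identity
assert all(PLUS(a, -a) == 0 for a in values)                   # inverses
assert all(PLUS(a, b) == PLUS(b, a)
           for a, b in product(values, repeat=2))              # commutativity
assert all(PLUS(PLUS(a, b), c) == PLUS(a, PLUS(b, c))
           for a, b, c in product(values, repeat=3))           # associativity
print("the toy PLUS satisfies the Abelian-group axioms on the sample")
```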
Formalizing a Diagonal Argument Function
For Cantor's Diagonal Argument, one may generalize an argument function, for proof by contradiction, by constructing an arbitrary sequence of infinite-length strings, in any order, into an N × N matrix of symbols and assuming that the rows of the matrix contain all string values. Then, one proceeds to the i-th column, j-th row of the matrix, where (i, j) moves along the diagonal of the matrix: (1,1), (2,2), (3,3), etc. The symbol at that position is then changed to a different symbol within the language of the system and concatenated to produce a new string which, when so constructed, cannot be found in any row or column of the matrix, thus providing a string of a value not listed in the matrix. The new string is considered transcendental in Cantor's Universe, and the proof by contradiction tells us that not every string value can be calculated in Cantor's Universe through an iterative process; as such, R is "uncountable" and has a cardinality strictly larger than N. This is necessarily true for all constructions of the diagonal argument prior to the method employed in this paper, because the i-th string must contain the symbol found at (i, j), which cannot be in the constructed string at (i, j) from the diagonal, and there did not yet exist a mapping relationship suitable for a counterexample to CDA. [3] The following construction maps the set ρ* with equivalent strings in P^ω, showing that the diagonal argument applied to a special ordering of subsets in P^ω yields a non-trivial counterexample to Cantor's argument function.
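On a finite truncation, the argument function is easy to state as code. The sketch below (a finite approximation of my own, not Cantor's infinite construction) flips the diagonal of a small bit matrix and checks that the result is absent from the listing:

```python
# Finite illustration of the diagonal construction: flip the (i, i) symbol
# of each listed string to build a string that cannot equal any row.
rows = ["0110", "1010", "0011", "1111"]    # an arbitrary finite listing

diagonal = "".join("1" if rows[i][i] == "0" else "0"
                   for i in range(len(rows)))

print(diagonal)            # "1100" for this listing
print(diagonal in rows)    # False: it differs from row i at column i
```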
Remark. It should be noted that because the elements of P^ω are not the Real numbers, the trivial exception where 0.999… = 1 is not a concern: this evaluation, and any equivalent evaluation, does not exist in P^ω by construction, so this exception simply cannot occur. We will not concern ourselves with this exception; since it is a special case, it is considered trivial. However, if such a case could be generalized, it would no longer be trivial.
Additional Notations and Preliminaries
Consider a proof by contradiction as an argument function f(S) over a set S, yielding a domain f(S) → g(S), and some countable set of matrices, reducible via Cantor pairing to size N × N, {f(S)}, to which we will apply an argument function f(S).
Definition. A counterexample to an argument function (which is a proof by contradiction) is the condition when there exists f(S) → g(S) such that g(S) ∈ {f(S)}.
Definition. Let right-concatenation be the concatenation operator * such that when w_1 right-concatenates over w_2, w_1 * w_2 = w_2w_1, and likewise, w_1 * w_2 * w_3 * w_4 = w_4w_3w_2w_1.
Definition. Let str(f(x_j)) be the cumulative string of the output of the function f(x_j); similarly for str(g(x_j)).
Definition. Let the argument function f(R) be a construction of Cantor's diagonal method on the N × N matrix {f(R)} of arbitrary unbounded binary strings. They are listed in any order, assuming that all possible string values of R are listed. Let g(R) = str(g(x_j)), where x_j is the value, 0 or 1, at the coordinate (i, j) for all j on the constructed matrix at row j such that j = i. It is well established that in Cantor's Universe, g(R) ∉ {f(R)}, and we can produce a string which is in R but not in {f(R)}. For a counterexample, since we accept g(R) ∉ {f(R)}, we must find some set of recursively enumerable sets, or set of uncountable sets, S where g(S) ∈ {f(S)}.
Definition. Let the argument function f(P^ω_0) be an equivalent construction of Cantor's diagonal method where a series of N × N matrices, {f(P^ω_0)}, is partitioned into two specially ordered matrices, {f(P^ω_1)} and {f(P^ω_2)}, with {f(P^ω_1)} ⊂ {f(P^ω_0)} and {f(P^ω_2)} ⊂ {f(P^ω_0)}. These two partitions will be proper disjoint subsets of {f(P^ω_0)}. Construct the argument function for the application of CDA to P^ω_0, {f(P^ω_0)}, in the following manner. Let ρ* map to two sets, ρ*_1 and ρ*_2, through choice, choosing the strings in ρ*_1 as all the strings where each string is denoted by the initial, with Kleene star expansions, from left to right, as follows: 0w, $\mathbf{0}$w, [*0w or [*$\mathbf{0}$w. Likewise, choose the strings in ρ*_2 as all the strings where each string is denoted by the initial, from left to right, 1w, $\mathbf{1}$w, [*1w or [*$\mathbf{1}$w, thus creating a strictly bijective disjunction between the sets whose union is ρ*. Choose some string set in ρ*_1 such that it maps to an infinite-length string by String Equality in P^ω to form the set P^ω_1. Choose some string set in ρ*_2 such that it maps to some infinite-length string by String Equality in P^ω to form the set P^ω_2. Let the union of P^ω_1 and P^ω_2 be P^ω_0.
We will begin the proof of the existence of the counterexample to CDA by proceeding with a construction of f(P^ω_1) and f(P^ω_2) individually, and then providing a retrograde on str(g(P^ω_2)) to construct the diagonal of {f(P^ω_0)}.
Theorem
Proof. There is a non-trivial counterexample to Cantor's Diagonal Argument.
1. ρ* is an Abelian group, by the Lemma.
2. By String Equality we can create an Abelian relationship in all extended sets related to ρ* by providing an operation on a string in ρ* to yield another result in ρ*, which can then be made equal to some string in P^ω and its subsets, thus allowing for special order, utilizing exhaustion if necessary.
3. We will choose the elements of P^ω_1 and P^ω_2 in such a way that, when proceeding with the argument function down the diagonal of the respective matrices, {f(P^ω_1)} and {f(P^ω_2)}, we will encounter only the symbols 0 and 1, respectively. This can be ensured by expanding other symbols when necessary. We will reserve the axiom of replacement, if necessary, when forming the sets to aid in our choice for the rows in the respective matrix sets.
4. Proceeding to f(P^ω_1), we choose to write the row j = 1 as some 0$\mathbf{1}$, with its defined mapped string 0$\mathbf{1}$1^ω, and, by the argument function, utilize the symbol from the 1st column of this row such that f(x_1) := 0 and str(g(x_1)) := 1.
5. Choose in the set such that the second row in {f(P^ω_1)} is $\mathbf{0}$$\mathbf{1}$ = $\mathbf{0}$0^ω1^ω$\mathbf{1}$, and f(x_2) := 0 and str(g(x_2)) := 11, in accordance with the argument function.
6. Thus, through ω-completion of the recursive iterations resulting in an unbounded series, for all j in {f(P^ω_1)}, f(x_j) := 0 and g(x_j) := 1.
7. It follows through String Equality that g(P^ω_1) = $\mathbf{1}$.
8. Likewise, proceeding to f(P^ω_2), we choose to write the row j = 1 as 1$\mathbf{0}$, with its defined mapped string as some 1$\mathbf{0}$0^ω, and, by the argument function, utilize the symbol from the 1st column of this row such that f(x_1) := 1 and str(g(x_1)) := 0.
9. Choose in the set such that the second row in {f(P^ω_2)} is $\mathbf{1}$$\mathbf{0}$ = $\mathbf{1}$1^ω0^ω$\mathbf{0}$, and f(x_2) := 1 and str(g(x_2)) := 00, in accordance with the argument function.
10. Thus, through ω-completion of the recursive iterations resulting in an unbounded series, for all j in {f(P^ω_2)}, f(x_j) := 1 and g(x_j) := 0.
11. It follows through String Equality that g(P^ω_2) = $\mathbf{0}$.
12. The retrograde of g(P^ω_2) = $\mathbf{0}$ is reflexive: $\mathbf{0}$.
13. The special order of {f(P^ω_0)} is the special order of {f(P^ω_1)} union the retrograde of the special order of {f(P^ω_2)}, allowing for an inversion on the strings in the diagonal, because of retrograde, for consistency in the argument function.
14. Through String Equality, g(P^ω_0) = $\mathbf{1}$$\mathbf{0}$.
15. By choice of {f(P^ω_2)}, which is a subset of {f(P^ω_0)}, as illustrated by step 9 of this proof, the string $\mathbf{1}$$\mathbf{0}$ = $\mathbf{1}$1^ω0^ω$\mathbf{0}$ is in {f(P^ω_0)}.
16. Therefore, there exists a counterexample: when g(R) ∉ R, g(P^ω_0) ∈ P^ω_0. ∎
We can easily generalize this counterexample for all elements in ρ* by choosing the appropriate ordering of the list of elements used for constructing the diagonal. Finding such a non-trivial counterexample to CDA provides a new understanding of the limits of computability, and these results confirm the flaw found in the Entscheidungsproblem. It would be wise for logicians, mathematicians and computer scientists to revise all their texts accordingly.
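No program can realize genuine ω-expansions, so the construction above cannot be executed literally; the following sketch only mimics steps 4 through 15 on finite prefixes (the row shapes and the prefix length are my assumptions, not the paper's strings). It shows how rows chosen to place only 0s (respectively 1s) on a diagonal force the flipped string toward all 1s (respectively all 0s), and how the concatenated result coincides with one of the chosen rows:

```python
# Finite mimicry of steps 4-15: rows of the first block put only '0' on the
# diagonal, rows of the second only '1'. Row shapes and the prefix length
# 2n are assumptions standing in for the paper's omega-expansions.
n = 4
block1 = ["0" * (j + 1) + "1" * (2 * n - j - 1) for j in range(n)]
block2 = ["1" * (j + 1) + "0" * (2 * n - j - 1) for j in range(n)]

g1 = "".join("1" if block1[j][j] == "0" else "0" for j in range(n))  # "1111"
g2 = "".join("1" if block2[j][j] == "0" else "0" for j in range(n))  # "0000"

g0 = g1 + g2                 # "11110000": a finite stand-in for 1^w 0^w
print(g0, g0 in block2)      # True: the flipped string matches a chosen row
```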
However, I recognize that this is quite an ordeal, so we will venture one step further, beyond what would be required of any other paper or result, and provide a third argument as it relates to Gödel's Diagonalization Lemma, a third method that differs from the previous two already discussed but is reducible to the same result.
Some Impredicative Statements Reduce to Circular Logic
An impredicative statement is a statement that refers to itself. Self-reference alone is not enough to invalidate a proposition; however, as we will show, a certain class of impredicative statements is intrinsically circular and tautological. Such circular logic is not well formed for the proof of a proposition which depends on its circular nature, and is thus not logically contingent.
In this section, we will show that Gödel's Diagonalization Lemma is within this class of impredicative statements. We will then review this lemma and show that his utilization of substitution is not strict enough for a strong foundation in logic. Furthermore, we will actually construct the Gödel numbers suitable for his proof and carry out the operations to see whether such arithmetic is indeed outside of a formal system, Q.
First, we must determine a means to identify when a tautology is present in an impredicative definition. Let us start with an example of a tautology. We would not accept the following as a logical statement that is true in any meaningful way (it is, in fact, "true" semantically), because there is no proof or causal relationship, only tautology: (S ∨ ¬S). We can determine whether a statement is a tautology by creating a truth table over the distinct valuations of the formula. If all valuations for each variable lead to truth, then the statement is a tautology.
For (S ∨ ¬S): when S is true, (S ∨ ¬S) is true; when S is false, (S ∨ ¬S) is true.
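A check of this kind is mechanical; the snippet below (standard Python, nothing specific to this paper) classifies a formula as a tautology by enumerating every valuation:

```python
from itertools import product

# A formula is a tautology iff it evaluates to True under every valuation.
def is_tautology(formula, nvars):
    return all(formula(*vals) for vals in product([True, False], repeat=nvars))

print(is_tautology(lambda s: s or not s, 1))   # True: S v ~S
print(is_tautology(lambda s, t: s or t, 2))    # False: merely contingent
```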
So what about impredicativity? Intuitively, it is easy to see how we risk creating a tautology by using such a tactic to define one's terms: self-reference intrinsically eliminates the possibility for that variable to contradict itself, increasing the likelihood of tautology.
However, self-reference or impredicativity alone is not enough to prove tautology. For example, the greatest lower bound (glb) of a set can be defined impredicatively: y = glb(X) if and only if, for all elements x of X, y is less than or equal to x, and any z less than or equal to all elements of X is less than or equal to y. [8] This definition is impredicative, but it is not tautological.
But now, intuitively, before we prove this for GDL, I would like to claim that Russell's paradox uses an impredicative definition that is itself actually tautological.
Define S as the set of all sets that do not contain themselves. This is impredicative, and it is tautological, because an infinite self-referencing occurs when we apply S to itself.
So what differentiates these two classes of impredicative definitions from each other? Heuristically, in the first instance, we are using the set X to define y, which is the greatest lower bound of X. In the second instance, we are defining S as a set defined by sets. When we apply the distinguishing property to the definition that makes the term unique, in the instance of the greatest lower bound, the property is not applied to the impredicative portion of the statement. Rather, the property is applied to an independent portion. That is, the property of being an element is distinguished from the property of being a set, the set in this instance being the impredicative portion of the statement.
In Russell's paradox, the property of being a set that does not contain itself is a property applied to a dependent portion of the impredicative statement; that is, the property, when applied to sets in general, also applies to the specific set we are defining. This dependence of the property of the set on the set is the distinguishing factor. I believe it was Wittgenstein, at a dinner party with Russell, who first noted that Russell's paradox is easily overcome by defining a class of sets that do not contain themselves. When you define a class of sets, you break its dependence as a set (since it is no longer a set, but a class) with given properties that apply to sets.
But I believe this heuristic falls short. It is not just this dependence that creates a tautological impredicative statement; it is also the nature of the property itself. We could easily define the set of all sets which contain the letter A. Such a definition has a dependence on the impredicative portion of the statement, yet does not seem to create a tautology or an infinite regression or anything of that sort. That is, in order to create a tautology, A itself would have to be defined not only in terms of sets but in terms of the class of sets in question. As such, in order for impredicatives to be a problem for logic, the property itself must point back to the impredicative dependence in a self-referential way.
Definition. Let impredicative dependence be the condition of a statement S whose property P depends on self-referencing: ∃x | P(x) ↔ {x → P(x)}.
Definition. Let impredicative pointing be a condition of self-reference where a dependent property also references an impredicative dependence, i.e., the existence of S depends on S containing an impredicative dependence. The final statement S is always true, even when S is assumed to be false.
S | x | P(S) ↔ {x → P(x)} | → S
true | true | true | true
false | true | false | true
true | false | false | true
Gödel's Diagonalization Lemma Depends on a Tautological Impredicative
Gödel's Diagonalization Lemma states that there is a sentence Ψ such that Ψ ↔ F(#(Ψ)), where #(Ψ) is the Gödel number for Ψ and F is some well-formed formula provable in a formal system Q.
It is clear that the statement concerning Ψ is a tautological impredicative, as Gödel derives Ψ by the following proof.
Gödel's Diagonalization Lemma
Given a formula with one free variable, F(x), in Q and a number n, we may substitute the number n for x. We may also represent the formula F(x) by the Gödel number of the formula, #(F(x)), and likewise for a number n; it is possible that n := #(F(x)), but note that it is impossible that n := #(F(n)). We can refer to this process of substitution by the function substn(#(F(x)), n) := #(F(n)). We can also let S(x, y, z) be a formula which strongly represents this operation in the language of Q if and only if x = #(F(x)), y = n, and z = #(F(n)). Nothing prevents y := x, such that we may have the formula S(x, x, z). Given any formula F(x), we may also create the formula ∃z[F(z) ∧ S(x, x, z)], with one free variable, x. This formula has the Gödel number k = #(∃z[F(z) ∧ S(x, x, z)]). At this point, we may substitute k for x, such that ∃z[F(z) ∧ S(k, k, z)].

Remark. Bew() is short for the German word beweisbar, which means "provable". Note that "Bew(x)" is merely an abbreviation that represents a particular, very long formula in the original language of Q; the string "Bew" itself is not claimed to be part of this language. [9]

OK, fine. This seems logical, but what happens when F(x) := ¬Bew(x)?
We get Q ⊢ Ψ ↔ ¬Bew(#(Ψ)). This is a contradiction. The proper question, given the previous two sections of this article, is to ask why such a contradiction arises. Is it because of incompleteness, or is it a result which only exists because of a fundamental flaw in the diagonalization lemma, which creates a logical tautology?
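The mechanical part of this derivation, the substitution function substn, can be mimicked over formula strings. The toy arithmetization below is an illustrative stand-in for Gödel's actual encoding (the byte-based numbering and all names are assumptions of mine), but it exhibits the step the derivation leans on, substituting a formula's own number into itself:

```python
# Toy arithmetization: number a formula string by encoding its bytes, so
# substitution can be carried out on numbers, as substn requires. This
# byte-based numbering is an illustrative stand-in for Goedel's encoding.
def godel(formula: str) -> int:
    return int.from_bytes(formula.encode(), "big")

def ungodel(code: int) -> str:
    return code.to_bytes((code.bit_length() + 7) // 8, "big").decode()

def substn(code: int, n: int) -> int:
    """#(F(x)), n -> #(F(n)): substitute the numeral n for the free x."""
    return godel(ungodel(code).replace("x", str(n)))

k = godel("F(x)")
print(ungodel(substn(k, k)))   # F(<k's decimal numeral>): k substituted for x
```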
I wish to point out that if ¬Bew(x) applies to Ψ, then this is the function which must also apply to z, which means that the fundamental assumption we made about there existing a function F(x) with one free variable is what is incorrect: it is through improperly initiating substitutions that we create tautology, which allows contradiction. This is a fundamental problem with the logic used in Gödel's proof, not necessarily with Q. There could be some Q where stronger foundations in logic are mandatory. Furthermore, we can easily see that Q allows tautological impredicatives, by proving Ψ.
Discussion
Did Gödel overstep in his logic by taking too many liberties with free variables?
The answer to such a question is perhaps a matter of opinion. Either it is acceptable to prove the existence of a statement which is a logical tautology in a formal system, or it is not. If it is acceptable, I think we would also have to assume that it would be difficult to prevent such a tautology from occurring, or that tautologies must be desirable in logic. However, because tautologies are not desirable, as they open up systems to paradox, meaninglessness and contradiction, I believe it would be the higher opinion to seek out a rule which invalidates their existence in a consistent system; that is, any system which can prove tautologies is, in fact, inconsistent.
Such a rule is relatively easy to find. It seems, unless one can offer an additional example beyond the one provided by Gödel, that tautologies in a formal system such as Q are only made possible by misusing substitution over a free variable. Substitution, being an important foundational element in logic, is not itself in question. Rather, a limit on how substitution may be applied perhaps should be.
We could just create a postulate or axiom which makes x a bounded variable after substitution.
Such a postulate will allow some impredicative statements, all of which can avoid tautology, but by preventing re-substitution on x, it will prevent impredicative pointing, and thus prevent tautologies from forming in Q, creating a much stronger foundation for our systems of logic and computability.
Or, if such a postulate is too restrictive, we can modify the limit of substitution over bounded x, not for substitution in general, but for statements from the free variable x, formally as follows.

Axiom. For any formal system Q, with a free variable x in formula A(x), after substitution is applied to x, x is bounded by the substitution function such that subst(subst(x)) ≠ subst(x).
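One way to picture the proposed axiom is as a one-shot substitution operator: once a variable has been substituted into, further substitution over it is refused. The sketch below is my own illustration of that restriction, not a formalization of Q:

```python
# Toy enforcement of the proposed axiom: a variable becomes bounded after
# its first substitution, so re-substitution on it is refused. This is an
# illustration of the restriction, not a formalization of Q.
class Formula:
    def __init__(self, text, bounded=frozenset()):
        self.text, self.bounded = text, bounded

    def subst(self, var, value):
        if var in self.bounded:
            raise ValueError(f"'{var}' is bounded: re-substitution forbidden")
        return Formula(self.text.replace(var, str(value)),
                       self.bounded | {var})

f = Formula("F(x)")
g = f.subst("x", 42)    # allowed: yields F(42), with x now bounded in g
print(g.text)
try:
    g.subst("x", 7)     # blocked: subst(subst(x)) is not subst(x)
except ValueError as err:
    print(err)
```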
Such a rule will strengthen, not weaken, the logical prowess of humanity.
Conclusion, P=NP
While only one method was ever truly necessary, by three novel, independent and rigorous methods we have been able to deconstruct one of the key foundations of mathematics and computer science: diagonalization. In Section 1, we constructed an impossible Supermachine which should not exist, yet it does. In Section 2, we constructed a formal language which yields a non-trivial counterexample to Cantor's Diagonal Argument. And in Section 3, we tackled the impredicative nature of Gödel's diagonalization lemma, revealing a tautology, which is logically unfounded. Again, any one method should have been enough for proof, but because of the significance of such a finding, I decided to combine over a decade's worth of research and original thought into this single cohesive, self-contained paper.
Finally, with diagonalization methods invalid, the entire complexity hierarchy collapses, and we now have a foundation in computer science where there is enough information to solve the P vs. NP problem. Without diagonalization, our reasons for not using an oracle to solve P vs. NP vanish, as the contradictions which formerly prevented the use of such an oracle no longer exist. It was inconsistency, through the use of diagonalization, that led to hierarchy results which created oracle contradictions.
Lemma. If PSPACE = EXPSPACE, then P = NP. If the SPACE of a problem increases polynomially, as with any PSPACE-complete problem, this is comparable to the TIME of a problem increasing polynomially, such that given an oracle, =_poly, which solves polynomial equivalence between SPACE and TIME, PSPACE =_poly P. Similarly, if the SPACE of a problem increases exponentially, as with any EXPSPACE-complete problem, this is comparable to NP, which is at maximum in exponential TIME, such that EXPSPACE ≥_poly NP. If PSPACE = EXPSPACE, then PSPACE ≥_poly NP. Since PSPACE =_poly P, P ≥_poly NP, which is the same as P = NP.
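Restated symbolically, and taking the asserted oracle relation =_poly at face value, the chain the lemma claims is:

```latex
\begin{align*}
  \mathrm{PSPACE} &=_{\mathrm{poly}} \mathrm{P},
  \qquad \mathrm{EXPSPACE} \ge_{\mathrm{poly}} \mathrm{NP},\\
  \mathrm{PSPACE} = \mathrm{EXPSPACE}
    &\implies \mathrm{PSPACE} \ge_{\mathrm{poly}} \mathrm{NP}
     \implies \mathrm{P} \ge_{\mathrm{poly}} \mathrm{NP}
     \implies \mathrm{P} = \mathrm{NP}.
\end{align*}
```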
Solving for β′ in Section 1 is RE-complete, and because the Space Hierarchy Theorem relies fully on the now-defunct diagonalization method, its results must be discarded.
And as such, with the Entscheidungsproblem being RE-complete, and since we may solve for arbitrary β′ using the Supermachine configuration, we may now conclude:

Proof. Since, by definition, PSPACE ⊆ RE, and since any given recursively enumerable set is contained in PSPACE, and β′ solves for all recursively enumerable sets in PSPACE, and since we can no longer accept the Space Hierarchy Theorem, RE ⊆ PSPACE ... RE = PSPACE, such that EXPSPACE ⊆ RE and RE = PSPACE implies PSPACE = EXPSPACE, which proves, through the above Lemma ... P = NP. ∎

Mark Inman, Ph.D. Prague, Czech Republic | 2017-08-18T22:36:07.000Z | 2017-08-18T00:00:00.000 | {
"year": 2017,
"sha1": "0b482d6a9fa7969b4dcbd6aabce1950935fbe38d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0b482d6a9fa7969b4dcbd6aabce1950935fbe38d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
237769014 | pes2o/s2orc | v3-fos-license | A Case for Pedagogic Writing Instruction for Pre-Service Teachers to Learn Applied Grammar in the Context of Their Own Writing
Revived interest in grammar as a tool to teach writing is a phenomenon of the 21st century, since inquiry in the 50s determined it to have "no positive impact on" writing instruction (Locke, 2005; qtd. in McCormack-Colbert, Ware, & Jones, 2018, p. 165). Yet in the past two and a half decades, the concepts of Contextualized Grammar and Pedagogic Grammar have earned recognition in English and Language Education as a new kind of instruction shown to enhance writing by providing learners strategic mini lessons in grammar. This article also proposes the converse: in a college grammar course, strategic writing instruction assists students to learn grammar and usage in an applied setting of creating, revising, and editing their own texts. This article first reviews the premises and bases for the reappearance of grammar to teach writing and then describes the writer's parallel approach to grammar instruction through the strategic use of writing assignments, referred to here as Pedagogic Writing. The article closes with an account of the performances and perceptions of pre-service teachers sent to English by the School of Education to take ENG G 207, Grammar and Usage, showing preliminarily their successful application across three semesters, from spring 2020 through spring 2021.
Introduction
Teaching traditional grammar as part of writing instruction faded with studies in the mid-last century for lack of evidence that it helped writers. Because it was decontextualized from the act of writing itself, grammar was something "done to" a text, after the fact. It has consisted of hunting for applications of grammar and usage that violate the rules or norms conveniently gathered in English handbooks. But especially since the 90s, English and Language Education scholars and researchers have revived and transformed grammar instruction by integrating it into the writing process as students generate and revise their texts. English educators call it Contextualized Grammar, while Language educators use the term Pedagogical Grammar. But they share the practice of delivering grammar instruction through strategic mini lessons to enhance writing instruction.
Shaped by an English PhD concentrating in rhetoric and composition, I have mainly taught college courses in those areas. But I have also enjoyed periodic opportunities to teach ENG G 207, Grammar and Usage, for which the School of Education sends its pre-service teachers to the English Department.
My first time occurred as three consecutive single-semester sections just before taking on administrative assignments as a school dean and as a vice chancellor in Academic Affairs for some years. Near the end of that sequence, I had grown dissatisfied with the traditional pedagogy represented by departmental sample syllabi: a grammar text and its unit materials, exercises, and tests. By the third iteration, I began to seek options within the 21st century literature.
I wanted to find methods more in keeping with my philosophy of teaching language to account for its intimate connections to both grammar and rhetoric. During that time, I found potential resources that signaled respect for the rule-relevance of grammar while building students' confidence in their literacy abilities. Although I lacked the time then to implement change, I have applied these resources since returning to the classroom in spring 2020, refreshing my excitement to teach grammar again, with an important difference. ENG G 207 is not a writing course. It uses writing to contextualize grammar instruction, not vice versa. Consequently, I call the approach Pedagogic Writing, in its parallels to Pedagogic Grammar. I have found it a viable way to cultivate interest and a willingness to invest effort in a subject that some learners may perceive as dull, too difficult, or irrelevant to their lives. Now, I am applying these discoveries to teach ENG G 207. My experiences in teaching writing and grammar have shown me first-hand that, with the right approach, they make ideal classroom companions. This article will describe the pedagogy and include preliminary evidence of student successes in spring and fall 2020 and spring 2021.
Viability of Traditional Grammar
Traditional grammar instruction has declined in the writing classroom without going away. Writing instructors still ask their students to remove errors from their writing, often in a final draft. They may or may not require a handbook, but if they do, it is likely to be used like a dictionary: as a resource for students to look up elements on their own, like commas, or sentence errors like fragments. Accordingly, learning is passive at best, as a writer may well commit the same errors on the next occasion. Kolln has criticized traditional grammar for "failing" to help learners to reflect on their writing "and the grammar that shape[s] the meaning of the message" (p. 722).
It would be unusual for an instructor under age 40 to have been exposed to non-traditional methods of teaching grammar, as they have yet to make their way into the education of future teachers. I used this approach in ENG G 207 not only prior to 2006 but also in 2020 when I resumed teaching. Grammar was not a subject occupying space in my PhD writing courses, even for a composition and rhetoric concentration. So, it was no surprise that the sample syllabi I received from the department that spring still relied on a grammar text and a pedagogy of unit exercises and tests that involved no writing. The majority of faculty who teach our grammar course are part-time, many of them with an English education comprising an M.A., often with credits heavy in literature. But whether credentialed by a master's or a PhD, we have largely been left to teach as we were taught, and many of us probably last studied grammar in middle or high school.
In contrast to traditional grammar instruction, writing instruction has evolved. For instance, critical thinking (CT) is now an expectation even in freshman composition. To paraphrase a widely cited definition among CT professionals, one who thinks critically arrives at an intentional "judgment" through interpretive, analytical, and/or evaluative acts based upon "fair-minded," "reasonable," "honest," and "clear" evidence (Facione, 1990, p. 2). To teach critical thinking, two approaches are common: "infusion," which calls on students to directly examine and apply CT concepts and principles; or "immersion," which relies on the content-course curriculum for learners to reach these outcomes naturally (Ennis, 1989). Unfortunately, CT is divorced from grammar considerations even in a writing course, except insofar as the writing process may succeed by "immersion." Micciche (2004) argues that the same "driving commitment to teach critical thinking" should be the task of "rhetorical grammar," which includes "reading rhetorically" (p. 718). Otherwise, where, when, and how may college freshmen and sophomores have learned to think critically such that it naturally flows into their use of grammar in their written texts? Moreover, even if grammar were taught by infusion, the content of a traditional grammar course lacks substance without intentional applications to a text and its context.
Reppen and Richards (2014) have argued for two perspectives on grammar: one as "knowledge" and the other as "ability." In the first case, grammar knowledge may "focus on rules for sentence formation." In the latter case, it contemplates active learning to observe and experience how students (or other writers) use grammar in their "spoken and written texts" (p. 5).
Beginnings of Substantive Grammar
Several voices have advocated teaching grammar as a matter of substance. Strate (2020) has traced grammar to the ancient Greek word gramma, referring both to "any single letter in the alphabet" and to "any written document of record" (p. 67). Further, Strate has urged us to remember that "grammar originally referred to the study of language and literature, . . . encompassing both literacy and literary criticism, poetics and the interpretation of texts" (p. 67). In that respect, grammar is "a substantive matter of textual interpretation, analysis and evaluation" (p. 67).
A recent grammar text with substance is A. K. Barry's English Grammar: Language as a Human Behavior (2013). Aimed at a one-semester course for preservice teachers, its Preface cautions students not to expect a traditional text. It takes an organic approach in drawing "insights from modern linguistics" (xi). In part, this means that readers should anticipate topics which, once raised, will reappear elsewhere to be studied more deeply or in another context. Barry's arrangement also demonstrates how grammar elements "interrelate and function together, rather than being wholly separate" (xii). The focus on language as a human behavior emphasizes the "complex interaction between language rules" and their applications to writing (xii). So, instead of using terms like "correctness," the text speaks of "appropriateness" in what is said, heard, or read. Although Barry raises "usage and usage questions whenever relevant" (xii), the Preface immediately affirms students' capabilities as language users, assuring them that in their study of grammar, they will "build . . . on what they already know, to develop an appreciation for how language works" (xi). I have been using Barry primarily as a teaching resource and for the section devoted to prospective English teachers.
In English Education, grammar instruction was invigorated by Teaching Grammar in Context (Weaver, 1996) and Teaching Grammar to Enrich and Enhance Writing (Weaver & Bush, 2008). Attesting to their value, Richard Nordquist (2020), a teacher-scholar with texts to his own credit, has unabashedly recommended them to practicing and prospective English teachers. Also, he has helped to disseminate their principles for teaching in relation to writing. Below I include those which have been especially useful to me, first as I looked for an alternative to traditional grammar and then as I have designed and implemented the new curriculum for ENG G 207.
"Teaching grammar divorced from writing doesn't strengthen writing and therefore wastes time." "Sophisticated grammar is fostered in literacy-rich and language-rich environments." "Grammar options are best expanded through reading and in conjunction with writing." "Grammar conventions taught in isolation seldom transfer to writing." "Marking 'corrections' on students' papers does little good." "Grammar instruction should be included during various phases of writing (Qtd. in Nordquist, 2020).
In the first book, Weaver helpfully synthesizes four ways to view grammar, deduced from a variety of texts and contexts: grammar as a) syntactic structure, b) prescriptions for the use of that structure, c) rhetorical effectiveness in that use of structure, and d) fundamental "sentence sense," the ability to "comprehend and generate language" (Weaver, 1996, pp. 1-2). The second book contrasts the theoretical grounding of traditional and contextualized instruction: the one is "reductive," requires students to learn grammar rules before writing, assesses their learning through tests apart from a context, and emphasizes "correctness" of "mistakes," thus associated with the prospect of failure; the other is "productive," involves students in writing as a context for mini grammar lessons, assesses learning through a variety of texts and their contexts, and treats errors as natural to learning, thereby associated with success (Weaver & Bush, 2008, pp. 81-82).
Among Weaver's advocates, Chin (2000) has recommended and reprinted with permission Weaver's list of how to attain a "Minimum of Grammar for Maximum Benefits." The one below is a partially adapted list of those most useful to my approach in ENG G 207:
Editing is a good time to teach "concepts on subject, verb, sentence, clause, and phrase."
Style can readily be taught "through sentence combining and sentence generating."
Sentence-sense benefits from having students manipulate their "syntactic elements."
In approaching "dialects of power," it is wise to also teach the "power of dialects."
Teaching "for convention, clarity, and style" is a prime time to teach "punctuation and mechanics."
Chin herself (2000) has also endorsed Weaver's classroom strategies that are now widely practiced in teaching grammar in the context of writing, including sentence combining and sentence generation activities, learner-to-learner and learner-to-teacher interactions, peer partnering, as well as conferencing with students on their progress or challenges when they arise. As a regular practice, Chin encourages students by speaking to them in terms of "working together to expand their repertoire of syntactic and verbal styles," as opposed to hunting for and "fixing" errors. Over the three semesters since I shifted away from the practices of traditional grammar, I have worked deliberately to excise my use of terms like "correcting mistakes" in favor of "finding opportunities for revision and editing." Extending this line of thinking, Weaver, Bush, Anderson, and Bills (2006) have urged writing instructors to intertwine grammar mini lessons throughout the writing process, not just by integrating them but by employing them with the right kind of activities. For instance, to make writing itself more impactful, learners should consider their "purposes and audiences" (p. 78) if they are to discover how their sentences "create meaning" for them (p. 99). To this advice, Ediger (2018) has suggested grounding lessons in "real-life activities" and "varied kinds of writing" to deepen knowledge of how grammar works in context (p. 147).
Emergence of Rhetorical Grammar
Reppen and Soetaert (2012) have acclaimed the historical ties between grammar and rhetoric as still very much alive. In the two decades in which I have taught writing, assignments have included rhetorical strategies. For instance, in fall 2020, my freshman composition students did a rhetorical analysis of Cady Stanton's "Declaration of Sentiments" as their first major paper and compared it to the "Declaration of Independence." For that purpose, they learned to identify and discuss how the documents related to their audiences, using Aristotle's appeals to reason (logos), feeling (pathos), and character (ethos) (Furley & Nehamas, 1994). But until I implemented Pedagogic Writing in teaching ENG G 207, grammar students did not engage substantive matters, as they did no writing. However, Kolln and Gray's (2011) text Rhetorical Grammar gave me tools to incorporate rhetorical analysis and evaluation in my recent classes. As a result, assignments help learners acquire a "conscious knowledge of [their] sentence structure" as a necessary part of rhetorical strategy. To illustrate, rhetoric and grammar occupy one space when writers address issues like cohesiveness and conciseness. Applying Kolln and Gray's work has given me ways to build students' confidence in recognizing and appreciating "their own language ability" as part of "the intuitive grammar expertise" innate in all human beings (xiii).
Another sensible account of rhetorical grammar appears in Joty, Carenini, and Ng (2015), whose thinking resembles that of Kolln and Gray. "Clauses and sentences rarely stand on their own in an actual discourse"; consequently, rhetorical analysis enables students to "uncover" how the parts of their texts combine to create a coherent structure (p. 385) to serve their purpose and audience. By incorporating Pedagogic Writing in ENG G 207, to this end and others, my grammar students write three short narratives during the semester, each focused on a different aspect of literacy, broadly defined, as detailed in this article's third and fourth sections.
Relationship to Literacy
Among the positive educational forces recently brought to bear on grammar, another is the close relationship of substantive grammar instruction to teaching writing, and thus to cultivating literacy skills. The American Institutes for Research (2021) has singled out literacy as "the fundamental skill that unlocks learning and provides individuals with the means to pursue knowledge and enjoyment independently." Experience shows that skills like writing develop gradually over time; however, it makes good sense that those privileged to teach students in courses closely related to literacy skills provide the best kind of instruction, including in writing and grammar. The World Literacy Organization (2021) has related student development of literacy skills to "rich discussions in the classroom" in which they apply learning "to new and different contexts." In this way they can "analyze, synthesize, and evaluate what they learn." As a result, "literate students" position themselves "to challenge the assumptions and implications of ideas and institutions" and to apply learning "to take action on important issues. . . . to change the world." In this context, it stands to reason that leaders in public education would support literacy education.
But it is not that simple. In my state, since the 1990s, policy-driven forces of government have taken an unanticipated toll on the capacity of four-year public colleges and universities to facilitate the literacy of students with basic needs. By implementing a community college system and limiting four-year degree programs to a set number of credit hours (typically 120), four-year institutions have found themselves legally and/or morally unable to meet the literacy needs of some of their admits. In theory, these policies constructively aimed to make associate and baccalaureate degrees available to more residents in less time and at less cost. Yet they have deprived some students of an immediate opportunity for literacy support, such as that once coordinated by a Director of the Writing Center. As mentioned earlier, the problem is that any grammar attempted in service of writing remains decontextualized, awaiting opportunities for faculty to learn and implement writing as a substantive way to teach grammar.
Language Studies of Pedagogic Grammar Instruction
Akin to the earlier discussion of teaching grammar with substance in the service of writing instruction, one research team partnered with an instructor in a Wales secondary school to implement their module with a group of five learners with dyslexia. In the lesson, the learners analyzed a piece of nonfiction writing and applied what they saw there as useful to write their own "scientific magazine article." Shortly after the experiment, the instructor shared her belief with the researchers that applying the model's patterns to their own work seemed to stretch the learners' comfort zone and to induce "greater confidence" (p. 180).
Similarly, Robinson and Feng (2016) have affirmed the strategic use of targeted grammar instruction at key moments when students are actively writing. Concerned that fewer than a third of American elementary students have "effective strategies to access what they know" about language in order "to build on that knowledge," they found that subjects increased the quality of their writing when they received direct grammar instruction through well-timed mini-lessons addressing sentence structure, mechanics, and usage. Writing also provides opportunities to discover and generate ideas, organize them coherently, think through them to solve problems, and reflect on them from a distance. Gray and Smithers (2019) drew a conclusion comparable to Robinson and Feng's (2016) in testing the efficacy of task-based language teaching (TBLT) to promote second and foreign language skills development. They noted that L2 learners "often lack the functional support for accurate, fluent output," but when they replaced traditional grammar explanations with a pedagogical grammar known as MAP, "a semantic meaning-order approach," subjects strengthened their "form-to-meaning understanding" (p. 88).
MAP first includes "a synthetic approach," in which "grammar is synthesized for communicative use," and then "an analytic approach" that analyzes communicative use as served by students' grammatical forms (p. 90). The investigators concluded that although using TBLT and MAP separately had increased "syntactic complexity," when they were combined, greater "gains in accuracy and fluency" occurred, as the treatment directed learners' "attention to a sequence of functional choices," thereby "simplifying or eliminating" a need "for metalinguistic explanation" (p. 104).
The most innovative approach to Pedagogic Grammar in my review of this literature came from Rule (2017), who blended a MAP and rhetorical approach to sentence style with the "neuroscientific concept of embodied simulation," which involves "visual, motor, and spatial modalities of the body" to "attune writers to the felt effects of written language" and the prospect of revision (p. 19). She concluded that the subjects' exposure to this mode of "invigorated instruction" enabled them to transcend a gap between "knowing about grammar" and "knowing how to do grammar," when invited to refine their meaning through a change of syntax (p. 19). I use this approach to revision and editing throughout the semester in ENG G 207.
G 207's Strategic Use of Students' Own Writing to Learn Grammar and Usage
Notably, the strong interest in grammar instruction found within the research and scholarship discussed above has aimed to improve writing through carefully targeted grammar instruction. But G 207 is obliged to teach grammar to its pre-service teachers. How, then, does writing become a means and not the end of course assignments that include three papers with multiple drafts stretching over the semester? The rest of the article will illustrate how G 207 has addressed this question through the concepts, principles, and methods alluded to in the literature review.
Course Materials
Instead of a textbook, students learn grammar through writing their own texts (i.e., the course papers identified below).
Illustrated Task-Set on Sentence Types
Beginning with the second draft of paper 2, I provide students the following task-set to learn the features of sentence types and to apply them to identify those in their texts for later revision.
Task 5: Using the handout advice on "Good Practices for When to Intentionally Use a Specific Sentence-Type": a) closely examine the sentences original to your draft 2's first two paragraphs and highlight in yellow the number of any sentences you want to revise based on the advice; and b) just below the yellow text throughout the two paragraphs, revise the sentence into the recommended type, using green text. Finally, write 2-3 sentences on any "patterns" you have discovered in your sentence-type use to better serve your sentence-level revision for the final draft.
Course Papers
Students write the texts to which they will apply varied course guidance during the term. For example, each of three papers is a short (3-4 page) narrative essay in three drafts, unified by a literacy theme.
In writing their literacy narratives, students tell stories from their lived experience in relation to a given literacy concept. These concepts draw on Langer's distinction between signs and symbols: animals respond only to the presence of something they experience. If it is threatening, they instantly either "flee" or "fight." However, only human beings have the capacity to interpret symbols, thereby freeing them to "make something" out of an experience through "conceptualizing." In that process, they may take account of the present, as well as reflect on prior experience and project their thoughts into the future to consider goals and values. Langer has spoken of the decisions emerging from our conceptualizing as becoming part of the larger "history of human culture-of intelligence and morality. . . and religion. . . always and only, found in human societies" (p. 56). These are the foundational ideas for the three course papers.
Narratives call for learners to be both the main character and the narrator who conceives and unfolds a plot, sets it in a scene, and peoples it with other characters who interact through periodic dialogue, as the action rises to a high point of tension before resolving.
Biographical models of literacy narratives are analyzed for these features and for their significant content on coming to language by Malcolm X, Helen Keller, David Raymond, and a student paper, all anthologized in Eschholz, Rosa, and Clark (2005). G 207 students are free to choose their topics within the context of the literacy concept central to each paper.
Pedagogic Grammar instruction unfolds strategically in connection with the sequence of drafts, with increasing focus on revision and editing to embody more effective sentence construction and application of other elements of grammar and usage, to be demonstrated in subsequent drafts.
Writing Prompts
Paper # 1 asks students to work within Langer's concept of a sign to write a personal-experience narrative involving how a sign was "read" in a way that had an impact on their life at the time. Popular choices have included interpreting the behaviors of (or as) a child or a pet, or the taking of a risk, thereby enriching their understanding.
Paper # 2 calls for students to work within Langer's concept of a symbol to tell a personal story about a challenge they faced and what it symbolized for them, then and now.
Paper # 3 requests students to choose an event or closely related set of events that significantly impacted or shaped their literacy as a writer or reader. Literacy may be defined narrowly to focus on their writing or reading experience, or broadly to focus on an experience in which they used their power of conceptualization to work through a confusing or problematic situation. For this project, they read and analyze more of the literacy-narrative models mentioned above. A common topic that has involved both meanings of literacy has related to a behavior in which the learner was forced by a parent, teacher, or significant other to do something to which he or she was strongly opposed. But in living it through and looking back on it now, he or she was able to re-assess the experience as turning out well.
The first-draft rubric lays out performance criteria for skills in Narrative, Literacy, Writing, Sentence, and Grammar/Usage. To illustrate, "Writing Skills" focus on big-picture items like purpose, thesis, audience, and organization. "Grammar Skills" at this stage address sentence-boundary issues and the comma rules most relevant to them, as well as spelling, capitalization, and agreement.
The rubric for draft two adds or modifies some first-draft criteria based on new instruction. For example, narrative writing benefits from the type of language used: verbs playing a more significant role, descriptive language to clarify the scene and to develop character interactions, dialogue that lets the audience "hear" the characters, and the story's development unfolded by the narrator-writer. Additional comma rules for sentence construction are integrated into the mix as well.
The final-draft rubric expands the criteria for Sentence and Grammar and Usage Skills to expect more mature sentence construction, students having completed, for example, the sequenced task-set appearing above; it also expects effective use of both the semicolon and the colon and all comma usage. Note that, as described below, these criteria also function separately as an applied grammar test. [See the Appendix for the rubric guiding students through paper 3's final draft at the end of the course.]
Applied Grammar and Usage Tests
Each final draft embeds a progressively complex applied test of Sentence Skills and Grammar and Usage Skills.
Criteria match what students should know and be able to do with sentence construction and effective punctuation and mechanics at that time.
The test score counts toward the final draft score of each course paper, as well as separately as an applied test score, thus giving greater weight to grammatical outcomes overall.
Classroom Instruction
An assignment is due each session and provides the basis for discussion of relevant grammar and writing instruction. As an extra incentive, students who attend class earn 10 participation points plus the designated score for the specific assignment.
All sessions run for 75 minutes twice a week and continue to be delivered by Zoom, owing to restraints on physical contact arising from health concerns related to COVID-19.
Zoom break-out sessions in pairs or small groups sometimes precede whole-class discussion, each kind generating constructive criticism on applications of the day's topic(s) for instruction.
Whole-class discussion involves every student giving examples from his or her work, asking questions about it, and exchanging suggestions with classmates.
Indirect Assessment as a Tentative Tool Pending a New Assessment Plan
Although much has been accomplished since resuming instruction of G 207 in spring 2020, I will turn my attention in fall 2021 to creating an authentic assessment plan tied to course outcomes. Included in this work will be re-designing the rubrics for each draft of the three course papers to identify how criteria tie to the course outcomes. I will continue to use students' final drafts to embed three applied grammar and usage tests.
Student Performances at Three Checkpoints by Current Grade Averages
Meanwhile, for my own understanding and for my Faculty Annual Report to the dean, I have relied on grade-point averages at given checkpoints during the term. Course points are distributed widely: daily work generally carries 15-25 points, first and middle drafts of papers 40-50 points, and final drafts 80-100 points, plus the separate applied test score of 40-50 points assessed by the rubric's performance criteria for sentence and grammar and usage skills. Altogether these categories account for over half of the total points possible. Our Canvas delivery system allows students to access their grades and averages very quickly throughout the course, while campus policy asks faculty to report to students every few weeks on their "engagement" (attendance, participation, assignment completion, and performance level). I use this opportunity to affirm learners for what they do successfully and to assist them with a conference or a referral, as needed.
Student Performances
The information below on student performance relates to the three checkpoints I have made in each of the three semesters of G 207 taught since spring 2020.
Table 1. Student Performances Reflected as Grade Averages at Checkpoints
Data available for spring 2020 (13 enrolled), across three checkpoints of course averages; at the first checkpoint (02/12/20): A (1), B (6). Comments: These data show fluidity in how students began their work in G 207, maintained it through and after mid-term, and concluded it, all in 15 weeks. During each semester, some students or their loved ones were infected by the COVID-19 virus. All of them attended Zoom classes as they felt up to it during their isolation period and illness. Most of them made up the work they had to miss. However, the lack of continuity that naturally occurred made tasks more difficult. Although they had the assignments and related material, having to miss class discussions impacted grade averages to some degree. Most changes in course average over these checkpoints involved movement between the B and C range. The data also showed that more students enrolled in G 207 in fall than in spring, which I had also experienced in the earlier sequence in which I taught the course. A few withdrawals came in the first week or two, with two more nearer the end, from students who had encountered problems in life or work that interfered with the time or energy they could devote to their studies.
Spring 2021 Student Self-Assessment Of Course Progress
Anticipating the upcoming assessment project, I experimented in spring 2021 by asking students for feedback on how they saw themselves improving over the term in what they knew and/or were able to do. For this purpose, I provided them the list of outcomes below and asked them to quantify their perceived progress from the outset of the term, using a scale of 1-5 (imagining it set to zero the first day of class).
Course outcomes variously emphasize grammar/usage and writing. However, G 207 relies on the sentence as the foundational unit of grammar/usage. So, learners ultimately demonstrate these skills through their drafts of the three literacy narratives. Course methods also account for an organic overlap
of grammar/usage skills with the rhetorical skills reflected by students' choices to be clear and intentional in their content.
Yet these options are also embedded within the specific criteria used to assess skills in narrative, literacy, writing, and sentence construction. Accordingly, the impact of Pedagogic Writing instruction in helping students learn grammar is arguably difficult to distinguish from that of Pedagogic Grammar instruction aimed at improving student writing. But in a course devoted to grammar, this overlap does not concern me. My hope is to encourage more grammar instructors to incorporate strategic writing assignments so that students can use their own texts to develop what they know and can do. Meanwhile, pending further study and experience, students' perceptions about the course and their learning experience will be a useful guide to the next steps for development and/or ongoing improvement. Among the challenges students reported were having so many smaller assignments to keep up with, being able to follow some assignments, and meeting on Zoom, which sometimes had connectivity problems.
Student Evaluations of Teaching
Comments: Although I chose to summarize only those responses reflecting similar thoughts or feelings of students in at least 2 of the 3 sections, my own review always examines and reflects on the comments of every student. In many cases, the comments were positive, including several of those above. In other cases, they have provided useful perceptions whose implications I can reflect upon to improve instruction going forward. One reason some students may not have responded to the SETs in the latest section was their having just completed the Self-Assessment Survey. The last few comments in the table just above will be topics for review and planning this summer.
Limitations and Conclusion
One hope of this article has been to raise awareness among language educators of Pedagogic Grammar as a promising tool for writing instruction. Yet it is also an argument for Pedagogic Writing instruction as a fruitful way to support students in learning grammar in the context of their own writing, the approach illustrated here for ENG G 207 over the past few semesters. Students' testimony, though a small sample across three terms, encourages me that we are on to something better than I had experienced earlier using the traditional method. Nonetheless, those for whom I write here represent a broad spectrum of fields and specialties. Few of you may be teaching a course devoted only to grammar and usage. Yet you may perhaps find other reasons to connect with this content. Language as discourse has a way of drawing diverse elements together. As writers and readers, we share a common desire for our students to learn: to know and/or be able to do the various objectives to which we direct their energies. Likely, we have also helped to motivate them, whether with a single nudge to do their best or a series of them to keep on trying throughout the term. Further, we may also have tried to help them gain a sense of self-satisfaction or pride in what they have accomplished. Each effort we make may be more impactful than we know or imagine. In this context, I hope that you have found something of value here for your own language instruction. [Appendix rubric excerpt: "Uses the semicolon and the colon effectively for the purposes and occasions studied (3 pts)"] | 2021-09-28T01:09:16.660Z | 2021-07-12T00:00:00.000 | {
"year": 2021,
"sha1": "4e96868676a019d5c7b4d7e47d4140749fc45d0c",
"oa_license": "CCBY",
"oa_url": "http://www.scholink.org/ojs/index.php/selt/article/download/4049/4417",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3e881984cb56cec46ef495beca397121eebd92ac",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
267635250 | pes2o/s2orc | v3-fos-license | Global burden of all cause-specific injuries among children and adolescents from 1990 to 2019: a prospective cohort study
Background: To assess the burden and change in incidence, death, and disability-adjusted life years (DALYs) for all cause-specific injuries among children and adolescents in 204 countries and territories between 1990 and 2019. Materials and methods: Data were extracted from the Global Burden of Disease, Injury, and Risk Factor Study 2019 (GBD 2019). Global, regional, and country-level age-standardized rates (per 100 000) of incidence (ASRI), mortality (ASRM), and DALYs (ASRD) with 95% uncertainty intervals (95% UI) of injuries were estimated by age, sex, socio-demographic index (SDI), and cause of injury from 1990 to 2019. Results: Overall, the ASRI, ASRM, and ASRD of injury among children and adolescents worldwide in 2019 were 9006.18 (95% UI: 7459.74–10 918.04), 23.04 (20.00–26.50), and 2020.19 (1759.47–2318.64), respectively. All of the above indicators showed a downward trend from 1990 to 2019. Among level 2 causes of injury, both transport injuries and unintentional injuries declined globally during the study years, while self-harm and interpersonal violence-related injuries showed an increasing trend. High SDI regions had a higher ASRI of injuries, but low SDI regions had higher ASRM and ASRD of injuries globally in 2019. Males had a higher burden of injuries than females. The ASRI of injuries was higher in adolescents aged 15–19 years, whereas mortality and DALY rates were higher among children under 5 years old. Moreover, adolescents aged 15–19 years and individuals living in Central Asia, the Middle East, and Africa had higher ASRI, ASRM, and ASRD of injuries owing to self-harm and interpersonal violence. Generally, falls and road traffic injuries were the leading causes of injury among the population aged 0–19 years worldwide, but self-harm, interpersonal violence, and conflict and terrorism were also leading types of injury in some regions, particularly in low-income and middle-income countries. Conclusions: Injury remains a major global public health problem among children and adolescents, although its burden at the worldwide level showed a decreasing trend from 1990 to 2019. Of concern, the burden of injuries caused by transport injuries and unintentional injuries has shown a downward trend in most countries, while the burden caused by self-harm and interpersonal violence has shown an upward trend. These findings suggest that injury prevention strategies should be reoriented to be more targeted and specific, and our study provides important findings for decision-makers and healthcare providers seeking to reduce the injury burden among children and adolescents.
Introduction
Currently, injury has become one of the major causes of death and disability among children and adolescents worldwide [1]. It is responsible for the deaths of more than 4.4 million people annually and imposes a significant burden on global health [2]. Motor vehicle collisions, falls, and interpersonal violence are the top three global causes of death and disability across all ages [3]. However, there are few reports on the burden of injuries among children and adolescents. A recent study from the Global Burden of Disease Study 2019 (GBD 2019) reported that transport and unintentional injuries among adolescents (aged 10-24 years) represent substantial causes of health burden, in terms of both the young lives lost and the lifelong impacts of disability [4], which indicates a need to protect adolescents from transport and unintentional injuries worldwide.
Notably, children under 20 years of age are predominantly involved in injuries owing to the behaviors and characteristics of their developmental stage [5]. Meanwhile, they are a valued group in society whose future contributions matter, yet there has been far less focus on injury-attributable disability in children than in adolescents. Specifically, childhood road traffic injuries are a major public health concern worldwide. In Turkey, the fatality rate from road traffic injuries among children aged 0-14 years increased from 1.41 per 100 000 in 2006 to 2.13 per 100 000 in 2019, and it was highest among children aged 0-9 years [6]. Additionally, falls are the most common cause of injury-related hospitalization in children younger than 5 years old [7]. Moreover, according to the WHO, violence-related injuries are a main cause of death of children, with ~0.95 million children and young people under the age of 18 dying from injury and violence worldwide each year [8]. Furthermore, a cross-sectional assessment of adolescents involving 11 European countries suggested that 7.8% of adolescents suffered from repetitive nonsuicidal self-injury [9], with prevalence increasing at age 13-14 and peaking at around age 15-16 [10]. According to GBD 2017, the Eastern Mediterranean Region countries bear a heavy injury burden that largely impacts child and adolescent safety and health, and the leading cause of injury deaths and disability-adjusted life years (DALYs) was self-harm and interpersonal violence [11]. Nevertheless, there is no report on self-harm and interpersonal violence-related incidence, mortality, and disability among children and adolescents worldwide.
In order to reduce the injury burden and promote child and adolescent health, there is an immediate need to grasp the epidemiologic characteristics of childhood and adolescent injuries worldwide. Herein, we describe the pattern of incidence, mortality, and DALYs from all-cause injuries in children and adolescents and report trends over the past 30 years using GBD 2019.
Data resource, injury definition, and its causes
The GBD 2019 study is a prospective cohort study comprising a comprehensive assessment of disease, risk factors, and health losses related to incidence, death, and disability, conducted by the Institute for Health Metrics and Evaluation (IHME, http://www.healthdata.org) and covering 21 geographical regions composed of 204 countries and territories from 1990 to 2019 [12]. Details of the methodology used in GBD 2019 have been described in previous studies [12][13][14]. The WHO defines adolescents as individuals in the age bracket of 10-19 years. Herein, the current study focused on the methods and statistical analyses used to estimate the injury burden among children (0-9 years) and adolescents (10-19 years) from the GBD 2019 study. Briefly, as per the GBD Data Dictionary, injury is one of the three broad categories of causes of death and disability (the others being noncommunicable diseases and communicable, maternal, neonatal, and nutritional diseases), and it comprises the following three categories: transport injuries, unintentional injuries, and self-harm and interpersonal violence. International Classification of Diseases (ICD) external-cause codes were used for mapping the different categories of injury etiology (Table S1, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). The main outcomes include incidence, death, and DALYs among children and adolescents.
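Table S1 gives the exact GBD cause map; as a rough orientation, the standard ICD-10 external-cause ranges can be bucketed into the three level 2 categories named above. The following sketch illustrates that grouping; it is a simplified illustration rather than the GBD map itself, and the example codes are arbitrary.

```python
# Minimal sketch: bucket ICD-10 external-cause codes into the three
# level 2 injury categories. The ranges used are the standard ICD-10
# groupings (V01-V99 transport; W00-X59 accidental; X60-Y09 self-harm
# and assault); the authoritative GBD mapping is given in Table S1.

def icd10_injury_category(code: str) -> str:
    """Classify a three-character ICD-10 external-cause code, e.g. 'X70'."""
    letter, number = code[0].upper(), int(code[1:3])
    if letter == "V":
        return "transport injuries"                    # V01-V99
    if letter == "W" or (letter == "X" and number <= 59):
        return "unintentional injuries"                # W00-X59
    if (letter == "X" and number >= 60) or (letter == "Y" and number <= 9):
        return "self-harm and interpersonal violence"  # X60-Y09
    return "other/unmapped"

if __name__ == "__main__":
    for c in ["V43", "W10", "X42", "X70", "Y04"]:
        print(c, "->", icd10_injury_category(c))
```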
DALYs estimation
Mortality-to-incidence ratios (MIRs) were applied to the final injury mortality measures to estimate incidence. The Cause of Death Ensemble model (CODEm) analytical tool, which explores a large variety of possible models to estimate trends in causes of death, was used to generate injury-related death estimates [15]. To estimate years of life lost (YLLs), each death caused by injury was multiplied by the standard life expectancy at that age. Years lived with disability (YLDs) were calculated by multiplying the prevalence of each sequela by the sequela-specific disability weight. A summary measure of cause burden based on both injury health losses and premature deaths was reported as DALYs, the sum of the YLLs and YLDs.
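The DALY arithmetic just described is straightforward; the sketch below walks through it with made-up deaths, prevalences, life expectancies, and disability weights (none of these numbers are GBD estimates).

```python
# Illustrative sketch of the DALY arithmetic described above:
#   YLL  = deaths x standard life expectancy at age of death
#   YLD  = prevalent cases x disability weight (summed over sequelae)
#   DALY = YLL + YLD
# All input values below are hypothetical, not GBD estimates.

deaths_by_age = {2: 120, 12: 40, 17: 60}                 # age -> injury deaths
std_life_expectancy = {2: 86.0, 12: 76.2, 17: 71.3}      # reference table (made-up values)

sequelae = [                                             # (prevalent cases, disability weight)
    (5_000, 0.06),   # e.g. a moderate fracture
    (1_200, 0.20),   # e.g. a severe traumatic brain injury
]

yll = sum(n * std_life_expectancy[age] for age, n in deaths_by_age.items())
yld = sum(cases * dw for cases, dw in sequelae)
dalys = yll + yld
print(f"YLL={yll:.0f}, YLD={yld:.0f}, DALYs={dalys:.0f}")
```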
Socio-demographic index estimation
The socio-demographic index (SDI) is a comprehensive indicator of economic growth, educational attainment, and fertility. The three components of the SDI are per capita income, the average educational level of people aged 15 and above, and the total fertility rate under 25 years of age [11]. The minimum value of the SDI during the evaluation period is set to 0, and the maximum value is set to 1. A location with an SDI of 0 indicates a theoretical minimum level of development status relevant to health outcomes, while a location with an SDI of 1 indicates a theoretical maximum level. The SDI is stratified into quintiles to provide comparable groupings of locations for analysis.
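In the GBD framework, the SDI is constructed as the geometric mean of the three components after each is rescaled to the 0-1 range described above (with the fertility scale inverted, since lower fertility maps to higher development). The sketch below uses hypothetical input values and rescaling bounds purely for illustration.

```python
# Sketch of the SDI construction: rescale each component to 0-1 against
# its observed minimum/maximum, invert the fertility scale, and take the
# geometric mean. All numeric inputs here are hypothetical.

def rescale(x, lo, hi, invert=False):
    v = (x - lo) / (hi - lo)
    return 1.0 - v if invert else v

income = rescale(x=12_000, lo=250, hi=75_000)              # per capita income
education = rescale(x=10.5, lo=0.0, hi=17.0)               # mean years of schooling, age 15+
fertility = rescale(x=1.2, lo=0.0, hi=3.0, invert=True)    # total fertility rate under 25

sdi = (income * education * fertility) ** (1 / 3)          # geometric mean of 3 components
print(f"SDI = {sdi:.3f}")   # 0 = theoretical minimum development, 1 = maximum
```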
HIGHLIGHTS
• The burden of self-harm and interpersonal violence among older adolescents remains a challenge.
• Greater incidence of injuries was observed in adolescents, but higher death and disability-adjusted life year rates occurred in children younger than 5 years.
• Countries or regions with a higher socio-demographic index have higher incidence but lower mortality and disability.
• Falls and road traffic injuries are the leading types of injury worldwide.
• Self-harm, interpersonal violence, and conflict and terrorism-related injuries also warrant attention in some regions, particularly in low-income and middle-income countries.
Ethics approval
This manuscript was produced as part of the GBD Collaborator Network and in accordance with the GBD Protocol. The institutional review board of the Guangdong Provincial People's Hospital determined that the study did not need approval because it used publicly available data (KY-Q-2022-495-01).
Statistical analysis
The age-standardized rates of incidence (ASRI), mortality (ASRM), and DALYs (ASRD) were calculated based on the GBD reference population. Data are expressed as absolute values with 95% uncertainty intervals (UI). The rates of incidence, mortality, and DALYs are expressed as the number per 100 000 population by age, sex, region, country, and year. A joinpoint regression model was used to evaluate the temporal trend of the ASR of incidence, death, and DALYs of injury from 1990 to 2019. The average annual percentage change (AAPC) with 95% CI was calculated by fitting a regression line to the natural logarithm of the rates, using the year as the regression variable, to evaluate trends in childhood injury incidence, death, and disease burden over the past 30 years. If the AAPC and its 95% CI were higher or lower than zero, this reflected an upward or downward trend, respectively; otherwise, the trend was considered stable. Specifically, children and adolescents were categorized into four groups: 0-4 years, 5-9 years, 10-14 years, and 15-19 years. In addition, the associations between the burden of injuries and SDI were examined using Pearson's correlation analysis. Data extraction, sorting, and cleaning were conducted in Microsoft Excel 2010, and all statistical analyses and figures were produced with R (version 3.2.3) and Joinpoint (version 4.8.0.1). A P-value of less than 0.05 was considered statistically significant. This study is reported in line with the Strengthening the Reporting of Cohort, Cross-sectional, and Case-control Studies in Surgery (STROCSS) criteria [17] (Supplemental Digital Content 1, http://links.lww.com/JS9/B884). GBD 2019 complies with the Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER) statement.
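To make the two rate summaries concrete, the sketch below computes a directly age-standardized rate against a reference population and a single-segment AAPC from a log-linear fit. All input numbers are invented, and real joinpoint software fits segmented regressions rather than the single straight line shown here.

```python
# Sketch of the two rate summaries used in this study, with made-up data.
# 1) Age-standardized rate: weight age-specific rates by a reference
#    ("standard") population age structure.
# 2) AAPC: fit ln(rate) = a + b*year; the percent change per year is
#    100*(exp(b) - 1). Joinpoint software averages slopes over segments;
#    a single segment is shown here for simplicity.
import math

def age_standardized_rate(cases, person_years, std_weights):
    # inputs are lists aligned by age group; std_weights sum to 1
    return sum(w * (c / py) * 100_000
               for c, py, w in zip(cases, person_years, std_weights))

def aapc(years, rates):
    n = len(years)
    mx, my = sum(years) / n, sum(math.log(r) for r in rates) / n
    b = (sum((x - mx) * (math.log(r) - my) for x, r in zip(years, rates))
         / sum((x - mx) ** 2 for x in years))   # OLS slope on log rates
    return 100 * (math.exp(b) - 1)

asr = age_standardized_rate(cases=[50, 30, 80, 140],
                            person_years=[9e5, 8.5e5, 8e5, 7.5e5],
                            std_weights=[0.27, 0.26, 0.24, 0.23])  # ages 0-4 ... 15-19
rates = [30.0, 28.8, 27.5, 26.1, 25.3]          # hypothetical ASRs per 100 000
print(f"ASR = {asr:.1f} per 100 000")
print(f"AAPC = {aapc(list(range(2015, 2020)), rates):.2f}% per year")
```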
Similarly, unintentional injuries were the leading cause of injury death and DALYs among children and adolescents aged 0-19 years (Fig. 1). Among level 3 causes, falls, exposure to mechanical forces, other unintentional injuries, foreign body, and animal contact ranked as the top five causes by ASRI of injuries in 2019 (Fig. 2A). However, road injuries, drowning, interpersonal violence, self-harm, and falls caused the greatest burden of death and disability (Fig. 2B, C).
At the sex level, the ASRI, ASRM, and ASRD for injuries, as well as for the other level 2 causes of injuries, remained about twice as high in males as in females in both 1990 and 2019 (Figures S1-S4, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). There was a decreasing trend in ASRI, ASRM, and ASRD for both sexes in terms of transport injuries and unintentional injuries (Fig. 1). There is growing concern about injury incidence caused by self-harm and interpersonal violence among children and adolescents aged 0-19 years, which peaked first in 1994, again in 1999, and a third time in 2016, as shown in Figure S4A (Supplemental Digital Content 2, http://links.lww.com/JS9/B885). Moreover, the related mortality and disability also peaked in 1994 (Figure S4B, C, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). Although unintentional injury was the major cause of injury incidence in all age groups, self-harm and interpersonal violence was the leading cause of injury-related deaths and disability among adolescents aged 15-19 years in both 1990 and 2019 (Fig. 3).
Among people aged 0-19 years, adolescents aged 15-19 years had the highest incidence rate of injuries, while children aged 0-4 years had the highest mortality and disability rates (Figure S5, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). Moreover, the global burden of the level 2 hierarchical classification of injuries varied widely by age group (Figures S6-S8, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). Notably, adolescents aged 15-19 years had the highest incidence, mortality, and disability rates of self-harm and interpersonal violence compared with other age groups (Figure S8, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). The burden of the level 3 hierarchical classification of injuries by age and sex is shown in Figure 4; the greatest mortality and disability of injuries in 2019 occurred in males aged 15-19 years and females younger than 5 years.
Regional burden of injury across childhood and adolescence
Furthermore, regions with a high SDI level had the largest ASRI (15 042.88, 95% UI: 11 826.26-19 008.01) in 2019, but the lowest AAPC (−0.34, 95% CI: −0.44 to −0.25) compared with other SDI regions (Table 1). In contrast, regions with a low SDI level had the highest ASRM and ASRD of injuries in 2019 (Figure S9, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). The ASR of mortality and DALYs decreased in all SDI quintiles from 1990 to 2019, with high-middle and middle SDI regions experiencing the largest decrease (Table 1). Among the 21 GBD regions, Australasia, followed by Central Europe and Eastern Europe, had the highest ASRI of injury among children and adolescents, while Western East Asia, followed by Oceania and Sub-Saharan Africa, had the lowest in 2019. Only Southern Latin America had an increasing trend in the ASRI of injury, with an AAPC of 0.06 (95% CI: 0.00-0.13). Oceania, followed by the Caribbean and Central Sub-Saharan Africa, had the highest ASR of mortality and DALYs, while Western Europe had the lowest in 2019. The ASR of mortality and DALYs decreased in most GBD regions from 1990 to 2019, with Central Europe experiencing the largest decrease in mortality (AAPC: −4.81, 95% CI: −5.15 to −4.47) and East Asia the largest in disability (AAPC: −4.22, 95% CI: −4.57 to −3.87) (Table 1).
In terms of the level 2 hierarchical classification of injuries, only East Asia and South Asia showed increases in the ASRI of transport injuries, whereas Eastern Europe had the largest decrease from 1990 to 2019 (Fig. 1). Five of the 21 GBD regions (Caribbean, Australasia, Oceania, Southern Latin America, and Western Europe) showed increasing ASRI of unintentional injuries. In 2019, North Africa and Middle East, High-income Asia Pacific, and Southern Latin America had the highest increases in the ASRI of self-harm and interpersonal violence compared with 1990. The Middle East and North Africa (MENA) region showed the highest increases in death and burden from self-harm and interpersonal violence, and Southern Latin America showed the highest increases in burden resulting from transport injuries. A decline in death and DALYs attributable to unintentional injuries was observed in all GBD regions.
In 2019, it is worth noting that conflict and terrorism was a major cause of incidence, mortality, and DALYs among children and adolescents of all ages in low SDI regions (Fig. 2). In the same year, conflict and terrorism were the leading causes of incidence, mortality, and DALYs among children aged 0-4 years in the MENA region (Figure S10, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). Although road injuries were the major causes of mortality and disability worldwide (Figures S10-S13, Supplemental Digital Content 2, http://links.lww.com/JS9/B885), self-harm and interpersonal violence ranked among the top causes among adolescents aged 15-19 years in most regions examined in the GBD assessments (Figure S13, Supplemental Digital Content 2, http://links.lww.com/JS9/B885).
National burden of injury across childhood and adolescence
Figure 5 shows the current status and trends of injuries among children and adolescents in 204 countries and territories worldwide. New Zealand had the highest ASRI of injuries at 42 523.19 (95% UI: 34 429.31-52 270.26) in 2019, followed by Australia and Slovenia. Afghanistan had the highest ASRM at 78.09 (95% UI: 66.32-94.56) per 100 000 in 2019, followed by the Central African Republic and Yemen. Similarly, Afghanistan had the highest ASRD at 7122.94 (95% UI: 6107.83-8468.96) per 100 000 in 2019, followed by the Central African Republic and Haiti. Of the 204 countries and territories, 43 showed an increasing trend in ASRI, with Yemen showing the largest upward trend (AAPC: 2.96, 95% CI: 1.90-4.02); 64 showed a downward trend in ASRI, with Eritrea showing the most significant decline (AAPC: −7.34, 95% CI: −14.11 to −0.03); the ASRI remained stable in the rest. Only Dominica and Botswana showed an increasing trend in the ASRM and ASRD of injuries during the study period. Different countries had the highest ASRD of self-harm and interpersonal violence, transport injuries, and unintentional injuries in 2019 (Figures S14-S16, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). South Sudan, Pakistan, and Cuba had the largest upward trends in the ASRI of self-harm and interpersonal violence, transport injuries, and unintentional injuries from 1990 to 2019, respectively (Figures S14-S16, Supplemental Digital Content 2, http://links.lww.com/JS9/B885). South Sudan, Botswana, and the Northern Mariana Islands had the largest upward trends in ASRM attributable to self-harm and interpersonal violence, transport injuries, and unintentional injuries, respectively. Moreover, Yemen, Botswana, and the Northern Mariana Islands had the largest upward trends in ASRD attributable to self-harm and interpersonal violence, transport injuries, and unintentional injuries during the study period, respectively.
Discussion
Childhood and adolescent injuries are an important public health problem, yet there is limited evidence on the causes of these injuries worldwide. In the current study, we estimated the ASRI, ASRM, and ASRD of injuries in 2019 at 9006.18 per 100 000, 23.04 per 100 000, and 2020.19 per 100 000, respectively. Child and adolescent injury is one of the leading causes of child death and DALYs globally, with a large proportion occurring in low SDI countries. However, regions with a high SDI had a higher incidence of injuries, while mortality and DALYs in these regions were lower. Males had a higher burden of injuries than females. The incidence of injuries is higher in adolescents aged 15-19 years, but the mortality rate is higher among children under 5 years old. Self-harm and interpersonal violence is a major cause of injuries among adolescents aged 15-19 years. Although falls and road traffic injuries are the leading types of injury worldwide, self-harm, interpersonal violence, and conflict and terrorism are also major causes of injuries in some regions, particularly in low-income and middle-income countries. Of note, the burden of injuries caused by transport injuries and unintentional injuries has shown a downward trend in most countries, while the burden caused by self-harm and interpersonal violence has shown an upward trend in most countries.
Generally, the burden of injuries in children and adolescents remains an important public health challenge, especially that owing to self-harm and interpersonal violence, which is growing in magnitude globally. The burden of cause-specific injuries varied by age, sex, and geography, indicating the need for increased commitment and for new approaches such as better primary prevention, improved resource allocation for injury prevention, and interventions targeting children and adolescents. The findings from this study provide further insights into the prevention of self-harm, interpersonal violence, and conflict and terrorism-related injuries, alongside the prevention of falls and road traffic injuries, particularly in low- and low-middle-income countries.
A recent study using GBD 2019 tracked trends in transport and unintentional injury mortality and morbidity between 1990 and 2019 and suggested that transport and unintentional injuries continue to be substantial causes of harm in adolescents (10-24 years) [4]. Based on their findings, the unintentional injury burden was higher among males than females for most types of injury. Throughout adolescence, transport and unintentional injury fatality rates increased by age group. Moreover, global mortality and DALY rates due to transport and unintentional injury declined significantly from 1990 to 2019, which is consistent with our findings. However, there is no evidence on self-harm and interpersonal violence-related injuries among children and adolescents, and the burden of injuries among the population aged 0-9 years remains unaddressed worldwide. A previous report by the WHO indicated that road traffic injuries are the leading cause of death for adolescents aged 15-19 years [18]. Nevertheless, self-harm and interpersonal violence account for a higher percentage of injuries among adolescents aged 15-19 years in GBD 2019. Adolescent self-harm, as a key predictor of suicidal behavior, is a public health problem associated with adverse childhood experiences [19][20][21]. Establishing a systematic referral system to connect adolescents with psychologists can enhance the likelihood of identifying self-harm tendencies and offering the essential support to prevent self-harm among adolescents [19].
In response to severe child injuries, multiple international cooperation organizations such as the WHO have issued multiple calls [e.g. the Convention on the Rights of the Child and World Health Assembly resolutions (WHA56.24 and WHA57.10)], and global childhood and adolescent injuries have shown an overall downward trend over the past 30 years [22][23][24], which is consistent with previous research on spatio-temporal variations in transport and unintentional injuries [4]. Although these injury prevention projects have been effectively implemented in high-income countries, such measures have not yet been validated in most low- and middle-income countries. Research on the breadth of child and adolescent injuries and on the effectiveness of intervention projects in low-income and middle-income countries is lacking [8]. Compared with epidemiological survey information, there is a particular lack of understanding of the psychological and behavioral mechanisms underlying the occurrence of child and adolescent injuries.
Although our findings indicate a downward trend in the global burden of child and adolescent injuries, which could represent improvements in the health system response to the prevention and treatment of injury-related harms, it is noteworthy that child injuries are trending upward in some regions, especially in terms of self-harm and interpersonal violence. These countries are mainly located in Central Asia, the Middle East, and Africa, and include Afghanistan, Yemen, Libya, and Botswana. Many of these countries and regions have been plagued by constant wars, posing a health threat to their people. For instance, the Rwandan genocide of 1994 and the civil war had widespread effects on the population exposed to war and political violence [25]. The 1990-1991 Gulf War and the Yemeni Civil War (1994) may have affected physical health and behavioral and emotional functioning in war-exposed children [26]. This information helps to explain the underlying peak in the burden of childhood and adolescent injuries in 1994. There is a call to action to enhance investment in interventions aimed at safety, including systems-level approaches. Such action is crucial because, beyond fatal effects, injury-related morbidity in adolescents can have devastating effects through significant physical, emotional, and cognitive challenges [27]. Although conflict affects people of all ages, children are among the groups in society most vulnerable to conflict [28]. It is estimated that conflict affects the lives of millions of children worldwide [29]. Furthermore, other factors, such as low capacity for the tertiary care of injuries, also affect injury risk and outcomes in the low SDI quintile. In the context of the findings of our study, there is a call for global development assistance for the prevention of childhood and adolescent injury, particularly in lower-income countries.
In 2019, we found that injuries among children resulted in higher mortality and burden in low-income and middle-income countries, which is consistent with previous research findings [30][31][32][33]. Owing to the limited data available from low- and middle-income countries, it is likely that the disease burden due to injuries is underestimated among children and adolescents. There is a need to improve data collection for these countries. Moreover, it is worth noting that the incidence of injuries is higher in developed countries, while the mortality and burden are lower, which may be related to increasing urbanization leading to more vehicles on unsafe roads in developed countries, as well as to satisfactory access to, and quality of, health services. Strengthening public awareness and education about child injury, enhancing the primary prevention of injury, and improving the acceptability, approachability, availability, and efficacy of injury-related health initiatives are critical to reducing the burden of injuries.
A prior systematic review and meta-analysis indicated that the proportion of self-harm behaviors was similar in boys and girls (3.5% for boys vs. 3.0% for girls) [34]. However, our study identified globally higher mortality and morbidity among males for injuries caused by self-harm and interpersonal violence compared with females. This sex disparity in childhood and adolescent injury requires further research, as the findings should be critical for catalyzing policy change to further accelerate injury reduction. They also indicate the importance of considering sex differences when developing injury prevention interventions for children and adolescents, including country-specific cultural and sex norms that might influence risk.
Generally, adolescents aged 15-19 years have a higher injury incidence, but higher mortality and DALY rates are observed among children younger than 5 years. Of note, adolescents aged 15-19 years have higher incidence, mortality, and disability owing to self-harm and violence. Finding opportunities to break the cycle of self-harm and violence is crucial in childhood, as early interventions have the potential to reduce injury owing to self-harm and violence [35][36][37][38]. In the USA, it is estimated that ~180 000 non-fatal emergency department visits and 2000 pediatric deaths annually are owing to firearm and assault injuries related to interpersonal violence [39]. These concerns demand a societal contribution through psychological intervention and violence prevention, and those efforts must be intense, multipronged, and thoughtful to make them effective and available to children and adolescents injured by self-harm and interpersonal violence [40,41].
The findings of our study have valuable implications for policy on prospective injury control efforts among children and adolescents worldwide. Because the burden of injuries varies by sex, age, and type of injury, differentiated efforts should be made to reduce it. For example, there should be intensified efforts to enforce road safety laws and to implement available healthcare efficiently to prevent falls, particularly among young children. Furthermore, governments need to consider self-harm one of their public health priority problems and formulate new inclusive legislation, policies, guidelines, and national strategies that address self-harm to reduce the observed burden.
Even though a previous study examined the burden of traffic and unintentional injuries among adolescents, the present study is distinct on this critical topic, adding more epidemiological information on the burden of injuries due to self-harm and interpersonal violence among children and adolescents. However, there were some limitations in the current study. First, the common limitations of the GBD dataset apply, such as limited data in low-income and middle-income countries. Although the GBD study framework estimates the epidemiological data in these regions by modelling, our findings should be interpreted with caution owing to this limitation. Second, we explored only level 3 causes of injury, without level 4 data, owing to data sparsity. Third, we did not examine the risk factors for childhood and adolescent injuries, because injuries are generally associated with socio-economic factors; of note, risk factors are classified into behavioral, metabolic, and environmental categories in the GBD dataset. Finally, we mapped the burden of injury using GBD Study data only from before the COVID-19 pandemic. Future studies should explore the impact of the COVID-19 lockdowns on rates of childhood and adolescent injury, particularly by self-harm and interpersonal violence.
Conclusion
As shown by the overall and cause-level estimates of injuries among children and adolescents presented in this study, injuries remain substantial causes of health burden worldwide in terms of morbidity, mortality, and disability. Although downward trends were observed in global ASRI, ASRM, and ASRD for injury, these injuries varied by age, sex, location, and cause. In high SDI countries, the higher incidence necessitates a commitment to injury prevention for children and adolescents. Furthermore, a higher likelihood of treatment-seeking and better treatment access should be promoted as important approaches to reduce mortality and disability, particularly in low- and low-middle-income countries. Increasingly, the relative burden of childhood and adolescent injuries attributable to self-harm and interpersonal violence is a major public health concern among older adolescents and among individuals living in the Middle East, Central Asia, and African countries affected by armed conflict or war. Our findings add more information and provide guidance for developing childhood and adolescent injury prevention programs. Global decision-makers and healthcare promoters must prioritize investment in conducting more effective interventions to reduce opportunities for childhood injury and to improve the healthcare system for decreasing pediatric injury owing to different causes.
Figure 1. Age-standardised incidence (A, D), mortality (B, E), and DALYs (C, F) rates of level 2 causes of injuries among children and adolescents in 1990 (A-C) and 2019 (D-F) across GBD regions.
Figure 2. Ranking of level 3 causes of injuries by age-standardised rates of incidence (A), mortality (B), and DALYs (C) among children and adolescents, by region, in 2019.
Figure 3. Percentages of incidence (A, D), mortality (B, E), and DALYs (C, F) for level 2 causes of injuries among children and adolescents in 1990 (A-C) and 2019 (D-F) across GBD regions. DALYs, disability-adjusted life years.
Figure 4. Pyramid plot of incidence (A), mortality (B), and DALYs (C) rates by sex, age, and level 3 cause of injuries among children and adolescents between 1990 and 2019.
Figure 5. Age-standardised rates of incidence (A), mortality (B), and DALYs (C) of injuries among children and adolescents in 2019, and AAPC of incidence (D), mortality (E), and DALYs (F) of injuries among children and adolescents from 1990 to 2019 in 204 countries and territories.
Table 1. Incidence, mortality, and DALYs of injuries among children and adolescents in 1990 and 2019, and change from 1990 to 2019 by region. | 2024-02-14T06:18:32.767Z | 2024-02-12T00:00:00.000 | {
"year": 2024,
"sha1": "4f8b36ed6b84e4a063ceb095c9572d3e7657bb29",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/international-journal-of-surgery/abstract/9900/global_burden_of_all_cause_specific_injuries_among.1054.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc1a2043818ee41f6e71ffc123530a13a42a194c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237939083 | pes2o/s2orc | v3-fos-license | Automatic Monitoring of Relevant Behaviors for Crustacean Production in Aquaculture: A Review
Simple Summary Automatic behavior monitoring, also called automated analytics or automated reporting, is the ability of an analytics platform to auto-detect relevant insights—anomalies, trends, patterns—and deliver them to users in real time, without users having to manually explore their data to find the answers they need. An analytics platform with automated behavior monitoring uses algorithms to auto-analyze datasets to search for notable changes in data. It then generates alerts at fixed intervals or triggers (thresholds), and delivers the findings to each user, ready-made. In-aquaculture scoring of behavioral indicators of aquatic animal welfare is challenging, but the increasing availability of low-cost technology now makes the automated monitoring of behavior feasible. Abstract Crustacean farming is a fast-growing sector and has contributed to improving incomes. Many studies have focused on how to improve crustacean production, and information about crustacean behavior is important in this respect. Manual methods of detecting crustacean behavior are usually ineffective, time-consuming, and imprecise. Therefore, automatic monitoring of growth status based on behavioral changes has gained more attention, drawing on acoustic technology, machine vision, and sensors. This article reviews the development of these automatic behavior monitoring methods over the past three decades and summarizes their domains of application, as well as their advantages and disadvantages. Furthermore, the challenges posed by individual variability and the aquaculture environment for future research on the behavior of crustaceans are also highlighted. Studies show that feeding behavior, movement rhythms, and reproductive behavior are the three most important behaviors of crustaceans, and the application of information technologies such as advanced machine vision is of great significance for accelerating the development of new means and techniques for more effective automatic monitoring. However, accuracy and intelligence still need to be improved to meet intensive aquaculture requirements. Our purpose is to provide researchers and practitioners with a better understanding of the state of the art of automatic monitoring of crustacean behaviors, in support of the implementation of smart crustacean farming applications.
Introduction
Aquaculture has become one of the largest commercial and economically important industries in recent years [1]. Lobsters, crayfish, crabs, prawns, and shrimp are the most valuable crustacean species groups, with significant production. Shrimp and prawn catches recorded new highs in 2017 and 2018 at over 336,000 tons [2]. In aquaculture, most modern information technologies are applied to production management, and reliable monitoring of crustacean behavior is very important for aquaculture industries because it provides a starting point for welfare assessment [3,4]. Traditional crustacean behavior monitoring is mostly based on manual measurement. However, manual monitoring is usually laborious, time-consuming, and ineffective, which limits its economic benefits [5,6].
Modern-day crustacean aquaculture originated in Japan in the 1930s [7], and automatic monitoring methods were developed in the 1970s and expanded rapidly around the world [8]. Automatic behavior monitoring in aquaculture is defined as the application of process engineering principles and techniques to precision fishery farming to automatically monitor and recognize animal behavior [9,10]. Until now, scholars and researchers have developed various automatic methods to monitor crustacean behaviors in laboratories or ponds, including acoustic technology [11], machine vision [12], and movement sensors [13,14]. Compared with environmental parameter detection systems, automatic behavior monitoring is an a posteriori indicator, but it is very meaningful for welfare evaluation [15,16]. Monitoring feeding, movement, home range, and activity rhythms can capture biological behavior information, track animal health in real time, and provide early warning of disease [17]. Therefore, real-time monitoring of individual behavior is important for improving crustacean production, and there is an urgent need for farmers to monitor behavior in real time, which allows fishermen to take action in the initial stages of welfare or disease problems to meet intensive aquaculture requirements [18].
This paper aims to summarize the characteristics of different crustacean behaviors and various automatic aquaculture behavior monitoring methods that have been used over the past three decades. In addition, this article also discusses and summarizes the advantages and disadvantages of each method. Finally, we present potential applications and new techniques for the automatic monitoring of crustacean behavior and the major obstacles that need to be overcome. This review could provide a valuable reference to guide future research into intelligent technologies for behavior monitoring and help practitioners to assess crustacean welfare.
Important Behaviors in Crustacean Aquaculture
Modern technology offers the possibility of real-time shrimp behavior monitoring in aquaculture as a fast, automatic, and repeatable method, and it has become an active research topic [19]. In general, when shrimp are in different physiological states, their behavioral profile will change, including posture, sound frequency, and activity rhythms [20][21][22]. Figure 1 shows the number of papers related to different methods and monitored behaviors. The most popular methods are acoustic technology, machine vision, and movement sensors. Notably, feeding behavior, movement rhythms, and reproductive behavior are the main focus of automated monitoring methods. We will focus on understanding the characteristics and influencing factors of these three behaviors, which can provide a basis for the development of various automatic monitoring methods.
Feeding Behavior
Feeding is the primary factor determining the efficiency and cost of aquafeed, which may represent a considerable proportion of the crustacean farming budget [23]. Crustaceans use visual, mechanoreceptor, and chemoreceptor systems to detect the location of food sources, and when food is available, crustaceans change their sound signatures and movements [24]. Feeding behavior can reflect many aspects of an individual organism. The survival rate and molting cycle of red swamp crayfish are associated with different feeding rates [25]. Santos et al. revealed that white shrimp display nocturnal feeding and locomotor rhythms [26]. Thus far, scholars have only used computer vision and passive acoustics to recognize feeding behavior. In the future, research can focus on variables that reflect feeding behavior, such as activity rhythms, posture, and position, and use more types of sensors to indirectly monitor feeding behavior. Feeding tables and schedules are a common and accurate feeding method, but automated feeding behavior recognition also has great potential in determining when to start and stop feeding in order to improve the feed conversion ratio and reduce costs [27].
Movement Rhythms
In addition to feeding behavior, movement also plays a major role in determining the structure of populations and communities, as well as the evolution and diversity of life [28,29]. Movement rhythms are defined as the recurrence of any event within a biological system at more or less regular intervals. Crustacean movements can be categorized spatially as homing, nomadic, or migratory, and temporally as daily, ontogenetic, or seasonal [30,31]. Movement rhythms are not in themselves the core of assessing performance under aquaculture conditions, but monitoring them during the fishing season is of great help for choosing the correct location for fishing. Studies designed to understand crustacean behaviors have used techniques such as tag-recapture [32,33], visual tracking [34,35], acoustic telemetry [36,37], or a combination of these techniques [38]. However, unlike land organisms, marine species present technical difficulties when their movement is monitored over prolonged periods of time, due to the presence of saline water [39,40].
Reproductive Behavior
The reproductive behavior of animals is a significant manifestation of their life, and mating is a key step in reproduction [41,42]. The mating process includes approach, touch, mount, turn, rolling, and thrust [43], and lasts between 28 s and 6.40 min [44,45]. Obvious external action characteristics and behavior duration are the basis for monitoring using automated methods. Monitoring reproductive behavior can accurately determine when mating occurs and guide fishermen to perform artificial insemination, thereby increasing reproductive yield. In addition, by monitoring whether reproductive behavior is normal, disease monitoring and prevention can also be carried out effectively. Therefore, analysis of reproductive behavior can effectively improve crustacean production and larval quality. However, many factors influence reproduction, which can be broadly divided into temperature, photoperiod, and season effects [46]. In addition to environmental effects, individual-level factors also affect reproduction, such as body size, mating history, investment in offspring, fitness, and dominance status [47].
We can only effectively analyze behaviors that have been monitored by automated methods. There are also behaviors, such as struggling, that reflect changes within the population, but understanding of these behaviors is limited by the inefficiency of manual observation. Under intensive cultivation conditions, the accuracy and precision of the monitoring results depend on multiple factors spanning individuals, the environment, water quality, and device model [48]. Based on the studies discussed above, we can appreciate that crustacean behaviors are complex and difficult to monitor. Discussed below are the current technical shortcomings and future development directions, which highlight a pressing need for new ideas to further improve intelligent monitoring for farmers and information technicians.
Behavior Monitoring Methods Based on Acoustic Technology
Autonomous acoustic monitoring is a technique that uses sound waves to remotely measure information. Acoustic technology has been widely used in species identification [49], biomass estimation [50], and behavior monitoring without causing stress to crustaceans [51]. For underwater monitoring, acoustic technology has key advantages over light waves and electromagnetic waves because of its long propagation distances [16]; another advantage is that its measurement results are less affected by water turbidity and underwater light [52]. According to the data acquisition method, acoustic technology can be divided into passive acoustics and active acoustics. Active acoustics includes sonar, echo, and acoustic telemetry. Sonar and echo technology are used mainly to measure the density of crustaceans, while acoustic telemetry is more commonly used to monitor crustacean behaviors.
Passive Acoustics
According to Howe et al. [11], passive acoustics is the action of listening for sounds, often at specific frequencies or for purposes of specific analyses. The basic technique involves using one or more hydrophones or appropriate acoustic processing systems to detect natural vocalizations made by underwater creatures. However, the frequency range of these sounds is very broad. Therefore, the hydrophones of a passive acoustic system placed into farm ponds are equipped with an amplifier attached to a digital acquisition unit [53]. The digital acquisition unit is attached to a personal computer, which is used to provide valuable information, a process often undertaken by complex and specific algorithms [54]. Many investigations have indicated that when certain behaviors occur, crustaceans emit different sound frequencies, including feeding [55], mating [56], carapace vibrations [57], snaps [58], and stick-and-slip friction [59][60][61]. With such a variety of sound production mechanisms, the characteristics of the sounds produced by crustaceans are diverse [62,63]. On this theoretical basis, experts can identify crustacean behaviors via long-term acoustic monitoring of sounds.
The mechanisms and spectral characteristics of crustacean behaviors are heterogeneous. In terms of feeding sounds, the physical production mechanism is that shrimp use mandibles and maxillae to tear feed pellets into pieces before they enter the oral cavity [63]. Some scholars have used the sound spectral features of feeding as an indication of pellet consumption [64]. These experimental results show that the correlation between sound and feeding behavior can reach more than 95%. Although passive acoustic technology can provide guidance for measuring the relative intensity of feeding activity, it is unclear how accurately the quantity of consumed pellets can be estimated from feeding sounds. In terms of activity rhythms, calibrated hydrophones can be used to measure the relationship between crustacean sound signals and intraspecific interactions (encounter/approach, fighting, and successive tail flips), circadian rhythm [65], and seasonal rhythm [66]. A p-value below 0.05 for the association between activity rhythm and sound supports the reliability of using passive acoustic technology to monitor crustacean activity rhythms. In addition to monitoring activity rhythms, Kikuchi et al. also found that the frequency of stridulating sounds from Japanese spiny lobsters tended to increase at night with the degree of tidal change, and that the animals are more active during large tidal changes [67]. Bohnenstiehl et al. showed that sound pressure levels were positively correlated with snap rate (r = 0.71−0.92) and varied seasonally by 15 decibels in the 1.5–20 kHz range [21]. The activity rhythm and snap information measured by passive acoustic technology can provide guidance in determining crustacean distribution and the optimal harvest time. Commonly cited advantages of passive acoustics include the ability to rapidly and noninvasively sample large volumes of water containing crustaceans. However, other impulsive sounds may have characteristics similar to those of a specified behavior and could potentially be misclassified, which is the main cause of error [53]. Therefore, the key challenges are improvements in automated signal detection and classification; signal detection methods based on machine learning can extract the time-frequency characteristics of the sound and filter out the interference of noise. In addition, post-processing and analysis of large datasets are current difficulties [68,69]. For passive acoustics to be better applied to crustacean behavioral monitoring, specialists can use big data technology to achieve intelligent data processing and analysis and develop user-friendly software that can be used by fishermen and ecologists.
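As an illustration of the detection step discussed above, the sketch below flags candidate feeding events as windows of elevated band-limited energy in a hydrophone recording. The band edges, window length, and threshold factor are illustrative assumptions, not values taken from the cited studies.

```python
# Minimal sketch of band-energy event detection for feeding clicks in a
# hydrophone recording; all parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_feeding_events(x, fs, band=(3000.0, 12000.0), win_s=0.05, k=5.0):
    """Return start samples of windows whose band-limited energy exceeds
    k times the median window energy (a simple noise-adaptive threshold)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)                      # band-pass the recording
    win = int(win_s * fs)
    n_win = len(y) // win
    e = np.array([np.sum(y[i * win:(i + 1) * win] ** 2) for i in range(n_win)])
    thr = k * np.median(e)                       # robust baseline from quiet windows
    return np.flatnonzero(e > thr) * win         # start sample of each detection
```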
Acoustic Telemetry
Acoustic telemetry is a technology for transferring information underwater using sound; it was first used in the early 1970s and has been continuously improved over time [70,71]. Figure 2 is a schematic diagram of acoustic telemetry. An acoustic telemetry system designed specifically for aquaculture includes an acoustic receiver with hydrophones, radio smart transmitters, tags, and a base station with an antenna and computer [29]. Hydrophones are usually mounted on surface buoys, which listen for the tagged animals [72], and an acoustic transmitter sends out information, e.g., an ID code, as short tone bursts, which are picked up, decoded, and timestamped by an acoustic receiver [73]. Finally, the radio sends the tag information and a time stamp to the base station. Commonly, the base station analyzes the arrival times of the different signals to determine the location of the underwater animals; this information covers the presence, movement, and behaviors of the tagged animal [74]. Therefore, this method is effective for estimating daily home ranges, core areas of activity [75], nomadic movements [76,77], activity patterns [78], and distance traveled, as well as behaviors [79] such as feeding, molting, and reproduction. It is worth noting that this technology cannot accurately gauge local movements. As the most critical step of monitoring crustacean behavior, the individual data concerning underwater animals collected by acoustic telemetry are very important for fishermen and researchers, and the monitored range spans small ponds to large lakes and coastal areas [73]. Compared with radio and PIT-tag telemetry, acoustic telemetry is more effective for tracking aquatic organisms in both estuaries and oceans [71]. For different crustacean behaviors, acoustic telemetry monitoring systems also differ subtly in data processing and analysis. Due to the maturity and completeness of the equipment, many scholars have used Canadian VEMCO-brand (Halifax, NS, Canada) acoustic telemetry systems with tags, which are among the most widely used systems for obtaining data on crustacean positions. The position information can be directly quantified into diurnal activity rhythms [29,[80][81][82], seasonal movements [83], home range [31,75,84], nomadic behavior [72,76], and migratory patterns [85][86][87]. Compared with passive acoustic monitoring, acoustic telemetry is more effective in determining the activity pattern of individual crustaceans. In addition to activity rhythms, VEMCO VR2 systems have been used to reveal that female lobsters' reproductive migration occurred between 5 June and 25 August; this result provides a reference for understanding the reproductive behavior of the lobster [31,79]. Information such as this can guide fishermen to carry out artificial breeding in time or to create suitable natural mating environments to improve breeding production.
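To make the positioning step concrete, the sketch below shows one common way arrival times at several receivers can be converted into a 2D tag position: a least-squares fit on time differences of arrival, so that the unknown emission time cancels out. The receiver layout, sound speed, and solver choice are illustrative assumptions, not details of any particular commercial system.

```python
# Minimal sketch of 2D tag localization from time differences of arrival
# (TDOA) at several hydrophone receivers; all values are assumptions.
import numpy as np
from scipy.optimize import least_squares

C = 1500.0  # nominal speed of sound in water, m/s

def locate_tag(rx_pos, t_arrival, x0=np.zeros(2)):
    """rx_pos: (n, 2) receiver coordinates; t_arrival: (n,) arrival times.
    Uses TDOA relative to receiver 0 so the emission time drops out."""
    def residuals(p):
        d = np.linalg.norm(rx_pos - p, axis=1)    # range from tag to each receiver
        tdoa_meas = t_arrival - t_arrival[0]      # measured TDOA vs. receiver 0
        tdoa_pred = (d - d[0]) / C                # predicted TDOA for position p
        return tdoa_pred[1:] - tdoa_meas[1:]
    return least_squares(residuals, x0).x         # estimated (x, y) position
```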
In summary, all the above studies show that acoustic telemetry can monitor aquatic animals in a free-living state, with the advantage of providing location. The detailed information concerning crustacean behaviors derived from acoustic technology studies is listed in Table 1. Of course, acoustic telemetry also faces some difficulties and challenges. A common concern is the potentially adverse effects on animal survival and behaviors: to obtain behavior data, the animal to be monitored must be tagged. In addition, telemetry projects are often relatively expensive; although the acoustic receivers and base station can be used repeatedly, tags are usually considered expendable [88]. Compared with passive acoustics, fewer crustaceans tend to be monitored and tracked in acoustic telemetry contexts. The data resolution has tended to be very high, but some complex behaviors, such as subtle aggression, courtship, and actions that are transmitted by chemical signals, are hard to identify by acoustic telemetry. Another technical difficulty is quantifying spatial location. This represents an important focus area for future research and development. By combining the Internet of Things, artificial intelligence, and cloud computing, it may become possible to identify the spatial position of crustacean movements and support intelligent optimization and decision-making control functions in smart aquaculture.
Behavior Monitoring Based on Machine Vision
Underwater machine vision technology has been used since the 1950s to study the behavior, distribution, and abundance of marine and freshwater organisms [89]. Applications of machine vision have increased considerably in two major aquaculture domains, namely: (1) pre-harvesting and growth of underwater animals and (2) post-harvesting [90]. This technology can provide an effective means for the analysis of individual features [91,92], species classification [93], vocalizations [94], and behavior recognition within complex datasets at scales and resolutions not previously possible [95,96]. Machine vision technology can help us solve important problems concerning ecology, social structure, collective behavior, communication, and welfare [97]. It can also save the initial raw information for potential re-analysis, and record both visible benthic organisms and other biological activity [98]. Machine vision methods can quantitatively analyze behavior and greatly increase the efficiency, repeatability, and accuracy of image review, which is a prominent advantage compared to acoustic technology. The typical equipment includes an industrial camera, a light source, an acquisition card, and an image processor. Based on the different wavelengths utilized by cameras, the light source can be divided into visible and infrared. The system structure and monitoring flow chart of a system utilizing visible light as the light source are shown in Figure 3.
Machine Vision Based on Visible Light
Machine vision technology based on visible light is more widely used for crustacean behavior monitoring than that based on other types of light sources. Extant studies on the monitoring of shrimp behavior can be divided into two categories. Direct methods use the measured videos or images to obtain the features, trajectory, angle, velocity, and range of crustacean activities, as well as other parameters. With indirect methods, crustacean behavior is monitored from information on uneaten pellets recorded by a camera.
Direct Behavior Monitoring
Studies have shown that crustaceans exhibit particular behaviors in different physiological states [99]. According to the specifics of the experimental environment and the characteristics of the actions of interest, image processing systems usually proceed through applicable algorithms for image preprocessing, image segmentation, and feature extraction. There are three major branches of image preprocessing, namely image reconstruction, image restoration, and image enhancement [100], involving methods such as linear transformation, histogram equalization, filtering, and frequency-domain enhancement [101]. Especially for aquatic creatures such as crustaceans, which can easily cause water turbidity, image preprocessing is commonly applied to improve the quality of turbid images. Due to the temporal and spatial characteristics of video images, the main idea of moving target detection methods is to extract the changed regions from the background of the video image [102,103]. In recent years, more and more methods have been proposed to provide accurate and consistent segmentation for moving target extraction; commonly used methods include threshold segmentation, region segmentation, and edge detection [104,105]. Analysis and extraction of target features is the final step of behavior identification for moving targets, involving color features, texture features, geometric features, and motion characteristics.
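The following sketch strings these three stages together for a single video frame (enhancement, background-based segmentation, and simple geometric feature extraction) using OpenCV; the specific operators and all parameter values are illustrative assumptions rather than a prescription from the cited studies.

```python
# Minimal sketch of the preprocess -> segment -> extract-features pipeline;
# operators and parameters are illustrative assumptions.
import cv2

bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

def extract_moving_targets(frame, min_area=200):
    """Return bounding boxes and centroids of moving regions in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                    # enhancement for turbid images
    mask = bg.apply(gray)                            # segment movers from background
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:           # drop small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            targets.append(((x, y, w, h), (x + w // 2, y + h // 2)))
    return targets
```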
Crustacean behaviors are related to their size, shape, speed, and color. Appearance-based detection methods can identify static visual appearance features that motion-based technologies lack, and they perform well in stationary scenes where crustaceans exhibit minimal motion. Feature extraction from texture is a basic approach for identifying behavior; e.g., texture features were used to extract the patterns of bay lobsters' exoskeletons to automatically classify the molting stage with a maximum accuracy of 98.61% [106]. Oishi et al. successfully detected shrimp mating motions using cubic higher-order local auto-correlation (CHLAC) features in conjunction with a subspace method, with a standard deviation of 5.8 ± 1.3 [107]. Although posture analysis based on skeleton characteristics is often used in agriculture for large animals such as cattle and pigs, Yan and Alfredsen also extracted the lobster skeleton to quantify the posture of the lobster; the migration of this technique will provide more technical support for the application of skeleton feature extraction methods in aquaculture [18]. Machine vision monitoring methods are also widely used in the measurement of crustacean movement rhythms. The working principle is that the visual monitoring system obtains the pixel coordinates of the crustacean according to its position in the image. The computer then converts the pixel unit into the actual distance (mm) according to the x, y Cartesian coordinates recorded by the tracking software, and the researchers calculate the Euclidean distance between consecutive coordinates to obtain the total distance traveled by each shrimp [12]. Using the methods mentioned above, Aguzzi et al. found that displacement measurements of lobsters reveal diurnal activity rhythms and burrow-related behavior [93]. Crustaceans have clearly delineated developmental life stages and accumulate environmental toxins that change their behavior [108]. Therefore, some scholars have been able to detect changes by tracking and analyzing the locomotion behavior of shrimp exposed to toxic chemicals in the environment, especially their movement speed, which yielded a p-value less than 0.08 [109,110]. In addition to image processing algorithms, mature video behavior monitoring software platforms have been developed in recent years: commercial software uses graphical and mathematical methods to describe motion trajectories [111,112]; real-time monitoring systems can analyze multiple behaviors at the same time [34,35]; and underwater imaging systems have been independently developed by researchers [113]. Real-time monitoring systems can be operated in aquaculture over the long term to cover the entire crustacean life cycle, from nursery to fishing, identifying the growth status of crustaceans in real time and providing early warnings and alarms for abnormal behaviors in a way that minimizes labor input. However, the high price is the main issue restricting their wide use in aquaculture.
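As a concrete instance of the trajectory computation just described, the sketch below scales tracked pixel coordinates to millimetres and sums the per-frame Euclidean steps; the millimetre-per-pixel calibration factor is an assumed value.

```python
# Minimal sketch of total distance traveled from a tracked pixel trajectory;
# the mm-per-pixel factor is an assumed calibration value.
import numpy as np

def total_distance_mm(track_px, mm_per_px=0.5):
    """track_px: (n, 2) array of (x, y) pixel positions over time."""
    xy = np.asarray(track_px, dtype=float) * mm_per_px   # pixels -> millimetres
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
    return steps.sum()                                   # total distance in mm
```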
Indirect Behavior Monitoring
In addition, other information can be used to indirectly quantify crustacean behavior. Uneaten pellets and displacement represent important information for analyzing, identifying, and monitoring shrimp behavior. Therefore, such methods can be used to quantify particular behaviors that are difficult to detect [114].
Detection of uneaten pellets is another way of using machine vision to monitor feeding behavior. In this approach, the area and other parameters of the remaining food pellets are used to indirectly monitor feeding behavior [113]. Those authors also measured organic matter residues in pond sediments to estimate feeding behavior at nighttime. The remaining pellets can be used as an indicator of feeding intensity, thereby saving feed and effectively reducing pollution in culture ponds, but the accuracy of the results cannot be quantified [115]. Although indirect information can be used to monitor behavior, it is less accurate and more error-prone than direct monitoring. This information can also be stored in a big data database, where it can help information technology staff build expert farming systems. Long-term underwater imaging and expert systems can also help with smart feeding decisions, smart sewage decisions, and abnormal status warnings. In summary, machine vision technology has improved the automation of tasks such as classifying images from image data. It can be a highly reliable and accurate method for objectively measuring activity levels in aquaculture with low labor and time consumption. However, regardless of whether direct or indirect measurements are used, the technology is still at the experimental stage, and large-scale application still needs to overcome many practical problems. Crustacean activities are mainly concentrated at night; the water can cause reflections, and the dark surface of the shrimp directly reduces the clarity of the acquired video images. In addition to these practical problems, machine vision technology also faces challenges in monitoring shrimp movements: the complexity of the monitoring environment and the uncertainty of the monitored objects are the biggest factors that interfere with shrimp behavior monitoring. There is an urgent need to improve techniques for extracting moving targets from underwater video images, and software to analyze more specific behaviors will become more important in the future.
Machine Vision Based on Invisible Light
Invisible light is electromagnetic radiation that cannot be seen by humans, with wavelengths greater than 760 nm or less than 380 nm. The principle applied in aquaculture is based on the absorption of invisible light in water, resulting in variable brightness that is not affected by visible light intensity and can yield good imaging results in dark places such as inside animal shelters [116,117]. Most crustacean species are nocturnal, remaining inside shelters during the day and actively foraging outside at night [118]. Therefore, invisible light technology is more suitable than visible light technology for capturing dim images of shrimp at night. Due to its low cost and low dependence on visible light intensity, it offers a unique ability to fully understand the behaviors and rhythms of shrimp in poorly lit aquaculture environments.
Invisible light technology provides a new method for accurately identifying crustacean behavior and mainly includes infrared imaging and X-ray imaging. The advantages of using infrared imaging technology to monitor crustacean behavior include the fact that crustacean eyes are not sensitive to the infrared light used in the system and that the scattering of infrared light in water does not tend to present a problem [117]. However, the major disadvantage of infrared light is that the attenuation coefficient and absorption of light in water increase dramatically as the light wavelength increases into the visible red region and then increase exponentially in the infrared region [119]. Hesse et al. used infrared photoelectric sensors to collect infrared images and study the different reactions of lobsters when different predators approach [120]. Ahvenharju and Ruohonen used ballotini glass beads to label diets for X-ray imaging, and the number of ingested glass beads in the digestive tract was counted from the X-ray images [121]. The accuracy rate was 92.8 ± 8.6%, and the results confirm that an X-radiography technique makes it possible to measure the individual food consumption of freshwater crayfish juveniles reared communally. Invisible light technology is not affected by the lighting of the aquaculture environment, which makes it possible to monitor crustaceans at night.
Invisible light has been used for monitoring crustacean behavior in laboratories and ponds. Compared with visible light systems, invisible light imaging technology requires no calibration and is more suitable for measurements in turbid water with complex light conditions. In addition to behavior monitoring, it has also been used in aquaculture biomass estimation, 2D and 3D tracking, positioning of crustacean stocks, and various behavioral analyses [122]. However, further research is needed before such technology can be applied to commercial aquaculture to obtain real-time data on crustacean behavior while minimizing the interference caused by absorption, refraction, and scattering. Therefore, there is a need to improve the ability of invisible light technology to monitor crustaceans under high illumination levels or at longer distances. For both visible and invisible light methods of monitoring crustacean movement, improving image feature extraction technology and solving the problem that machine vision technology cannot yet be applied in high-density, large-population breeding contexts are still challenges to be overcome.
Overall, monitoring crustacean behavior through images with machine vision technology is currently an important application and research focus for realizing precision aquaculture, and the detailed information concerning machine vision technology for crustacean behavior monitoring is listed in Table 2. However, most research is still at the laboratory stage, and this method is not suitable for detecting some inconspicuous behaviors; behavioral transmission between crustaceans based on chemical signals also cannot be detected. Therefore, it is necessary to develop a high-resolution monitoring system capable of local amplification. In summary, future research and development directions can be demarcated as follows: (1) The focus on spatiotemporal and spatial sequences will continue to improve the accuracy and robustness of machine vision recognition of crustacean behavior. It is expected that algorithms similar to two-stream networks and 3D convolutional networks, which can account for spatiotemporal sequences, will be developed to achieve higher-performance shrimp behavior recognition methods. (2) Embedded vision systems have the characteristics of compact structure, fast processing speed, and low cost; this is an important direction for the development of machine vision systems in the future, and it makes the large-scale popularization of machine vision systems in aquaculture possible. (3) Machine vision systems that incorporate multiple technologies are also current and future research hotspots. For example, the fusion of a machine vision system and the Beidou navigation system can achieve high precision and low cost in the context of farmland navigation systems, and multiple video systems can collect more behavior information from crustaceans. The combined use of bio-floc technology and machine vision technology can make it possible to identify individual animals in intensive high-density environments.
Electrosensors
In addition to acoustic and optical technology, other sensors based on different parameters have been leveraged to identify and monitor crustacean behavior [125]. More broadly, sensors are often used for farming purposes. In recent years, more and more sensors suitable for crustacean behavior monitoring have been developed, and some equipment has been proposed for monitoring in aquaculture.
Accelerometer
Accelerometers are electromechanical devices designed to measure acceleration forces caused by gravity and the moving or vibrating activity of a subject. In particular, three-axial accelerometers can measure the motion, vibration, and displacement of underwater animals in the X, Y, and Z directions [126]. When crustaceans undergo behavioral changes, these are usually accompanied by changes in movement speed or acceleration. Therefore, the high correlation between accelerometer data and the movement of free-living individuals in different behavioral contexts is the key to identifying and monitoring different behavior states [127]. The development of accelerometer data loggers has made it possible to monitor daily patterns of behavior in many crustacean species, mainly lobsters, including slipper, spiny, and clawed lobsters.
Accelerometers are very effective in monitoring the activity rhythm of crustaceans, and this is currently one of the main application areas of acceleration sensors. The collected accelerometer outputs can be converted into distances moved per unit time, and scholars can use this method to estimate the distance moved by shrimp over a period of time to a statistically significant extent (p < 0.005) [125,128]. However, the correlation between movement and accelerometer output is uncertain. Jury et al. obtained an r² value of 0.898 between video activity and accelerometer activity, where activity was defined as forward, backward, or sideways locomotion (>2 cm) for each lobster [128]. Goldstein et al. obtained r² values between video distance and acceleration of 0.53 and 0.63 [125]. These results indicate that the accelerometer can only estimate activity and is still not accurate enough for more demanding distance calculations. The system structure for monitoring crustacean behavior with sensors is shown in Figure 4. The acceleration sensor usually needs to be fixed on the crustacean, so it applies pressure and thus cannot be used on small crustaceans (e.g., krill and fairy shrimp); this is also one of the challenges faced by intrusive automated monitoring methods. In addition to pure activity assessment, some researchers have constructed models based on acceleration data and estimated the physiological status and welfare of crustaceans [13,14]. Thus, acceleration sensors can effectively be used to monitor the relative activity of lobsters over long periods in the laboratory and field, and this technology provides important reference information for the development of intelligent decision systems. The accelerometer is similar to acoustic technology in that high turbidity and changing light levels do not affect the recording, and the sensor systems are readily adaptable to the field. The distance traveled and rate of movement can be estimated by calibrating accelerometer outputs against actual movements. Therefore, once appropriately calibrated, accelerometry appears to be a suitable method for assessing movement patterns and the distance traveled by animals above a certain size.
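As an illustration of how raw three-axis samples can be condensed into an activity index, the sketch below computes a windowed vectorial dynamic body acceleration (VeDBA), a common activity proxy in the biologging literature; the window length and the simple gravity-removal scheme are illustrative assumptions.

```python
# Minimal sketch of a per-window activity index (VeDBA-style) from raw
# three-axis accelerometer data; window length and gravity-removal scheme
# are illustrative assumptions.
import numpy as np

def vedba(acc, fs, win_s=2.0):
    """acc: (n, 3) raw acceleration in g. Returns one activity value per window."""
    win = int(win_s * fs)
    out = []
    for i in range(0, len(acc) - win + 1, win):
        seg = acc[i:i + win]
        static = seg.mean(axis=0)            # gravity estimate over the window
        dyn = seg - static                   # dynamic component of acceleration
        out.append(np.linalg.norm(dyn, axis=1).mean())
    return np.array(out)
```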
Electromyography
Electromyography (EMG) is an electrodiagnostic automated technique for evaluating and recording the electrical activity produced by skeletal muscles. An electromyograph detects the electric potential generated by muscle cells when these cells are electrically or neurologically activated. The structure of the system using EMG to monitor shrimp behavior is shown in Figure 5. The signal collected by EMG is converted and transmitted to computers, and the signals can be analyzed to detect physiological abnormalities, activation level, recruitment order, and the biomechanics of crustacean movement [129,130]. Therefore, these electrical signals can be used to design automated growth monitoring systems and develop intelligent decision-making and control systems for aquaculture.
According to some studies, crustacean behavior causes muscle cells to generate electrical potentials [131,132], and EMG has been used to monitor the feeding behavior of individual crustaceans. Gripping action is a key element of the feeding behavior of crustaceans, especially lobsters. Therefore, the recorded electromyogram of the lobster claw muscle can characterize feeding behavior to an extent that is statistically significant at the 0.05 level [133,134]. Monitoring feeding behavior and estimating its intensity based on chemical and biological EMG methods is reliable for gaining a deeper understanding of crustacean feeding status, and the obtained data can be used to establish accurate growth models. The principle for studying lobster movement patterns is similar to that for monitoring feeding behavior; the difference is that chronic electrodes are implanted in the shrimp's legs instead of in the claws [129]. More importantly, the EMG pattern can be analyzed to determine whether a behavior is reflexive or spontaneous [130,135], which solves the problem that machine vision and acoustics cannot monitor some behaviors that are not obvious.
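A minimal sketch of how a raw claw-muscle EMG trace could be turned into discrete grip events is given below: rectify the signal, low-pass filter to obtain the envelope, and threshold it. The cut-off frequency and threshold factor are illustrative assumptions.

```python
# Minimal sketch of grip-event detection from a raw EMG trace; cut-off
# frequency and threshold factor are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def grip_events(emg, fs, cutoff=5.0, k=3.0):
    """Return onset times (s) of above-threshold bursts in the EMG envelope."""
    emg = emg - np.mean(emg)                               # remove DC offset
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(emg))                    # rectified envelope
    thr = env.mean() + k * env.std()                       # simple global threshold
    active = env > thr
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)
    return onsets / fs                                     # event times in seconds
```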
The studies above have shown that these sensors are highly accurate in detecting motion states and have great potential for estimating the intensity of behavior. Table 3 shows detailed information on sensor technologies. However, sensors need to be in contact with the crustacean, or even implanted in it, during the measurement, which makes this an interventional monitoring method; the pressure and interference this causes to the crustacean are difficult to gauge. In the future, miniaturized, lightweight sensors have great potential for reducing this pressure in small-scale biological monitoring contexts. Recently, the fusion of acceleration sensors with other sensors (such as pressure, GPS, and acoustic tags) has successfully been applied to monitor the ecology, physiology, and behavior of different fish [136]; this method of multi-information fusion could also be used in crustacean behavior monitoring. For behaviors such as movement rhythms that require long-term monitoring, the development of corrosion-resistant equipment materials will be another problem to overcome in future development.
Other Methods
In addition to the methods mentioned above, other technologies have been used to monitor behaviors and may be feasible alternatives, although there has been no large-scale application.
The information collected using a single technology is insufficient. In order to obtain more comprehensive and accurate behavioral information, researchers are trying to use different technologies simultaneously to obtain crustacean behavioral information from multiple angles. The combination of acoustic technology and sensor technology can yield behavior information from multiple angles: the technical fusion of acoustics and sensors can not only be used without obstacles in muddy underwater environments, but there is also a clear correspondence between crustacean sound frequency and motion acceleration [127]. Therefore, it is feasible to use information fusion technology to make up for the blind spots of a single technology. This also provides a favorable theoretical basis for future large-scale research into information fusion technology in crustacean behavior monitoring.
Radio tag technology can also be used to quantify the behavioral characteristics of crustaceans; it transmits individual information to a receiving station or monitoring center. An RFID tag consists of a tiny radio transponder. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader [103]. Radio tags are cheaper than acoustic tags and can be used to develop low-cost real-time tracking systems. However, tag loss during molting of the exoskeleton is the main challenge of using tagging technology to monitor crustacean behavior [137]. Therefore, the invention of internal elastomer tags could provide a new solution for tag fixation, and these tags would likely have large-scale applications in commercial fisheries in the future.
Challenges and Future Perspectives
The acquisition of crustacean behavior information is critical because it helps fishermen to know the behavioral state in time, for applications such as choosing the best harvesting location according to seasonal movements and promptly adjusting the most suitable environmental parameters during the breeding period to provide a reference for obtaining the maximum welfare harvest. However, automatic monitoring of crustacean behavior is very difficult and challenging. One of the major reasons is that crustaceans are sensitive and often translucent, and their free movement should be ensured while monitoring behaviors, which limits the application of many methods. Another reason is that the environmental characteristics of aquaculture are not conducive to crustacean behavior monitoring: low visibility, poor optical paths caused by biofouling on optical systems, the difficulty of discriminating individual animals, noise interference from apparatuses, inaccurate accelerometers, and electronic sensors being disturbed by electric fields. Manual monitoring is often ineffective, expensive, and damaging. With the development of advanced automation technologies such as machine vision, acoustics, and sensors, there is significant potential to improve the precision of crustacean farming. However, each technology also has objective shortcomings. The technical difficulties that urgently need to be solved include the substantive damage caused by sensors to crustaceans, how to move beyond single- to multiple-technology approaches, the low degree of automation, and the weak ability to track individual shrimp. Therefore, we propose future development trends in crustacean behavior monitoring to improve the level of precision aquaculture.
(1) It is necessary to expand and improve the application of imaging technology in aquaculture so that it is suitable for crustacean breeding environments with low visibility and high density. In future studies, multiple types of imaging technologies can be used for behavior monitoring in aquaculture, moving beyond just infrared and RGB imaging. Microwave technology has been widely used in underwater imaging, and digital holography is one of the most advanced technologies for monitoring aquatic animals. Therefore, to avoid the interference caused by turbid water, microwave technology and digital holography can be used to monitor the behavior of crustaceans in turbid water environments.
(2) Most machine-vision-based behavior monitoring uses planar images for analysis. Underwater 3D technology can conveniently obtain 3D coordinate information of crustaceans, which makes it easier to track individual crustaceans, improving the monitoring accuracy of the movement rhythm. Real-time aquatic behavior monitoring will support improved aquaculture management, welfare, and policy interventions.
(3) Deep learning (DL) is a class of algorithms highly suitable for underwater recognition. Performance comparisons with traditional methods based on manually extracted features indicate that the greatest contribution of DL is its ability to automatically extract features. Moreover, DL can also produce high-precision processing results. A rapid, low-cost deep learning system would be highly suitable for the identification of individual crustaceans in a high-density stocking environment. Therefore, deep learning technology can be used to develop non-invasive, reproducible, and automated individual crustacean tracking and behavior monitoring.
(4) The combination of multiple technologies has been preliminarily explored in crustacean behavior monitoring. However, these electronic monitoring devices are inevitably affected by electric fields and accuracy limitations. Therefore, a non-invasive method that combines multiple technologies has greater potential. For example, information fusion technology based on images and sensors can solve the problem of a single device being affected by the environment or by failure.
(5) Currently, acoustic behavior monitoring methods are seriously disturbed by noise. In addition to reducing equipment noise in the aquaculture environment as much as possible, the capabilities of acoustic technology should also be improved. Big data technology can efficiently analyze more data collected in one area, or data collected across a larger area more frequently; fishermen will then be able to detect changes in acoustic patterns more readily and compare them with other environmental data to gain a holistic understanding of crustaceans.
Conclusions
Over the past three decades, researchers have developed various automatic techniques and methods to monitor crustacean behaviors. This paper reviews current research concerning intelligent crustacean behavior monitoring, including acoustics, machine vision, sensors, and other emerging options. Based on an extensive analysis of the literature, Table 4 summarizes the advantages and disadvantages of various monitoring technologies and their wide range of applications, which could help identify the most suitable behavior monitoring means for different aquaculture environments. As a large-scale application technology, acoustics is not affected by water turbidity and can work well in almost invisible conditions; acoustic technology is therefore more suitable than other methods for use in low-visibility environments. However, the non-reusability of tags, high cost, and noise interference limit its application in aquaculture. Compared with acoustic technology, machine vision is objective, repeatable, inexpensive, and unaffected by noise; it can identify crustacean behavior remotely without causing damage or stress to the crustacean. The application of machine vision is limited by water surface reflectivity and low image quality; this problem can be addressed by using near-infrared machine vision, as its imaging quality is not affected by the intensity of visible light. In addition, it is necessary to develop more general sensors for a variety of crustaceans. The advantage of sensors is that they are inexpensive and highly accurate; however, they currently only work for larger animals, and the stress and damage caused by sensors to small aquatic animals limit their development and applications. With the increasing diffusion of automation technology in aquaculture in the future, it can be expected that improved algorithms and new software will be developed for intelligent crustacean behavior monitoring to realize automatic aquaculture, and even unmanned fisheries.

Funding: This work was financially supported by the Construction and large-scale application of big data analysis and management cloud service platform for the whole industry chain of shrimp (Project number 2017B010126001).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created or analyzed in this study.
"year": 2021,
"sha1": "a21e138f8493cd3a53c0c3266b8e14dd7595f64a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/9/2709/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a21e138f8493cd3a53c0c3266b8e14dd7595f64a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Pilot Assignment in Cell-Free Massive MIMO based on the Hungarian Algorithm
This letter focuses on the problem of pilot assignment in cell-free massive MIMO systems. Exploiting the well-known Hungarian algorithm, several assignment algorithms are proposed, maximizing either the system throughput or the system fairness. The algorithms operate based on knowledge of the large-scale fading coefficients and of the positions of the mobile stations. However, the latter information is not strictly necessary, since the paper shows that large-scale fading coefficients can be used as a proxy for the distances between the mobile users and the access points with a very limited performance loss. Numerical results show that the proposed pilot assignment algorithms outperform several competing alternatives available in the literature.
I. INTRODUCTION
Cell-free (CF) massive MIMO (mMIMO) is a wireless network deployment architecture credited as a possible evolution of traditional multicell mMIMO systems [1], [2]. In CF mMIMO, a very large number of distributed access points (APs) serves several mobile stations (MSs) using the same time-frequency resource. All APs are connected to a central processing unit (CPU) and cooperate via a backhaul network, and the time-division duplex (TDD) protocol is used. CF mMIMO systems have no cell boundaries and benefit from large-scale fading diversity. They are thus able to ensure an improved level of fairness across users when compared with multicell mMIMO systems [3]-[5].
Similarly to multicell mMIMO, the performance of CF mMIMO systems is critically affected by the lack of a sufficiently large number of orthogonal pilot sequences, which prevents the acquisition of channel state information (CSI) without interference. The use of properly designed pilot assignment (PA) algorithms is thus crucial in order to ensure good performance in highly loaded networks. One of the first papers dealing with the problem of PA is [1]: based on knowledge of the large-scale fading (LSF) channel coefficients, the greedy algorithm in [1], starting from a random PA, iteratively updates the pilot of the worst performing MS in order to increase the system fairness. The authors of [6], instead, propose to use the algorithm in [1] with a starting point based on the locations of the MSs. Similarly, patent [7] proposed an iterative algorithm, based on consecutive updates of the pilots of the worst and best performing MSs, again aiming at the maximization of the system fairness. In [8], a PA algorithm based on knowledge of the MSs' positions is proposed. Finally, reference [9] sidesteps the PA problem and shows that the channel estimation error can also be lowered through optimization of the powers used to transmit the pilots.
In this paper we focus on the PA problem for CF mMIMO systems, and, leveraging the well-known Hungarian algorithm [10], introduce four different algorithms aimed at the maximization of either the system throughput or the system fairness, exploiting either the location of the MSs or the knowledge of the LSF coefficients as a proxy of the distances between the MSs and the APs. The numerical results, provided in Section IV, will reveal the superiority of the newly proposed solutions with respect to competing alternatives.
II. SYSTEM MODEL AND PERFORMANCE MEASURES
We consider an area with K single-antenna MSs and M APs with $N_{\rm AP}$ antennas each, connected by means of a backhaul network to a CPU wherein data decoding is performed. We denote by $\mathcal{K}_m$ and $\mathcal{M}_k$ the set of MSs served by the m-th AP and the set of APs serving the k-th MS, respectively. The symbol $\mathbf{g}_{k,m}$ denotes the $N_{\rm AP}$-dimensional vector representing the channel between the k-th MS and the m-th AP; we assume $\mathbf{g}_{k,m}=\sqrt{\beta_{k,m}}\,\mathbf{h}_{k,m}$, with $\mathbf{h}_{k,m}$ an $N_{\rm AP}$-dimensional vector whose entries are i.i.d. $\mathcal{CN}(0,1)$ random variables (RVs), modeling the fast fading, and $\beta_{k,m}$ the LSF coefficient.
At each AP, channel estimation is performed by linear minimum-mean-square-error (MMSE) processing. Denoting by $\tau_p < \tau_c$ the length (in time-frequency samples) of the uplink training phase and by $\tau_c$ the length (in time-frequency samples) of the coherence interval, the m-th AP forms an MMSE estimate of $\{\mathbf{g}_{k,m}\}_{k\in\mathcal{K}_m}$ based on the $N_{\rm AP}$-dimensional statistics
$$\mathbf{y}_{k,m}=\sqrt{\eta_k}\,\mathbf{g}_{k,m}+\sum_{i=1,\, i\neq k}^{K}\sqrt{\eta_i}\,\mathbf{g}_{i,m}\,\boldsymbol{\phi}_i^H\boldsymbol{\phi}_k+\mathbf{w}_{k,m}\,,$$
where $\eta_k$ is the power employed by the k-th MS during the training phase, $\boldsymbol{\phi}_i$ is the $\tau_p$-dimensional column pilot sequence transmitted by the i-th MS, and $\mathbf{w}_{k,m}$ is an $N_{\rm AP}$-dimensional vector with i.i.d. $\mathcal{CN}(0,\sigma^2_w)$ entries containing the thermal noise contribution. We assume that the pilot sequences transmitted by the MSs are chosen from a set of $\tau_p$ orthogonal sequences $\mathcal{P}_{\tau_p}=\{\boldsymbol{\phi}_1,\boldsymbol{\phi}_2,\ldots,\boldsymbol{\phi}_{\tau_p}\}$, where $\boldsymbol{\phi}_i$ is the i-th $\tau_p$-dimensional column sequence and $\|\boldsymbol{\phi}_i\|^2=1$, $\forall\, i=1,\ldots,\tau_p$. The MMSE estimate of the channel $\mathbf{g}_{k,m}$ can be written as
$$\hat{\mathbf{g}}_{k,m}=\frac{\sqrt{\eta_k}\,\beta_{k,m}}{\sum_{i=1}^{K}\eta_i\beta_{i,m}\left|\boldsymbol{\phi}_i^H\boldsymbol{\phi}_k\right|^2+\sigma^2_w}\,\mathbf{y}_{k,m}\,. \quad (1)$$
On the downlink, the APs treat the channel estimates as the true channels and perform conjugate beamforming, while on the uplink the generic m-th AP participates in the decoding of the data sent by the MSs in $\mathcal{K}_m$, but data decoding takes place at the CPU [3], [5].
As performance measures for testing the proposed PA algorithms, we consider the achievable rates in downlink and uplink. Applying the use-and-then-forget (UatF) bounding technique in [11], a lower bound to the k-th MS downlink achievable rate is reported in Eq. (2) at the top of the next page. Similarly, the same bounding technique leads to the k-th MS uplink achievable rate reported in (3), again at the top of the next page. In these expressions, the following notation has been used: W is the system bandwidth; $\tau_d$ and $\tau_u$ are the lengths (in time-frequency samples) of the downlink and uplink data transmission phases in each coherence interval; $\eta^{\rm DL}_{k,m}$ is a scalar coefficient controlling the power transmitted by the m-th AP to the k-th MS; $\sigma^2_z$ is the AWGN noise variance at the generic MS receiver; $\eta^{\rm UL}_k$ is the uplink transmit power used by the k-th MS in the data transmission phase; and $\sigma^2_w$ is the AWGN noise variance at the generic AP receiver. Details on the UatF bound and on the derivations of Eqs. (2) and (3) can be found in [1], [5], [11] and are omitted here due to lack of space.
III. PILOT ASSIGNMENT ALGORITHM
We are now ready to illustrate the proposed PA schemes. To this end, we assume that the number of MSs K is larger than the number $\tau_p$ of available orthogonal pilots and, also, that the ratio $K/\tau_p$ is an integer.
The schemes that we propose are iterative, have a common structure, and start with a random PA. Basically, the steps of the algorithms can be stated as follows:
1) Assign each MS a pilot randomly picked from the set $\mathcal{P}_{\tau_p}$ of orthogonal pilots.
2) Consider the generic k-th MS; pick the $\tau_p-1$ MSs that are closest to MS k. The set of these MSs, including the k-th one, forms the set $\mathcal{S}_k$, of cardinality $\tau_p$. The remaining $K-\tau_p$ MSs are grouped in the set $\mathcal{T}_k$.
3) Use the Hungarian algorithm to assign pilots to the users in the set $\mathcal{S}_k$, considering the PA of the users in the set $\mathcal{T}_k$ as fixed.
4) Repeat steps 2) and 3) for all values of $k=1,\ldots,K$.
5) Repeat steps 2) to 4) until the performance measures have reached convergence and/or the maximum number of allowed iterations has been reached.
We now provide further details to better clarify the meaning of the above steps.
A. Defining the set S k
To execute the above step 2), the $(\tau_p-1)$ MSs that are closest to the k-th MS are to be selected. One simple way of doing this is to rely on knowledge of the MSs' positions. Indeed, if this knowledge is available at the CPU, the set $\mathcal{S}_k$ can be readily defined. We say that in this case we are using a location-based (LB) procedure.
If, instead, the MSs' locations are not available, knowledge of the LSF coefficients can be exploited as an indicator of the distance between MSs and APs. Precisely, we are no longer able to select the (τ_p − 1) MSs that are closest to the k-th MS, but only the (τ_p − 1) MSs that are closest to (i.e., have the largest LSF coefficients towards) the AP that is closest to MS k. The two sets of course cannot be claimed to be coincident, but with high likelihood they will have several common elements. In this case, we say that we are using a location-agnostic (LA) procedure. In particular, the LA procedure works as follows (a code sketch is given below). For the k-th MS, the CPU first computes the index of its nearest AP as m* = arg max_m β_{k,m}. Then, it considers the set of LSF coefficients D_{k,m*} = {β_{j,m*}}_{j=1, j≠k}^K, sorts the entries of D_{k,m*} in decreasing order, and denotes by O_{m*,k}(ℓ) the MS index associated with the LSF coefficient appearing in the ℓ-th position of the ordered version of the set D_{k,m*}. The set S_k thus contains the index k and the indexes of the MSs associated with the (τ_p − 1) largest coefficients in D_{k,m*}, i.e.,

$$\mathcal{S}_k=\{k\}\cup\{O_{m^*,k}(\ell):\ \ell=1,\ldots,\tau_p-1\}.$$
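A minimal sketch of the LA selection, assuming the CPU holds the full LSF matrix (our naming):

```python
import numpy as np

def la_set(k, beta, tau_p):
    """Location-agnostic construction of S_k from LSF coefficients.

    beta: (K, M) matrix of LSF coefficients beta_{k,m}
    """
    m_star = int(np.argmax(beta[k]))            # nearest AP to MS k
    others = [j for j in range(beta.shape[0]) if j != k]
    # rank the other MSs by their LSF coefficient towards AP m_star
    ranked = sorted(others, key=lambda j: beta[j, m_star], reverse=True)
    return [k] + ranked[:tau_p - 1]             # the set S_k
```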
B. Running the Hungarian algorithm
Once the sets S_k and T_k have been defined, the set of τ_p available orthogonal pilots is to be assigned to the τ_p MSs in S_k according to some optimality criterion. Denoting by a^{(k)}_{ℓ,q} a scalar quantity measuring the reward, to be specified in the following subsection, for the system if the q-th pilot in P_{τ_p} is assigned to the ℓ-th MS in the set S_k, and letting x^{(k)}_{ℓ,q} be a binary 0-1 variable indicating that the q-th pilot sequence is assigned to the ℓ-th MS, we are formally faced with the following optimization problem:

$$\max_{\{x^{(k)}_{\ell,q}\}}\ \sum_{\ell=1}^{\tau_p}\sum_{q=1}^{\tau_p}a^{(k)}_{\ell,q}\,x^{(k)}_{\ell,q}\quad\text{(4a)}$$
$$\text{s.t.}\quad \sum_{\ell=1}^{\tau_p}x^{(k)}_{\ell,q}=1,\ \forall\,q,\quad\text{(4b)}\qquad \sum_{q=1}^{\tau_p}x^{(k)}_{\ell,q}=1,\ \forall\,\ell,\quad\text{(4c)}\qquad x^{(k)}_{\ell,q}\in\{0,1\},\ \forall\,\ell,q.$$

Problem (4) accepts as input the coefficients a^{(k)}_{ℓ,q}, for all ℓ and q, and solving it entails providing the values of the optimization variables x^{(k)}_{ℓ,q}, for all ℓ and q. The constraints (4b) and (4c) ensure that each pilot is assigned to just one user and that all the pilots are used once, respectively. The above combinatorial optimization problem can be solved in polynomial time using the Hungarian method [12, Algorithm 14.2.3].
This method, which is due to Harold Kuhn [13] and is based on ideas of the two Hungarian mathematicians König and Egerváry, is one of the most important combinatorial algorithms used to solve the weighted matching problem in a bipartite graph. A fast and efficient implementation of the Hungarian algorithm was introduced in [10]. We do not provide further details on this algorithm for the sake of brevity.
C. Defining the reward coefficients
Let us now define how the coefficients a^{(k)}_{ℓ,q} are computed. If the goal is to maximize the system downlink or uplink sum rate, then a reasonable choice is to let a^{(k)}_{ℓ,q} equal the ℓ-th MS rate when it is assigned the q-th pilot; denoting this rate by R^x_ℓ({x_{ℓ,q} = 1}), where x can be DL or UL, we thus have a^{(k)}_{ℓ,q} = R^x_ℓ({x_{ℓ,q} = 1}). It is important to remark that the above rate does not depend on the assignments decided for the other MSs in S_k, since these MSs use orthogonal pilots; rather, the rate depends on the locations of the MSs in T_k that are assigned the same q-th pilot as the ℓ-th MS.
If, instead, the system designer's goal is to maximize fairness across users, then a different choice is in order. Denoting by T_k(q) the set of MSs in T_k that are using the q-th pilot, the following choice is proposed:

$$a^{(k)}_{\ell,q}=\min_{j\,\in\,\mathcal{T}_k(q)\cup\{\ell\}} R^x_j,$$

where again x can be DL or UL. Otherwise stated, a^{(k)}_{ℓ,q} is the smallest rate computed among all the MSs in the system that are using the q-th pilot, including the ℓ-th MS. As a final remark, we notice that different choices could be made to define the reward coefficients, i.e., using only the downlink/uplink rates or using a combination of the uplink and downlink rates.
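A sketch of the two reward choices, assuming a rate oracle `rate(j, pa)` that evaluates Eq. (2) or (3) for MS j under assignment `pa` (the oracle and all names are our assumptions):

```python
def reward_sum_rate(ell, q, pa, rate):
    """Sum-rate criterion: reward equals the rate of MS ell on pilot q."""
    trial = pa.copy()
    trial[ell] = q
    return rate(ell, trial)

def reward_max_min(ell, q, pa, rate, T_k):
    """Fairness criterion: smallest rate among all users of pilot q."""
    trial = pa.copy()
    trial[ell] = q
    sharers = [j for j in T_k if trial[j] == q] + [ell]
    return min(rate(j, trial) for j in sharers)
```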
IV. NUMERICAL RESULTS
In our simulation setup, we consider a communication bandwidth of W = 20 MHz centered on the carrier frequency f_0 = 1.9 GHz. The antenna height is 10 m at the AP and 1.65 m at the MS. The additive thermal noise is assumed to have a power spectral density of −174 dBm/Hz, while the front-end receivers at the APs and at the MSs are assumed to have a noise figure of 9 dB. We assume M = 100, N_AP = 4, K = 40 and a MS-centric approach [3], [5], where each MS is served by the N = 20 APs with the highest LSF coefficients, and K_m and M_k are defined accordingly. The APs and MSs are deployed at random positions in a square area of 1000 × 1000 square meters. In order to avoid boundary effects, the square area is wrapped around [1], [3]. The LSF coefficient β_{k,m} is modeled as in [14]. Exploiting the relations in Eqs. (2) and (3), we evaluated the downlink and uplink rates of the proposed schemes. The results show that the LA procedure, which uses LSF coefficients in place of exact AP-MS distances, is effective and entails almost no loss in performance. Then, results clearly show that the proposed solutions outperform competing alternatives, with the largest performance gain on the downlink.
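A minimal sketch of the wrapped-around deployment used to avoid boundary effects; `pathloss_db` is a hypothetical helper standing in for the LSF model of [14]:

```python
import numpy as np

def torus_distance(p, q, side=1000.0):
    """Distance on a wrapped-around (toroidal) square of the given side."""
    d = np.abs(p - q)
    d = np.minimum(d, side - d)     # wrap each coordinate
    return np.hypot(d[0], d[1])

def lsf_matrix(ms_pos, ap_pos, pathloss_db, side=1000.0):
    """LSF coefficients beta_{k,m} from wrapped AP-MS distances."""
    K, M = len(ms_pos), len(ap_pos)
    beta = np.empty((K, M))
    for k in range(K):
        for m in range(M):
            dist = torus_distance(ms_pos[k], ap_pos[m], side)
            beta[k, m] = 10 ** (-pathloss_db(dist) / 10)
    return beta
```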
V. CONCLUSION
In this letter, the problem of PA in a CF mMIMO system has been considered. An iterative procedure based on the Hungarian algorithm has been proposed. The algorithm parameters can be tuned so as to maximize either the sum-rate or the fairness across users, and the procedure can be implemented based on knowledge of the LSF coefficients alone. Simulation results have shown that the proposed procedures exhibit a significant advantage over several competing alternatives.
"year": 2020,
"sha1": "65eb0d055003fa9236150f5ef0652dfaa96f8bf0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2004.06940",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "65eb0d055003fa9236150f5ef0652dfaa96f8bf0",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
Climate Change Impacts on the Tree of Life: Changes in Phylogenetic Diversity Illustrated for Acropora Corals
The possible loss of whole branches from the tree of life is a dramatic, but under-studied, biological implication of climate change. The tree of life represents an evolutionary heritage providing both present and future benefits to humanity, often in unanticipated ways. Losses in this evolutionary (evo) life-support system represent losses in "evosystem" services, and are quantified using the phylogenetic diversity (PD) measure. High species-level biodiversity losses may or may not correspond to high PD losses. If climate change impacts are clumped on the phylogeny, then loss of deeper phylogenetic branches can mean disproportionately large PD loss for a given degree of species loss. Over time, successive species extinctions within a clade each may imply only a moderate loss of PD, until the last species within that clade goes extinct, and PD drops precipitously. Emerging methods of "phylogenetic risk analysis" address such phylogenetic tipping points by adjusting conservation priorities to better reflect the risk of such worst-case losses. We have further developed and explored this approach for one of the most threatened taxonomic groups, the corals. Based on a phylogenetic tree for the coral genus Acropora, we identify cases where worst-case PD losses may be avoided by designing risk-averse conservation priorities. We also propose spatial heterogeneity measures to assess possible changes in the geographic distribution of coral PD.
Introduction
Human-induced global climate change has been implicated as one important driver in the ongoing global biodiversity crisis [1-7]. Such studies have documented climate change impacts across the various levels of biological variation that are genes, species, and ecosystems. Our paper focuses on the impacts of climate change on another important level of variation: phylogenetic diversity [8,9]. Loss of phylogenetic diversity arises from the loss, through species extinctions, of evolutionary branches from the tree of life (phylogeny). Based on a phylogenetic diversity measure, PD (defined below), phylogenetic pattern can be used to assess expected loss of biodiversity at the level of features or attributes of species. Our first goal in this paper is to briefly review findings from the relatively small number of studies that have examined climate change impacts on phylogenetic diversity. We then focus in more detail on one key taxonomic group of great interest: the reef-building scleractinian corals. Based on an inferred phylogenetic pattern for the genus Acropora, we develop and illustrate some useful phylogenetic diversity indices for quantifying various aspects of climate-change impacts on phylogenetic diversity.
Our paper is a contribution to this special issue of Biology, with a thematic focus on the impacts of human-induced climate change. The overview [10] highlights typical systems-level topics, including interactions, thresholds, cascading effects, and loss of functions. Our focus on phylogenetic diversity at first may not seem to fit well into this big-system perspective. The typical rationale for Earth system analyses, integrating humans and environment, is that analyses of the whole system provide a better understanding of the system components as well as the whole [11,12]. In contrast, our phylogenetic pattern approach, in focusing on attributes or features, may appear to address the smaller components, not the whole system. However, we will argue that our approach in fact contributes to any complete approach that is truly integrative and links broadly to human well-being issues. We develop our argument by first considering recent characterizations of biodiversity in terms of functions and processes. Earth system science typically focuses on the problem of maintaining a functioning system; a mission statement for Earth system science [13] sees the goal as a well-functioning and resilient Earth system over the indefinite future. In line with this focus on functioning systems, biodiversity has been re-defined as primarily about functions and processes, giving less emphasis to the attributes and patterns conventionally used to characterize living variation. For example, a recent proposal [11] defined biodiversity as all life on Earth across all levels (genes, populations, and species, including humans, assemblages, ecosystems/landscapes, and the ecosphere), together with the ecological, cultural, and evolutionary processes that sustain it; biodiversity here is characterized in terms of processes. Thus, the seemingly obvious idea that biodiversity should be about the extent of variation [14] is absent.
This process perspective is echoed in the new international initiative Future Earth [15], which emphasizes functioning ecosystems as important for human well-being and economies, noting that the loss of biodiversity has been shown to undermine development. The Future Earth framework also refers to ecosystem processes in similar terms. Again, biodiversity is characterized here through function and process, and in this way is seen as under-pinning human well-being [16]. Apparent support for this perspective is found in the argument [11] that a process-based definition of biodiversity is needed because a traditional focus on attributes and patterns ignores humans: measurements of biodiversity have often focused on characterizing the attributes of observable patterns and afforded less attention to processes, and such views of biodiversity often see humans as separate from the rest of nature. These arguments might seem to imply that an attributes/pattern approach, such as phylogenetic diversity, does not fit into a systems perspective. However, the reality is that attributes and pattern provide the basic elements used for quantifying variation and therefore provide a fundamental link to human well-being. This link to human well-being is apparent in the historical rationale for the conservation of biodiversity (living variation): the need to preserve option values [14,17-20]. Option values of biodiversity reflect the possibility of future, often unanticipated, human uses and benefits [8,14,17-20]. Thus, the relationship between biodiversity and human well-being extends well beyond the narrow idea of biodiversity as supporting functioning systems. This relationship also extends beyond the observation (noted above) that the loss of biodiversity has been shown to undermine development, to the obvious point that loss of biodiversity sometimes could result from development, and so requires trade-offs. Consequently, biodiversity is not just a cog in the wheel of a functioning Earth system in support of human development. Human well-being from biodiversity, particularly relating to benefits for future generations, may involve both trade-offs and synergies with other needs of society. This contrasts with a perspective [16] in which biodiversity simply under-pins development. These themes are clear when we consider biodiversity measures based on phylogenetic pattern and phylogenetic diversity. The tree of life represents an evolutionary heritage providing both present and future benefits to humanity, often in unanticipated ways. Returning to the themes for this special issue, we can say that losses in this evolutionary life-support system represent losses in current and future benefits for humans [18-20]. Thus, both attributes and pattern are central to Earth system science.
We therefore view phylogenetic diversity as contributing to an inclusive systems science in two ways. First, phylogenetic diversity reflects the feature diversity generated by evolutionary processes [8,9], and so is part of a life support system that provides both current and possible future benefits from such attributes [18]. Second, phylogenetic diversity is part of a larger systems approach to sustainability that examines global environmental change and decision-making, and investigates how we can balance different needs of society [21,22]. An expanded systems approach requires practical measures and indices of biodiversity. Here we establish measures and indices reflecting the option values associated with phylogenetic diversity. We choose to apply the new indices to hard corals because this group is highly threatened, with one-third of existing species falling into an elevated category of threat under IUCN criteria [23]. Furthermore, the keystone functional roles that reef-building corals such as Acropora play in coral reef ecosystems are well established [24]. Indeed, it has been reported that there are already indications of dramatic impacts of global warming at the system level, particularly in the Arctic and for coral reef systems: mass coral bleaching driven by warmer sea temperatures has killed vast numbers of corals across the tropics, causing some reefs to lose their ecosystem structure and functions [25]. We see the quantification of coral biodiversity option values as complementing these studies on ecosystem structure and function.
To develop these arguments, we will first introduce the phylogenetic diversity measure, PD [8,26] and review studies examining climate change impacts on PD. We then develop the phylogenetic pattern for Acropora species using a new molecular phylogeny and use this framework to explore a range of indices based on PD that capture various aspects of change.
The Phylogenetic Diversity Measure, PD
The PD measure is based on the assumption that shared ancestry for two species indicates that they have shared attributes or features. The PD of a subset of species from the phylogenetic tree is calculated as the minimum total length of all the phylogenetic branches required to connect all those species on the tree. PD provides a natural way to talk about future uses and benefits provided by species: the option values of biodiversity. PD's evolutionary process model, where shared ancestry accounts for shared features, means that PD can be interpreted as counting up the features represented by a given set of species (Figure 1); any subset of species that has greater PD will be expected to have greater feature diversity. In this way, PD values indicate option values at the level of features of species [8,26].
Figure 1. A hypothetical phylogenetic tree for species a through e. Branch lengths are shown above branches. The PD is 41 for this set of species (20 + 5 + 4 + 2 + 1 + 5 + 1 + 3). If species a was lost, 5 units of PD would be lost. Successive losses of species would imply PD losses of similar magnitude. However, the loss of the last species of the clade would imply that the deeper branch of 20 units is now lost as well.
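The following sketch computes PD on a small parent-map encoding of the Figure 1 tree. The topology is our assumption for illustration only (the figure itself is not reproduced here), chosen to be consistent with the caption: the branch lengths are 20, 5, 4, 2, 1, 5, 1 and 3, species a sits on a terminal branch of length 5, and a clade subtends the deep branch of length 20.

```python
# Each branch: name -> (parent, length). Topology assumed for illustration,
# consistent with Figure 1's caption (branch lengths 20,5,4,2,1,5,1,3; PD=41).
TREE = {
    "X": ("root", 20), "Y": ("root", 1),
    "a": ("X", 5), "Z": ("X", 1),
    "b": ("Z", 4), "c": ("Z", 2),
    "d": ("Y", 5), "e": ("Y", 3),
}

def pd(species):
    """PD = total length of the branches connecting the species to the root."""
    used = set()
    for s in species:
        node = s
        while node != "root":
            used.add(node)
            node = TREE[node][0]   # walk up to the parent
    return sum(TREE[n][1] for n in used)

print(pd("abcde"))                 # 41
print(pd("abcde") - pd("bcde"))    # 5: PD lost if species a goes extinct
print(pd("abcde") - pd("de"))      # 32: losing clade {a,b,c} drops the 20-branch
```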
A study by Forest et al. [27] nicely demonstrated how PD captures option values. They analyzed a large plant phylogeny including taxa with a variety of known human uses (medicinal, food, etc.). For their analyses, they assumed that we did not yet know about these uses, and showed how conserving species to maximize total PD (say, for a given budget) was the best way to maximize the chance of preserving a wide range of uses.
We can interpret various PD calculations as if they were counting up features. For example, the loss of a species from a protected set is interpreted as a loss in the total number of features represented by the set (Figure 1). A family of PD calculations extends conventional species-level indices to the feature level, including indices for the features restricted to a given region [28,29].
The PD measure has provided a phylogenetic basis for setting conservation priorities among species or areas [8,9,27], and is regarded as a leading measure of the phylogenetic diversity of a collection of species [30,31]. PD has also been extended in various ways [32,33]. A larger number of species in a set generally implies a larger PD of the set [31,34,35]. Based on this general relationship, it is sometimes argued that conservation priorities based on maximizing species richness will also conserve phylogenetic diversity [36], but see [27]. Faith and Williams [37] suggested that the PD-species relationship could be approximated by a power-law curve, and this proposal has gained support from the empirical work of [31]. This relationship has interesting implications for the expected magnitude of PD loss for any given amount of species loss. The shape of the curve implies that initial losses of species, from climate change or other impacts, will mean only small losses in PD, while later species losses can mean steeper declines in PD, for random species losses from the tree. In reality, the amount of actual PD loss depends on whether the species extinctions are clumped or well-dispersed on the phylogenetic tree (for review and discussion, see [35,38]). The decoupling of species loss and PD loss has recently been documented [39]; this PD study demonstrates that species richness can be an imperfect surrogate for PD at medium scales. Faith [18] referred to possible scenarios where phylogenetically clumped impacts, spread out over time, can mean that initial species losses produce small incremental PD losses, until all descendant species from a longer branch are lost and the PD falls precipitously (Figure 1). They outlined a form of phylogenetic risk analysis to guide conservation decisions that try to reduce the risk of these worst-case losses. We return to this problem below.
A Brief Review of Climate Change Impacts on PD
The PD-species power curve relationship suggests that climate change impacts initially (for a small number of affected species) might imply small PD loss. However, we have noted that the actual extent of PD loss depends on whether species extinctions are clumped or well-dispersed on the phylogenetic tree. A number of studies investigating climate change impacts have found relatively small PD losses. For example, climate change impacts that are spread out over a phylogenetic tree effectively ensure that most deep branches throughout the tree have at least one surviving descendant [40]. Another study found small PD loss [38], given species losses that were phylogenetically dispersed among plant, bird and mammal taxonomic groups over continental Europe.
In contrast, some studies have found that species extinction is concentrated on the phylogeny. One cause of such disproportionate loss is the occurrence of entire clades in the same threatened location or region [41]. For example, this kind of phylogenetic clumping for mammals accounts for the finding of disproportionately large expected PD losses for several biodiversity hotspots in southern Asia and elsewhere [42]. Disproportionate PD loss also may arise when entire clades share the same key trait(s) implying vulnerability to climate change. For example, Willis et al. [43] used a time series of abundance of flowering plants to indicate possible human-caused impacts. This study of plants revealed that reductions in abundance were not randomly distributed across the plant phylogeny, with some plant families having an over-representation of declining species [43]. This phylogenetic clumping was attributed to the fact that these families could be characterized as having flowering times that do not closely track temperature, implying greater vulnerability to climate change. However, sometimes clumped impacts may reflect a combination of traits and geography. Baillie et al. [44] examined red list species (reflecting climate change and other threats) and found that a number of families have significantly more threatened species than would be expected on average, while others have far fewer; in the former families, entire evolutionary lineages are likely to go extinct very quickly.
Climate change impacts may be well-dispersed over a given phylogenetic tree, but reveal hotspots of clumped impacts at a finer phylogenetic resolution. For example, based on an existing large phylogenetic tree for corals, Faith et al. [18] observed that, while threatened species appear well-dispersed on the overall phylogenetic tree, sometimes entire monophyletic groups (existing families and genera) within the tree fall into IUCN threatened (or near-threatened) classes. These included all species within the genera Catalaphyllia, Physogyra, and Euphyllia. Faith et al. suggested that these may represent phylogenetic tipping points, given that each group represents the only descendant taxa of a relatively long phylogenetic branch (see Figure 1). Furthermore, an important preliminary assessment of PD loss for corals [45] suggests that there is no significant phylogenetic clumping of extinction risk. On the other hand, this study reports that key traits that possibly relate to vulnerability were typically shared by close relatives.
These studies raise challenges for the development of PD-based indices that reflect changes in phylogenetic diversity. The overall impacts from climate change may be phylogenetically dispersed, and overall PD loss consequently small, but certain parts of the phylogenetic tree nevertheless may show clumped impacts. This is made more complicated by the fact that phylogenetic clumping can be measured in a number of ways, and measures applied to the overall tree may not reflect patterns for sub-trees [35]. The corals studies referred to above (and others) suggest that the conventional focus on the complete phylogeny of a group (where the conclusion may be that PD losses are relatively small) may conceal the impacts at the finer phylogenetic scale. Further, studies so far suggest that loss of PD is not the only impact of interest. For example, net losses of PD can be small but, nevertheless, there can be dramatic changes in the geographic distribution of PD [38]. In the sections below, we describe our study context and then explore PD indices to address these issues.
Phylogeny of Acropora
Rationale and Methods
Introduction to Acropora as a Model Group for Extinction Risk Studies
The genus Acropora (commonly known as staghorn corals) is an ideal model group for studies of extinction risk because 50% of Acropora species are predicted to face an elevated risk of extinction this century under IUCN categories and criteria [23]. Species in this genus display complex spatio-temporal patterns, and an extensive literature exists on their global ranges [46-49]. Many species of Acropora are described as rare, occurring in small, restricted, isolated and/or disjunct populations [47,49,50]. In general, species of Acropora are extremely susceptible to coral bleaching [51], changes in water quality, disease [52,53] and predation (e.g., by the corallivorous starfish Acanthaster planci and the gastropods Coralliophila abbreviata and Drupella spp.).
Acropora is the largest extant genus of reef-building corals. The genus was formally known as Madrepora (Linnaeus 1758) before the name Acropora was introduced by Oken in 1815 and reintroduced by Verrill in 1901. In 1999, the first full monograph of the genus since Brook (1893) was published [47]. Under this system, 114 species were described. Although another 250+ species of Acropora have been described, some are represented by fossil material only, and others are nomina nuda, meaning they are not officially validated (i.e., some of the species described in [49]); many others were placed into synonymy after extensive examination of type material [47]. However, 14 of the new Acropora species described in [49] have recently been validated [54,55]. In addition, 6 new species have been described since the 1999 revision [54,56-59].
Currently, the morphological phylogeny portrays the rudis group as the oldest living lineage and the echinata group as the youngest [47]. However, some fundamental differences are apparent with the published molecular phylogenies. For example, mitochondrial DNA suggests that the Atlantic species A. cervicornis and A. palmata are the basal lineage [60,61], rather than members of the rudis group, and A. longicyathus occurs in a near-basal position in the molecular phylogeny [60] despite its different placement in the morphological phylogeny [47]. Most Acropora phylogenies have included only common and widespread species; however, the inclusion of rare species in a new phylogeny [62] suggests that not all rare species are recently evolved and hence that there is substantial risk that novel phylogenetic information will be lost if rare and threatened species are driven to extinction by the end of this century, as predicted [23].
For this study, the threatened status of 173 coral species was downloaded from the IUCN Red List ([63], see Tables 1 and 2). For this assessment, nearly all extinction risk assessments were made with the IUCN criterion that uses measures of population reduction over time [23]. Most reef-building corals do not have sufficient long-term species-specific monitoring data to calculate actual population trends; thus, to complete the global coral threatened species assessments, a less quantitative (surrogate) approach was adopted to assess the status of coral species. Specifically, the vast majority of coral species were assessed under a criterion in which population reductions are inferred from declines in habitat quality. Assessments were based therefore on a conservative interpretation of the most current global and regional estimates of coral reef status in 17 regions across the world [64]. For each species, a weighted average was calculated by multiplying the area of reef within the species distribution by the percent of total coral cover loss, or the combined percent of total coral cover loss and critically declining reef [64]. Overall, estimates of habitat loss (in conjunction with life history traits and susceptibility) are used as a surrogate for population reduction, assuming the generation time of corals is 10 years [23]. Therefore, rates of population decline for each species have their basis in the rate of habitat loss within its range, adjusted by an assessment of the species-specific response to habitat loss (i.e., more-resilient species have slower rates of decline).
In the next sections, our calculations using estimated extinction probabilities will follow the approach of [65] to convert these categories to estimated extinction probabilities (Table 1); the threat categories follow [23], and the conversion to extinction probability follows [65].
Phylogenetically-Informative Markers
Various markers are available to assess phylogenetic relationships within the Acropora, including ribosomal DNA (rDNA) and ITS (internal transcribed spacer) sequences [66-68]. However, rDNA performs suboptimally when testing evolutionary relationships in the Acropora because it is a fast-evolving genus [69], and extremely high rDNA diversity can predate species divergence [70]. Single-copy nuclear markers have also been used in Acropora phylogenetics (e.g., Mini-C [70,71], Cnox2 [72], Calmodulin [73]), and an extensive published dataset exists for the Pax-C 46/47 nuclear intron [60,62]. However, extensive intra-individual polymorphisms are observed when this marker is cloned (i.e., when different alleles are sequenced from a single PCR product from a single individual, the different alleles can occur in divergent clades), and such genetic heterogeneity greatly complicates the interpretation of extinction risk.
For the purposes of this study we utilize a single-copy mitochondrial marker, the putative mitochondrial control region rns-cox3, for which an extensive amount of data is publicly available (via GenBank, see Table 2). Being maternally inherited, it occurs as a single haploid copy per genome, unlike repetitive markers (such as rDNA) that occur in multiple copies. Further, single-copy mitochondrial introns are expected to accumulate mutations relatively rapidly, providing many potentially phylogenetically informative characters; lineage sorting is also expected to occur relatively rapidly, which further benefits the interpretation of evolutionary relationships [74].
Molecular Phylogenetic Analysis by Maximum Likelihood Method
Mitochondrial rns-cox3 data with sufficient coverage were available for 65 species and were downloaded from GenBank [75] (see Table 2 for species names and source details; for each species, the table lists its authority, GenBank accession, source reference, species group, and IUCN status, e.g., Acropora kimbeensis (Wallace, 1999), EU918268.1 [62], nasuta group, V; Acropora kirstyae (Veron and Wallace, 1984), EU918215.1 [62], horrida group, V; Acropora latistella (Brook, 1891), AY026443.1 [60], latistella group, LC; Acropora loisetteae (Wallace, 1994), EU918273.1 [62], selago group, V; Acropora yongei (Veron and Wallace, 1984), youngei2_15IF [78], selago group, LC; and Isopora cuneata (Dana, 1846), AY026429.1 [60], genus Isopora, included as outgroup). A total of 640 positions were included in the final dataset; 1st + 2nd + 3rd codon positions and non-coding positions were included. All positions with less than 95% site coverage were eliminated; that is, fewer than 5% alignment gaps, missing data, and ambiguous bases were allowed at any position. Evolutionary analyses were conducted in MEGA5 [76]. The evolutionary history was inferred using the Maximum Likelihood method based on the Kimura 2-parameter model [83]; a discrete Gamma distribution was used to model evolutionary rate differences among sites (5 categories; +G, parameter = 0.9062). The tree with the highest log likelihood (−2,744.0067), chosen by the Bayesian Information Criterion, is shown. Bootstrap support values are shown next to the branches, based on 1,000 replicates. The tree is drawn to scale, with branch lengths measured as the number of substitutions per site. Isopora cuneata is included as the outgroup.
PD and Probabilities of Extinction: Expected PD Calculations
In our brief review, we cited examples where species threats or extinctions were dispersed on the phylogenetic tree, so that consequent loss of PD was relatively small, for the given level of species loss. Based on our estimated phylogeny for the Acropora corals (Figure 2), we can conclude that we have a similar case of phylogenetically dispersed threats. However, the Acropora phylogeny suggests the need to move beyond the standard summary of PD loss over whole trees. For Acropora, a finer phylogenetic scale is of interest, because some parts of the tree show clumped threats. In one part of the tree (Figure 3), the three Acropora species found in the Caribbean form a monophyletic group, and two fall into the critically endangered category, while the third is not yet assessed (Figure 2).
How do we best measure the potential impact on PD, including the loss of deeper branches, given the clumped impacts in this part of the tree? Here, we will consider indices that convert the IUCN threat categories (Table 1) to extinction probabilities. Higher threat categories are assigned higher probabilities of extinction [65,84]. Mooers et al. [65] discussed several possible transformations of category to extinction probability, and analyzed the sensitivity of conservation priorities to the choice of probabilities (Table 1).
One existing priority-setting approach based on extinction probabilities is the EDGE program [84]. EDGE takes phylogeny into account in its calculations of priority scores for threatened species. A given species gains a credit, or partial contribution, from a given ancestral branch equal to 1/n, where n is the number of descendants of that branch. The total of these credits over all ancestral branches is then multiplied by the estimated extinction probability for the species for a final score. Species with higher scores receive higher conservation priority they have ancestral branches with relatively few other descendants, and have highly threatened status.
The EDGE programme has promoted the practical use of phylogeny in conservation priority setting. However, it has been noted [85] that the method could be improved by incorporating existing PD-based "expected PD" calculations [86]. A key advantage over conventional EDGE calculations is that complementarity among species is accounted for effectively.

Figure 3. Subset of the larger tree (Figure 2) showing the three Acropora species found in the Caribbean. Relevant branch lengths are shown below branches, in italics. Bold numbers indicate probabilities of extinction, with the probability for a deeper branch equal to the product of the probabilities of its descendants.
Here, the complementarity value of a given species reflects the degree to which related species do not already ensure the persistence of shared branches. A phylogenetically distinctive species that is only moderately threatened nevertheless may be given high conservation priority because it has high complementarity: there are no closely related secure species to ensure the persistence of its ancestral branches. The importance of complementarity has been illustrated for forest songbirds in the genus Myadestes [85], where Myadestes obscurus received a relatively high conservation priority, reflecting not only its increased extinction risk but also its elevated importance, given the expected loss of its sister species, in ensuring the persistence of ancestral branches. We will illustrate this approach by examining two parts of the phylogeny of Figure 2, re-drawn in Figure 4. Suppose two species that are currently near-threatened, A. nasuta and A. pichoni, are competing for conservation priority. In each case, we will focus on protecting the left-most branch shown in Figure 4 (the branches ancestral to these are assumed secure; Figure 2). If we apply standard EDGE methods, A. pichoni would gain a large credit for its left-most ancestral branch (1/3, given the 3 descendants of that branch). A. nasuta has a smaller credit for its left-most branch (1/4, given 4 descendants). The EDGE score then would be produced by multiplying these credit values by the estimated probability of extinction of the species. Here, the near-threatened species A. nasuta and A. pichoni have the same probability, and A. pichoni gains EDGE priority because it has the greater credit for its ancestral branch (Figure 4).
In contrast, expected PD calculations take into account the degree of threat to fellow descendants of any ancestral branch. For A. nasuta, two of these species are vulnerable, with probabilities of extinction of 0.8 (Figure 2). The total probability of loss of the deep branch is 0.10. For A. pichoni, the sister species are quite secure, including one species of least concern (Figures 2 and 4). The total probability of loss of the deep branch in this case is a much smaller value of 0.03. Therefore, priority for A. nasuta would result in a much greater gain in the expected persistence of phylogenetic diversity. In contrast, A. pichoni has relatively secure sister species, and so its ancestral PD is already well protected; it can be assigned lower priority. This example illustrates how the EDGE methods provide weaker priority setting when the objective is to maximize the persistence of phylogenetic diversity.

Figure 4. Two parts of the larger phylogeny (Figure 2). Two species that are currently near-threatened, A. nasuta and A. pichoni, are competing for conservation priority. Numbers indicate probabilities of extinction, with the probability for a deeper branch equal to the product of the probabilities of its descendants. A. pichoni has relatively secure sister species, while A. nasuta does not. Priority for A. nasuta would imply greater gain in expected PD.
With these lessons in mind, we now consider the Caribbean species (Figure 3). Two of the species, A. cervicornis and A. palmata, are critically endangered, and the third species, A. prolifera, is not yet assessed. For purposes of this example, we will consider a future scenario in which A. prolifera is also designated critically endangered. The probability of loss of any terminal branch for critically endangered species is 0.99, based on [65] for the conversion of the IUCN categories to extinction probabilities (Table 1). The probability of loss of the longer deep branch of length 35 (Figure 3) can be estimated as 0.99 × 0.99 × 0.99 = 0.97, assuming independent probabilities of extinction. This is the standard assumption for EDGE and expected PD methods. However, we note that the co-distribution of these 3 species in the heavily affected Caribbean region may mean that the probability of loss of the deeper branch is even higher.
In this example, we can use expected PD calculations to derive an expected PD loss. This is the sum of each branch length times its corresponding probability of loss. The total expected PD loss is 82.29 branch-length units = 32(0.99) + 17(0.98) + 35(0.97). We can see that the contribution of the deeper branch (0.97 times its length of 35, i.e., about 34.0) to this expected PD loss is a high proportion of the total expected loss of 82.29. This calculation provides one simple index of the expected PD loss for phylogenetically-clumped impacts in a portion of a phylogenetic tree.
A more informative index in practice would reflect the change in expected PD loss that could be achieved under a particular candidate conservation action, for a given species or set of species. We extend the basic formulae [85,87], following the approach of [86,88], where the expected-PD change value for a set of species, S, is given by

$$\Delta\mathrm{PD}(S)=\mathrm{expectedPD}(1)-\mathrm{expectedPD}(0),$$

where expectedPD(0) is the expected PD value, with extinction probabilities taken from current IUCN categories, and expectedPD(1) is the value when the probabilities are converted to a smaller value as a result of a conservation action. We will calculate an expected-PD change value for the set of three Caribbean species (Figure 3). Here, expectedPD(0) is the expected PD calculation for this part of the tree, with extinction probabilities of 0.99 from current IUCN categories. ExpectedPD(1) is the value when the probabilities are converted to a smaller value, e.g., the extinction probability for the near-threatened category (0.4). Because the change value reflects a change in expected PD as a consequence of changes in extinction probability for just these species, we do not have to calculate the expected PD for the entire tree.
This index reflects more than just the consequence of individual-species PD losses and can be applied to provide a score for conservation action for a region, as illustrated here for the Caribbean Acropora. When we assume that conservation action can change the extinction probabilities to those for the near-threatened category (0.4; Table 1), the improvement in expected PD is 64.67 branch-length units, which is large compared to the original expected PD loss of 82.29 (a code sketch reproducing both values is given below). This improvement occurs even with an outcome where the species are still near-threatened.
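The following sketch reproduces the Caribbean calculation; rounding each branch loss probability to two decimals, as done in the text, yields exactly the reported values of 82.29 and 64.67.

```python
# Caribbean Acropora clade (Figure 3): (branch length, number of
# descendant species); a branch is lost only if all descendants go extinct.
branches = [(32, 1), (17, 2), (35, 3)]

def expected_pd_loss(p):
    """Expected PD loss when every species has extinction probability p."""
    return sum(L * round(p ** n, 2) for L, n in branches)

loss_now = expected_pd_loss(0.99)        # all three critically endangered
loss_after = expected_pd_loss(0.4)       # all three near-threatened
print(round(loss_now, 2))                # 82.29
print(round(loss_now - loss_after, 2))   # 64.67, the expected-PD gain
```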
Phylogenetic Risk Analysis
It has been argued [85] that the focus of expected-PD assessments on best-average outcomes may neglect worst-case losses of PD, and that alternative risk-averse approaches for phylogenetic risk analysis are needed. Faith [85] presented a hypothetical example where the conservation option that best increased expected PD nevertheless implied a high probability of a high PD loss, compared to other conservation options. An alternative conservation option would better avoid worst-case losses of PD, with only a small decrease in expected PD. We will explore this problem for a real phylogeny, again focusing on one portion of the larger corals tree (Figure 5).

Figure 5. Subset of the larger tree (Figure 2). Relevant branch lengths are shown below branches, in italics. Estimated probabilities of extinction (following Table 1) for branches and for species are shown above the branches. The extinction probability for a deeper branch is the product of the probabilities of its descendants.
Suppose we have some conservation budget that allows for either of two actions. A. batunai could be protected to the extent that its current probability of extinction of 0.8 could be reduced to a probability of extinction of 0.6. A. abrotonoides alternatively could be protected to the extent that its probability of extinction of 0.2 could be reduced to a probability of extinction of 0.0 (Figure 5). In order to maximize expected PD, we calculate the current expected PD and the change in expected PD for each alternative conservation action. Then we choose the conservation option with the greatest gain. The change in expected PD is the sum of all branch lengths, each multiplied by the change in the probability of extinction of the given branch. The conservation option focused on A. batunai would imply a change in expected PD of 22.6 (113 × 0.2; Figure 2). The alternative option focused on A. abrotonoides would imply a greater increase in expected PD (change value) of 24.1 (0.2 × 66 + 0.2 × 0.2 × 154 + 0.032 × 150 = 13.2 + 6.2 + 4.7; Figure 5). Maximizing expected PD therefore points to the option focused on A. abrotonoides. This choice, however, leaves open the high probability (0.8) that the relatively long branch leading to A. batunai will be lost. On average, the choice of A. abrotonoides may be a good one; however, choosing the best average outcome in this case is not sufficiently risk-averse regarding the worst-case loss of the relatively long branch of length 113 (Figure 5). If the A. abrotonoides conservation option is not selected, the probability of a PD loss as high as 113 in that part of the tree is only 0.04. However, the possibility that maximizing an expected diversity outcome may not be sufficiently risk-averse regarding possible worst-case losses may be an important issue for the conservation of corals PD; there are many cases where a threatened species is similar to A. batunai in being the sole descendant of a moderately long branch.
No procedure for phylogenetic risk analysis was provided by [85]. However, expected PD calculations are flexible enough to be used to address this problem. While there seems to be no one formula that would provide a simple protocol for phylogenetic risk analyses, the best general guideline is to proceed as follows: identify the candidate worst-case PD losses (for example, relatively long branches whose descendants are all threatened), and then assess the current probability of occurrence of these larger PD losses. Conservation options then can be assessed according to the implied changes in these probabilities of worst-case losses, using the normal calculations for expected PD.
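A sketch of the comparison above, using the branch lengths and probabilities reported for Figure 5 (the small difference from the reported 24.1 is rounding in the per-branch terms):

```python
# Option 1: protect A. batunai (extinction probability 0.8 -> 0.6);
# its sole branch has length 113.
gain_batunai = 113 * (0.8 - 0.6)                              # 22.6

# Option 2: protect A. abrotonoides (0.2 -> 0.0); per Figure 5 this secures
# branches of length 66, 154 and 150 with current loss probabilities
# 0.2, 0.2 * 0.2 and ~0.032 respectively.
gain_abrotonoides = 0.2 * 66 + 0.2 * 0.2 * 154 + 0.032 * 150  # ~24.1

print(gain_batunai, round(gain_abrotonoides, 1))
# Expected PD favours A. abrotonoides, but a risk-averse view compares
# the probability that the long branch of 113 is lost under each option:
print({"choose batunai": 0.6, "choose abrotonoides": 0.8})
```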
Phylogenetic Spatial Homogeneity and Heterogeneity Analyses
Our review of studies of climate change impacts on PD pointed to a notable case where PD loss was relatively small, but there were dramatic projected changes in the geographic distribution of PD [38]. This study suggested that the tree of life faces a trend towards spatial homogenization, but these changes were not quantified. Measures of PD homogenization (or of the opposite, PD spatial heterogeneity) would be useful for tracking changes over time and for making comparisons among taxonomic groups and among regions.
The term biotic homogenization refers to the increase in the biotic similarity of different localities in a region, and is now recognized as a major issue in biodiversity science (for review, see [89]). Typically, the focus is on the species level, but species-level indices suggest analogous measures of PD homogenization. Olden et al. [89] describe several species-level measures: homogenization is calculated using species presence or absence data to examine the degree of similarity in community composition, and can be quantified using any one of a suite of similarity indices, diversity indices, cluster analyses or ordination approaches. Olden et al. [89] noted that most homogeneity analyses use the Jaccard measure to calculate dissimilarity among localities. The Jaccard measure has a much-used phylogenetic counterpart based on PD [90]. The dissimilarity of two places/communities in this case reflects the count of the number of branches found (represented) in one place but not the other (see also [91]). This phylogenetic dissimilarity provides a simple PD-based spatial heterogeneity measure. Consider two localities or ecosystems, i and j, and let DPD(i,j) be their PD dissimilarity. We define the PD heterogeneity of the region, HPD1, as the average value of DPD(i,j) over all i and j.
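A minimal sketch of HPD1, reusing the `pd` function from the Figure 1 sketch above; the dissimilarity counts branch length represented in one locality but not the other:

```python
from itertools import combinations

def pd_dissimilarity(spp_i, spp_j):
    """Branch length found in one locality but not the other
    (the PD analogue of the Jaccard distance numerator [90])."""
    union = pd(set(spp_i) | set(spp_j))
    return (union - pd(spp_i)) + (union - pd(spp_j))

def hpd1(localities):
    """HPD1: average pairwise PD dissimilarity over all localities."""
    pairs = list(combinations(localities, 2))
    return sum(pd_dissimilarity(a, b) for a, b in pairs) / len(pairs)
```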
This measure may be useful for understanding changes in the PD distribution of corals as a result of possible climate change impacts. The geographic distribution of corals has been described based on 141 different eco-regions or ecosystems [92], and these units provide a basis for heterogeneity assessments. Would loss of endangered coral species increase or decrease the global PD spatial heterogeneity? The three critically endangered species (Figures 2 and 3) are found only in the Caribbean ecosystem types. This implies that a total branch length of 35 + 32 + 17 = 84 currently contributes to the PD dissimilarity of a Caribbean ecosystem to other global ecosystems. If the three Caribbean species were lost, the PD dissimilarity of a Caribbean ecosystem to other global ecosystems would appear to be smaller, contributing to a smaller HPD1. However, this extreme case where all Acropora are lost from the region has further implications. Those three species are the only Acropora species in the Caribbean. Consequently, not only the relatively short branches counted above but also the branch uniting all Acropora (Figure 2) is lost from the Caribbean ecosystem types. This loss of the deeper branch increases the PD dissimilarity of a Caribbean ecosystem to other global ecosystems, which have retained one or more Acropora species. Thus, the HPD1 measure would indicate an increase in spatial heterogeneity in this extreme case. In general, HPD1 could indicate lower heterogeneity for initial losses of species and branches and then greater heterogeneity as some ecosystems lose deeper branches while other ecosystems retain them.
A possible limitation of HPD1 is that species and branches that are rare (for corals, found in few ecosystems) contribute to relatively few of the dissimilarities among all i and j, and so will not have much influence on the overall value. Loss of rare PD therefore typically will not be reflected as a large change in HPD1.
We suggest an alternative phylogenetic spatial heterogeneity measure that is more sensitive to changes in rare species, including the loss of rare PD. We follow other workers (e.g., [93]) in creating a phylogenetic version of a general measure of diversity and evenness developed by Hill [94]:

$${}^{q}D=\left(\sum_{i=1}^{S}p_i^{\,q}\right)^{1/(1-q)}.$$

We follow the notation in Chao et al. [93], where qD is the effective number of species, S is the number of species, and p_i is the proportional abundance of the i-th species. The parameter q can be varied in order to give more or less weight to the most abundant species. When qD is large, the effective number of species is large, and we have greater heterogeneity.
Chao et al. [93] extended this fundamental measure of diversity and evenness to PD:

$${}^{q}\mathrm{PD}=\bar{T}\left(\sum_{i}\frac{L_i}{\bar{T}}\left(\frac{a_i}{T}\right)^{q}\right)^{1/(1-q)},\qquad \bar{T}=\sum_i L_i\,\frac{a_i}{T},$$

where L_i is the length of branch i from the phylogenetic tree and a_i is the total abundance of all species descended from branch i (T is the total of all a_i). When q is 0, the measure is equivalent to PD. Chao et al. [93] offered no general phylogenetic model as a rationale for this measure. However, the basic rationale for PD branch lengths as counting features (Section 2) provides a simple justification for this PD-based Hill number: it can be interpreted as equivalent to applying the standard species-level Hill measure, but with features (as indicated by PD) substituted for species.
For Chao et al., the abundance of a branch is the summed abundance over descendant species. Here, we generalize to other abundance-type quantities, that is, some measure of geographic range or rarity. We will consider the number of localities (or ecosystems, in the case of corals) containing a given branch as the abundance measure for that branch. This differs from the Chao et al. formulation in that the abundance for a branch is not the sum but the union over the ranges of its descendants. We then define:

$$\mathrm{HPD2}=\bar{T}\left(\sum_{i}\frac{L_i}{\bar{T}}\left(\frac{r_i}{R}\right)^{q}\right)^{1/(1-q)},\qquad \bar{T}=\sum_i L_i\,\frac{r_i}{R},\quad R=\sum_i r_i.$$

Our notation is the same as above, except that we use r_i to refer to the use of a range-type count as our abundance measure. When HPD2 is large, the effective amount of PD is large, and we have greater heterogeneity. We will explore the application of the measure for q equal to 1. The Hill number is undefined for q = 1 but, following [93,94], it is defined by the limit as q approaches 1, yielding:

$$\mathrm{HPD2}\Big|_{q\to 1}=\bar{T}\exp\!\left(-\sum_i \frac{L_i}{\bar{T}}\,\frac{r_i}{R}\,\ln\frac{r_i}{R}\right).$$

We now will apply this measure to the Caribbean species, using counts of the number of ecosystems containing a branch/species as the basis for the abundance of that branch/species (Figure 6). For this example, we do not have information on the numbers of ecosystems for all other species and branches from the overall tree (Figure 2). However, we can illustrate properties of the method by considering an additional branch of length 32 that is found in a large number of ecosystems.

Figure 6. Subset of the larger tree (Figure 2) with the three Acropora species endemic to the Caribbean. Relevant branch lengths are shown below the branch, in italics. Numbers in bold at the top of the branches indicate the number of ecosystems in which the species or branch is represented (present) (according to [92]).
We begin by considering a case where the number of ecosystems for A. cervicornis has dropped from 8 (Figure 6) to just 2 (scenario A in Table 3). With branch lengths and ecosystem counts (L_i, r_i) of (32, 2), (17, 8), (35, 9) and (32, 50) for the hypothetical common branch, we have R = 69, T̄ = Σ_i L_i r_i/R ≈ 30.65, and HPD2 = T̄ exp(−Σ_i (L_i/T̄)(r_i/R) ln(r_i/R)) ≈ 67.7. Now we compare this value of HPD2 to the values obtained for scenarios B and C (Table 3). In scenario B, A. cervicornis becomes even rarer (present in only 1 ecosystem): HPD2 goes down, reflecting lower spatial PD heterogeneity. In contrast, in scenario C, the common species, at the end of a branch also of length 32, is now absent from one of its 50 ecosystems: HPD2 goes up, reflecting higher spatial PD heterogeneity. Thus, these simple scenarios suggest that HPD2 provides a measure of phylogenetic spatial heterogeneity that is sensitive to reductions in abundance of rare species/branches. This captures an important aspect of geographic homogenization from climate change impacts, and may provide a basis for the monitoring of PD change that complements the conventional monitoring of PD losses. (A code sketch reproducing these values follows Table 3.)

Table 3. Scenarios of species loss from coral ecosystems, based on the partial tree of Figure 6. Column L_i has branch lengths, including a hypothetical common branch of length 32 from another part of the tree. Columns A, B, and C give the number of ecosystems for each species/branch under the three scenarios. The bottom row reports, for each scenario, the corresponding HPD2 value.

L_i    A     B     C
32     2     1     2
17     8     8     8
35     9     9     9
32     50    50    49
HPD2   67.7  64.6  68.2
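A sketch of the HPD2 computation at q = 1 (our implementation of the limit formula above; it reproduces the three values in Table 3):

```python
import math

def hpd2(branches):
    """HPD2 at q = 1. branches: list of (length L_i, ecosystem count r_i)."""
    R = sum(r for _, r in branches)
    p = [(L, r / R) for L, r in branches]     # relative range sizes r_i / R
    T_bar = sum(L * pi for L, pi in p)        # T-bar = sum of L_i * r_i / R
    H = -sum((L / T_bar) * pi * math.log(pi) for L, pi in p)
    return T_bar * math.exp(H)

A = [(32, 2), (17, 8), (35, 9), (32, 50)]
B = [(32, 1), (17, 8), (35, 9), (32, 50)]
C = [(32, 2), (17, 8), (35, 9), (32, 49)]
for name, scenario in zip("ABC", (A, B, C)):
    print(name, round(hpd2(scenario), 1))     # A 67.7, B 64.6, C 68.2
```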
Discussion
In our Introduction, we argued that phylogenetic diversity can contribute in two ways to a more inclusive earth systems science. First, we argued that phylogenetic pattern is a good proxy for the feature diversity that provides both current and possible future benefits. This reflects a long-standing rationale for the conservation of diversity [17]. The PD measure and its extended calculations introduced here are specifically designed to capture these phylogenetically-based option values of biodiversity. This perspective contrasts with a recent review and commentary on the conservation of phylogenetic diversity [95], which failed to recognize PD as a measure of feature diversity and associated option values. The result was an incoherent framework, with no clear rationale for the conservation of phylogenetic diversity and little basis for distinguishing among the large number of existing phylogenetic indices.
We noted that our focus on PD and option values complements standard systems approaches that largely focus on biodiversity links to ecosystem functions. Coral conservation highlights these complementary perspectives. Here, addressing the conservation of option values implied a focus on the entire phylogeny for the group (or sometimes on hotspots of clumped threats within the phylogenetic tree). In contrast, a functioning coral ecosystem in a given place will be concerned with the subset of local taxa; this defines a portion of the larger phylogeny. Indeed, the most effective measure of biodiversity for assessments of ecosystem functions may be PD applied within ecosystems (see e.g., [96]). We have not addressed within-ecosystem PD analyses in this paper, but see the integration of whole-tree PD analyses and within-ecosystem PD analyses as a key challenge for an inclusive systems approach to sustainability. This links to our second argument, that indices of change in phylogenetic diversity should assist in planning and decision-making for sustainability [97]. We see phylogenetic diversity conservation as contributing to a larger systems approach to sustainability [21] that investigates how we can balance different needs of society. These efforts will be progressed by efficient algorithms that can calculate best-possible changes in expected PD for large phylogenies, integrated with human-environment factors, including conservation opportunity costs [98].
Our study has echoed some common themes from systems science, and perhaps has neglected some other important themes. On the one hand, we show that core systems ideas relating to tipping points and risk analysis carry over naturally to phylogenetic diversity. On the other hand, our study did not explore the various uncertainties associated with these assessments. Naturally, phylogenetic diversity itself is a recognition of inherent uncertainties regarding which species and which features will provide uses and benefits for future generations. However, the use of phylogenetic pattern introduces other uncertainties. For example, as demonstrated here for Acropora, molecular data were not available for all species. Consequently, the species tree was based on a subset of the entire Acropora diversity. While the positioning of clades in the current phylogenetic topology is consistent with other published phylogenies [60,62], the inferred phylogeny is likely to be refined when additional species are included. PD assessments must take these uncertainties into account [99-101]. We hope that an inclusive systems approach, and the availability of quantitative indices for phylogenetic diversity assessment, will promote new studies that overcome some of these uncertainties.
"year": 2012,
"sha1": "5b1a59b173ff9179226addf9507ff87d67e7160e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-7737/1/3/906/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b1a59b173ff9179226addf9507ff87d67e7160e",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
On the Power Allocation for Hybrid DF and CF Protocol with Auxiliary Parameter in Fading Relay Channels
In fading channels, power allocation over channel state may bring a rate increment compared to the fixed constant-power mode. Such a rate increment is referred to as the power allocation gain. It is expected that the power allocation gain varies for different relay protocols. In this paper, Decode-and-Forward (DF) and Compress-and-Forward (CF) protocols are considered. We first establish a general framework for relay power allocation of DF and CF over channel state in half-duplex relay channels and present the optimal solution for relay power allocation with auxiliary parameters, respectively. Then, we reconsider the power allocation problem for one hybrid scheme which always selects the better of DF and CF, and obtain a near-optimal solution for the hybrid scheme by introducing an auxiliary rate function, thereby avoiding a non-concave rate optimization problem.
I. INTRODUCTION
In cooperative communication networks, power allocation over channel state may bring rate gains [1]. However, it is not easy to find the optimal power allocation, because the exact capacity of most wireless networks is not known. Some useful cooperation strategies have been put forward in the literature which provide efficient approaches to transmit information and give lower bounds on the rate performance of the system. For instance, two relay protocols, Decode-and-Forward (DF) and Compress-and-Forward (CF), were proposed in [2] to evaluate the information rate for relay channels (RC). In particular, the DF protocol was shown to achieve the capacity of the degraded RC [2] and the sender frequency-division RC [3]. Due to the effectiveness of DF and CF, they have been widely used in cooperative communication networks, achieving good rate performance in various networks [4] [5].
Based on the DF and CF protocols, power allocation can be naturally extended to general networks to combat varying channel states. Take the RC as an example again. Since there are only three wireless links in the system, it is feasible for the source and the relay to know the current channel gains before transmission via timely feedback from the receiver. The problem of global power allocation over fading channels in the RC has been studied in [6]. By assuming that the source and the relay are subject to a sum power constraint, the authors provided algorithms for finding the optimal power allocation. It is also noted that the power allocation established in [6] achieved the maximal throughput of the relay-receive phase and relay-transmit phase in half-duplex relay channels (HDRC). The result was implicitly based on a buffer at the relay, such that if the relay-destination channel is worse, the relay can store the message and transmit it when the relay-destination channel becomes better. In practice, if the relay has finite storage and limited processing capability, the system may become unstable and the power allocation gain will degrade.
To improve the achievable rate, selecting the better relay protocol among multiple protocols provides another alternative. This intuition comes from theoretical analysis of combining DF and CF in the static RC [7]-[9]. It was found that a superposition structure of the DF and CF codewords provides some rate gain at the cost of decoding complexity [2] [10]. Moreover, a general insight was also obtained: DF outperforms CF for only some of the channel gain combinations, while the relationship reverses for the others. This implies that in fading relay channels, a hybrid scheme that selects the better one between DF and CF according to the channel state may provide some rate gains while avoiding complicated codeword design.
Instead of using other techniques, e.g., [11]- [13], to combat channel fading, in this work, we thoroughly analyze the relay power allocation over channel state when the relay adopts both DF and CF protocols.
The remainder of this paper is organized as follows. In Section II, we introduce the system model and establish a general framework for the relay power allocation problem. In Section III, we present a parameterized form solution for the problem corresponding to DF and CF, respectively. In Section IV, we further investigate the relay power allocation corresponding to the hybrid scheme and discuss the optimal solution by introducing an auxiliary rate function.
II. SYSTEM MODEL AND PROBLEM PRELIMINARY
Let us consider a HDRC as illustrated in Fig. 1, where $N_1$, $N_2$ and $N_3$ represent the source, the relay and the destination, respectively. We assume the relay operates in a half-duplex manner. Due to the multipath effect, the channel gains vary over time. We assume that the channel gains remain constant over a fixed time length, referred to as a block, and vary independently between consecutive blocks. The signal transmissions in each block are divided into two phases as depicted in Fig. 1. In Phase 1, the source transmits while the other two nodes listen. In Phase 2, the source and the relay transmit signals to the destination. To distinguish the signals in different phases, let us denote the complex baseband signal transmitted at $N_i$ ($i = 1, 2$) and received at $N_j$ ($j = 2, 3$) in Phase $k$ ($k = 1, 2$) by $X_i^{(k)}$ and $Y_j^{(k)}$, respectively. For simplicity, we use $H_{ji}$ and $h_{ji}$ to represent the channel gain variable and its realization for the $N_i$–$N_j$ link in each block. Accordingly, transmissions in the HDRC can be expressed as
$$Y_2^{(1)} = h_{21} X_1^{(1)} + Z_2^{(1)}, \qquad Y_3^{(1)} = h_{31} X_1^{(1)} + Z_3^{(1)}, \qquad Y_3^{(2)} = h_{31} X_1^{(2)} + h_{32} X_2^{(2)} + Z_3^{(2)},$$
where $Z_j^{(k)}$ ($j = 2, 3$; $k = 1, 2$) is additive white Gaussian noise (AWGN) at $N_j$ in Phase $k$. For simplicity, we consider a system operated in unit bandwidth and assume that $Z_j^{(k)}$ obeys a complex Gaussian distribution with unit power spectral density, i.e., $Z_j^{(k)} \sim \mathcal{CN}(0, 1)$. The source and the relay are assumed to know the channel gains at the beginning of each block. In particular, as the channel phase shift is well recovered at the receiver side, we focus on $p_{|H_{ji}|}(|h_{ji}|)$, the distribution of the amplitude of $H_{ji}$.
For reasons of synchronization and power management, we assume that the source transmits with the same power and the same time length in the two phases. Denote the channel state by $\vec h = (h_{31}, h_{21}, h_{32})$. Assuming the block length is long enough to support one entire signalling round, we can regard the system as a static relay channel within each block. In general, in the static case, the rate performance is a function of the receiver-side signal-to-noise ratios (SNRs) of the three links. To focus on the relay power allocation, we denote the receiver-side SNR of the relay–destination link and the rate function in the static HDRC by $S_2 \triangleq 2|h_{32}|^2 P_2$ and $R(S_2)$, respectively.
In the fading HDRC, we consider a long-term average power constraint $P_i$ at $N_i$ ($i = 1, 2$). The source then transmits with power $P_1$ regardless of the channel state, whereas the relay can adjust $P_2$ adaptively w.r.t. the channel state $\vec h$ in each block. For clarity, we denote the relay power allocation by $P_2(\vec h)$. The interest of this paper is to find the optimal power allocation $P_2^\star(\vec h)$ achieving the best rate performance of the system. Define $S_2(\vec h) \triangleq 2|h_{32}|^2 P_2(\vec h)$. Regarding the average rate $\mathbb{E}_{\vec h}[R(S_2(\vec h))]$ as the measure of rate performance and taking the average power constraint $\mathbb{E}_{\vec h}[P_2(\vec h)] \le P_2$ into consideration, we can specify the relay power allocation problem; the power constraint is denoted by (5) and the associated first-order stationarity condition on $S_2(\vec h)$ by (6). It should be noted that only if the rate function $R(S_2)$ is concave w.r.t. $S_2$ is the solution of (6) the optimal $S_2(\vec h)$.
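To make the structure of this optimization concrete, the following minimal Python sketch solves the stationarity condition per fading state and tunes the Lagrange multiplier by bisection so that the average power constraint is met. The rate function, its inverse derivative, and the Rayleigh fading model are illustrative assumptions, not the paper's exact DF/CF expressions.

```python
import numpy as np

# Illustrative concave rate function of the relay-side SNR S2; a stand-in for
# R_sigma(S2), whose exact DF/CF form depends on t and S1.
def rate(S2):
    return 0.5 * np.log2(1.0 + S2)

def T_inv(nu):
    # Inverse of R'(S2) = nu for the rate above, clipped at zero power:
    # R'(S2) = 0.5 / (ln 2 * (1 + S2))  =>  S2 = 0.5 / (ln 2 * nu) - 1.
    return np.maximum(0.5 / (np.log(2.0) * nu) - 1.0, 0.0)

def allocate(h32, P2_avg, tol=1e-10):
    """Bisect the multiplier mu so that E[P2(h)] matches the power budget."""
    g = 2.0 * np.abs(h32) ** 2                       # S2 = g * P2 per state
    avg_power = lambda mu: np.mean(T_inv(mu / g) / g)
    lo, hi = 1e-9, 1e3                               # avg_power decreases in mu
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if avg_power(mid) > P2_avg else (lo, mid)
    mu = 0.5 * (lo + hi)
    return T_inv(mu / g) / g                         # P2(h) per fading state

rng = np.random.default_rng(0)
h32 = rng.rayleigh(1.0, size=100_000)                # assumed Rayleigh fading
P2 = allocate(h32, P2_avg=1.0)
print(np.mean(P2), np.mean(rate(2 * h32**2 * P2)))   # budget met; average rate
```

This is the familiar water-filling pattern: more power goes to blocks with better relay-destination gains, and the single scalar multiplier couples all fading states through the average power budget.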
III. OPTIMAL POWER ALLOCATION FOR DF AND CF STRATEGIES
In this section, we first analyze the concavity of the DF rate and the CF rate. Then, following the necessary condition (6), we present the optimal power allocation for DF and CF based on the inverse function of the derivative of the rates.
A. Concavity of the DF rate and CF rate
The rates achieved by the DF and CF protocols with Gaussian signalling were presented in Proposition 2 and Proposition 3 of [6], respectively. Taking a constant fixed source power and the equal-phase assumption into account, the DF rate can be rewritten as
$$R_{DF} = \frac{1}{2}\max_{0\le\rho\le1}\min\Big\{C(|h_{21}|^2 P_1) + C\big(\bar\rho^2 |h_{31}|^2 P_1\big),\ C(|h_{31}|^2 P_1) + C\big(|h_{31}|^2 P_1 + 2|h_{32}|^2 P_2 + 2\rho\sqrt{2|h_{31}|^2 P_1\,|h_{32}|^2 P_2}\big)\Big\},$$
where $\bar\rho^2 = 1 - \rho^2$; $C(x) \triangleq \log_2(1 + x)$ represents the Shannon formula for the complex baseband model [1]; and $\rho$ represents the correlation coefficient of $X_1^{(2)}$ and $X_2^{(2)}$. We can then further express the DF rate and the CF rate as functions of $S_2$.
Theorem 1: Both the DF rate $R_{DF}(S_2)$ and the CF rate $R_{CF}(S_2)$ are concave w.r.t. $S_2$.
Proof: First, we analyze the concavity of $R_{DF}(S_2)$. In $R_{DF}(S_2)$, the optimal $\rho$ can be found by balancing the two terms in the minimum, where $\eta \triangleq (1 + tS_1)/(1 + S_1)$. Note that the first and second terms in the minimum operation of (7) are monotonically decreasing and increasing in $\rho$, $\rho \in [0, 1]$, respectively; this yields the optimal correlation coefficient $\rho^*$. As the minimum operation is concavity-preserving [14], to show that $R_{DF}(S_2)$ is concave we only need to show that all three terms in the minimum operation of (10) are concave. The concavity of $C(tS_1)$ is trivial. Note that the logarithmic function is concave; according to the composition law of concavity, showing that the remaining two terms in (10) are concave is equivalent to showing that $(\sqrt{S_1} + \sqrt{S_2})^2$ and $g_d(S_2) \triangleq S_1 + S_2 + 2\rho^*\sqrt{S_1 S_2}$ are concave w.r.t. $S_2$ [14]. On the one hand, the second derivative of $(\sqrt{S_1} + \sqrt{S_2})^2$ w.r.t. $S_2$ is non-positive, which implies its concavity. On the other hand, with some manipulations one finds the same for $g_d(S_2)$; that is, $g_d(S_2)$ is also concave. This implies the concavity of the DF rate $R_{DF}(S_2)$. Next, we show that $R_{CF}(S_2)$ is concave. Let $g_c(S_2)$ denote the SNR argument inside the logarithm of the CF rate. According to the composition law of concavity [14], it is equivalent to show that $g_c(S_2)$ is concave. Since $g_c''(S_2) \le 0$, $g_c(S_2)$ is concave, and hence the CF rate $R_{CF}(S_2)$ is concave w.r.t. $S_2$.
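As a quick numerical cross-check of Theorem 1, the sketch below evaluates a DF-style rate on a grid, using the two-term minimum from (7) with illustrative values of $S_1$ and $t$ and a grid search over $\rho$, and verifies that its second differences are non-positive. The channel parameters are assumptions for illustration only.

```python
import numpy as np

def C(x):
    return np.log2(1.0 + x)

def R_DF(S2, S1=4.0, t=2.0, n_rho=2001):
    # Max over the correlation coefficient rho on a fine grid (assumed form).
    rho = np.linspace(0.0, 1.0, n_rho)[:, None]
    term1 = C(t * S1) + C((1.0 - rho**2) * S1)
    term2 = C(S1) + C(S1 + S2 + 2.0 * rho * np.sqrt(S1 * S2))
    return 0.5 * np.minimum(term1, term2).max(axis=0)

S2 = np.linspace(0.0, 50.0, 2000)
r = R_DF(S2)
print("max 2nd difference:", np.diff(r, 2).max())  # <= ~0 up to grid error
```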
B. Optimal power allocation corresponding to DF and CF
As both the DF rate and the CF rate are concave w.r.t. $S_2$, one can derive the optimal relay power allocation according to the necessary condition (6) by well-defining the inverse functions of $R'_{DF}(S_2)$ and $R'_{CF}(S_2)$. For $\sigma \in \{DF, CF\}$, let us denote the inverse function of $R'_\sigma(S_2)$ by $T_\sigma(\nu)$. We have the following theorem on the power allocation corresponding to DF and CF.
Theorem 2: For $\sigma \in \{DF, CF\}$, the optimal relay power allocation corresponding to protocol $\sigma$ is given by
$$P_2^\star(\vec h) = \frac{1}{2|h_{32}|^2}\, T_\sigma\!\Big(\frac{\mu^\star_\sigma}{2|h_{32}|^2}\Big), \qquad (21)$$
where $\mu^\star_\sigma$ satisfies (5). Proof: The solution can be derived naturally from (6) by regarding it as an equation in $S_2$. Further noting that $S_2(\vec h) = 2|h_{32}|^2 P_2(\vec h)$, we can express the optimal power allocation corresponding to strategy $\sigma$ as (21).
Next, let us analyze $T_{CF}(\nu)$ and $T_{DF}(\nu)$ in detail. For the CF case, it is straightforward to see that $[1 + (t+1)S_1 + S_2]^2\, g_c(S_2)$ is a quadratic polynomial w.r.t. $S_2$, so $T_{CF}(\nu)$ can be expressed as the positive solution of a quadratic equation in $S_2$. According to (10), $R_{DF}(S_2)$ is a continuous piecewise function. Because $R'_{DF}(S_2)$ is not continuous, the analysis of $T_{DF}(\nu)$ becomes more complicated. By comparing the three terms in (10), it is not hard to rewrite $R_{DF}(S_2)$ in piecewise form, with breakpoints determined by whether the unconstrained optimizer $\rho^*$ falls outside $[0, 1]$. In fact, if $t > 1$, then $\rho^* > 1$ is equivalent to a threshold condition on $S_2$; if $t - \eta > 1$, or equivalently $t > S_1 + 2$, the corresponding breakpoint exists. Similarly, if $t > 1$, then $\rho^* < 0$ is equivalent to a second threshold condition, obtained after some manipulations. It is easy to verify that $g'_d[f_2(S_1)] = 0$; hence the inverse of $R'_{DF}$ does not exist at that point. Based on this analysis, we can define the inverse function of $R'_{DF}(S_2)$ piecewise.
With the definition of $T_\sigma(\nu)$ ($\sigma \in \{DF, CF\}$), one can search for $\mu^\star_\sigma$ in Theorem 2. This not only aids the implementation of the power allocation but also provides clues for analyzing the power allocation when combining the DF and CF protocols.
IV. OPTIMAL POWER ALLOCATION BASED ON SELECTING THE BETTER ONE BETWEEN DF AND CF
As stated previously, the protocol that selects the better rate between DF and CF can be expressed as
$$R(S_2) = \max\{R_{DF}(S_2),\, R_{CF}(S_2)\}.$$
Then, in a static relay channel, $R(S_2)$ is achievable by switching to the better of the DF and CF protocols according to the channel gains.
The selection is significant: if $t > 1$, neither DF nor CF outperforms the other for all relay powers, and one can easily verify that $R_{DF}(S_2)$ and $R_{CF}(S_2)$ cross. Consequently, $R(S_2)$ is no longer concave w.r.t. $S_2$. In a fading HDRC, we therefore cannot use (6) to find the optimal power allocation corresponding to $R(S_2)$ as we did for the DF/CF protocols. To find a tractable solution, let us introduce the concave envelope of $R(S_2)$, denoted $\bar R(S_2)$. In general, $\bar R(S_2) \ge R(S_2)$ for all $S_2 \ge 0$ and $\bar R(S_2)$ is concave. In particular, for any concave function $\tilde R(S_2)$ satisfying $\tilde R(S_2) \ge R(S_2)$ for all $S_2 \ge 0$, one has $\tilde R(S_2) \ge \bar R(S_2)$.
As both $R_{DF}(S_2)$ and $R_{CF}(S_2)$ are concave and monotonically increasing functions of $S_2$, it is easy to deduce that $\bar R(S_2)$ is made up of three parts: a curve coincident with $R_{DF}(S_2)$, a line segment connecting two points, and another curve coincident with $R_{CF}(S_2)$. In particular, the two end points of the line segment are located on $R_{DF}(S_2)$ and $R_{CF}(S_2)$, respectively. Moreover, if $R_\sigma(S_2)$ ($\sigma \in \{DF, CF\}$) is smooth at its end point, the line segment is tangent to $R_\sigma(S_2)$ there. Assume the two end points of the line segment are $(S_d, R_{DF}(S_d))$ and $(S_c, R_{CF}(S_c))$ with $S_d < S_c$. Then the slope of the line segment is given by
$$K = \frac{R_{CF}(S_c) - R_{DF}(S_d)}{S_c - S_d},$$
and tangency gives $R'_{DF}(S_d) = R'_{CF}(S_c) = K$. Accordingly, we can express $\bar R(S_2)$ piecewise as $R_{DF}(S_2)$ for $S_2 \le S_d$, the line segment for $S_d \le S_2 \le S_c$, and $R_{CF}(S_2)$ for $S_2 \ge S_c$; its derivative $\bar R'(S_2)$ follows piecewise as well. Let us denote the inverse function of $\bar R'(S_2)$ by $T(\nu)$. If $S_d < S_2 < S_c$, then $\bar R'(S_2) = K$ always holds; therefore, at $\nu = K$ there are uncountably many candidate values for $T(\nu)$. Similar to the definition of $T_{DF}(\nu)$, let us define $T(\nu)$ as follows (a numerical sketch of this envelope construction is given at the end of this section).
• If there is a non-empty set $\mathcal{S}$ of values of $S_2$ satisfying the necessary condition (6) for a given $\nu$, then $T(\nu)$ is taken as the smallest element of $\mathcal{S}$. It can be readily seen from this definition of $T(\nu)$ that the smallest receiver-side SNR of the relay–destination link, or equivalently the least relay power, is selected among those satisfying the necessary condition (6). In fact, for $T(\nu) < S_d$ and $T(\nu) > S_c$, this definition of $T(\nu)$ coincides with that of $T_{DF}(\nu)$ and $T_{CF}(\nu)$, respectively. This specific definition of $T(\nu)$ induces a near-optimal solution for power allocation based on $R(S_2)$. We summarize the result in the following theorem.
Theorem 3: Given
$$P_2^\star(\vec h) = \frac{1}{2|h_{32}|^2}\, T\!\Big(\frac{\mu^\star}{2|h_{32}|^2}\Big),$$
where $\mu^\star$ satisfies (5), $P_2^\star(\vec h)$ is a near-optimal solution of the relay power allocation problem $\mathcal{P}$ based on $R(S_2)$, which is achieved by selecting the better protocol between DF and CF.
Proof: Similar to what we have done for $R_{DF}(S_2)$ and $R_{CF}(S_2)$, if we use $\bar R(S_2)$ as the static rate performance of the system, we obtain an optimal power allocation $P_2^\star(\vec h)$ following from (6) and $T(\nu)$.
Interestingly, the average rate obtained for $\bar R(S_2(\vec h))$ can also be achieved by $R(S_2(\vec h))$, since $\bar R(S_2(\vec h)) = R(S_2(\vec h))$ holds for the solution $S_2(\vec h)$. This can be verified by noting that $\bar R(S_2) > R(S_2)$ holds if and only if $S_d < S_2 < S_c$. In fact, for any $S_2$ satisfying $S_d \le S_2 \le S_c$, we have $\bar R'(S_2^+) \le K \le \bar R'(S_2^-)$ and $T(K) = S_d$, so the interior of $(S_d, S_c)$ is never selected. Since $\bar R(S_2) \ge R(S_2)$ holds in general, the obtained power allocation guarantees near-optimal rate performance.
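To make the envelope construction concrete, the following minimal sketch builds the upper concave envelope of $R(S_2) = \max(R_{DF}, R_{CF})$ on a grid with a monotone-chain hull scan, then reads off the bridging segment $[S_d, S_c]$. The two rate curves are illustrative concave functions that cross, not the paper's exact expressions.

```python
import numpy as np

def upper_concave_envelope(x, y):
    """Vertices of the upper hull of the points (x, y), scanned left to right."""
    hull = []
    for p in zip(x, y):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point if it lies on or below the chord
            if (y2 - y1) * (p[0] - x2) <= (p[1] - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    return np.asarray(hx), np.asarray(hy)

S2 = np.linspace(0.0, 300.0, 6001)
R_DF = 0.50 * np.log2(1.0 + 2.0 * S2)   # illustrative: DF better at small S2
R_CF = 0.60 * np.log2(1.0 + 0.8 * S2)   # illustrative: CF overtakes at large S2
R = np.maximum(R_DF, R_CF)
hx, hy = upper_concave_envelope(S2, R)
R_bar = np.interp(S2, hx, hy)           # piecewise-linear concave envelope
bridge = R_bar - R > 1e-9               # where the line segment replaces R
print("S_d ~", S2[bridge].min(), " S_c ~", S2[bridge].max())
```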
V. CONCLUSION
We investigated relay power allocation over channel state in the fading HDRC based on both the DF and CF protocols. By proving the concavity of the DF and CF rates, a parameterized-form solution for the optimal power allocation has been presented. Furthermore, we considered a hybrid DF and CF protocol and introduced an auxiliary function that helps find a near-optimal solution of the corresponding relay power allocation problem. | 2014-12-24T00:45:43.000Z | 2014-12-24T00:00:00.000 | {
"year": 2015,
"sha1": "76ccc19586ee919d239aad0058cbf1f854be3eae",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "76ccc19586ee919d239aad0058cbf1f854be3eae",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
17353075 | pes2o/s2orc | v3-fos-license | QCD Phase Transition in Hot Hadronic Matter
We analyse the QCD chiral phase transition in the nonlinear and linear $\sigma$-model. The strategy is the same in both cases. We fix the parameters of the effective meson theory at temperature $T=0$ and extrapolate the models to temperatures in the vicinity of the phase transition. The linear $\sigma$-model in $SU(3)\times SU(3)$ gives a crossover around $T_c\approx190$ MeV. Around this temperature chiral $SU(2)\times SU(2)$ is almost restored. We also calculate meson masses as a function of temperature.
Introduction
Relativistic nucleus-nucleus collisions with cm-energies $E_{cm} \ge 20$ GeV/nucleon create large systems of sizes $R > 20$ fm at freeze-out with $>10^4$ pions [1]. It is natural to try statistical methods to describe such hadronic fireballs. A good starting point may be to use equilibrium thermodynamics with pions when one is interested in the later stages of these collisions, where the temperature is below possible phase transitions. The low-energy interaction of pions is fully determined by chiral symmetry [2], [3]. Below $T \approx 120$ MeV this interaction can be parametrized by the nonlinear sigma model O(4) with one scalar ($\sigma$) and three pseudoscalar $\pi$-fields, which are constrained by the condition $\sigma^2 + \vec\pi^2 = f_\pi^2$, where $f_\pi$ is the pion decay constant. Above this temperature region heavier hadrons also give a non-negligible contribution to the condensates and thermodynamic quantities. One way of including part of the heavier mesons is provided by the choice of $SU(3) \times SU(3)$ as the chiral symmetry group rather than $SU(2) \times SU(2)$. Obviously a massless pseudoscalar octet does not provide an adequate approximation to the experimentally observed meson spectrum. Therefore we include explicit symmetry breaking terms to account for the physical mass values of the octet fields. A determinant term in the meson fields guarantees the correct $\eta$–$\eta'$ mass splitting, which is due to the U(1) anomaly. It reflects the 't Hooft determinant on the quark level.
The experimental challenge is to measure the equation of state of pions from the inclusive pion spectra. The theoretical task is to calculate this equation of state. For this purpose we need reliable techniques to treat field theories at finite temperature. One can then extrapolate from the measured physics at $T = 0$ to the yet unknown physics at high temperatures. A very accurate treatment of the soft modes with lowest mass is essential at low temperatures. We calculate the partition function $Z$ in terms of a self-consistent field which is chosen to extremize $\ln Z$; it gives effective masses to the meson fields. This saddle point approximation to the partition function corresponds to the leading order of a $1/N$ expansion.
The paper is organized as follows. In section 2 we discuss the nonlinear σ-model for SU(2) × SU(2). In section 3 we calculate the partition function in the linear σ-model for SU(3) × SU(3). Section 4 is devoted to a short discussion.
The Nonlinear σ-Model: SU(2) × SU(2)
The partition function $Z$ for the $SU(2)\times SU(2)$ nonlinear $\sigma$-model is given in terms of the O(4) multiplet $(n_0, \vec n) = (\sigma, \vec\pi)$ with $n^2 = \sigma^2 + \vec\pi^2$. At zero temperature $T := 1/\beta = 0$ the parameters of the model are well known. The pion decay constant $f_\pi$ equals 93 MeV. The classical vacuum expectation value of $n_0$ is determined as $\langle n_0 \rangle = f_\pi$ by minimizing the vacuum energy. Expanding the dependent field $n_0 = \sqrt{f_\pi^2 - \vec n^2}$ to leading order in $\vec n^2/f_\pi^2$, one obtains the mass of the pion as $m_\pi^2 = c/f_\pi$. The basic idea of our method is to eliminate the nonlinear constraint $n^2 = \sigma^2 + \vec\pi^2 = f_\pi^2$ by introducing an auxiliary field $\lambda(x)$ via an integral along the imaginary axis (Eq. (2)). After shifting the zeroth component $n_0$ to $\tilde n_0$ we obtain a Gaussian action for the O(N) multiplet field $\tilde n$ when we evaluate Eq. (2) in a saddle point approximation. Here we have dropped the $\lambda$-integration and chosen $\lambda(x) = \lambda = \mathrm{const}$; the optimal choice for $\lambda$ will be determined later, cf. Eq. (11). Upon Gaussian integration over the four ($N = 4$) $\tilde n$-fields we end up with the partition function of a free relativistic Bose gas with $N = 4$ components and effective masses $m_{\mathrm{eff}}^2 = 2\lambda$. Here $U_T$ denotes the contribution from thermal fluctuations and $U_0$ the zero-point energy. We regularize the $k$-integrations in Eqs. (7) and (8) with a cut-off $\Lambda$, since we do not believe our effective pion theory to be correct for momenta beyond $\Lambda$. At momenta $k > \Lambda$ the compositeness of the pions manifests itself in resonance excitations and/or higher derivative couplings of the pion states, which are neglected. For the numerical calculations we take cut-off values $\Lambda = 700$ MeV, 800 MeV, 1000 MeV. After regularization we adopt the following renormalization procedure: we define a renormalized potential at arbitrary $T$ by two subtractions, which guarantee the two renormalization conditions at $T = 0$. It is well known that the nonlinear sigma model is not renormalizable in four dimensions. Therefore higher order divergences can only be compensated by higher order derivative terms in the original action. The coefficients of these higher order terms have to be determined by experiment. We do not include such terms, in contrast to ref. [2]. In section 3 we will extend the calculation to the linear $\sigma$-model ($SU(3) \times SU(3)$), which contains higher masses and strange mesons.
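The thermal part of this free-gas potential is easy to evaluate numerically. Below is a hedged sketch of the cutoff-regularized thermal contribution $U_T$ for $N$ bosonic components with effective mass $m_{\rm eff} = \sqrt{2\lambda}$; the overall normalization follows the standard free Bose gas form and is an assumption, since Eqs. (7)–(8) are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

N = 4           # O(4) multiplet components
LAM = 800.0     # momentum cutoff in MeV (the paper tries 700, 800, 1000 MeV)

def U_T(lam, T):
    """Thermal fluctuation term for a free relativistic Bose gas, m_eff^2 = 2*lam.
    Standard form: U_T = N*T/(2*pi^2) * int_0^LAM k^2 ln(1 - exp(-omega/T)) dk,
    with omega = sqrt(k^2 + m_eff^2); natural units, everything in MeV."""
    m2 = 2.0 * lam
    integrand = lambda k: k * k * np.log1p(-np.exp(-np.sqrt(k * k + m2) / T))
    val, _ = quad(integrand, 0.0, LAM, limit=200)
    return N * T / (2.0 * np.pi ** 2) * val

print(U_T(lam=0.5 * 140.0**2, T=150.0))  # m_eff = 140 MeV at T = 150 MeV (MeV^4)
```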
The thermodynamic observables at finite $T$ are obtained from the partition function $Z$ approximated at the saddle point, where $\lambda^*(T)$ extremizes $\ln Z$ at a given temperature $T \neq 0$. The saddle point equation for $\lambda^*$ is solved numerically, since in the interesting temperature range the relevant parameter $m_{\mathrm{eff}}/T = \sqrt{2\lambda^*}/T$ can take values in the range $0 \le m_{\mathrm{eff}}/T < \infty$. Let us first study how the order parameter of chiral symmetry breaking, $\langle n_0 \rangle$, behaves as a function of temperature. In Fig. 1 we present the result for $\langle n_0 \rangle(T)/f_\pi$. In quark language this ratio corresponds to the ratio of the quark condensate $\langle \bar q q(T) \rangle$ at finite temperature to the quark condensate at $T = 0$, since the symmetry breaking term of the O(4) Lagrangian $\mathcal{L}_{SB} = c\, n_0$ is identified with the symmetry breaking term $\mathcal{L}_{SB} = -2m\bar q q$ in the QCD Lagrangian. We also show the results of the linear $\sigma$-model $SU(3)\times SU(3)$, which are presented in the next section, and the results of chiral perturbation theory [2]. The result for $\langle \bar q q(T) \rangle / \langle \bar q q(T=0) \rangle$ is rather insensitive to the cut-off. It agrees well with the three-loop calculation of ref. [2]. Chiral symmetry is only very gradually restored. At low temperature the $\pi\pi$ interaction is weak and $\langle \bar q q \rangle$ does not change very much.
The Linear σ-Model: SU(3) × SU(3)
For a Euclidean metric the Lagrangian of the linear sigma model is given as in Eq. (12), where the $(3\times3)$ matrix field $\Phi(x)$ is given in terms of the Gell-Mann matrices $\lambda_\ell$ ($\ell = 0, \dots, 8$) as
$$\Phi(x) = \frac{1}{\sqrt 2}\sum_{\ell=0}^{8}\big(\sigma_\ell(x) + i\pi_\ell(x)\big)\lambda_\ell .$$
Here $\sigma_\ell$ and $\pi_\ell$ denote the nonets of scalar and pseudoscalar mesons, respectively. As order parameters for the chiral transition we choose the meson condensates $\langle\sigma_0\rangle$ and $\langle\sigma_8\rangle$. The chiral symmetry of $\mathcal{L}$ is explicitly broken by the term $(-\varepsilon_0\sigma_0 - \varepsilon_8\sigma_8)$, corresponding to the finite quark mass term $2m_q\bar qq + m_s\bar ss$ on the quark level. The chiral limit is realized for vanishing external fields $\varepsilon_0$ and $\varepsilon_8$. Note that the action $S = \int d^3x\, d\tau\, \mathcal{L}$ with $\mathcal{L}$ of Eq. (12) may be regarded as an effective action for QCD, constructed in terms of an order parameter field $\Phi$ for the chiral transition. It plays a role similar to Landau's free energy functional for a scalar order parameter field in investigating the phase structure of a $\Phi^4$-theory [4].
The six unknown couplings of the sigma model (Eq. (12)), $(\mu_0^2, f_1, f_2, g, \varepsilon_0, \varepsilon_8)$, are assumed to be temperature independent and adjusted to the pseudoscalar masses at zero temperature. Further experimental input parameters are the pion decay constant $f_\pi = 94$ MeV and a high-lying $(0^+)$ scalar mass $m_{\sigma_\eta} = 1.20$ GeV (cf. Table 1). The interpretation of the observed scalar mesons is controversial. There are good reasons to interpret the $(0^+)$ mesons at 980 MeV as meson bound states. The model underestimates the strange quark mass splitting in the scalar meson sector; the value for $m_{K_0^*}$ comes out too small. The effective theory can be related to the underlying QCD Lagrangian by comparing the symmetry breaking terms in both Lagrangians and identifying terms with the same transformation behaviour under $SU(3)\times SU(3)$. Taking expectation values in these equations, we obtain relations between the light quark condensates, strange quark condensates and meson condensates (Eqs. (14)). We use $\bar m \equiv (m_u + m_d)/2 = (11.25 \pm 1.45)$ MeV and $m_s = (205 \pm 50)$ MeV for the light and strange quark masses at a scale $\Lambda = 1$ GeV [5]. From the scalar meson condensates at $T = 0$, $\sigma_0 = 0.14$ GeV and $\sigma_8 = -0.03$ GeV, we obtain quark condensates in accordance with values from PCAC relations [5] within the error bars. Since we treat the coefficients $\varepsilon_0$, $\varepsilon_8$ of $\langle\sigma_0\rangle$ and $\langle\sigma_8\rangle$, and $\bar m$, $m_s$ of $\langle\bar qq\rangle$ and $\langle\bar ss\rangle$, as temperature independent, we will use Eqs. (14) at all temperatures to translate our results for meson condensates into quark condensates. We also check that the pseudoscalar meson mass squares, in particular $m_\pi^2$ and $m_K^2$, are linear functions of the symmetry breaking parameters $\varepsilon_0$, $\varepsilon_8$. Varying $\varepsilon_0$, $\varepsilon_8$ while keeping the other couplings fixed, we can simulate the sigma model at unphysical meson masses. Since the current quark masses are assumed to depend linearly on $\varepsilon_0$ and $\varepsilon_8$, an arbitrary meson mass set can be related to a mass point in the $(m_{u,d}, m_s)$-plane by specifying the choice of $(\varepsilon_0, \varepsilon_8)$. This may be useful for comparing our results for meson (and quark) condensates with lattice simulations of the chiral transition. The thermodynamics of the linear sigma model is determined by the partition function with the Lagrangian of Eq. (12). We will treat $Z$ again in a saddle point approximation. As mentioned above, the saddle point approximation amounts to the leading order of a $1/N$ expansion; in this model $N = 2N_f^2 = 18$. Note that $\mathcal{L}$ of Eq. (12) would be O(N) invariant if $f_2 = 0$ and $g = 0$. Our input parameters lead to non-vanishing values of $f_2$ and $g$; therefore the O(N) symmetry is only approximately realized.
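Relations of the type referenced as Eqs. (14) above follow from matching the symmetry-breaking terms; since the equations themselves are not displayed here, the standard leading-order PCAC (GMOR-type) relations are quoted below as a hedged illustration of their form. The paper's exact coefficients may differ by convention.

```latex
% Standard leading-order PCAC/GMOR-type relations illustrating how meson masses
% and decay constants tie quark condensates to the symmetry-breaking parameters;
% the coefficients of the paper's Eq. (14) may differ by convention.
\begin{align}
  m_\pi^2 f_\pi^2 &= -2\,\bar m\,\langle \bar q q \rangle , \\
  m_K^2   f_K^2   &= -(\bar m + m_s)\,
                     \tfrac{1}{2}\big(\langle \bar q q \rangle + \langle \bar s s \rangle\big) .
\end{align}
```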
We calculate the effective potential as a constrained free energy density $U_{\mathrm{eff}}(\xi_0, \xi_8)$, that is, the free energy density of the system under the constraint that the average values of $\sigma_0$ and $\sigma_8$ take prescribed values $\xi_0$ and $\xi_8$. The values $\xi_0^{\min}$ and $\xi_8^{\min}$ that minimize $U_{\mathrm{eff}}$ give the physically relevant, temperature dependent vacuum expectation values, i.e. $\langle\sigma_0\rangle = \xi_0^{\min}$, $\langle\sigma_8\rangle = \xi_8^{\min}$. Hence we start with the background field ansatz $\sigma_0 = \xi_0 + \sigma_0'$, $\sigma_8 = \xi_8 + \sigma_8'$, where $\sigma_0'$ and $\sigma_8'$ denote the fluctuations around the classical background fields $\xi_0$ and $\xi_8$. All other field components are assumed to have zero vacuum expectation value, i.e. $\sigma_\ell = \sigma_\ell'$ for $\ell = 1, \dots, 7$ and $\pi_\ell = \pi_\ell'$ for $\ell = 0, \dots, 8$. The relation between the effective potential $U_{\mathrm{eff}}$ and $Z$ is given by the constrained functional integral of Eq. (18). Next we insert the background field ansatz (17) in $\mathcal{L}$ and expand the Lagrangian in powers of the fluctuation matrix $\Phi'$. The constant terms in $\Phi'$ lead to the classical part of the effective potential, $U_{\mathrm{class}}$. Linear terms in $\Phi_\ell'$ vanish for all $\ell = 0, \dots, 8$ due to the $\delta$-constraints in Eq. (18). Quadratic terms in $\Phi'$ define the isospin multiplet masses $m_Q^2$, where $Q = 1, \dots, 8$ labels the multiplets. The cubic part in $\Phi'$ will be neglected, while the quartic term $\mathcal{L}^{(4)}(\Phi')$ is quadratized by introducing an auxiliary matrix field $\Sigma(\vec x, \tau)$. This is a matrix version of a Hubbard-Stratonovich transformation [6].
In the saddle point approximation we drop the integration $\mathcal{D}\Sigma$ and use an SU(3)-symmetric ansatz with a diagonal matrix $\Sigma = \mathrm{diag}(s, s, s)$. Hence the effect of the quadratization procedure is to induce an extra mass term $(s + \mu_0^2)$ and a contribution $U_{\mathrm{saddle}}$ to $U_{\mathrm{eff}}$ which is independent of $\xi_0$ and $\xi_8$. This way we finally end up with an expression for $\hat Z$ in which $\phi_Q'$ denotes $\pi_Q'$ for $Q = 1, \dots, 4$ and $\sigma_Q'$ for $Q = 5, \dots, 8$, and $g(Q)$ is the multiplicity of the isospin multiplet. We have $g(1) = 3$ for the pions, $g(2) = 4$ for the kaons, and $g(3) = 1 = g(4)$ for $\eta$, $\eta'$, respectively. Correspondingly, the multiplicities for the scalar nonet are $g(5) = 3$, $g(6) = 4$, $g(7) = 1$, $g(8) = 1$ for the $a_0$, $K_0^*$, $f_0$, $f_0'$ mesons. Thus we are left with an effectively free field theory. The only remnant of the interaction appears in the effective mass squared $X_Q^2$ via the auxiliary field $s$. The choice of a self-consistent effective meson mass squared has been pursued already in Refs. [7,8]. This is an essentially new ingredient compared to earlier calculations of the chiral transition in the linear sigma model [9]. The positive contribution of $s$ to the effective mass extends the temperature region where imaginary parts in the effective potential can be avoided. In general, imaginary parts are encountered when the effective mass arguments of logarithmic terms become negative. They are an artifact of the perturbative evaluation of the effective potential and of no physical significance, as long as the volume is infinite. In our application the optimized choice for $s$ increases as a function of temperature and leads to positive $X_Q^2$ over a wide range of parameters.
Gaussian integration over the fluctuating fields $\Phi'$ in Eq. (19) gives $\hat Z$ as a product over modes, where $\omega_n^2 \equiv (2\pi n T)^2$ (23) denote the Matsubara frequencies. In contrast to our former approach [8] we keep all Matsubara frequencies and evaluate the sum over $n \in \mathbb{Z}$ in the standard way, see e.g. [10]. The result is a closed expression for $\hat Z$. Here we have indicated that $\hat Z$ and $U_{\mathrm{eff}}$ still depend explicitly on the auxiliary field $s$.
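For reference, the "standard way" of evaluating this kind of sum is the textbook bosonic Matsubara identity, quoted here up to an $\omega_k$-independent constant; this is a standard result, not copied from the paper.

```latex
% Standard bosonic Matsubara-sum identity (see e.g. finite-temperature field
% theory textbooks); constants independent of omega_k are dropped.
\begin{equation}
  T \sum_{n \in \mathbb{Z}} \ln\!\big(\omega_n^2 + \omega_k^2\big)
  = \omega_k + 2T \ln\!\big(1 - e^{-\omega_k/T}\big) + \text{const},
  \qquad \omega_n = 2\pi n T .
\end{equation}
```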
The linear sigma model is a renormalizable theory, and in principle the zero-point energy can be calculated and renormalized. We do, however, believe that this model is an effective description of QCD already at tree level. Now we are prepared to determine the temperature dependence of the order parameters $\langle\xi_0\rangle(T)$, $\langle\xi_8\rangle(T)$ from the minima of $U_{\mathrm{eff}}(\xi_0, \xi_8; s^*)$. Thermodynamic quantities like energy densities, entropy densities and pressure can be derived from $Z$ in the standard way if $Z$ is approximated as $\hat Z$ at the saddle point. For the parameters of Table 1 we vary the temperature and determine for each $T$ the extremum of $U_{\mathrm{eff}}$ as a function of $\xi_0$, $\xi_8$ and $s$. The extremum is a minimum with respect to $\xi_0$ and $\xi_8$ and a maximum with respect to $s$. For the search of the saddle point it is necessary to continue the effective potential into the region of complex effective masses $X_Q^2$ (cf. Ref. [11]). In Fig. 2 we show the variations of $\langle\bar qq\rangle(T)/\langle\bar qq\rangle_{T=0}$ and $\langle\bar ss\rangle(T)/\langle\bar ss\rangle_{T=0}$ as functions of temperature, obtained from $\langle\xi_0\rangle(T)$ and $\langle\xi_8\rangle(T)$ with the help of Eq. (14). We observe a gradual decrease of the light quark condensate, whereas the strange quark condensate stays almost constant.
In our lowest order calculation the temperature dependence of these masses is determined by the temperature dependence of the condensates [cf. Fig. 3]. The masses $m_\pi^2$ and $m_{\sigma_{\eta'}}^2$ become degenerate after the crossover. The $\pi$–$K$ splitting is increased rather than reduced. Accordingly, the strange meson contribution to the energy density in this temperature region is reduced compared to the low-temperature hadron gas.
In Fig. 4 we give the energy density $u/T^4$, entropy density $s/T^3$ and pressure $p/T^4$. Sizeable contributions to $u$ come mainly from 8 degrees of freedom: the pions, the kaons and the $f_0'$ meson.
Discussion of the Results
For low temperatures the physics of the nonlinear $SU(2)\times SU(2)$ and linear $SU(3)\times SU(3)$ $\sigma$-models is identical. In Fig. 1 we show the light quark condensates calculated in both models. Above $T \approx 120$ MeV the extra degrees of freedom in the $SU(3)\times SU(3)$ calculation become important. At higher temperatures $T \gg T_0 \approx 190$ MeV the linear sigma model will certainly fail as an effective model for QCD due to the lack of quark-gluon degrees of freedom. Nevertheless it would be interesting to study at what temperature the full $SU(3)\times SU(3)$ symmetry is restored. At very high temperatures the effective potential becomes proportional to $\sum_Q X_Q^2 T^2$, the linear terms proportional to $\sigma_0$ and $\sigma_8$ in the masses of the $0^+$ and $0^-$ mesons cancel, and temperature tries to fully restore the broken symmetry.
Finally we remark that the crossover occurs around $T_0 \approx 190$ MeV, which is rather close to the Hagedorn temperature $T_H \sim 160$ MeV. This may not be entirely accidental. In our model the $1/N$ expansion means a large number of flavours, since $N = 2N_f^2$. In order to keep QCD an asymptotically free theory, the number of colours $N_c$ also has to increase. Correspondingly, our approximation is similar to Hagedorn's description of the hadron gas as a resonance gas. We expect that corrections from subleading terms in our $1/N_f$ expansion will implicitly amount to corrections also to the large-$N_c$ limit. The chiral transition for unphysical values of strange quark and light quark masses will be investigated in a future publication [11].
Fig. 4: Entropy density $s/T^3$, normalized energy density $u/T^4$ and pressure $p/T^4$ vs temperature. Errors are only indicated for $s/T^3$. | 2014-10-01T00:00:00.000Z | 1994-10-11T00:00:00.000 | {
"year": 1994,
"sha1": "e00346c4f4597b37c086d5121d2cdb4772a8c9ad",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c03af58c55b3774b2e20fd638a3b39725b35179e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
242569102 | pes2o/s2orc | v3-fos-license | Research Ethics Training Needs in Thailand and Vietnam
Background To describe the types of research being conducted and the availability of research ethics training and research ethics review in Thailand and Vietnam. Methods An English survey with four major domains, Research Area, Societal Conditions, Research Ethics, and Basic Information, was translated into Thai and Vietnamese by native training partners from the NIH Fogarty Research Ethics Training Program. Setting/Participants The survey was administered in two modes: an online survey distributed via an email link in Thailand, and an onsite paper survey in Vietnam. Participants were Thai and Vietnamese trainees and investigators from prestigious universities. Training capacity was considered in terms of teachers, materials and delivery platform. Our Medical Education (ACME) Fogarty Research Ethics Training Program, funded by the US Fogarty International Center, aimed to increase the capacity for research ethics training by applying the "train the trainer" model, using tele-education and in-person training sessions. 30 Prior studies have shown that "train the trainer" programs are very effective in building capacity in regions that lack local experts in a field. 31,32 Our training program has trained more than 20 scholars in Vietnam, Thailand and Taiwan from 2014 to 2017; seven of the scholars have taken leadership positions in their Institutional Review Boards and ethics training programs.
Conclusions We identified gaps in research ethics training in these two South East Asian countries undergoing rapid socioeconomic transition and highlighted opportunities for future curricular focus.
Background
The quest to find cures and treatments for diseases has propelled the research enterprise globally; much of this progress was made possible by animal and human experiments, and there is a need to train researchers to conduct research ethically. In the words of Claude Bernard: "The principle of medical and surgical morality consists in never performing on man an experiment which might be harmful to him to any extent, even though the result might be highly advantageous to science, i.e. to the health of others." 1 Thailand and Vietnam are two countries undergoing rapid economic transition, making them attractive locations for foreign investment and healthcare developments. 8-10 A report from The Association of Southeast Asian Nations (ASEAN) Economic Community (AEC) showed that Southeast (SE) Asian countries are competing for influence as medical hubs for billions of dollars of foreign direct investment (FDI). Between 2016 and 2017, net FDI inflows increased from $2.81 bn to $8 bn for Thailand, and from $11.8 bn to $14.1 bn for Vietnam, according to The World Bank Report Year 2018. 11 FDI has been rising rapidly in recent years, according to the ASEAN Investment Report Year 2018 and Bloomberg News. 12,13 To maintain their reputations as auspicious potential healthcare partners, Thailand and Vietnam have aggressively invested in public and private healthcare infrastructure and workforce development, as seen in their increasing health expenditures as a total percentage of GDP from 1995 to 2014, when Thailand increased from 3.5% to 4.1% and Vietnam from 5.2% to 7.1%. 14-16 Pharmaceutical and medical device developments in major and provincial locations in the two countries validate the market potential for burgeoning returns and positive impacts on healthcare accessibility and affordability. 17,18 If foreign partners align their expected outcomes with Thailand and Vietnam's social and economic goals and comply with government regulations, there are opportunities for mutual economic, healthcare, and quality-of-life advancements. 19 In Thailand, the National Research Council of Thailand was established in 1975. A 12-rule "Guidelines for Biomedical Research Involving Human Subjects" was distributed to research institutions throughout the country. 20 In addition, the Medical Council of Thailand, created by the Medical Profession Act, issued rules on "the observance of the medical ethics", acting as additional oversight to medical ethics review at individual institutions in Thailand. 21 Today, ethics in research is a discipline of its own that requires rigorous training. In the Research Area domain, participants were asked to respond to several questions regarding their area of research. Participants could select multiple responses. For example, for the question "What has been your main research area for the past five years?", there were seven large categories to choose from: non-communicable disease, genetics, infectious diseases, maternal and child health, non-clinical fields, nutrition and metabolism, public and environmental health, and other. Each of these categories had specific subcategories. In consultation with our training partners (including the University of Medicine and Pharmacy at Ho Chi Minh City and Hue University of Medicine and Pharmacy), it was determined that two modes of survey administration were needed to fit societal survey practices. The online survey sent via email was created for Thailand based on the recommendation of our training partners, and an onsite paper survey was found to be the preferred mode for Vietnam.
In Thailand, at the recommendation of the Mahidol University information technology department, a URL link to an online survey was emailed by co-investigators at Mahidol University to faculty members (academia only, not including support staff) at Mahidol University, Burapha University (Chonburi Province) and Naresuan University (Phitsanulok Province), using email lists provided by those universities. The online survey was completed between October and December 2016.
In Vietnam, surveys were administered through convenience sampling in October 2016. Among the respondents, 23% were male and 75% were female from Thailand; 41% were male and 57% were female from Vietnam (Table 1). The proportion of respondents who had a PhD/doctoral degree was 47% from Thailand compared to 10% from Vietnam. The majority of the respondents from both countries (>95%) work in their home country; 81% (Thailand) and 53% (Vietnam) of the respondents work in academic institutions.
Among those who conduct research, 81% in Thailand and 92% in Vietnam reported that their research involves human subjects (Table 2). Among these, 26% each from Thailand and Vietnam reported working on "clinical observations" (imaging, EKG, exams [physical exams and lab tests], study of the nature of the disease); 29% from Thailand and 26% from Vietnam reported that their work involves "clinical interventions" (drugs, devices, biopsies, imaging with contrast).
In terms of [general] research ethics training received in these countries, 34% of the respondents in Thailand and 45% of the respondents in Vietnam had received formal education through their degree programs (i.e. medical doctor, nursing, social work, etc.).
In addition, 50% and 27% of respondents received ethics training on-the-job in Thailand and Vietnam respectively.
We examined the number of respondents who work on human subjects research and had prior training in ethics (not shown in tables). Among the respondents from Thailand who reported conducting research involving human subjects, 84% reported having ethics training; however, only 44% reported having formal training, i.e. through their health professional degree programs. Among respondents from Vietnam who reported conducting human subjects research, 66% reported having ethics training. Among those who received training, 72% reported having formal training (through their health professional degree programs).
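For readers who wish to compare such proportions formally, a two-proportion z-test is the natural tool. The sketch below uses the reported 84% vs 66% training rates, with illustrative denominators of 100 respondents per country, since the exact group sizes are not restated here.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of a pooled two-proportion z-test; x1/n1 and x2/n2 stand for the
# Thai and Vietnamese training rates. The n values below are placeholders only.
def two_prop_ztest(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                   # two-sided p-value

z, p = two_prop_ztest(x1=84, n1=100, x2=66, n2=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```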
With regard to institutional review, see Table 3.
Discussion
Our needs assessment survey fielded in Thailand and Vietnam has allowed us to gain insight into the current landscape of research ethics training and research ethics review in these two SE Asian countries. By identifying the training gaps at the major training centers of these two countries, we can begin to identify opportunities to address research ethics needs in each. Furthermore, the survey provides a unique opportunity to expand research ethics training programs in SE Asia that are tailored to each country's needs and unique cultural context.
According to the Accreditation of Human Research Protection Programs 2018 report, 94% of institutions have their own IRB and only 6% do not. Most research institutions, universities, and healthcare facilities have at least one IRB, and the majority have more than one; 25 in addition, there are a number of independent or commercial IRBs. 26 Our finding that 97% of Thai survey respondents reported having established ethical guidelines for research is reassuring and higher than previous estimates of 88%; 20 however, we found that 6% reported not having an institutional review board at their institutions, a potential problem for research ethics reviews. Similarly, in Vietnam, even though 89% reported having established ethical guidelines at their institutions, 18% reported that they do not have an institutional IRB (5% did not respond to this question).
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing Interests
The authors declare that they have no competing interests.
The study described the current research ethics training landscape in Thailand and Vietnam.
The survey was designed with considerable input from our local training partners in Thailand and Vietnam, and the translation was done by native Thai and Vietnamese research scholars, taking into consideration the language, social and cultural differences between East and West.
Survey questions were specifically designed to probe the unmet needs in research ethics training among the survey participants.
Because the online email survey in Thailand and the paper survey using a convenience sample in Vietnam were conducted at leading universities, our findings may underestimate the current gaps in research ethics training across the overall landscape.
Two different survey modes were used based on local conditions, which may limit our ability to directly compare the results between Thailand and Vietnam. | 2019-10-03T09:12:36.624Z | 2019-09-26T00:00:00.000 | {
"year": 2019,
"sha1": "fad71ad4605f55c3d5372eb9c6363617593e1572",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-5896/v1.pdf?c=1592538684000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e116bc2d678c5e0c3393fdb3b3b42e4172b5f875",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": []
} |
14772260 | pes2o/s2orc | v3-fos-license | Increasing gene discovery and coverage using RNA-seq of globin RNA reduced porcine blood samples
Background Transcriptome analysis of porcine whole blood has several applications, which include deciphering genetic mechanisms for host responses to viral infection and vaccination. The abundance of alpha- and beta-globin transcripts in blood, however, impedes the ability to cost-effectively detect transcripts of low abundance. Although protocols exist for reduction of globin transcripts from human and mouse/rat blood, preliminary work demonstrated these are not useful for porcine blood Globin Reduction (GR). Our objectives were to develop a porcine specific GR protocol and to evaluate the GR effects on gene discovery and sequence read coverage in RNA-sequencing (RNA-seq) experiments. Results A GR protocol for porcine blood samples was developed using RNase H with antisense oligonucleotides specifically targeting porcine hemoglobin alpha (HBA) and beta (HBB) mRNAs. Whole blood samples (n = 12) collected in Tempus tubes were used for evaluating the efficacy and effects of GR on RNA-seq. The HBA and HBB mRNA transcripts comprised an average of 46.1% of the mapped reads in pre-GR samples, but those reads reduced to an average of 8.9% in post-GR samples. Differential gene expression analysis showed that the expression level of 11,046 genes were increased, whereas 34 genes, excluding HBA and HBB, showed decreased expression after GR (FDR <0.05). An additional 815 genes were detected only in post-GR samples. Conclusions Our porcine specific GR primers and protocol minimize the number of reads of globin transcripts in whole blood samples and provides increased coverage as well as accuracy and reproducibility of transcriptome analysis. Increased detection of low abundance mRNAs will ensure that studies relying on transcriptome analyses do not miss information that may be vital to the success of the study. Electronic supplementary material The online version of this article (doi:10.1186/1471-2164-15-954) contains supplementary material, which is available to authorized users.
Background
Blood is a valuable resource to probe an animal's physiological and pathological status, as well as to obtain repeated samples before harvest; for example, for monitoring dynamic changes in gene expression in response to disease, treatment, or aging, where the onset of the gene expression response is not known. However, transcriptomic analysis of blood samples is a challenge, since blood is composed of heterogeneous cell types including red blood cells (99%), platelets (1%) and white blood cells (<1%; e.g., neutrophils, monocytes, basophils, lymphocytes and eosinophils) [1,2]. In human blood, HBA and HBB are the most abundant transcripts (~52-76%) [3,4]. The high level of globin transcripts in blood has been reported to be the most limiting factor for accurate and sensitive detection of gene expression, especially for less abundant transcripts [3][4][5]. This issue is a great concern for sequence-based approaches, in which the globin transcripts are highly abundant and limit the potential coverage and detection of other transcripts from blood [3].
To date, several globin RNA reduction protocols have been successfully applied to gene expression studies in human [6][7][8][9]. GLOBINclear TM (Ambion, Austin, TX, USA), a commercial product widely used in human clinical research, removes up to 95% of the HBA and HBB transcripts in human whole blood samples and improves the efficacy of gene expression assays [4,10,11]. Further approaches developed by Affymetrix (Affymetrix Inc., Santa Clara, CA, USA) [5,11] or PNA Bio Inc. (Thousand Oaks, CA, USA) [9,10] also have differential reduction rates of globin transcripts in human blood. Globin RNA reduction improved the sensitivity and reproducibility of high throughput mRNA expression analysis of whole human blood samples [3][4][5]7,9,10]. There is, however, neither a commercial GR product available nor any literature demonstrating the efficiency and effects of GR at global level for porcine whole blood [2].
Our objectives were to develop a porcine specific GR protocol and to evaluate the effects of GR treatment on gene discovery and coverage in RNA-seq experiments for swine.
Comparisons of globin reduction methods
To determine the suitability of the GR process for porcine whole blood samples, we initially evaluated the efficacies of three distinct methods (GLOBINclear TM , biotinylated PNA and RNase H) with whole blood samples drawn from 12 pigs collected in either PAXgene TM (n = 3) or Tempus TM (n = 9) tubes. To evaluate and compare GR efficiency, we performed qPCR analysis of HBA and HBB transcripts with a pooled sample for the GLOBINclear and PNA methods and 5 randomly selected samples for the RNase H method (Additional file 1: Table S1). The GLOBINclear TM -Human Kit (Ambion, Austin, TX, USA), commonly used for human samples, seemed to have merit as it employs a non-enzymatic magnetic method, but its reduction efficiency in pig barely reached 64% and 67% for HBA and HBB transcripts, respectively (Additional file 1: Figure S1). The manufacturer confirmed that porcine HBA and HBB sequences have low sequence homology to the corresponding human oligonucleotide probes used in the GLOBINclear TM -Human Kit, but the degree of dissimilarity is not known because the human probe sequences used in the GLOBINclear TM Kit are not publicly available. Next, we designed porcine specific biotinylated PNA oligonucleotides and used them with the GLOBINclear TM Kit. This PNA oligo method, however, reduced levels of HBA and HBB transcripts by only 40% and 34%, respectively (Additional file 1: Figure S1). Third, we evaluated the RNase H mediated GR method using porcine specific oligonucleotides modified from the Affymetrix GeneChip GR Protocol developed for reduction of human globin transcripts [5]. We examined the sequence similarities of HBA and HBB between human and pig, especially at the oligonucleotide sequences in the 3′ UTR, using Clustal Omega (Additional file 1: Figure S2) [12]. Due to a lack of consensus, we designed two sets of porcine specific oligonucleotides each for HBA and HBB (Table 1). This revised RNase H mediated GR protocol resulted in an average reduction of 94% of HBA and 92% of HBB transcripts from porcine whole blood samples (Additional file 1: Figure S1). Thus we determined that the RNase H GR method using our custom designed porcine specific oligos was the most efficient of the three GR methods tested here, and we confirmed its efficacy by RNA-seq (Additional file 1: Table S2).
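The core design check here, that each antisense oligo is the exact reverse complement of its globin target site, is simple to script. A hedged sketch follows; the sequences are made-up placeholders, not the actual Table 1 oligos.

```python
# Hedged sketch of the oligo-design check described above: verify that a candidate
# antisense oligo is the exact reverse complement of its target site in a globin
# 3' UTR. Sequences are placeholders, not the paper's Table 1 oligos.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.upper().translate(COMP)[::-1]

def find_target(oligo: str, transcript: str) -> int:
    """Return the start position of the oligo's target site, or -1 if absent."""
    return transcript.upper().find(revcomp(oligo))

utr = "AGTCCTGACTGATGGGCATCAAAGTTCACCTAGCAAGGA"   # placeholder 3' UTR
oligo = revcomp(utr[10:30])                        # 20-mer antisense oligo
print(oligo, "targets position", find_target(oligo, utr))   # -> position 10
```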
Performance of GR protocol in an RNA-Seq experiment
Having determined the most valid and efficient GR method, we evaluated the effects of the RNase H GR treatment on gene discovery and coverage in an RNA-seq experiment. Because the study comparing GR methods included samples collected using different blood collection tubes and RNA isolation methods, this evaluation used a different set of 12 porcine blood samples, all collected in Tempus TM tubes and with RNA isolated by the magnetic bead based MagMax TM kit.
More than 653 million (M) sequence reads generated from the 12 pre- and post-GR samples passed Illumina's CASAVA (v.1.8) filtration (Table 2). These reads were then aligned to the pig genome build 10.2 with TopHat (v.2.0.8). After GR treatment, total filtered reads and mapped reads were reduced by an average of 6.1 M and 6.8 M reads, respectively, and globin reads were reduced by an average of 11.4 M reads. The percentage of globin reads among all aligned reads averaged 46.1%, and of these, 84.7% were removed by GR treatment. The proportion of globin reads to mapped reads was 46.1% and 8.9% in pre- and post-GR samples, respectively, and the proportions of HBA and HBB reads to mapped reads were significantly reduced to 5.2% from 26.1% and to 3.7% from 20.1%, respectively (p <0.001, Figure 1).
Table 1: Porcine specific globin oligonucleotides used in RNase H-mediated globin reduction assay.
Considering that human globin transcripts constitute 50-70% of blood RNA [3,4], the level of pig globin transcripts in pre-GR samples is comparatively low. A possible explanation for the lower level of porcine globin transcripts is that the pigs used in this study were only 1-2 months old, an age associated with rapid decreases in erythrocyte population size and hemoglobin concentration. Although pigs at birth have hematological values similar to adult pigs, by three days of age a 25% reduction in hemoglobin concentration has occurred, and hemoglobin concentration then increases gradually from the age of 3 months due to the pig's tremendous early growth rate, as much as eight times faster than humans [13,14]. Thus, we expect the GR protocol will reduce even more globin transcripts in newborn and adult pig blood RNAs.
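Such read proportions can be computed directly from the alignments. Below is a hedged sketch using pysam; the HBA/HBB coordinates are placeholders that would need to be replaced with the actual Sscrofa10.2 loci (e.g. from Ensembl), and an indexed BAM file is assumed.

```python
import pysam

# Placeholder globin loci (contig, start, end); look up the real HBA/HBB
# coordinates on pig genome build 10.2 before using this.
GLOBIN_LOCI = {
    "HBA": ("13", 1_000_000, 1_004_000),
    "HBB": ("9", 2_000_000, 2_003_000),
}

def globin_fraction(bam_path: str) -> float:
    """Fraction of mapped reads falling in the globin loci of an indexed BAM."""
    bam = pysam.AlignmentFile(bam_path, "rb")
    mapped = bam.mapped                      # total mapped reads from the index
    globin = sum(bam.count(c, s, e) for c, s, e in GLOBIN_LOCI.values())
    bam.close()
    return globin / mapped

# print(globin_fraction("sample_post_GR.bam"))  # e.g. ~0.089 post-GR on average
```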
Classification of samples based on RNA integrity number (RIN)
We examined the RIN changes after GR treatment of pig blood RNA and their effect on sequencing results. The quality of the RNA was not changed overall (p >0.1); though 8 samples showed a reduction in RIN after GR treatment, only 3 samples showed a marked decrease in RIN (0.4-0.6) after GR. However, there was a reduction in RNA yield following GR treatment, with only 33.3-78.2% of total RNA being recovered. Studies on GR treatment in humans also reported reductions in RNA yield, ranging from 52-95% of total RNA [3,4,7,15]. The reasons for the significant reduction and the wide variation in RNA yield are not clear. To offset the RNA loss accompanying GR treatment, it would be desirable to prepare sufficient amounts of initial RNA. Because we identified possible bias introduced by RIN in the preliminary sequencing results (data not shown), we empirically classified the samples into three categories based on RIN after GR treatment: high (RIN ≥7), moderate (5 ≤ RIN <7), and low (RIN <5), representing ideal, critical and inferior RNA integrity for RNA-seq experiments, respectively.
Increased coverage of non-globin genes in post-GR samples
Following an approach similar to that described by Mastrokolias et al. [3], we investigated the effect of GR on enhancing the coverage of non-globin genes and the sensitivity of gene expression detection. Read count data were normalized by library size, and DE genes between pre- and post-GR samples were determined using edgeR (see Methods). Compared with pre-GR samples, 11,046 genes showed a higher level of detection (expression), and 34 genes (Table 3), excluding HBA, HBB and ENSSSCG00000014727 (hemoglobin subunit beta-like), showed a lower level of detection after GR treatment (FDR <0.05) (Figure 2a). We checked for sequence similarities between these 34 genes and the four globin oligonucleotides for possible non-target-specific hybridization, but found none. Figure 2b depicts a heatmap of the normalized log2-transformed expression of the 11,046 genes with a higher level of detection in post-GR samples compared with pre-GR samples. It was observed that a large set of genes in the low RIN samples (within the boxes in Figure 2b) was expressed at considerably lower levels than the corresponding set in the high/moderate RIN samples, both pre- and post-GR. We believe that these are the genes with the greatest degradation in the low RIN samples. We then examined the variation in gene body coverage from 5′ to 3′ in high/moderate and low RIN samples, respectively. Low RIN samples showed a strong bias toward increased coverage at the 3′ end (Figure 3). Among the low-quality RNA samples, pre- and post-GR treatment showed the same trend of bias, which affirmed that the RNase H treatment was not the determining factor. High quality samples showed better coverage from 5′ to 3′ as well as at the ends in both pre- and post-GR treated samples. All low quality samples were biased toward increased coverage at the 3′ end, possibly due to the degradation of RNA. However, the number of unique genes detected did not differ significantly between low and high RIN samples.
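The paper's DE analysis was done with edgeR in R; purely to make the normalization step concrete, here is a hedged Python sketch of library-size (counts-per-million) normalization and a per-gene log fold change, with toy data standing in for the real count matrix.

```python
import numpy as np

# Toy count matrices (genes x samples) standing in for the paired pre/post-GR data.
rng = np.random.default_rng(1)
genes, n = 500, 12
pre = rng.negative_binomial(5, 0.010, size=(genes, n))
post = rng.negative_binomial(5, 0.008, size=(genes, n))

def cpm(counts):
    """Counts-per-million: scale each sample (column) by its library size."""
    return counts / counts.sum(axis=0, keepdims=True) * 1e6

log_fc = np.log2(cpm(post) + 1).mean(axis=1) - np.log2(cpm(pre) + 1).mean(axis=1)
print("genes with higher post-GR detection:", int((log_fc > 0).sum()))
```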
The lower detection levels of a small number of genes in post-GR samples could also be due to the effect of RIN. We investigated all genes with a decreased level of detection after GR (log fold change <0) in each sample independently, regardless of statistical significance (Additional file 1: Figure S3). We observed that the samples with the most notable RIN change after GR (RIN reduction ≥0.4) had the highest numbers of genes with decreased expression levels (samples 4, 7 and 8; Additional file 1: Figure S3). In addition to the effect of RIN, technical variation or sampling effects could also contribute to differences in detection levels of genes.
Increased number of non-globin genes identified in post-GR samples
The number of detected genes (read counts >5) in post-GR samples was significantly increased compared to pre-GR samples (paired t-test) (Figure 4a). GR treatment increased the gene detection rate by 8.6% in high RIN samples, 2.2% in moderate and 5.4% in low RIN samples. It was also noticed that the number of additional genes identified in post-GR samples was higher for samples with a high RIN (Figure 4b). It may be noted that the detection rate was higher in high RIN samples compared to low RIN samples despite being sequenced at half the depth. Pre-GR, an average of 93 genes were uniquely detected in the high RIN group, whereas 243 genes were uniquely detected in the moderate/low RIN group. Post-GR, the corresponding uniquely detected genes in the two groups were 1,157 and 753, respectively (Additional file 1: Figure S4).
We next determined the genes expressed in porcine whole blood using all 12 samples, based on the criterion that a gene was detected at read counts above 5 in at least 5 of the 12 samples. We identified 12,588 genes in post-GR samples and 11,826 genes in pre-GR samples, with an overlap of 11,773 genes (Figure 5). Excluding the overlap, 815 genes were detected only in post-GR samples, whereas 53 were specific to pre-GR samples. The small number of genes found specific to pre-GR samples may be due to the effect of RIN or technical variation. A comparison of the mean expression of the set of 11,773 genes detected in both pre- and post-GR samples and the 815 genes detected only in post-GR samples revealed increased expression in post-GR samples (Additional file 1: Figure S5). The mean expression of the 815 additional genes in post-GR samples was well below the lower quartile of the expression levels of genes common to both pre- and post-GR samples. Thus GR treatment increases the ability to detect genes expressed at very low levels.
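The detection rule stated above is a one-line filter over the count matrix; a hedged sketch on toy data:

```python
import numpy as np

# "Expressed" = more than 5 reads in at least 5 of the 12 samples, as stated above.
rng = np.random.default_rng(2)
counts = rng.poisson(3, size=(20_000, 12))   # toy gene x sample count matrix

detected = (counts > 5).sum(axis=1) >= 5     # boolean mask, one entry per gene
print("expressed genes:", int(detected.sum()))
```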
Conclusions
The porcine-specific GR protocol described here successfully removed a significant proportion of the HBA and HBB transcripts prior to sequence analysis. The range of gene discovery from RNA sequencing was extended, with significant increases in the number of identified genes via improved coverage. Our DE analyses using the GR samples showed increased sensitivity, with no strong negative effects observed as a result of the GR protocol. We also demonstrated the effects of RIN on blood RNA-seq analyses. Thus, the GR protocol, incorporated into porcine blood transcriptomics, will help advance pig physiological, pathological and blood biomarker studies by providing improved sensitivity and broader coverage of the blood transcriptome.
Blood samples and RNA isolation
Animal protocols were approved by the Kansas State University and University of Alberta Animal Care and Use Committees. A total of 24 blood samples were used to conduct two independent studies: a comparison of three GR methods to select the best method, and an evaluation of the effects of the selected GR method on an RNA-seq experiment. For the first study, 3 mL blood samples were taken from 9 pigs, 1-2 months of age, produced from Landrace x Large White crosses selected from a commercial population used in the Porcine Reproductive and Respiratory Syndrome Host Genetics Consortium (PHGC) studies [16]. For the second study, evaluating the effects of the RNase H-mediated GR protocol on RNA-seq, another set of 12 blood samples was drawn from crossbred Duroc x (Landrace x Yorkshire) pigs in a PHGC population. Three mL of blood from each pig at 1-2 months of age was collected into Tempus TM Blood RNA Tubes at Kansas State University. Total RNA was isolated using the MagMax TM for Stabilized Blood Tubes RNA isolation kit according to the manufacturer's protocol.
RNA concentration was quantified using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA) and RNA quality was assessed using an Agilent Bioanalyzer 2100 (Agilent Technologies, Inc., Santa Clara, CA, USA). To determine an accurate 28S/18S rRNA ratio in the pig, we aligned the human 28S sequence against pig genome build 10.2 using BLAST and identified 97-100% similarity on pig chromosome 6: 871128-866484 (Ensembl release 73). The sizes of the 28S and 18S genes in the pig were estimated to be 4645 bp and 2302 bp, respectively, yielding an rRNA ratio of 2.02, whereas the rRNA ratios in human and mouse are known to be 2.69 and 2.53, respectively (ratios obtained from the GenBank database; M11167 and X03205 in human, and NR003279 and NR003278 in mouse).
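The size-based ratio above can be reproduced with a one-line calculation; the gene lengths below are those estimated in the text.

```python
# Length-based 28S/18S rRNA ratio for the pig, using the gene sizes
# estimated from the BLAST alignment described above.
pig_28s_bp, pig_18s_bp = 4645, 2302
print(round(pig_28s_bp / pig_18s_bp, 2))   # 2.02
# For comparison, the GenBank-derived ratios are 2.69 (human) and 2.53 (mouse).
```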
Design of porcine specific oligonucleotides
We first tested the GLOBINclear TM Human Kit (Ambion, Austin, TX, USA), which hybridizes biotinylated oligonucleotides to the globin transcripts, which are then captured on streptavidin magnetic beads. Second, we designed porcine-specific biotinylated peptide nucleic acid (PNA) oligos to inhibit reverse transcription of the globin transcripts (HBA: 5′-CGAGGCTCCAGCTTA-3′ and HBB: 5′-CACCAGCCACCACCT-3′). Third, we designed four porcine-specific antisense oligonucleotides for HBA and HBB using primer design software (Table 1) to hybridize with the globin transcripts prior to digestion with RNase H. To design the porcine-specific oligonucleotides, we first used Clustal Omega (http://www.ebi.ac.uk/Tools/msa/clustalo/) to align the porcine HBA (ENSSSCT00000008741) and HBB (ENSSSCT00000016076) transcript sequences in the current assembly of the pig genome (build 10.2) with their orthologues from human, mouse and cow obtained from the Ensembl database (http://www.ensembl.org), and then checked the similarity of the 3′ end hybridization sites (Additional file 1: Figure S2).
Globin reduction treatment
GR treatment with the porcine-specific oligonucleotides was performed using a modified Affymetrix GR protocol [5]. In brief, a 10X GR oligonucleotide mix was prepared by adding 100 µL of each of the two HBA oligos at 30 µM and the two HBB oligos at 120 µM per reaction, yielding final concentrations of 7.5 µM for the HBA oligos and 30 µM for the HBB oligos. Three µg of total RNA was denatured at 70°C for 2 min, hybridized with the 10X GR oligonucleotide mix (400 µL) in hybridization buffer (100 mM Tris-HCl, pH 7.6; 200 mM KCl) in a thermal cycler at 70°C for 5 min, then cooled to 4°C. The RNA-DNA hybrids were digested with 2 U RNase H (Ambion) in reaction buffer (100 mM Tris-HCl, pH 7.6; 20 mM MgCl2; 0.1 mM DTT; SUPERase-In) at 37°C for 10 min and cooled to 4°C. The reaction was stopped by the addition of 1.0 µl of 0.5 M EDTA. The RNase H-treated RNA was immediately purified with the RNeasy MinElute Cleanup Kit (Qiagen, Toronto, Canada, Cat. No. 74204) according to the manufacturer's instructions. The RNA quality of the GR-treated samples was assessed using an Agilent Bioanalyzer 2100 (Agilent Technologies, Inc.).
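As a quick sanity check on the stated concentrations, note that each 100 µL stock is diluted into a 400 µL total mix; the short calculation below (variable names are mine) reproduces the reported final values.

```python
# Each of the four oligo stocks contributes 100 µL to a 400 µL mix,
# i.e. a 4-fold dilution of every stock.
stock_volume_ul = 100.0
total_volume_ul = 4 * stock_volume_ul
dilution = stock_volume_ul / total_volume_ul        # 0.25

hba_final_um = 30.0 * dilution                      # per HBA oligo
hbb_final_um = 120.0 * dilution                     # per HBB oligo
print(hba_final_um, hbb_final_um)                   # 7.5 30.0, as stated
```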
Quantitative real-time PCR (qPCR) analysis
We quantified the mRNA levels of the porcine HBA and HBB transcripts by SYBR Green I-based qPCR using a StepOne TM Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). First-strand cDNA was synthesized using SuperScript® II reverse transcriptase (Invitrogen) and random hexamer primers in a final volume of 20 μL following the manufacturer's instructions. SYBR Green I-based qPCR was performed in a total volume of 10 μL per reaction, comprising 2 μL of template, 1 μL of the assay-specific primer mix, 5 μL of the Fast SYBR® Green Master Mix Bulk Pack (Applied Biosystems) and 2 μL of water. The reaction conditions were one cycle of 95°C for 3 min for initial denaturation, followed by 23 cycles of 95°C for 30 s and 60°C for 30 s. The primer sequences are shown in Additional file 1: Table S1.
Library preparation for sequencing
Poly(A)+ fractions from the GR-treated samples and the respective non-GR-treated samples (1.5 μg RNA each) were purified using oligo-dT magnetic beads (Illumina, Inc., San Diego, USA) and used to construct cDNA libraries. The poly(A)+ RNA was primed with random hexamers and fragmented at 94°C for 8 min. First-strand cDNA was synthesized using SuperScript II (Invitrogen), followed by second-strand cDNA synthesis. The cDNA fragments were end-repaired, and a single 'A' nucleotide was added to the 3′ ends to prevent cross-ligation during the adapter ligation step. Individual RNA adapter index oligos were then ligated to the end-repaired cDNA and subsequently amplified using a Veriti thermal cycler (Applied Biosystems). The initial denaturation was performed at 98°C for 30 seconds, followed by 15 cycles of 98°C for 10 seconds, annealing at 60°C for 30 seconds and extension at 72°C for 30 seconds. A final extension was performed at 72°C for 5 minutes, and the reaction was held at 10°C.
The quality and size (~260 bp) of the resulting cDNA libraries were assessed using the High Sensitivity DNA Kit (Agilent Technologies, Inc.) on an Agilent Bioanalyzer 2100 (Agilent Technologies, Inc.). Quantification was performed using the StepOne TM Real-Time PCR System (Applied Biosystems), as suggested in the Sequencing Library qRT-PCR Quantification Guide (Illumina, Inc.). The KAPA SYBR® FAST ABI Prism qPCR Kit (Kapa Biosystems, Inc., Woburn, USA) was used for the qPCR reactions. After quantification, the individual libraries were pooled at 2 nM.
Sequencing was performed on the HiSeq system (Illumina, Inc.). Ten μL of the pooled 2 nM libraries was diluted and denatured. The pooled cDNA libraries (12 pM) were loaded onto the cBot (Illumina, Inc.) for clustering on a flow cell, and single-read cluster generation proceeded using the TruSeq TM SR Cluster Generation Kit v3 (Illumina, Inc., Cat.: FC-930-3001). A portion of each library was diluted to 10 nM and stored at -20°C. Fifty cycles of sequencing-by-synthesis were performed on the HiSeq (Illumina, Inc.) according to the manufacturer's instructions. Real-time analysis and base calling were performed using the HiSeq Control Software version 1.4.8 (Illumina, Inc.).
Bioinformatic analysis
Sequence reads with base quality scores were produced by the Illumina sequencer. Raw reads were processed using Illumina CASAVA (v. 1.8) to filter out low-quality reads. Sequence reads were then aligned to the pig genome reference assembly (build 10.2; [17]) using TopHat 2.0.8 [18] with default parameters. The number of reads uniquely mapped to each gene (Ensembl 71 annotation) was determined using Htseq-count (v0.5.3.p3; [19]). To determine the number of genes identified in each sample, we required a read count >5.
To identify genes detected at decreased or increased levels between the globin-reduced and non-reduced samples, the read count data were analysed using edgeR (version 3.0.8) [20] in R (version 2.15.2), as described [3]. Count data were normalized by library size to account for the different numbers of reads obtained from each sample. To determine differences in detection levels between the two groups, an exact test for the negative binomial distribution was used. Genes were considered differentially detected at FDR <0.05. RSeQC (v2.3.3) [21] was used to examine the read distribution over gene bodies and check for 5′/3′ bias. We used BlastN (v2.2.25) [22] to align the globin oligos against the genes with decreased levels after GR treatment.
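The library-size normalization step can be illustrated with a counts-per-million scaling; note that this is a simplified stand-in for the normalization performed inside edgeR, not a reimplementation of its exact-test DE analysis, and the matrix below is toy data.

```python
import numpy as np

# Toy genes-by-samples count matrix; real data would come from Htseq-count.
counts = np.array([[10.0, 12.0, 200.0],
                   [ 0.0,  3.0,  50.0],
                   [ 5.0,  8.0, 120.0]])

lib_sizes = counts.sum(axis=0)       # total mapped reads per sample
cpm = counts / lib_sizes * 1e6       # scale each sample to reads per million
print(np.round(cpm, 1))
```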
Additional file
Additional file 1: Table S1. Primer sequences used in qPCR. Table S2. Blood collection tube, RNA isolation methods, sequence statistics, number of expressed genes and globin reads count in pre-and post-globin reduction samples. Figure S1. qPCR results for HBA and HBB gene expression comparing three different globin reduction methods. Figure S2. Alignment of orthologous HBA and HBB cDNA sequences in human, mouse, cattle and pig. Figure S3. Differential gene expression in pre-and post-GR samples. Figure S4. Individual Venn diagrams showing the number of genes detected by RNA-seq in pre-and post-GR samples. Figure S5. Comparison of the mean expressions of the set of genes detected in both pre-and post-GR samples and the genes detected only in post-GR samples. | 2017-06-30T22:10:26.484Z | 2014-11-04T00:00:00.000 | {
"year": 2014,
"sha1": "20761c3b68bc6471a93ac33692fdb87dbdf8e2c8",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-15-954",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d296b94735073784b8450ccda3e8c794b64da886",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
211792447 | pes2o/s2orc | v3-fos-license | Research on Carbon Accounting Method and Economy of Electric Vehicle Charging Facilities Participating in Carbon Emission Permits Trading
The development of electric vehicles (EV) has been widely established as an important way to ensure energy security and achieve a low-carbon economic transformation worldwide. However, the construction and operation costs of EV charging facilities are high and the profit channel is single, which leaves operators in a long-term loss-making state. It is necessary to study the participation of EV charging facilities in carbon trading in order to find new profit growth points. On this background, firstly, the calculation method for the whole-life-cycle carbon emissions of EV charging facilities is studied for the construction stage and the operation stage, which shows that the carbon emission accounting is technically feasible. Secondly, the economic benefits of EV charging facilities participating in the carbon trading market are analyzed, which shows that participating in carbon trading is economically rational. Finally, a case study of the carbon emission accounting and the economic benefits of EV charging facilities participating in carbon trading is carried out. The results show that the economic benefits of EV charging facilities participating in carbon trading increase and that, with the increase of the charging amount and the carbon trading price, the economic benefits increase continuously.
Introduction
With the advantages of energy saving and environmental protection, EV have become an effective way to address the shortage of energy and resources and serious air pollution. The development of EV has been widely established as an important way to guarantee energy security and achieve a low-carbon economic transformation worldwide [1]. However, the construction and operation costs of EV charging infrastructure are relatively high, the profit channel is single, a large number of charging facility operators have been operating at a loss for a long time, and investment from various sources of capital is insufficient [2]. Therefore, in order to promote the healthy and orderly development of the EV charging facility industry, it is urgent to broaden the profit channels of charging facilities and explore new profit growth points. As the core link of the low-carbon EV industry, charging facilities can increase revenue by participating in the carbon trading market, which requires accounting for the carbon emissions of EV charging facilities participating in carbon trading.
In terms of carbon emission accounting, literature [3] introduces spatial effects into the analysis of the influencing factors of China's traffic carbon emissions. Based on the "top-down" method, carbon emission factors are used to calculate the traffic carbon emissions of various provinces and cities, and the spatio-temporal trends of the traffic carbon emissions of the provinces are analyzed through the calculation results. Literature [4] sorted out the existing carbon verification policies and analyzed carbon accounting based on actual cases from the chemical industry. Literature [5] takes the accounting scope as the entry point, proposes a carbon emission accounting method, and defines the boundary conditions of carbon emission accounting.
In terms of measuring the economic benefits of carbon trading, literature [6] established a carbon trading revenue model for wind power projects based on multi-stage real options and dynamic recursion theory. Literature [7] uses a cost-income economic model of electric bus charging and battery-swapping stations to calculate the economic benefits, taking an electric bus charging station as an example. Literature [8] puts forward a mathematical model of the economic benefits of the international maritime carbon trading mechanism and conducts numerical simulations for river-sea direct container ships to obtain the economic benefits.
It can be seen from the relevant literature that the research on the participation of EV charging facilities in carbon trading is still in the preliminary stage. In this context, this paper first studies the accounting method of carbon emissions in the whole life cycle of EV charging facilities from the two levels of construction stage and operation stage. Secondly, this paper analyzes the economy of EV charging facilities participating in carbon trading market. Finally, this paper analyzes the carbon emission accounting and economic benefits of EV charging facilities participating in carbon trading with an example.
Research on accounting method of carbon emission in whole life cycle of EV charging facilities
The whole life cycle of EV charging facilities mainly includes the construction stage and the operation stage, and carbon dioxide emissions are generated in each stage. This section will study the accounting method of carbon emissions in whole life cycle of EV charging facilities based on the above two stages.
Accounting of carbon emissions at the construction stage
The carbon emissions in the construction stage of EV charging facilities are mainly composed of the emissions from the construction of the charging facilities and the emissions embodied in the purchased equipment. These emissions can be calculated according to the advanced carbon dioxide emission intensities of the related industries. The carbon emission accounting method for the EV charging facility construction stage is therefore:

CE1 = Vei × S + Vec × CD (1)

where CE1 is the carbon emissions during the construction of the EV charging facilities; Vei is the advanced carbon dioxide emission intensity of the real estate industry, with a value of 29.13 kg CO2/m2; Vec is the advanced carbon dioxide emission intensity of the electronic components manufacturing industry, with a value of 31.92 kg CO2 per thousand yuan; S is the building area covered by the EV charging facility; and CD is the acquisition cost of the charging and distribution facilities (in thousand yuan).
Accounting of carbon emissions at the operation stage
The carbon emissions at the operation stage of EV charging facilities are mainly determined by the amount of electricity charged into the vehicles and by the average technical transmission and distribution loss rate of the power supplied for charging. The calculation method is:

CE2,n = Σi EFelec,n × PJi,n / (1 − TDLi,n) (2)

where CE2,n is the carbon emissions at the operation stage in year n; EFelec,n is the CO2 emission factor of the purchased electricity in year n; PJi,n is the annual charging amount for vehicle model i in year n; TDLi,n is the average technical transmission and distribution loss rate of the power supplied for charging EVs of model i in year n; and i denotes the vehicle model, such as large vehicles (buses) and small vehicles (private cars, taxis).
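To make equations (1) and (2) concrete, here is a minimal sketch that reproduces the later case-study figures; the function names are mine, and the loss rate is set to zero, which matches the reported numbers.

```python
def construction_emissions_t(area_m2, equip_cost_kyuan,
                             v_ei=29.13, v_ec=31.92):
    """Eq. (1): construction-stage emissions in t CO2.
    v_ei is kg CO2 per m2 of building; v_ec is kg CO2 per
    thousand yuan of charging/distribution equipment cost."""
    return (v_ei * area_m2 + v_ec * equip_cost_kyuan) / 1000.0


def operation_emissions_t(annual_charge_mwh, ef_elec=0.666, tdl=0.0):
    """Eq. (2): operation-stage emissions in t CO2 per year;
    tdl is the transmission/distribution loss rate (set to 0 here,
    which reproduces the case-study numbers)."""
    return ef_elec * annual_charge_mwh / (1.0 - tdl)


# Case-study inputs: 1,500 m2 site, equipment cost of 1,000 thousand yuan,
# 5 fast chargers at 75 kWh/day and 15 slow chargers at 40 kWh/day.
daily_kwh = 5 * 75 + 15 * 40              # 975 kWh/day
annual_mwh = daily_kwh * 365 / 1000.0     # ~355.9 MWh/year

print(round(construction_emissions_t(1500, 1000), 1))  # 75.6 t (reported 75.62)
print(round(operation_emissions_t(annual_mwh), 2))     # 237.01 t CO2/year
```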
Life cycle carbon emissions accounting
The carbon emissions in the whole life cycle of EV charging facilities are equal to the sum of the carbon emissions in the construction stage and the carbon emissions in the operation stage. The calculation formula is:

CE = Vei × S + Vec × CD + Σn Σi EFelec,n × PJi,n / (1 − TDLi,n) (3)

3. Economic analysis of EV charging facilities participating in carbon trading

EV charging facility operators can trade the carbon emission reductions of EV users in the carbon market and obtain certain benefits. This section analyzes the costs and benefits of EV charging facilities.
Cost analysis of EV charging facilities
Assuming that the exit cost of EV charging facilities is not taken into account, the construction cost of EV charging facilities mainly includes the land cost, building cost, acquisition cost of charging and distribution equipment and monitoring system cost, while the operating cost mainly includes the electricity purchase cost, charging station maintenance cost and labor cost within the operating cycle. The construction and operation cost structures of EV charging facilities are described below.

(1) Construction cost of EV charging facilities

The construction cost of EV charging facilities C1 is mainly composed of the land cost, building cost, acquisition cost of charging and distribution equipment and monitoring system cost:

C1 = CL + CB + CD + CS (4)

where CL is the land cost, CB is the building cost, CD is the charging and distribution equipment cost and CS is the monitoring system cost.

① The land cost CL is an important part of the initial construction investment of charging stations and is mainly determined by the land price per unit area and the land area occupied by the charging facilities:

CL = LP × LS (5)

where LP is the land price per unit area and LS is the land area occupied by the charging facilities.

② The building cost CB is the construction cost of the EV charging facility buildings, mainly including the office area, parking area, power distribution room and monitoring room:

CB = CC + CF (6)

where CC is the construction fee of the office area and parking area, and CF is the construction fee of the power distribution room and monitoring room.
③ The acquisition cost of charging and distribution equipment CD is the most important expense in the initial stage of EV charging facility construction. The charging equipment mainly includes AC charging piles and DC quick chargers; the distribution equipment mainly includes 10 kV switch cabinets, transformers, low-voltage distribution cabinets, etc. The acquisition cost of charging and distribution equipment is calculated as:

CD = Σi P1i × N1i + Σi P2i × N2i (7)

where P1i is the unit price of class i charging equipment, N1i is the purchased quantity of class i charging equipment, P2i is the unit price of class i distribution equipment and N2i is the purchased quantity of class i distribution equipment.
④ The monitoring system cost CS refers to the purchase cost of the equipment and system software for monitoring the EV charging facilities and the power distribution system, charging security and defence monitoring, etc.
(2) Annual operating cost of EV charging facilities

The annual operating cost of EV charging facilities C2,n is mainly composed of the electricity purchase cost, labor cost and charging equipment maintenance cost:

C2,n = CE,n + CH,n + CM,n (8)

where CE,n is the annual electricity purchase cost, CH,n is the annual labor cost and CM,n is the annual charging equipment maintenance cost.
① The electricity purchase cost of EV charging facilities CE,n mainly refers to the electricity purchased from the power supply company to meet charging demand, as well as the electricity required for the normal operation of the equipment in the station and for the work and daily life of the employees; it is mainly determined by the electricity purchase price and the total electricity demand. The annual electricity purchase cost is calculated as:

CE,n = PE × NE (9)

where PE is the unit price of purchased electricity and NE is the quantity of electricity purchased.
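The cost structure in equations (4)-(9) is straightforward to compute. Below is a minimal sketch in which all function names are mine and the example figures are illustrative placeholders, except the 0.45 yuan/kWh purchase price, which is the case-study value.

```python
def construction_cost(land_price, land_area, building_fees,
                      charging_items, distribution_items, monitoring_cost):
    """Eqs. (4)-(7): C1 = CL + CB + CD + CS, all in yuan.
    charging_items / distribution_items are (unit_price, quantity) pairs."""
    c_l = land_price * land_area                                   # eq. (5)
    c_b = sum(building_fees)                                       # eq. (6)
    c_d = sum(p * n for p, n in charging_items) + \
          sum(p * n for p, n in distribution_items)                # eq. (7)
    return c_l + c_b + c_d + monitoring_cost


def annual_operating_cost(purchase_price, energy_kwh, labor, maintenance):
    """Eqs. (8)-(9): C2,n = PE*NE + CH,n + CM,n."""
    return purchase_price * energy_kwh + labor + maintenance


# Example: case-study purchase price, hypothetical annual energy and
# hypothetical labor/maintenance split.
print(annual_operating_cost(0.45, 355_875, labor=100_000, maintenance=50_000))
```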
Economic benefit analysis of EV charging facilities

(1) The revenue composition of EV charging facilities
The main business of EV charging facilities is to provide charging services to customers, which is their main source of revenue. In addition, EV charging facilities enjoy national and local subsidies. During the operation period of EV charging facilities, carbon dioxide emissions are effectively reduced. With the future large-scale development of urban EV and the continuing opening of the carbon emissions trading market, the benefits of CO2 emission reduction will become increasingly prominent, so the contribution of carbon trading income to the income of EV charging facilities will grow larger and larger. In addition, some residual value income will be generated when the project is retired.
It can be seen from the above analysis that the income of EV charging facilities mainly includes four parts: charging service income, carbon emission reduction trading income, national and local subsidy income, and residual value income. The income composition of EV charging facilities is shown in Fig 2.

① The annual revenue of the charging service In is mainly determined by the charging price and the annual charging quantity, i.e. In = Pc × Qn, where Pc is the charging service price and Qn is the annual charging quantity.

④ The residual value income IR is determined by the residual return rate r and the size of the fixed assets, i.e. IR = r × CD, where r is the residual return rate and CD is the acquisition cost of the charging and distribution equipment.
Basic data
In order to verify the practicality and effectiveness of the model, this paper selects an EV charging facility as the research object to calculate the total life cycle carbon emissions and economic benefits involved in carbon trading. Tab 1 shows the basic data of this EV charging facility.
Carbon emission measurement of EV participating in carbon trading
(1) Carbon emissions during construction. According to Tab 1, this EV public charging station occupies an area of 1,500 square meters and the cost of its charging and distribution equipment is 1 million yuan. Therefore, according to equation (1), the carbon emissions of this EV public charging station in the construction stage are 75.62 t CO2.
(2) Carbon emissions during operation. From Tab 1, the CO2 emission factor of this EV public charging station is 0.666 t CO2/MWh, the typical daily charging amount of a fast charging device is 75 kWh and that of a slow charging device is 40 kWh, and the station has five 60 kW DC fast charging devices and fifteen 7 kW AC slow charging devices. According to equation (2), the carbon emissions of this EV public charging station in the operation stage are 237.01 t CO2/year.
In addition, Tab 1 shows that the CO2 emission factor of fuel vehicles is 0.157 t CO2/MWh, from which it can be calculated that the carbon emissions of fuel vehicles covering the same driving distance as the EV would be 372.48 t CO2/year. Thus, the carbon emission reduction of the EV public charging station in the operation stage is 372.48 − 237.01 = 135.47 t CO2/year.
Measurement of the economy of EV participating in carbon trading
EV charging infrastructure is public property. According to local policy, the government reduces the construction and operating costs of charging infrastructure through fiscal subsidies, the free transfer of construction sites for electrical installations, and other measures. At the same time, the construction cost of the supporting distribution network facilities is borne by the power grid enterprises. Therefore, in calculating the construction cost, the land cost and the supporting power grid and road network construction costs can be ignored for now, and mainly the equipment purchase and installation costs are considered.
Tab 1 shows that the electricity purchase price of the EV public charging station is 0.45 yuan/kWh, the selling price of electricity from the fast charging equipment is 1.45 yuan/kWh, the selling price of electricity from the slow charging equipment is 1.25 yuan/kWh, the typical daily charging amount of a single fast charging device is 75 kWh, the typical daily charging amount of a single slow charging device is 40 kWh, and the annual operation and maintenance cost (including labor cost) of the charging station is 150,000 yuan.

1) Calculation of the economic benefits of EV charging facility construction without considering carbon trading. Substituting the data of Tab 1 into the economic measurement tools of sections 2.1 and 2.2, without considering the benefits of carbon trading, the financial indicators are obtained as shown in Tab 2. It can be seen from Tab 2 that, without considering carbon trading, the total initial investment cost is 1 million yuan, the annual operation and maintenance cost is 150,000 yuan, the charging service income is 472,200 yuan, the annual after-tax profit is 211,000 yuan, the cumulative net present value in the sixth year of operation is -21,400 yuan and the internal rate of return is 7.24%. The total cost cannot be recovered over the whole operating cycle.
2) Calculation of the economic benefits of EV charging facility construction considering carbon trading. Substituting the data of Tab 1 into the economic measurement tools of sections 2.1 and 2.2, and taking the revenue from carbon trading into consideration, the financial indicators are shown in Tab 3. It can be seen from Tab 3 that, when carbon trading is considered, the total initial investment cost is 1 million yuan, the annual operation and maintenance cost is 150,000 yuan, the charging service income is 472,200 yuan, the carbon trading income is 8,100 yuan and the annual after-tax profit is 216,100 yuan.

(2) Economic analysis of EV charging facility construction participating in the carbon trading market. It can be seen from Fig 3 that additional carbon trading income is obtained when carbon trading is considered, and the cumulative net present value during the operation period is always higher than in the case without carbon trading. In addition, without carbon trading, the total cost cannot be recovered over the whole operating cycle, whereas with carbon trading all costs can be recovered by the end of the operating period. At the end of the sixth operating year of the charging station, the cumulative net present value (NPV) considering carbon trading was 4,700 yuan, which is 26,100 yuan higher than that without carbon trading.
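The NPV comparison can be sketched in a few lines; the 8% discount rate below is an assumption (the paper does not state its rate), while the investment and annual profits are the case-study figures.

```python
def cumulative_npv(investment, annual_profit, rate, years):
    """Cumulative NPV after `years` of operation: the initial investment
    is paid at year 0, profits arrive at the end of each year."""
    return -investment + sum(annual_profit / (1 + rate) ** t
                             for t in range(1, years + 1))

RATE = 0.08  # assumed discount rate, not given in the paper
without_ct = cumulative_npv(1_000_000, 211_000, RATE, 6)
with_ct    = cumulative_npv(1_000_000, 216_100, RATE, 6)
print(round(without_ct), round(with_ct))
# At an 8% rate this yields roughly -24,600 and -1,000, reproducing the
# qualitative picture (carbon trading lifts NPV by ~23,600 yuan); the
# paper's exact figures (-21,400 and +4,700) imply a slightly lower rate.
```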
(3) Sensitivity analysis of EV charging facility construction participating in the carbon trading market. With the other factors unchanged, and for both the case with and the case without participation in carbon trading, one of the factors (EV charging infrastructure construction cost, carbon trading price or charging quantity) was decreased or increased proportionally, yielding the net present value under different construction investments, carbon trading prices and charging levels. The resulting net present values for the different construction costs, carbon trading prices and charging quantities are shown in Fig 4 and Tab 4.
As can be seen from Fig 4 and Tab 4, the sensitivity of the NPV to changes in the different variables is ranked, from largest to smallest, as: charging quantity > investment > carbon trading price. With all variables unchanged, it is necessary to participate in carbon trading, since the cumulative net present value is positive in the carbon trading scenario while it is negative in the scenario without carbon trading. No matter how the investment, charging quantity and carbon trading price change during the construction period, the economic benefit when carbon trading is considered is always better than when it is not. With the increase of the charging quantity and the carbon trading price, the economic benefit of carbon trading improves continuously.
Conclusion
(1) It is technically feasible for EV charging facilities to participate in the carbon trading market. The whole-life-cycle carbon emissions of EV charging facilities can be obtained by accounting for the carbon emissions of the construction stage and the operation stage separately.
(2) From the two aspects of cost and economic benefit, the economics of EV charging facilities participating in carbon trading can be calculated, and participation in carbon trading brings additional benefits for EV charging facilities.
(3) The technical feasibility and economic rationality of EV charging facility participation in carbon trading are demonstrated by the case study. With the increase of the charging amount and the rise of the carbon trading price, the economic benefits of carbon trading improve continuously.
"year": 2019,
"sha1": "4ef92b41b3cb4dbf8e0ff201c9bc2f1102ee0de5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/612/4/042019",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "50a618550c58a9156e07166afef779abddf5172a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
} |
59059510 | pes2o/s2orc | v3-fos-license | A generic decision model of refueling policies : a case study of a Brazilian motor carrier
Considering the high impact of transport on the total logistics cost, transportation management is fundamental to maintaining a company's competitiveness. Particularly in Brazil, fuel represents a significant cost for motor carriers. This paper therefore presents the development of a generic mathematical model that optimizes the fuel cost and assists the company's decision making on refueling policy choices. Basically, in order to reduce the total cost, the model analyses the fuel price variations in a road network and thereby defines: (i) which truck stop(s) to use, and (ii) how much fuel to buy at the chosen truck stop(s). To assist the development of this model, a set of publications related to refueling optimization techniques was reviewed. As opposed to the papers analyzed, in which the models are validated using simulations, the presented model uses a case study as a reference; in this case, the model provided a decrease of 2.3% in the total fuel cost.
Introduction
Administration of transport is, undoubtedly, one of the biggest challenges for companies today. The process of logistics integration and the growing demand for products and services at ever lower cost and in ever shorter time make firms value their logistics systems, so that waste of resources and time is avoided. The transport cost in Brazil represents about 60% of logistics costs, influencing the final product price and, consequently, the company's competitiveness. Thus, transportation has an important role in the logistical process, because its cost contributes a significant share of companies' expenditure and influences the level of service provided [1]. Academic works such as Lima [2], Lopes [3] and Rittiner [4] corroborate that fuel is the main cost for motor carriers. Lima [2] showed that in 2004 the fuel cost represented 31.8% of the total cost, and that this value could reach 41.8% when only long routes are analyzed. In this same study, the author estimated that 55% of all diesel consumed in Brazil in 2004 was used for road cargo transportation, equivalent to 21.7 billion liters and 32.7 billion BRL. According to Rittiner [4], fuel is the main input for motor carriers and represents on average 30% of total costs. Based on the representativeness of the fuel cost within the road transport cost and the impact of transportation on logistics costs, it is possible to estimate that, on average, the cost of fuel represents about 18% of the logistics costs of a company that uses road transport as its main mode of transport. Given this scenario, where the fuel cost has a significant impact on logistics costs and, consequently, on competitiveness, this paper develops an optimization model, based on linear programming, aiming to reduce the fuel cost for motor carriers.
Review of Fuel Optimizers
Fuel optimizers are decision models that reduce motor carriers' fuel costs using information on prices at every gas station within a route. Thus, the optimal fueling schedule for each route is determined, including: (i) which truck stop(s) to use, and (ii) how much fuel to buy at the chosen truck stop(s) to minimize the cost of refueling. The basic concept of this model is to take advantage of price variances across truck stops to reduce the cost of buying fuel. The model's goal is to buy more gallons at truck stops where the fuel is cheap, and fewer gallons at truck stops where the fuel is expensive. Such models typically work in conjunction with truck-routing software, so that users can first compute the optimal (shortest) route for a given origin-destination pair and then optimize the fueling operations along this route. Research on vehicle refueling has been conducted by both academic researchers and practitioners. According to Suzuki [5], most of the early work was conducted by practitioners in the early 1990s, during the software development phase. These products were designed to address the concern of many companies that fuel prices fluctuate, sometimes substantially, between one truck stop and the next in the same route. Suzuki [6] lists: (i) ProMiles, (ii) Expert Fuel, and (iii) Fuel & Route as the most famous fuel optimizer products. Basically, all these applications use mathematical programming models that select the optimal truck-stop locations and refueling quantities for a given route origin and destination. The main inputs required by these models are: • vehicle tank capacity, • average fuel consumption rate for the whole trip, • retail diesel price, • amount of fuel in the tank at the origin (starting fuel), • distance from truck stop i-1 (0 if i=1) to truck stop i, • minimum amount of fuel (gallons or liters) to be maintained in the tank at all times. The majority of these commercial products allow users to include restrictions in the model that reflect corporate policies and preferences, so that the solutions become not only feasible but also practical from an implementation point of view, such as the removal from the model of truck stops that do not meet the company's minimum acceptable specifications, whether related to quality or to the distance of the truck stop from the defined route. According to Huff [7], these applications may require significant adjustments of the optimal solution in order to achieve the specific goals of each company, such as refueling only at truck stops with existing contracts even if their prices are high. Despite the proliferation of software products in the field, academic researchers did not study this type of vehicle refueling problem until recently. According to Suzuki [6], the first scholarly work that considered the refueling problem with a focus on total fuel cost is Lin et al. [8]. They considered a fixed-route vehicle refueling problem similar to that addressed by the commercial fuel optimizers, and developed a linear-time greedy algorithm for finding optimal fueling policies. This algorithm is based on the special case of the capacitated inventory lot-sizing problem in which inventory capacity is limited, setup costs are zero, inventory holding costs are zero and production costs are linear. Lin [9] extended the work of Lin et al.
[8] by developing an algorithm that jointly determines the optimal path (route) from origin to destination and the optimal refueling decisions along the path; note that in this model the route is no longer predetermined. Other scholarly works that investigated vehicle refueling problems include Khuller et al. [10] and Suzuki ([5,6]). Khuller et al. [10] studied several vehicle routing problems related to shortest-path and traveling-salesman problems and incorporated into these models the refueling cost and the fuel tank capacity restriction, with the goal of finding solutions to various refueling optimization problems. Based on interviews with motor carriers, fuel optimizer vendors and users, Suzuki [5] proposed a "generic" approach to the vehicle refueling problem by considering not only the fuel cost but also several other vehicle operating costs. Suzuki [6] added new restrictions to the commercial refueling problem, aiming to reduce the fuel cost without restricting the drivers' freedom to choose truck stops; the author expected the proposed model to reduce the high driver turnover rates. Our review of the literature indicates that all the earlier studies have made important contributions to the vehicle refueling literature. However, no study has considered the use of these models in a practical situation showing the real savings.
Methodology
Aiming to develop, through a case study, a model that meets the characteristics of optimizing refueling policies, the work was divided into three stages. The first stage consisted of gathering information about the transport operation performed by the company. At that point, we attempted to obtain information about the technical characteristics of the vehicles, total distance, type of routes, number of vehicles, the truck stops and the current prices charged at the gas stations. The second step was to adapt the model, using concepts of linear programming (LP), to the characteristics of the company's transport operation, and the final stage consisted of implementing and testing the model using the Solver application of Microsoft® Excel. We also studied the adjustments necessary for the model to become applicable to other transport operations with characteristics similar to those of the proposed work. Furthermore, we evaluated the financial impact and the indirect benefits generated after applying the model to the analyzed route.
Description of Transport Operation
The problem analyzed was based on a road transport operation of auto parts from the southeast to the northeast of Brazil, with the objective of meeting the demand of the automotive industry in the city of Camaçari, in the state of Bahia. This operation is complex because it involves just-in-time supply over a distance of approximately 2,000 km between the place of loading and the delivery of the cargo. The operation uses 67 dedicated trucks with a capacity of 30 tons each. To meet the client's just-in-time system, the trucks must work 24 hours a day, seven days a week and, with the aim of increasing truck utilization, the company has developed a driver-exchange methodology that replaces the driver at certain points of the route so that no driver spends more than eight hours driving; in other words, in this system the driver rests but the vehicle remains on the route.
The trucks always start from the southeast region, especially the greater Sao Paulo area, where the main auto parts suppliers are located, and continue to the final destination, Camaçari, in the state of Bahia. To accomplish this journey, the company has the option of carrying the loads over three different routes; the main one uses the BR381 (Fernão Dias) highway to the city of Belo Horizonte and, from that point, follows the BR116 highway, as shown in Figure 1. The company chose to define mandatory truck stops on the routes of this operation. At these points, driver exchanges, refueling, preventive maintenance, document inspection and cargo conference can be carried out. The letters A, B, C, D, E, F, G and H highlighted in green in Figure 1 represent the mandatory truck stops; the location of each point is listed in the caption of Figure 1. The letter A refers to the main point of the operation, where the company's headquarters is located and where all the logistical planning is done. From this point, the vehicles proceed to point B to collect the loads from the suppliers through a logistics system called milk run. The milk run system is a delivery or collection plan, maintained by a transportation company, in which, each day, the company collects components from each supplier in predetermined amounts with the goal of delivering them to the manufacturer [11]. After completing the collections at the suppliers in the ABC region, the trucks continue to the final destination, Camaçari, Bahia, illustrated by point H in Figure 1. Along the way, the vehicles pass through the mandatory truck stops at points C, D, E, F and G. At all these points, refueling, driver changes and cargo conference can be carried out. Analyzing the historic refueling data of this operation, it was found that there is wide variation in retail diesel prices among the gas stations located at the truck stops and, due to the fuel storage capacity and the long distances, it was observed that the trucks were usually refueled at points A, D, E, F and G. In every refueling, the drivers were asked to fill the tank completely, so the vehicle always left the gas station with a full tank and proceeded to the next point. One of the reasons for always completely filling the truck is to minimize the risk of a truck being stranded on the road for lack of fuel. This fact became a constraint of the model presented in section 3.2: as a parameter of this restriction, the company determined that a truck in this operation could never have less than 50 liters in the fuel tank. After this stage of gathering information about the transport operation, we elaborated a mathematical model to determine the ideal amount of fuel to be supplied at each truck stop in order to minimize the total cost, subject to the refueling capacity constraints. In addition, other restrictions were considered, such as the autonomy between the truck stops and the minimum quantity of liters in the tank.
Model Formulation
The model is based on the basic concepts of refueling optimizers, whose goal is to minimize the total fuel cost of the transport operation. In this model, the routes must be fixed and the truck stops need to be previously defined. The logistics variables of this operation and the technical characteristics of the vehicles are the input variables of the model; the amounts of fuel to be supplied at each truck stop are the outputs.
Index: i = index of stop points (i = 1, ..., n, where n is the number of truck stops).
Variables: qi = quantity of fuel supplied at point i on the trip to the destination; q'i = quantity of fuel supplied at point i on the way back; vi = quantity of fuel in the tank on arrival at point i on the trip to the destination; v'i = quantity of fuel in the tank on arrival at point i on the way back; z = total fuel cost.
Constants: ci = fuel price (R$/liter) at point i; d(i,i+1) = distance from point i to the next point (i+1); k = average consumption rate (km/l); Q = fuel tank capacity (liters); S = minimum acceptable quantity in the tank (safety requirement).
The graphical representation of the model is given in Figure 2, and the model can be written as:

Minimize z = Σi ci (qi + q'i) (1)

subject to:
Minimum fuel (safety) restrictions:
vi + qi − d(i,i+1)/k ≥ S, for all i on the outbound trip (2)
v'i + q'i − d(i,i+1)/k ≥ S, for all i on the return trip (3)
Fuel tank capacity restrictions:
vi + qi ≤ Q, for all i (4)
v'i + q'i ≤ Q, for all i (5)
Non-negativity restrictions:
qi ≥ 0, for all i (6)
q'i ≥ 0, for all i (7)

where the fuel level is propagated between consecutive stops as v(i+1) = vi + qi − d(i,i+1)/k (and analogously on the return trip). Equation (1) is the objective function to be minimized and represents the total fuel cost over the complete route cycle. The restrictions of the model are represented by equations (2) to (7). Equations (2) and (3) are the safety restrictions of the model: (2) ensures that the vehicle always has at least S liters of fuel in the tank when it moves from the origin to the final destination, and (3) guarantees the same minimum quantity S in the tank when the vehicle returns to the origin. The maximum fuel tank capacity restrictions are described in (4) and (5): (4) enforces the maximum fuel level when the vehicle moves from origin to destination, and (5) enforces the same limit on the vehicle's return. To complete the model, the non-negativity restrictions (6) and (7) are added, that is, there is no negative refueling.
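To make the formulation concrete, below is a minimal one-way sketch of the model solved with scipy.optimize.linprog; the prices, distances and parameters are illustrative placeholders, not the values of the company's route, and the return-trip variables are omitted for brevity.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative one-way instance with 5 stops; all data are placeholders.
c = np.array([3.60, 3.40, 3.75, 3.30, 3.55])   # fuel price (R$/l) per stop
d = np.array([420.0, 380.0, 510.0, 450.0])     # km between consecutive stops
k, Q, S, v0 = 2.5, 550.0, 50.0, 300.0          # km/l, tank, reserve, start fuel

n = len(c)
need = d / k                 # liters burned on each leg
cum = np.cumsum(need)        # cumulative liters burned after each leg

A_ub, b_ub = [], []
for j in range(n):
    row = np.zeros(n)
    row[: j + 1] = 1.0
    burned_before = cum[j - 1] if j > 0 else 0.0
    # Tank capacity after refueling at stop j: v0 + sum(q[:j+1]) - burned <= Q
    A_ub.append(row)
    b_ub.append(Q - v0 + burned_before)
    # Safety reserve on arrival at stop j+1: v0 + sum(q[:j+1]) - cum[j] >= S
    if j < n - 1:
        A_ub.append(-row)
        b_ub.append(-(S - v0 + cum[j]))

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * n, method="highs")
print(res.x.round(1), "total cost:", round(res.fun, 2))
```

In this instance the optimizer concentrates purchases at the cheapest stops that still satisfy the reserve and capacity constraints, which is exactly the behavior the model is designed to produce.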
Computation development
Aiming to facilitate the use of the proposed model, we used the Solver tool available in the Microsoft® Excel application to perform the model's calculations. Solver uses the simplex method with variable bounds for solving linear and integer problems, so all the sets of equations of the mathematical programming model must be inserted into cells of a worksheet. To solve the proposed model, a spreadsheet composed of two parts was created: the first for the input data and the second for the output data of the model. In the first part, the analyst must enter the data for the possible support points or truck stops where refueling can be done, the points of origin and destination of the cargo, the distances between these truck stops, the maximum fuel capacity in liters, the cost per liter at every gas station and the fuel consumption rate. Table 1 shows the data on the route, distances and fuel costs at the truck stops of the main transport route illustrated in Figure 1 in section 3.1.
Table 1: Input data
The second part of the worksheet contains the output information of the model. This information is presented after running Solver, already parameterized with the characteristics of the model presented in section 3.2. Basically, the main outputs of the model are: the truck stops at which refueling will be done, the optimal quantity of fuel to be supplied at these points, and the total fuel cost of the operation. A new simulation should be run whenever there are changes in the gas station prices of the transport operation. These updates can be made through annotations by the drivers themselves during the route or by downloading prices from specialized websites.
Adaptations of the model
The optimization model of refueling policies proposed in this paper can be applied to all dedicated operations involving the movement of trucks on a fixed route. However, some modifications may be made in order to better adjust the model to the reality of each operation. On some routes, mainly due to different topographies and road quality, there are significant differences in the fuel consumption rate. In this case, it is necessary to consider an average fuel consumption rate ki for each displacement between the truck stops, and constraints (2) and (3) presented in section 3.2 become:

vi + qi − d(i,i+1)/ki ≥ S (2')
v'i + q'i − d(i,i+1)/ki ≥ S (3')

An important adaptation of the model concerns the case in which the vehicle returns to the origin by a route different from the main route and, consequently, passes different fuel stations. In this case, the return should be treated as a continuation of the main route, and all the return-trip variables presented in the model development should be dropped.
Because the model parameters are described in spreadsheets and solved by the Solver tool, it is very easy to perform simulations and to introduce new restrictions; it is therefore advisable to run some simulations comparing several operation scenarios before validating the effective implementation of the optimization model.
Results and Discussions
The proposed model was used to optimize the refueling policies of the analyzed auto parts transport operation. To verify the effectiveness of the model, information was collected on the truck stops, the amounts of fuel supplied and the fuel cost at each fuel station before the implementation of the model. Based on this information, it was observed that the vehicles were always refueled at points A, D, E, F and G, as described in section 3.1, and that the tank was always filled completely, meaning that the vehicle left the fuel station with a full tank. After this first analysis, the total fuel cost per vehicle before the application of the model was compared with the total fuel cost per vehicle after the implementation; this comparison is presented in Table 2.

Table 2: Comparison between the current refueling policy and the policy proposed by the model

When comparing the refueling policy used before the application of the optimization model with the new policy recommended by the model, significant differences in the amounts refueled can be observed at specific fuel stations. At station D, for example, a difference of 440 liters per trip was observed. These differences, when multiplied by the fuel cost at each truck stop, generated a fuel saving of 104 BRL (Brazilian Reals) per vehicle per trip in the analysis. This value represents a reduction of 2.3% in the total fuel cost and, based on the average number of trips completed daily on the analyzed route (8 trips per day, 240 per month), we estimate that in one year the amount saved by applying the optimization model will be approximately 300,000 BRL.
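The annual estimate follows directly from the per-trip saving and the trip frequency stated above, as the short check below shows.

```python
# Back-of-the-envelope check of the reported annual saving: 104 BRL
# saved per vehicle per trip, 240 trips per month (8 per day).
saving_per_trip = 104        # BRL
trips_per_month = 240
annual_saving = saving_per_trip * trips_per_month * 12
print(annual_saving)         # 299,520 BRL -> "approximately 300,000 BRL"
```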
Final Considerations
This paper addressed transport planning with regard to the refueling policies of trucks for motor carriers. A generic linear optimization model was developed and applied in a case study. The refueling policy currently used by the company was compared with the one proposed by the model, and a significant reduction of the fuel costs and, consequently, of the total transport cost was observed in this comparison. Moreover, it was verified that the model can easily be modified to better fit the specific transport operation of each company. A limitation of the proposed model is that it assumes the vehicle has a predetermined route before starting the transport operation; that is, for each change in the original route, a new simulation and an update of the input data are necessary. Therefore, an interesting extension of this paper is the integration of vehicle routing programs with the refueling problem, so as to map the highways in advance and, consequently, optimize the total fuel cost independently of the chosen route.
Figure 1: Main route.
• Point A: Represents the motor carrier's location;
• Point B: Represents the location of the auto parts suppliers in the region of Sao Paulo city;
• Point C: Represents the truck stop at Atibaia (ATB), north of the city of São Paulo, on the margins of the BR381 (Fernão Dias);
• Point D: Represents the truck stop in Belo Horizonte (BHZ);
• Point E: Represents the truck stop in the city of Governador Valadares (GVD), in the state of Minas Gerais;
• Point F: Represents the truck stop in the city of Vitoria da Conquista (VDC), in the state of Bahia;
• Point G: Represents the truck stop in the city of Feira de Santana (FES), in the state of Bahia;
• Point H: Represents the location of the automobile industry, in the city of Camaçari (CAM), in the state of Bahia.
According to the company, the main cost of this operation is fuel. On average, to complete the cycle of the main route, each truck uses 2,300 liters of diesel. The trucks are constantly refueled at the truck stops, mainly due to the capacity limitation of the fuel tank. Each vehicle has two fuel tanks, giving a total storage capacity of 550 liters.
Figure 2: Graphical representation of a fixed route with n truck stops.
Figure 3: Output data comparing the refueling policies before and after the use of the proposed model. | 2018-12-18T09:25:15.270Z | 2013-10-01T00:00:00.000 | {
"year": 2013,
"sha1": "f360323c1a8dc58fbb3900ebd47bcbb8f86bb028",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/jtl/v7n4/02.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0ac552b6b200b0c525c6509935635812db77f924",
"s2fieldsofstudy": [
"Engineering",
"Business",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
245554905 | pes2o/s2orc | v3-fos-license | Enoxaparin and Pentosan Polysulfate Bind to the SARS-CoV-2 Spike Protein and Human ACE2 Receptor, Inhibiting Vero Cell Infection
As with many other pathogens, SARS-CoV-2 cell infection is strongly dependent on the interaction of the virus-surface Spike protein with the glycosaminoglycans of target cells. The SARS-CoV-2 Spike glycoprotein was previously shown to interact with cell-surface-exposed heparan sulfate and heparin in vitro. With the aim of using Enoxaparin as a treatment for COVID-19 patients and as prophylaxis to prevent interpersonal viral transmission, we investigated GAG binding to the Spike full-length protein, as well as to its receptor binding domain (RBD) in solution by isothermal fluorescence titration. We found that Enoxaparin bound to both protein variants with similar affinities, compared to the natural GAG ligand heparan sulfate (with Kd-values in the range of 600–680 nM). Using size-defined Enoxaparin fragments, we discovered the optimum binding for dp6 or dp8 for the full-length Spike protein, whereas the RBD did not exhibit a significant chain-length-dependent affinity for heparin oligosaccharides. The soluble ACE2 receptor was found to interact with unfractionated GAGs in the low µM Kd range, but with size-defined heparins with clearly sub-µM Kd-values. Interestingly, the structural heparin analogue, pentosan polysulfate (PPS), exhibited high binding affinities to both Spike variants as well as to the ACE2 receptor. In viral infection experiments, Enoxaparin and PPS both showed a strong inhibition of infection in a concentration range of 50–500 µg/mL. Both compounds were found to retain their inhibitory effects at 500 µg/mL in a natural biomatrix-like human sputum. Our data suggest the early topical treatment of SARS-CoV-2 infections with inhaled Enoxaparin; some clinical studies in this direction are already ongoing, and they further imply an oral or nasal prophylactic inactivation of the virus by Enoxaparin or PPS for the prevention of inter-personal viral transmission.
Introduction
As of November 2021, the ongoing COVID-19 pandemic has claimed over 5.17 million lives and severely affected our social and economic lives. Severe acute respiratory syndrome (SARS) is caused by a relatively new betacoronavirus, globally known as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,2], and it has caused a variety of clinical morbidities and high mortality rates [3]. Clinical manifestations are versatile, ranging from an asymptomatic disease course to flu-like symptoms including fever, cough, dyspnea and fatigue, to even multiorgan failure and rapid death [4]. Although
The heparin structural mimetic pentosan polysulfate (PPS) (Figure 1) is a semisynthetic, polysulfated polysaccharide, naturally deriving from beechwood, with a molecular mass of 1.5-6 kDa. Due to its striking structural similarities to heparin, PPS also exhibits anticoagulant properties, albeit to a 10-fold lesser extent than its GAG counterpart. It is included in a number of pharmacological formulations, including antithrombotic prophylaxis and inflammatory conditions [25]. In the form of Elmiron ® capsules, it is the only FDA-and EMA-approved oral treatment for bladder discomfort associated with interstitial cystitis, as already discovered over 30 years ago [26].
At the present moment, a large number of efforts are being made to investigate the molecular and biophysical properties of Spike-GAG binding, which is shown to be crucial for infection, and to subsequently understand its potential importance in SARS-CoV-2 prophylaxis and therapy. In particular, many studies were undertaken to elucidate the role of heparin in this context, as it was proven to have an inhibitory effect on SARS-CoV-2 infection [3,16,23,27]. As already demonstrated by Hao et al. [28], HS binds to Spike and its subdomains in a sulfation degree- and position-dependent manner. When investigating heparin oligosaccharides with different sulfation patterns, it seemed that a higher degree of sulfation, especially 6O-sulfation, correlates with a higher binding affinity. In addition, the authors did not detect any influence of the chain length on heparin-ligand binding [28]. Lin Liu et al. [29] found different heparin binding affinities for the Spike proteins (1 µM for the RBD, 55 nM for FL), and they were able to identify HS hexa- and octasaccharides of IdoA2S-GlcNS6S as optimal ligands for Spike monomers, trimers and the RBD [30]. Binding studies of the Spike protein and heparin conducted by Young et al. revealed a previously unobserved high binding affinity in the pM range [15].
After reporting a general binding of the Spike glycoprotein to heparin [16], Mycroft-West et al. [16] observed the significance of the 2O- and 6O-sulfation of the heparin ligand for Spike binding, as well as a minimal chain length of a hexasaccharide. In contrast, however, Kim et al. could not show a sulfation-specific dependence of the interaction, but suggested again the importance of the chain length [15].
Since all the previous studies used surface plasmon resonance methods to investigate Spike-GAG binding, for which one of the interaction partners needs to be labelled and immobilized, we performed in-solution isothermal fluorescence binding studies in order to avoid any potential influence of the surface and/or labelling on the ligand interaction. A chain-length dependence of heparin binding to the Spike protein was observed, but only for the full-length protein. This dependence was not found in the viral infection inhibition experiments, in which Enoxaparin as well as PPS were active at doses >50 µg/mL. The binding of PPS to the Spike proteins, and of GAGs as well as PPS to ACE2, points to a potential treatment of COVID-19 with heparin and/or PPS. Both of these molecules were also found to inactivate the virus in human sputum, which could enable the development of an oral prophylaxis against inter-personal viral transmission.
Recombinant Protein Production
Full-length Spike protein (FL), Spike RBD and soluble ACE2 were recombinantly expressed in Expi293 cells and were kindly provided by Ossianix Inc. (Philadelphia, PA, USA). The amino acid sequences of all proteins used in this study (including tags) are shown in Supplemental Materials.
Preparative Size-Exclusion Chromatography of Enoxaparin Sodium
To generate size-defined heparin compounds, preparative size-exclusion chromatography was performed according to Kitic et al. [30]. In brief, Enoxaparin sodium (Lovenox®, Sanofi-Aventis, Paris, France) was loaded onto a 220 cm Biogel P10 fine (Bio-Rad, Hercules, CA, USA) glass column, using 0.1 M ammonium bicarbonate buffer (Sigma Aldrich, St. Louis, MO, USA) as the mobile phase. LMW heparin was injected at a flow rate of 30 µL/min; fractions were collected every 40 min, and the absorbance was recorded at 232 nm. After collection was complete, the samples were concentrated and pooled according to the chromatogram. Concentrations were estimated using a linear heparin regression curve, measured at 232 nm. To check the purity and identity of the resulting dp fractions, sugar gels containing boric acid were prepared according to Gunay et al. [31]. In brief, a lower chamber buffer containing 0.1 M boric acid (Sigma), 0.1 M Tris (Sigma) and 0.01 M disodium EDTA (Sigma), and an upper chamber buffer containing 0.2 M Tris and 1.24 M glycine (Sigma), were prepared, both at pH 8.4. Gels of 27% acrylamide (Carl Roth GmbH, Karlsruhe, Germany) were poured between glass plates separated by 1 mm spacers; 10 µg of each dp fraction was diluted 1:2 with lower chamber buffer containing 50% sucrose (Fluka Chemie GmbH, Seelze, Germany). Running conditions were 1 h at 160 V, followed by Azure A staining (Sigma).
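The estimation of fraction concentrations from a linear standard curve at 232 nm, as described above, can be illustrated with a minimal sketch; the standard concentrations, absorbances and sample readings below are hypothetical placeholders, not values from this study.

```python
import numpy as np

# Hypothetical heparin standard curve: absorbance at 232 nm vs. concentration.
std_conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0])      # mg/mL (placeholder values)
std_a232 = np.array([0.08, 0.21, 0.43, 0.85, 1.70])  # absorbance (placeholder values)

# Linear fit A232 = slope * c + intercept, then invert it for unknown samples.
slope, intercept = np.polyfit(std_conc, std_a232, 1)

def estimate_concentration(a232):
    """Estimate heparin concentration (mg/mL) from a 232 nm absorbance reading."""
    return (a232 - intercept) / slope

for a232 in (0.30, 0.95):  # readings of pooled dp fractions (hypothetical)
    print(f"A232 = {a232:.2f} -> ~{estimate_concentration(a232):.2f} mg/mL")
```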
Isothermal Fluorescence Titration
Isothermal fluorescence titration (IFT) experiments were performed using a Jasco FP-6500 spectrofluorometer at a constant temperature of 20 °C, following Gerlza et al. [32]. In brief, fluorescence emission spectra of the examined proteins were recorded over the range of 300–400 nm upon excitation at 280 nm, with slit widths of 5 nm for both excitation and emission. Prior to the measurement, 100 nM protein solutions were prepared and equilibrated for 30 min. The respective GAG ligands, LMW heparin (Lovenox®), HS (Celsus Laboratories Inc., Cincinnati, OH, USA), DS (Celsus), PPS, in-house produced dp4–dp12 and partially desulfated heparin (Iduron), were then added to final concentrations ranging from 100 nM to 3800 nM. After an equilibration time of 1 min following every GAG-ligand addition, the fluorescence spectrum was measured. For background correction, the respective GAG concentrations in PBS alone were measured. The mean values of 3 independent measurements were plotted against the GAG-ligand concentration, and the resulting binding isotherm was analyzed by nonlinear regression using Origin 8.0 (Microcal Inc., Northampton, MA, USA), from which the dissociation constant (Kd [nM]) was obtained.
SARS-CoV-2 Infection Inhibition Assay
Infection experiments were performed with Vero cells (ATCC® CCL-81™) under BSL-3 conditions. Vero-CCL81 cells were seeded in 48-well plates 24 h before infection at a cell density of 30,000 cells per well and incubated at 37 °C and 5% CO2 in serum-free Opti Pro medium. On the day of infection, the virus (MOI 0.001) was preincubated with and without Enoxaparin and PPS for 60 min in Opti Pro medium. Vero-CCL81 cells were infected with the virus-substance mix, or with the virus mix alone, for 1 h at 37 °C and 5% CO2. Non-infected cells served as negative controls and determined the background of the infection assay. After 1 h, the infection mix was removed, and the cells were washed twice with phosphate-buffered saline (PBS). Following the washing procedure, fresh, pre-warmed cell culture medium was added. Samples from the supernatant were transferred to Eppendorf tubes and inactivated with 560 µL of AVL buffer from the QIAamp Viral RNA Mini Kit for subsequent RNA preparation and RT-qPCR to determine the timepoint 0 (t0) values. The virus-infected Vero-CCL81 cells were further incubated at 37 °C for 48 h. After 48 h, the supernatant was harvested and the virus was again inactivated in AVL buffer for further quantification of SARS-CoV-2 RNA by RT-qPCR (see Figure 2). Cells were visually inspected under the microscope to detect cell death due to the added substances. To investigate these inhibitory effects against the background of a more natural biomatrix, Enoxaparin and PPS at higher concentrations were added to 250 µL of sputum taken from a healthy donor. To mimic SARS-CoV-2 infection, the Wuhan SARS-CoV-2 strain was added to the samples and Vero-CCL81 cells were infected as previously described. Samples from timepoint 0 h and after 48 h of incubation were again harvested in AVL buffer. Viral RNA was isolated from the supernatant at timepoints 0 h and 48 h using the QIAamp Viral RNA Mini Kit according to the manufacturer's protocol. In brief, the collected and inactivated supernatant was transferred out of the BSL-3 laboratory. Absolute ethanol (560 µL) was added before loading the samples completely onto the columns. Columns were washed with AW1 and AW2 buffer, and the RNA was collected using 40 µL of nuclease-free water (Ambion/Life Technologies, Carlsbad, CA, USA).
Total isolated RNA (5 µL) was used for cDNA synthesis and qPCR, which were performed in one step using the QuantiTect Probe RT-PCR kit (Qiagen GmbH, Hilden, Germany) on a StepOnePlus system (Applied Biosystems/Life Technologies, Carlsbad, CA, USA). CDC primers, synthesized at Eurofins Scientific SE (Luxembourg), were used at a concentration of 0.4 µM, with the probe at 0.2 µM. The qPCR primers were as follows: N1 forward GAC CCC AAA ATC AGC GAA AT, N1 reverse TCT GGT TAC TGC CAG TTG AAT CTG, and N1 probe FAM-ACC CCG CAT TAC GTT TGG TGG ACC-BHQ1. To assess RNA quality, RNase P primers were used: RP forward AGA TTT GGA CCT GCG AGC G, RP reverse GAG CGG CTG TCT CCA CAA GT and RP probe FAM-TTC TGA CCT GAA GGC TCT GCG CG-BHQ-1. Samples were incubated at 50 °C for 30 min and heated to 95 °C for 15 min, followed by 45 cycles of 95 °C for 3 s and 55 °C for 30 s. Ct values obtained after 48 h were subtracted from the Ct values at timepoint 0 of infection and normalized between the positive control, which was set to 100% infection, and non-infected Vero-CCL81 cells, which were set to 0% infection.
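The normalization of the RT-qPCR read-out described above can be expressed compactly; the Ct values below are invented for illustration only and are not taken from the paper.

```python
def delta_ct(ct_t0, ct_48h):
    """Viral growth over 48 h lowers the Ct, so t0 minus 48 h gives the signal."""
    return ct_t0 - ct_48h

def percent_infection(dct_sample, dct_pos, dct_neg):
    """Linear normalization: positive control = 100%, non-infected cells = 0%."""
    return 100.0 * (dct_sample - dct_neg) / (dct_pos - dct_neg)

# Invented Ct values for illustration only.
dct_pos    = delta_ct(30.0, 18.0)   # infected, untreated positive control
dct_neg    = delta_ct(36.0, 36.0)   # non-infected background
dct_sample = delta_ct(30.0, 24.0)   # virus pre-incubated with Enoxaparin

print(f"{percent_infection(dct_sample, dct_pos, dct_neg):.0f}% infection")  # 50%
```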
Spike FL and Spike RBD Bind GAGs and PPS with Different Affinities and Discriminate Regarding Chain-Length and Modification
First, we investigated the affinity of the Spike RBD for the naturally occurring GAGs HS and DS, in comparison to Enoxaparin, by isothermal fluorescence titration (Table 1). The highest affinity was detected for HS (Kd 600 ± 78.6 nM); the affinity was slightly lower for Enoxaparin (Kd 678.4 ± 116.1 nM) and significantly lower for DS (Kd 912.5 ± 63.4 nM). Spike FL was found to bind Enoxaparin (Kd 604.3 ± 67.4 nM) slightly better than HS (Kd 680.3 ± 66.8 nM); again, the affinity for DS was the lowest among the three GAG ligands (Kd 784.8 ± 65.6 nM). Binding studies using the heparin mimetic PPS revealed a high binding affinity (Kd 655 ± 118.5 nM) for Spike RBD, whereas PPS binding to Spike FL occurred with a rather low affinity (Kd 930 ± 95.7 nM). In order to investigate a potential chain-length dependence of the GAG binding affinities of the two Spike protein variants, both were subjected to IFT experiments using size-defined LMW heparin fractions ranging from dp4 to dp12. Spike FL exhibited a chain-length dependence with optimal binding to dp6 and dp8, which was less pronounced for Spike RBD (Table 1 and Figure 3). We tentatively interpret this chain-length dependence in terms of a required hexa- or octasaccharide binding motif: smaller oligosaccharides cannot establish full contacts with the protein, whereas larger oligosaccharides have too many conformational degrees of freedom, which counteracts high-affinity binding. In contrast to Mycroft-West et al. [16], who detected a binding dependence on 2O- and 6O-sulfation by SPR, our IFT interaction studies conducted with partially desulfated heparins did not provide any conclusive correlations to support their findings (Table 1). Our results were instead in agreement with the SPR results of Kim et al. [15], who found the Spike-GAG interaction to be dependent on chain length.
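Since the paper's own fitting equation is not reproduced in this copy, the following sketch merely illustrates how Kd values of this magnitude can be extracted from a titration series, assuming a simple one-site saturation model; the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, dF_max, Kd):
    """Simple one-site saturation model: fluorescence change vs. ligand (nM)."""
    return dF_max * L / (Kd + L)

# Hypothetical titration data: GAG-ligand concentration (nM) vs. signal change.
L  = np.array([100, 300, 600, 1000, 1500, 2400, 3800], dtype=float)
dF = np.array([0.12, 0.30, 0.47, 0.60, 0.69, 0.77, 0.83])

popt, pcov = curve_fit(one_site, L, dF, p0=(1.0, 600.0))
dF_max, Kd = popt
Kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Kd = {Kd:.0f} +/- {Kd_err:.0f} nM")
```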
Next, we studied the GAG binding affinities of ACE2, the SARS-CoV-2 protein receptor on target cells. Interestingly, ACE2 exhibited the lowest affinity for Enoxaparin (Kd 1232.58 ± 174.94 nM) and slightly higher affinities for HS and DS (Kd 1130.01 ± 145.33 nM and 1057.73 ± 142.1 nM, respectively). The size-defined heparin oligosaccharides interacted with ACE2 with rather high affinities compared to the unfractionated GAGs (Table 2 and Figure 4). Additionally, the heparin mimetic PPS exhibited a Kd value of 585 nM, clearly below the µM-range Kd values of the naturally occurring GAGs. Taken together, these results point to an involvement of GAG chains in the attachment of SARS-CoV-2 to cell surfaces, which could, therefore, potentially be prevented by exogenously added Enoxaparin or PPS.
[Figure 3 caption, fragment: (B) mean Kd values of Spike RBD binding to different GAG variants, with significant differences (*) for dp4 to dp6; dp6 to dp12; dp8 and dp10 to dp12; dp12 to LMWH; HS to DS. LMWH, low molecular weight heparin (Enoxaparin); HS, heparan sulfate; DS, dermatan sulfate; PPS, pentosan polysulfate; dp, degree of polymerisation; 6O-des, 6O-desulfated heparin; 2O-des, 2O-desulfated heparin; N-des, N-desulfated heparin; nd, no significance detected.]
Heparin and PPS Inhibit SARS-CoV-2 Infection In Vitro
To investigate the hypothesis that heparin is able to prevent viral entry into target cells, we performed aligned infection assays with real-time PCR read-outs of viral RNA in the cell supernatant after 48 h of propagation. These viral entry inhibition experiments showed that Enoxaparin was able to inhibit the infection of Vero-CCL81 cells with SARS-CoV-2 in a dose-dependent manner (Figure 5). Inhibition was only achieved by incubating the virus with Enoxaparin; the pre-incubation of Vero cells with Enoxaparin did not yield inhibitory results (data not shown). Therefore, as a mode of action, we assume that Enoxaparin interacts with the viral Spike protein, thereby preventing its interaction with the proteoglycan co-receptor on Vero cells. The Vero cells, on the other hand, cannot be targeted directly by Enoxaparin, due to the expected repulsive forces between cell-surface proteoglycans and Enoxaparin. No significant dependence of the infection inhibition efficacy on heparin chain length could be detected (data not shown). Our results are in accordance with recently published data [3,16,27]. Interestingly, the heparin mimetic PPS showed an inhibition efficacy of viral infection similar to that of Enoxaparin (Figure 5), which relates to the structural similarity of the two compounds.
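As a rough illustration of how dose-dependent inhibition over the 50–500 µg/mL range can be quantified, the sketch below fits a Hill-type dose-response curve; neither the model choice nor the data points are taken from the paper, both are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, n):
    """Fraction of infection remaining at inhibitor concentration c (µg/mL)."""
    return 1.0 / (1.0 + (c / ic50) ** n)

# Invented inhibition data spanning the 50-500 µg/mL range discussed above.
conc      = np.array([50.0, 100.0, 250.0, 500.0])
infection = np.array([0.55, 0.35, 0.15, 0.05])   # fraction of untreated control

popt, _ = curve_fit(hill, conc, infection, p0=(100.0, 1.0),
                    bounds=([1.0, 0.1], [1000.0, 5.0]))
print(f"apparent IC50 ~ {popt[0]:.0f} µg/mL, Hill slope ~ {popt[1]:.1f}")
```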
Heparin and PPS Inhibit SARS-CoV-2 Infection in a Biomatrix-Like Environment
Based on our results, a potential indication of Enoxaparin could be envisaged as an oral prophylaxis against viral transmission. For this purpose, the route of application would be via human sputum, e.g., in the form of Enoxaparin-containing lozenges. Therefore, we performed SARS-CoV-2 infection inhibition experiments on Vero-CCL81 cells in sputum collected from healthy individuals. Due to the complex biomatrix containing a number of GAG-binding proteins (the results of a comprehensive GAG-binding proteome study of sputum are currently being prepared for publication; Almer et al.), the Enoxaparin dose had to be increased to 1 mg/mL in order to achieve an inhibition efficacy similar to that in the cell culture medium (Figure 6). Interestingly, PPS exhibited strong inhibition already at a concentration of 500 µg/mL, similar to the efficacy obtained in the cell culture medium.
Discussion
The most abundant GAG in the ECM of the human lung, aside from CS/DS, is HS [33]. A large number of previous studies have already identified HS as a crucial co-factor in the SARS-CoV-2 infection cascade and characterized the binding of HS to the Spike glycoprotein of SARS-CoV-2 [1,3]. Within our experimental series, we were able to compare the binding affinities of the viral Spike glycoprotein for HS as well as for DS. Unlike previous studies, we investigated these interactions in solution, thereby avoiding the immobilization of one of the interaction partners. Our experiments identified HS as a stronger binder than DS for both Spike FL and the RBD. In these binding studies, using LMW heparin (Enoxaparin) as the GAG ligand gave affinities similar to those of HS, suggesting Enoxaparin as a potential competitor of the HS-supported viral infection of target cells.
Since typical Enoxaparin preparations consist of a mixture of heparin molecules of different molecular sizes, we further sub-fractionated the Enoxaparin sample by means of SEC. The resulting oligosaccharides were analyzed with respect to a potential optimum binding chain length. Our binding studies showed the best binding for the medium-sized fractions of LMW heparin, with dp4 and dp12 being less potent in Spike binding. This size discrimination was more pronounced for full-length Spike than for the RBD (Figure 3). In addition to the size dependence of the Enoxaparin-Spike interactions, the influence of particular Enoxaparin sulfation sites on Spike affinities was investigated (Table 1 and Figure 3). Again, the full-length Spike protein exhibited a certain capacity to discriminate between partially desulfated heparins, with N-sulfation sites appearing to inhibit high-affinity Enoxaparin binding. A selective binding of heparin to the Spike protein can also be deduced from recently published modeled complex structures [34,35].
Taken together, these results, especially the HS-like binding affinity of LMW heparin to the Spike protein, suggest a potential competition between endogenous HS, as a co-receptor of the virus, and exogenously added heparin. Additionally, although heparin binding to the RBD of Spike was observed, the full-length Spike protein seems to be required to differentiate between different-sized and partially desulfated heparins, indicating two GAG binding sites on the Spike protein.
Considering the eminent binding of GAGs to the viral Spike protein, we were interested to find out whether the ACE2 receptor would also bind GAGs, which would imply a ternary complex consisting of Spike, ACE2 and GAG/HS as the active form of the viral docking complex. Compared to the Spike protein, ACE2 exhibited an overall lower affinity for the three GAG classes investigated here (Table 2 and Figure 4). Only PPS was found to interact with ACE2 with a high binding affinity, of 585 nM. Interestingly, all size-defined heparin oligosaccharides bound to ACE2 with a much higher affinity than the unfractionated Enoxaparin. This is indicative of a narrow GAG binding site on ACE2. The affinity of ACE2 for heparin was also found to depend on specific sulfation positions, with Kd values decreasing significantly in the order 6O- < 2O- < N-desulfation. N-desulfated GAG sections could, therefore, constitute the binding interface between HS and ACE2, which is indicative of flexible, largely desulfated loops of HS.
The preincubation of SARS-CoV-2 with LMW heparin (Enoxaparin) led to a concentration-dependent inhibition of the SARS-CoV-2 infection of Vero-CCL81 target cells (Figure 5). We interpret this as a coating of the virus by Enoxaparin, which subsequently inhibits binding to the HS co-receptor(s) on the target cell surface, thus preventing infection. Pre-incubation of the target cells with Enoxaparin, which would enable potential ACE2 interactions, did not lead to a similar inhibition of viral entry/propagation in the target cells (data not shown); presumably, a Spike/Enoxaparin complex is unable to displace a pre-formed complex of ACE2 and cell-surface GAG/HS. It can thus be hypothesized that Enoxaparin not only prevents SARS-CoV-2 infection by blocking its HS co-receptor on target cells, but also impacts the binding of Spike to its primary receptor ACE2.
Pentosan polysulfate (PPS) is a GAG/heparin molecular mimetic derived from beechwood, which is in clinical use, although not as a blood anticoagulant. Since we intend to profile the Spike/ACE2/GAG axis as a novel therapeutic interface for the treatment and prevention of SARS-CoV-2 transmission, a compound with fewer potential side effects (e.g., bleeding) than heparin was investigated. Interestingly, the binding of PPS to Spike was found to be significant but weaker than its binding to ACE2. In viral infection experiments, PPS inhibited the entry/propagation of the virus in a similar concentration-dependent manner to Enoxaparin. We can therefore assume that both Enoxaparin and PPS are potent anti-COVID-19 compounds, which prevent viral spread in vitro and are highly efficacious at low active doses (Figure 5).
As mentioned above, a very straightforward way to make use of the anti-viral activity of Enoxaparin and PPS would be to apply the compounds orally (or nasally) at low doses, thereby neutralizing the virus in the sputum (or nasal fluid), which would prevent oral/nasal transmission of the virus very efficiently. We therefore tested the neutralizing ability of Enoxaparin and PPS in sputum samples from healthy donors (Figure 6). Although the ability of the compounds to reduce viral spread is slightly lower in sputum than in PBS, the doses required to disable the virus in sputum are still in a very attractive range (500 µg/mL), allowing them to be formulated as a lozenge or a nebulizer solution for oral and/or nasal application.
Our data show that Enoxaparin is a very effective inhibitor of SARS-CoV-2 infection and/or of the propagation of the virus. At the low doses applied, the compound is not expected to induce strong side effects, especially since it was recently shown that high oral doses of heparin, aimed at systemic availability, did not show noticeable off-target reactions [36]. We can, therefore, expect a strong reduction in the active viral load in the oral cavity (and in the nose, if applied as a nasal spray) of a SARS-CoV-2 carrier, which leads to a significant reduction in infectious viral transmission by the carrier. A direct therapeutic effect on the COVID-19 carrier, however, is not expected, since the oral bioavailability of heparin is very poor [36]. Aside from this, a novel use (a re-purposing) of Enoxaparin as a virus-inactivating compound, formulated as a mouth/nose spray or as a lozenge, is strongly suggested by our results.
Conclusions
We have presented evidence that Enoxaparin interacts with the Spike protein of the SARS-CoV-2 virus, thereby preventing viral infection/propagation in target cells. The inhibitory effect of heparin was, moreover, found to depend on the chain length and on the sulfation pattern of the heparin molecules. A heparin structural analogue, PPS, showed a similar inhibitory activity against viral infection, with a binding profile to the ACE2 receptor differentiated from that of Enoxaparin. Since both compounds, Enoxaparin and PPS, were found to inhibit viral infection/propagation in human sputum, they are proposed to be applicable in a therapeutic, inhaled setting, as well as in an anti-transmission setup as a spray or a lozenge. All routes of application are currently under investigation for clinical trials, with priority given to the prevention of active virus aerosolization and transmission by a mouth rinse with low doses of Enoxaparin. If the clinical trials lead to the expected results, i.e., a strong reduction in the active/infective viral load in the mouth (and nose) of a COVID-19-positive individual, we expect a strong social impact of our approach on interpersonal contacts in cases where potential viral carriers cannot be identified, because SARS-CoV-2 tests cannot be performed easily and with the required high frequency (e.g., at dentists, schools, etc.).
Patents
A European priority patent submission with the title "Novel use of heparin and heparin analogues" (EP 21155990.1) has been filed.
"year": 2021,
"sha1": "fb7e4c3399c09771f4bcbb4e6818fb73334093ee",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8772983",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c4569bffacbdad2075b897befa27948da666264",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Extraction Of Transvenous ICD Leads In An Over-ninety Years Old Patient.
There is a general consensus that once a part of an implanted cardiac device becomes infected, it is usually impossible to cure the infection without completely removing all prosthetic material from the body. Consequently, the Heart Rhythm Society (HRS) included pocket infection or erosion as a class I indication for pacemaker lead extraction. However, the procedure still carries a high risk of life-threatening complications due to fibrotic attachments between leads, veins, valves and other endocardial structures, even though specific tools and techniques have been developed to assist lead removal and prevent tissue laceration.
Typically, in clinical practice, device removal is often delayed in favor of initial management with antimicrobial therapy. This approach, often taken in elderly patients, can confer an ominous prognosis. We report a case of successful extraction of multiple leads in an over-ninety-year-old patient with a cardiac device infection resulting in severe sepsis while on antimicrobial therapy.
Case report
A ninety-two-year-old patient, affected by idiopathic dilated cardiomyopathy with an ejection fraction of 21%, NYHA class IV, and left bundle branch block on ECG (QRS 145 ms), had been implanted with a CRT-D (Guidant Contact Renewal with Sprint Fidelis lead) six years earlier. Three years after implantation, the device was replaced electively. Two years later, due to lead failure with recurrent inappropriate shocks, a new (single-coil) lead was positioned in the right ventricle and the Sprint Fidelis lead was abandoned. A few months after the operation, the patient presented with a severe and extensive pocket infection, treated by pocket revision with extensive excision of necrotic tissue. The new defibrillating lead in the right ventricle was abandoned and a CRT-P device was connected to the lead in the coronary sinus. One month later, the patient was referred to our institution by his family doctor because of recurrent fever and shivering for ten days. He had been treated with antibiotics (amoxicillin plus clavulanic acid) without benefit.
On admission, the patient was febrile (39 °C), hypotensive (BP 90/60 mmHg) and tachycardic (HR 95 bpm). Physical examination revealed clear lung fields, a fast heart rate with a 2/6 systolic ejection murmur, and mild pretibial edema. The skin over the pacemaker pocket was red and warm. Blood analysis showed a significant increase in inflammatory markers (white blood cells 30,000/mm3, C-reactive protein 260 mg/L). A chest X-ray showed no infiltrates and an enlarged cardiac shadow (Figure 1). The ECG revealed sinus rhythm at 95 bpm with constant left ventricular pacing. Blood cultures were drawn before vancomycin therapy was started. Soft-tissue ultrasound of the inflamed area showed a fluid collection of 9 x 6 x 40 mm. Transthoracic echocardiography showed marked biventricular dilatation, with a depressed ejection fraction and moderate mitral regurgitation. The absence of intracardiac vegetations was confirmed by transesophageal echocardiography. Blood cultures were positive for Staphylococcus aureus (non-MRSA), and the patient was treated with multiple antimicrobial agents (vancomycin, teicoplanin, rifampin, oxacillin, gentamicin) because of sepsis and soft-tissue infection in a patient with an implanted device. However, after more than one week of antimicrobial therapy, remitting fever was still present, although the inflammatory markers had decreased consistently; echocardiographic follow-up remained negative for vegetations. At this point, we decided to proceed to lead extraction after obtaining informed consent.
Fluoroscopy before the procedure showed two leads in the right ventricle, one lead in the coronary sinus and a right atrial lead. A cardiac surgery operating room was on stand-by. Under local anesthesia, the pacemaker pocket was opened, revealing massive purulent discharge. The free extremities of the defibrillation coils and of the sensing and pacing components of the Sprint Fidelis lead were capped together. All leads were isolated and straightened up to the venous entry site. A locking stylet (Liberator™, Leechburg, PA, USA) was advanced to the tip of each lead to enable counter-traction while proceeding with the extraction.
The Sprint Fidelis lead and the coronary sinus lead were removed by manual traction. A non-powered 9F Evolution™ sheath (Cook Medical Inc.) was used for extraction of the atrial and ventricular sensing and pacing leads. The Evolution sheath was advanced over each lead up to its tip in order to mechanically disrupt the fibrosis and create sufficient room to remove the lead.
The pacemaker pocket was debrided and cleaned, and local antibiotic therapy was administered. The patient did well during the entire procedure, and general anesthesia was not required. No complications occurred during the procedure. Post-operatively, the patient did not experience further episodes of fever, and blood chemistry panels improved continuously, enabling successful right-sided implantation of a CRT-D two weeks later (Figure 2). Eventually, the patient was discharged in good condition.
Discussion
Device removal is mandatory in cases of infection [1], as outlined by recent HRS guidelines [2]. However, in clinical practice it is often delayed in favor of initial management with antimicrobial therapy and pocket revision, as clearly appears in the case described.
We suppose that technical difficulties linked to the age of the patient and the presence of multiple leads were the reasons to initially follow a conservative approach, which ultimately did not resolve the infection. However, it is worth remarking that, while the level of difficulty and the complication risk of lead extraction are proportional to the number of leads [3-5], as experienced in the Lead Extraction Registry, the age of the patient has not been shown to be a predictor of complications [5], even if older patients are more likely to progress to a calcified fibrosis, which creates binding sites from which it is very difficult to free the lead. Moreover, recent evidence shows that early device removal is critical in the management of lead infections, since delayed operation is associated with a three-fold increase in 1-year mortality [6].
In our opinion, in order to obtain resolution of the septicemia, eradication of the infected focus is even more important in elderly, vulnerable patients. A critical aspect was the choice of tools for the extraction process. In our case, the Evolution System was used as the first-line extraction tool, considering the high chance of finding severely calcified binding sites, where laser use would be ineffective [7]. Moreover, the Evolution Mechanical Dilator Sheath has a rotational mechanism with a stainless-steel bladed tip to overcome fibrosis and cut adherences [7].
In conclusion, infected lead extraction has no major age contraindication, and it maintains its lifesaving clinical role even in the very aged.
"year": 2011,
"sha1": "d088aea06a7fa38aff7e7d8202c4d8ab041da0b7",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d088aea06a7fa38aff7e7d8202c4d8ab041da0b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Production of spin-controlled rare isotope beams
The degree of freedom of spin in quantum systems serves as an unparalleled laboratory where intriguing quantum physical properties can be observed, and the ability to control spin is a powerful tool in physics research. We propose a novel method for controlling spin in a system of rare isotopes which takes advantage of the mechanism of the projectile fragmentation reaction combined with the momentum-dispersion matching technique. The present method was verified in an experiment at the RIKEN RI Beam Factory, in which a degree of alignment of 8% was achieved for the spin of a rare isotope Al-32. The figure of merit for the present method was found to be greater than that of the conventional method by a factor of more than 50.
The immense efforts expended to fully comprehend and control quantum systems since their discovery are now entering an intriguing stage, namely the controlling of the degree of freedom of spin [1-5]. The case of nuclear systems is not an exception. In recent years, nuclear physicists have been focusing their efforts on expanding the domain of known species in the nuclear chart, which is a two-dimensional map spanned by the axes of N (number of neutrons) in the east direction and Z (number of protons) in the north direction. The key technique used to explore the south-eastern (neutron-rich, or negative in isospin Tz) and north-western (proton-rich, or positive in Tz) fronts of the map has been the projectile fragmentation (PF) reaction, in which an accelerated stable nucleus is transmuted into an unstable one through abrasion upon collision with a target. Several new facilities for providing rare-isotope (RI) beams by this technique, such as RIBF [6] in Japan, FRIB [7-9] in the United States, and FAIR [10-12] in Europe, have been completed or designed for exploration of the frontiers of the nuclear chart. Beyond such efforts toward exploring the N and Z axes, nuclear spin may be a "third axis" to be pursued. The study reported in the present article concerns the control of the spin orientation of an unstable nucleus produced in a RI beam at such fragmentation-based RI beam facilities. The ability to control spin, when applied to state-of-the-art RI beams, is expected to provide unprecedented opportunities for research on the nuclear structure of species situated outside the traditional region of the nuclear chart, as well as for applications in materials science, where spin-controlled radioactive nuclei implanted in a sample could serve as probes for investigating the structure and dynamics of condensed matter [13,14].
The fragmentation of a projectile nucleus in high-energy nucleus-nucleus collisions is described remarkably well by a simple model that assumes the projectile fragment produced in the PF reaction to be a mere "spectator" of the projectile nucleus; as a spectator, this fragment survives frequent nucleon-nucleon interactions, and the other nucleons ("participants") are abraded off through the reaction [15], as illustrated in Fig. 1. In the model, the projectile fragment acquires an angular momentum (in other words, a nuclear spin), whose orientation is determined simply as a function of the momentum of the outgoing fragment.
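At the crudest level, the spectator picture sketched above implies that the fragment angular momentum is of the order of the removed-nucleon momentum times its lever arm at the nuclear surface. The following back-of-the-envelope estimate uses illustrative numbers (a mass-33 projectile and a 100 MeV/c removed-nucleon momentum), which are assumptions for orientation, not values from the paper.

```python
# Order-of-magnitude estimate of the angular momentum left in a projectile
# fragment when one nucleon with Fermi-scale momentum is abraded at the surface.
HBARC = 197.327          # MeV*fm
A_projectile = 33        # illustrative mass number (e.g. a 33Al projectile)
R = 1.2 * A_projectile ** (1.0 / 3.0)   # nuclear radius in fm
p_removed = 100.0        # removed-nucleon momentum in MeV/c (illustrative)

L = R * p_removed / HBARC   # angular momentum in units of hbar
print(f"R ~ {R:.1f} fm, L ~ {L:.1f} hbar")   # roughly 2 hbar
```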
Although the spin orientation may in practice be reduced by cascade γ decays following the fragmentation, we assume that a significant amount of spin orientation survives in the fragment. Here, the degrees of spin orientation of rank one and rank two, in particular, are referred to as spin polarization and spin alignment, respectively. The model implies a unique relation between the spin orientation and the direction of the removed momentum pn, as illustrated in Fig. 1, which can be utilized as an obvious means for producing spin-oriented RI beams. One advantage of this method of orienting the fragment spin is that the resulting spin orientation does not depend on the chemical or atomic properties of the RI. However, the method also has a drawback, in the sense that the spin orientation produced in the PF reaction tends to be partially or completely attenuated, since the fragmentation generally involves the removal of a large number of nucleons from the projectile. This is quite a non-negligible flaw with respect to the yields attainable for spin-oriented beams, since high-intensity primary beams are available only for a limited set of nuclear species, and consequently, in most cases, RIs of interest must be produced through the removal of a large number of nucleons from the projectile. Accordingly, there has been high demand for a new technique for preventing the attenuation in spin orientation caused by large differences in mass between the projectile and the fragment. In this paper, we present a method for producing highly spin-aligned RI beams by employing a two-step PF process in combination with the momentum-dispersion matching technique. Figure 2 illustrates three different schemes for producing spin-aligned RI beams, where each scheme uses a different configuration of elements, namely primary and secondary targets and slits for selection. The most basic scheme employs the configuration in (a), in which a nucleus of interest is directly produced from a primary beam through a single occurrence of the PF (a single-step PF reaction). As stated earlier, this scheme suffers from the drawback that the degree of spin alignment tends to be attenuated when the PF involves the removal of a large number of nucleons from the projectile. With the aim of overcoming this problem, configuration (b) adopts a two-step PF reaction, where a beam of nuclei produced in the first PF reaction (secondary beam) is used to obtain a beam of the nuclide of interest through a second PF reaction. In particular, using a slit installed at a momentum-dispersive focal plane, the particles forming the secondary beam are chosen to be of a nuclide containing one proton or neutron more than the nuclide of interest. Thus, the target RI beam is produced via a PF reaction in which only one proton or neutron is removed. For the RI beam obtained with this scheme, the spin alignment is expected to be high due to the simplicity of the reaction [16,17]. We also note that a significant increase in the total production yield is suggested by experiments [18] in in-beam γ-ray spectroscopy. In this scheme, however, the production yield is typically reduced by a factor of ∼1/1000, because the production of the target nuclei requires the successive occurrence of two highly particular reactions.
A hint on how to eliminate the disadvantage of the scheme in (b) emerged from the recognition that the quantity that determines the spin alignment is solely the momentum change δp in the process of fragmentation that produces the nuclei of interest, and that the spin alignment is not sensitive to the momentum of the incident nucleus. In scheme (b), δp is selected with the aid of two momentum slits, the first of which is used to select the momentum of the secondary beam and the second slit determines the outgoing momentum of the secondary PF. The tremendous and unnecessary drop in yield is avoided by discarding the two-fold selection and introducing a single direct selection of the target δp itself. This is realized by placing a secondary target in the momentum-dispersive focal plane and a slit in the double-achromatic focal plane, as illustrated in Fig. 2 (c). The concept of realizing maximum spectral resolution in momentum loss by compensating for the beam momentum spread of the incident beam, as executed here, is known as dispersion matching in ion optics [19,20]. The important point of this technique is that the reaction products that acquire equal amounts of momentum change upon the second fragmentation are focused onto a single physical location. The application of this technique to PF-induced spin alignment can prevent the cancellation of opposite signs of spin alignment caused by momentum spread, which secondary beams unavoidably undergo.
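In standard first-order ion-optical notation (a sketch in conventional symbols, not taken verbatim from the paper), the cancellation can be written compactly: let D1 be the dispersion from the primary target to the dispersive plane holding the secondary target, D2 and M the dispersion and magnification from there to the achromatic slit plane, and δ1, δ2 the relative momentum changes acquired in the first and second fragmentation.

```latex
\[
  x_{\mathrm{slit}} = M D_1\,\delta_1 + D_2\,(\delta_1 + \delta_2)
                    = (M D_1 + D_2)\,\delta_1 + D_2\,\delta_2 .
\]
\[
  \text{Dispersion matching: } M D_1 + D_2 = 0
  \;\Longrightarrow\; x_{\mathrm{slit}} = D_2\,\delta_2 ,
\]
```

so the slit selects the momentum change δ2 of the second fragmentation alone, independently of the momentum spread δ1 that the secondary beam unavoidably carries.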
The validity of scheme (c) was first tested with the in-flight superconducting RI separator BigRIPS [21] at the RIKEN RIBF facility [6]. The arrangement for the production of spin-aligned RI beams with the present method is shown in Fig. 3. In the reaction at the primary target position F0, 33Al was produced by a PF reaction of a 345-MeV/nucleon 48Ca beam on a 9Be target with a thickness of 1.85 g/cm2, chosen to provide the maximum production yield for the secondary 33Al beam. A wedge-shaped aluminium degrader with a mean thickness of 4.05 g/cm2 was placed at the first momentum-dispersive focal plane F1, where the momentum acceptance was ±3%. The secondary 33Al beam was introduced to a second wedge-shaped aluminium target with a mean thickness of 2.70 g/cm2, placed at the second momentum-dispersive focal plane F5. The 32Al nuclei (including those in the isomeric state 32mAl) were produced through a PF reaction involving the removal of one neutron from 33Al. The thickness of the secondary target was chosen such that the energy loss in the target was comparable with the theoretical estimate for the width of the momentum distribution [22] for single-nucleon removal. In the present case, σGoldhaber = 90 MeV/c, and the momentum width for 32Al was measured to be σ = 80 MeV/c, or 0.4%. The 32Al beam was subsequently transported to focal plane F7, whereby the momentum dispersion from F5 to F7 was tuned to be of the same magnitude and opposite sign as that from F0 to F5 (momentum matching), effectively cancelling out the momentum dispersion from the site of the first PF reaction to F7. We note that the admixture of 32Al particles in the 33Al secondary beam was found to be negligibly small, owing to the difference in magnetic rigidity. Thus, only the 32Al particles produced at F5 were transported to F7. The slit at F7 was used to select a region of momentum change at the second PF of δp/p = ±0.15% about the center of the relative momentum distribution. The 32Al beam was then introduced to an experimental apparatus, shown in the inset of Fig. 3, for time-differential perturbed angular distribution (TDPAD) measurements. (See "Methods" for details.) The degree of spin alignment A was determined from the ratio

R(t) = [N13(t) − ε N24(t)] / [N13(t) + ε N24(t)],   (1)

where N13 (N24) is the sum of the photo-peak count rates at Ge 1 and Ge 3 (Ge 2 and Ge 4), which are two pairs of Ge detectors placed diagonally to each other, as depicted in the inset of Fig. 3, and ε denotes a correction factor for the detection efficiency. Theoretically, the R(t) ratio is expressed as a function of t as

R(t) = [3 A22 / (4 + A22)] cos[2(ωL t + α)]   (2)

in terms of the rank-two anisotropy parameter A22, which is defined as

A22 = A B2 F2.   (3)

Terms with higher ranks were evaluated to be negligible in the present case of 32mAl. Here, A denotes the degree of spin alignment,

A = Σm a(m) [3m² − I(I + 1)] / [I(I + 1)],   (4)

where a(m) is the occupation probability for magnetic sublevel m, and I is the nuclear spin. B2 is the statistical tensor for complete alignment, and F2 is the radiation parameter [23]. The parameter ωL (the Larmor frequency) is given by ωL = g μN B0/ħ, where g is the g-factor of 32Al in units of the nuclear magneton μN, and α is the initial phase of R(t).
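A minimal numerical sketch of this analysis is given below: it builds the experimental ratio from the two detector-pair rates and fits the standard rank-two TDPAD oscillation, in the forms reconstructed above. All count rates and parameter values are synthetic placeholders, not data from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def R_model(t, a22, omega_L, alpha):
    """Rank-2 TDPAD oscillation for two diagonal Ge-detector pairs."""
    return (3.0 * a22 / (4.0 + a22)) * np.cos(2.0 * (omega_L * t + alpha))

def R_exp(n13, n24, eps):
    """Experimental ratio built from the diagonal detector-pair rates."""
    return (n13 - eps * n24) / (n13 + eps * n24)

# Hypothetical time-binned count rates (arbitrary units) for illustration only.
t   = np.linspace(0.0, 1.0e-6, 50)                          # time in s
n13 = 1000.0 * (1.0 + 0.05 * np.cos(2 * 2.0e7 * t + 0.4))   # toy data
n24 = 1000.0 * (1.0 - 0.05 * np.cos(2 * 2.0e7 * t + 0.4))

r = R_exp(n13, n24, eps=1.0)
popt, _ = curve_fit(R_model, t, r, p0=(0.1, 2.0e7, 0.3))
print(f"A22 ~ {popt[0]:.3f}, omega_L ~ {popt[1]:.2e} rad/s")
```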
The 32Al nucleus is known to exhibit an isomeric state, 32mAl [24], at 957 keV with a half-life of 200(20) ns. The spin and parity of 32mAl had not been fixed between the 4+ and 2+ candidates. It is known that 32mAl de-excites by an E2 transition [25], emitting γ rays with an energy of 222 keV, and subsequently decays in cascade to the ground state by emitting 735-keV γ rays. Figure 4(a) shows a γ-ray energy spectrum measured with the Ge detectors, in which the 222-keV de-excitation γ rays are clearly observed as a peak. The time variations N13(t) and N24(t) of the intensities of this peak, obtained with the detector pairs Ge 1-3 and Ge 2-4, respectively, are presented in Fig. 4(b), in which the abscissas represent the time difference of the signals at either of the Ge detector pairs relative to the beam-particle signal at a plastic scintillator placed in front of the stopper crystal. The R(t) ratio evaluated according to Eq. 1 is shown in Fig. 4(c).
From the least-χ² fitting of the theoretical function of Eq. 2 to the experimental R(t) ratio of Eq. 1, we obtained a degree of spin alignment A = 8(1)%, and the g-factor of 32mAl was determined for the first time to be g = 1.32(1). The spin and parity were also assigned to be Iπ = 4+ through comparison of the g-factor with theoretical calculations.
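For orientation, the Larmor frequency implied by the measured g-factor follows directly from ωL = gμNB0/ħ; the static field B0 is not quoted in this excerpt, so the 0.5 T used below is purely an assumed value.

```python
# Larmor frequency nu_L = g * mu_N * B0 / h for the measured g-factor.
MU_N = 5.0507837e-27    # nuclear magneton, J/T
H    = 6.62607015e-34   # Planck constant, J*s

g  = 1.32               # measured g-factor of 32mAl (from the text)
B0 = 0.5                # static field in Tesla; assumed value, not from the paper

nu_L = g * MU_N * B0 / H
print(f"nu_L ~ {nu_L / 1e6:.1f} MHz")   # ~5.0 MHz for these inputs
```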
Detailed analysis and extended discussion regarding the 32 Al nuclear structure based on the obtained g-factor and spin-parity will be presented elsewhere.
A remeasurement of the degree of spin alignment was also performed during the experiment, in which the momentum acceptance in the F5 focal plane was narrowed to be ±0.5%, while maintaining other conditions unchanged. This measurement corresponded to the two-step PF reaction without dispersion matching (case (b) in Fig. 2). The degree of spin alignment derived from this measurement, 9(2)%, is consistent with the above value obtained with the proposed method, 8(1)%, thus confirming that the present method of producing spin-aligned RI beams is valid and performs well.
A supplementary experiment was carried out in order to compare the performance of the present method with that of the single-step method. 32Al was directly produced in a PF reaction of a 48Ca beam on a 4-mm-thick Be target. The thickness of the production target was chosen such that the energy loss in the target was comparable with the Goldhaber width [22] (expected to be 4% in this case). In order to compare with the two-step method under equivalent conditions, this measurement was carried out by selecting a momentum region of ±0.5% around the center of the fragment momentum distribution. For this momentum region, a maximum prolate alignment is expected. As a result, the spin alignment was measured to be less than 0.8% (2σ confidence level). A comparison of the two methods is summarized in Table I. The figure of merit (FOM) for the production of such spin-aligned RI beams should be defined to be proportional to the yield and to the square of the degree of alignment. In the measurement with the single-step PF reaction, a primary beam whose intensity was deliberately attenuated by a factor of 1/100 was used, in order to avoid saturation of the counting rate in the data acquisition system. Here, the FOM was compared on the basis of actual effectiveness, without correction for this attenuation; the resulting FOM for the new method was found to be improved by a factor of more than 50. Note that the degree of spin alignment in the single-step PF reaction could not be determined within a measurement time comparable with that of the two-step PF reaction.
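The FOM comparison can be made explicit with a small sketch; the relative yield entered for the single-step case is a placeholder (Table I is not fully reproduced here), and the single-step alignment is its 2σ upper limit, so the computed ratio is a lower bound.

```python
def fom(alignment, isomer_yield):
    """Figure of merit proportional to (degree of alignment)^2 times isomer yield."""
    return alignment ** 2 * isomer_yield

# Alignments from the text: 8% (two-step) and the 2-sigma upper limit of 0.8%
# (single-step). The relative isomer yields are placeholders, since Table I is
# not fully reproduced here; a 2x single-step yield advantage is assumed.
fom_two_step    = fom(0.08,  1.0)
fom_single_step = fom(0.008, 2.0)

# Because the single-step alignment is an upper limit, this ratio is a lower
# bound, consistent with the factor of more than 50 quoted in the text.
print(f"FOM ratio (two-step / single-step) >= {fom_two_step / fom_single_step:.0f}")
```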
The superiority in FOM of the new method over the single-step PF reaction method should be even more pronounced for nuclei located farther from the primary beam.
Theoretically, the maximum spin alignment for the case of single-nucleon removal from 33Al with a momentum acceptance of ±0.15% is estimated to be 30%, in a way similar to that described in [26,27]. The estimation is based on a model proposed by Hüfner and Nemes [15], in which the cross-section for the abrasion of one nucleon leading to a fragment in substate m with momentum p is proportional to the probability of finding a particle in substate −m with momentum −p at the surface of the target nucleus. The maximum evaluated in this way is in fact four times greater than that obtained experimentally, which may result from de-excitation from higher states populated through the PF reaction, such as the (4−) [28] and 1+ [25] states. This suggests that the ability to select the reaction path populating the state of interest is key to achieving augmented spin alignment. Thus, spin alignment via PF reactions depends strongly on both the reaction mechanism and the nuclear structure. Under these circumstances, the achieved degree of alignment, 1/4 of the theoretical maximum, obtained despite the fact that the reaction path to the isomeric state was not unique, is rather satisfactory. If we choose a nucleus produced by a unique reaction path, a degree of spin alignment closer to the theoretical maximum might be achievable. Figure 5 shows the result of simulating the accessibility of unstable nuclei via the two-step PF method and the conventional single-step method. In the simulation, the primary beam is assumed to be restricted to a class of beam species which are available at high intensities at RIBF [6]. Clearly, the adoption of the two-step method drastically expands the set of accessible nuclei in the nuclear chart. In addition to the simplest case, in which the nucleus of interest is produced through a one-nucleon removal reaction as presented in this article, the two-step scheme also allows the use of few-nucleon removal reactions as well as few-nucleon pickup reactions, which are known to produce significant spin orientation [30].
The FOM of our proposed method was found to be more than 50 times greater than that of the single-step method.
TABLE I: Comparison of the two methods. The listed momentum quantities are the width of the momentum distribution and the relative width of the selected region around the center of the fragment momentum distribution, respectively. Y ( 32 Al), Y ( 32m Al) and p( 32 Al) are the yield of 32 Al beam particles, the yield of the isomeric state 32m Al, and the purity of 32 Al particles in the RI beam, respectively, at the final focal plane for each method. The isomer-to-ground-state ratio r for the production of 32 Al was deduced as r = Y ( 32m Al)/[Y ( 32 Al) d], where d is the correction factor for the in-flight decay of 32m Al from the production target to the final focal plane, equal to 0.47 and 0.17 for the two-step and single-step methods, respectively. A is the degree of spin alignment. The values of FOM are defined as proportional to A 2 × Y ( 32m Al) and normalized so that the FOM for the two-step method is unity.
FIG. 5: Boxes indicate unstable nuclei. Among the latter, red boxes represent those "accessible" with the single-step PF method, and blue boxes represent nuclei which are only accessible with the two-step PF method, where "accessible" here means that the nucleus of interest is producible with its spin aligned and with a production yield sufficiently large to determine the g-factor of its isomeric state at a 5σ confidence level in a one-day beam time. In the plot, primary beams are restricted to the typical beam particles which are available at high intensities at RIBF. The following conditions are also assumed: the degree of spin alignment is 10% for single-nucleon removal from the beam particle and reduces exponentially to 1% down to 10-nucleon removal, as has been determined empirically; the intensity actually available at RIBF is assumed for each species of primary beam; the cross-sections for the PF reactions are estimated with the EPAX2 parameter sets [29], and the cross-section for the secondary PF reaction is assumed to be reduced by a factor of 1/1000, as usual; the isomeric-to-ground-state population ratio for the nucleus of interest in the PF reaction is 50%; and an external magnetic field of up to 1 T is available for the TDPAD measurement. | 2012-11-21T07:53:16.000Z | 2012-10-21T00:00:00.000 | {
"year": 2012,
"sha1": "2446a4c7151688a6ba9dde87addf7e72dfae7217",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1211.4955",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2446a4c7151688a6ba9dde87addf7e72dfae7217",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
32765352 | pes2o/s2orc | v3-fos-license | Protocol for universal gates in optimally biased superconducting qubits
We present a new experimental protocol for performing universal gates in a register of superconducting qubits coupled by fixed on-chip linear reactances. The qubits have fixed, detuned Larmor frequencies and can remain, during the entire gate operation, biased at their optimal working point where decoherence due to fluctuations in control parameters is suppressed to first order. Two-qubit gates are performed by simultaneously irradiating two qubits at their respective Larmor frequencies with appropriate amplitude and phase, while one-qubit gates are performed by the usual single-qubit irradiation pulses.
Single quantum bits displaying coherence in the time domain have now been implemented in various superconducting integrated electrical circuits [1]. Microwave spectroscopy [2], coherent temporal oscillations [3], and a conditional gate operation [4] have been reported in experiments on pairs of capacitively coupled qubits. In all these implementations, decoherence is by far the largest obstacle to be overcome for applications to quantum information processing. Yet, as Vion et al. have demonstrated, appropriate symmetries in circuit architecture and bias conditions can be exploited for suppressing, to first order, decoherence due to fluctuations in control parameters.
The schemes for performing two-qubit gates proposed so far require dynamical tuning of either the qubit transition frequencies [2] or of the subcircuit controlling the qubit-qubit interaction [5]. The former typically requires dc pulses that move the qubits away from their optimal bias points for coherence, while the latter requires additional control lines and non-linear elements that inevitably introduce additional couplings to uncontrolled degrees of freedom in the environment. In this Letter, we present a novel scheme which minimizes decoherence by maintaining both qubits at their optimal bias points, and by employing only noise-free fixed linear coupling reactances. Furthermore, this scheme takes advantage of the spread in circuit parameters occurring naturally in fabrication, instead of suffering from it.
Our strategy consists of constructing a circuit with fixed, detuned Larmor frequencies and fixed coupling strengths, a sort of "artificial molecule", and realizing gates with protocols inspired by those of nuclear magnetic resonance (NMR) quantum computation [6]. The essential difference between our "molecules" and those used in NMR resides in the form of the qubit-qubit couplings and the way they are exploited. In NMR, the secular terms in the coupling Hamiltonian (i.e. those that commute with the Zeeman Hamiltonian and thus act to first order) dominate the spin-spin interaction. Two-qubit gates are realized as the spins precess freely, while refocusing pulses are applied in order to "do nothing". In our scheme, the coupling is purely non-secular, and has no effect to first order. So unlike in NMR, we must construct pulses to enhance the second-order effect of the coupling. We refer to this strategy with the (NMR-style!) nickname FLICFORQ, for Fixed LInear Couplings between Fixed Off-Resonant Qubits.
The superconducting register we have in mind may consist of charge qubits (controlled via charges on gate capacitors) interacting through on-chip capacitors or of flux qubits (controlled via fluxes through superconducting loops) interacting through mutual inductances. We focus for the moment on two-qubit registers (Fig. 1), the simplest that allow the realization of a universal set of quantum gates, leaving the extension to larger systems to the discussion. The optimal bias conditions for the circuits shown are N g 1 = N g 2 = 1/2 for charge qubits, where N g = C g U/2e is the dimensionless gate charge, or Φ 1 = Φ 2 = Φ 0 /2 for flux qubits, where Φ is the flux threading the qubit loop and Φ 0 is the flux quantum. Under these conditions, the systems become immune, to first order, to variations in the control parameters, such as 1/f charge noise in the Josephson junctions or substrate, or noise due to the motion of trapped flux [7]. At optimal bias, these two-qubit systems are described by the reduced Hamiltonian H/ℏ = −(1/2)(ω z 1 σ z 1 + ω z 2 σ z 2 ) + [ω x 1 cos(ω rf 1 t) + ω y 1 sin(ω rf 1 t)]σ x 1 + [ω x 2 cos(ω rf 2 t) + ω y 2 sin(ω rf 2 t)]σ x 2 + (ω xx /2)σ x 1 σ x 2 , where ω z 1 /2π (ω z 2 /2π) is the Larmor frequency of qubit 1 (2); ω rf 1 /2π (ω rf 2 /2π) is the frequency of the signal applied to the "write" port of qubit 1 (2); ω x 1 (ω x 2 ) and ω y 1 (ω y 2 ) are the amplitudes of the in-phase and quadrature components of the applied signals, respectively, and, when divided by 2π, are directly interpretable as Rabi frequencies; and ω xx /2π = (t swap ) −1 is the "swap" frequency (if only the σ x 1 σ x 2 term were present in H, the time needed to go from a product state to a maximally entangled state would be t swap /4). The Larmor frequencies are detuned from one another, as occurs naturally during fabrication, by δ = ω z 1 − ω z 2 , and remain fixed throughout. The swap frequency is fixed at the time of circuit fabrication and should satisfy ω xx ≪ δ to avoid significant entanglement of the qubits in the absence of external signals. For concreteness, we consider the case ω xx = 0.1δ = 0.01ω o , where ω o = (ω z 1 + ω z 2 )/2, but our results do not depend critically on these values. For simplicity, we limit ourselves in this paper to resonant RF pulses constrained to obey ω rf 1 = ω z 1 and ω rf 2 = ω z 2 , and we perform all possible gates by playing with only four external knobs, ω x 1 , ω y 1 , ω x 2 and ω y 2 . The difference between ω rf 1 and ω rf 2 suppresses cross-talk during gate operations, a crucial practical advantage of FLICFORQ.
The mechanism allowing the very weak interqubit coupling ω xx to produce maximally entangled two-qubit states is easily understood in the dressed-atom picture of quantum optics [8,9]. When the RF fields and qubits are uncoupled, each qubit + field system has an infinite discrete ladder of doubly-degenerate energy levels, labelled by the qubit state |1⟩ or |0⟩ and the photon number |N⟩, and separated by ω rf (Fig. 2, outer levels). Taking the qubit-field coupling into account lifts the degeneracy, causing the two states in each manifold to be split symmetrically by the field strength ω y 1 (Fig. 2, inner levels). The two dressed qubits may then absorb and emit energy at frequencies ω z 1 ± ω y 1 and ω z 2 ± ω y 2 , respectively. The result of the irradiation is thus to split the single-mode qubit line into two sidebands at these frequencies. Choosing the RF amplitudes ω y 1,2 = δ/2 causes the lower sideband of qubit 1 to overlap the upper sideband of qubit 2, allowing the qubits to exchange photons of energy (ω z 1 − ω y 1 ) = (ω z 2 + ω y 2 ) through the coupling reactance.
The simplest protocol for generating entangled states in this system is to simultaneously irradiate each qubit with a signal of amplitude δ/2 [10]. If we choose ω y 1,2 = δ/2 and ω x 1,2 = 0, and initialize the state to ρ in = |00⟩⟨00|, the qubits will become maximally entangled after a pulse time 4π/ω xx = 2t swap . If the RF is then switched off, the system will remain in an entangled state until it is measured in a local basis or it undergoes decoherence or relaxation. Note that this scheme allows us to produce entanglement on demand without dc excursions from the optimal bias point of either qubit.
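This protocol is easy to check numerically with nothing but numpy. The toy integration below uses a lab-frame Hamiltonian of the form given above (the drive phase and sign conventions are our assumptions), drives both qubits in quadrature with amplitude δ/2, and tracks the purity of the reduced state of qubit 1, which drops towards 1/2 as the qubits approach maximal entanglement; the time of the minimum can be compared with the predicted 2t swap.

```python
import numpy as np

# Toy lab-frame check of the entangling protocol (our drive conventions):
# fixed sigma_x1.sigma_x2 coupling, both qubits driven resonantly in
# quadrature with amplitude delta/2, starting from |00>.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

w0 = 2 * np.pi                       # mean Larmor angular frequency (a.u.)
delta, wxx = 0.1 * w0, 0.01 * w0     # detuning and fixed coupling
wz1, wz2 = w0 + delta / 2, w0 - delta / 2
wy = delta / 2                       # sideband-matching drive amplitude

sz1, sz2 = np.kron(sz, I2), np.kron(I2, sz)
sx1, sx2 = np.kron(sx, I2), np.kron(I2, sx)
H0 = -0.5 * (wz1 * sz1 + wz2 * sz2) + 0.5 * wxx * (sx1 @ sx2)

def deriv(t, psi):
    Hd = wy * np.sin(wz1 * t) * sx1 + wy * np.sin(wz2 * t) * sx2
    return -1j * ((H0 + Hd) @ psi)

def purity_q1(psi):
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho1 = np.trace(rho, axis1=1, axis2=3)      # trace out qubit 2
    return float(np.real(np.trace(rho1 @ rho1)))

t_swap = 2 * np.pi / wxx
h, t = 5e-3, 0.0
psi = np.array([1, 0, 0, 0], dtype=complex)     # |00>
best = (1.0, 0.0)
while t < 3 * t_swap:                           # classic RK4 integration
    k1 = deriv(t, psi); k2 = deriv(t + h / 2, psi + h / 2 * k1)
    k3 = deriv(t + h / 2, psi + h / 2 * k2); k4 = deriv(t + h, psi + h * k3)
    psi = psi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
    best = min(best, (purity_q1(psi), t))
print(f"minimum purity {best[0]:.3f} at t = {best[1] / t_swap:.2f} t_swap")
```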
The rotation realized when the system is irradiated in this manner, which we call "D", is not a pure σ x 1 σ x 2 rotation, but is rather a product of two commuting π/2 rotations, D = (X 1 X 2 ) 1/2 (Z 1 Z 2 ) 1/2 , where we have used the rotation operator notation in which X 1 = iσ x 1 , X 1 X 2 = iσ x 1 σ x 2 , etc. This rotation maps the computational basis states to the Bell states with a relative phase, e.g. D: |00⟩ → (|00⟩ + i|11⟩)/√2 [11], and can therefore be used to generate and study maximally entangled states. However, it is easy to verify that D 2 = −1, indicating that D is a π rotation, and therefore is not universal [12].
We propose to circumvent this problem by nulling out the σ z 1 σ z 2 factor of D. This is done by flipping the sign of the RF signal amplitude on one of the two qubits midway through the pulse. With this "refocusing flip" the unwanted σ z 1 σ z 2 rotation taking place during the first half of the pulse will be fully undone during the second half. This technique resembles the refocusing schemes used in NMR [13], though here we are modifying the pulse shape rather than performing additional π rotations.
Implementing the refocusing flip leads to the pure π/2 rotation (X 1 X 2 ) 1/2 . This rotation, when augmented by one-qubit π/2 rotations, is known to generate the two-qubit Clifford group C 2 [11]. So along with all one-qubit unitaries, (X 1 X 2 ) 1/2 therefore constitutes a universal set of rotations.
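The claimed properties of this rotation can be verified in a few lines of linear algebra. The snippet below builds (X 1 X 2 ) 1/2 as exp(iπσ x 1 σ x 2 /4), our reading of the rotation-operator notation, and checks that it is unitary and maps |00⟩ to the Bell state (|00⟩ + i|11⟩)/√2.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sxsx = np.kron(sx, sx)                    # sigma_x1 sigma_x2

# (X1X2)^{1/2}: a pi/2 rotation generated by sigma_x1 sigma_x2.
U = expm(1j * np.pi / 4 * sxsx)

print(np.allclose(U.conj().T @ U, np.eye(4)))   # unitary: True
ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = np.array([1, 0, 0, 1j], dtype=complex) / np.sqrt(2)
print(np.allclose(U @ ket00, bell))             # |00> -> (|00>+i|11>)/sqrt(2): True
```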
We can thus turn to the construction of a protocol to perform U CNOT , the rotation corresponding to the standard two-qubit logical gate CNOT. We first decompose U CNOT into a sequence of rotations that draws only on (X 1 X 2 ) 1/2 and one-qubit π/2 rotations. We use the sequence of expression (2), shown in figure 3 (time running left to right), which, as required, performs the U CNOT mapping in the Heisenberg picture [14]. Though similar in spirit to decompositions of U CNOT given elsewhere for other systems, e.g. [15], expression (2) is presented in the general language of Pauli rotation operators, making it applicable to any physical implementation. It can, with simple algebra, be adapted to systems where the core two-qubit gate is other than (X 1 X 2 ) 1/2 [11].
FIG. 2: Outer: each uncoupled qubit + field system has an infinite ladder of doubly-degenerate levels corresponding to products of a photon number state (green, orange) and a qubit state (red, blue). Inner: the photon-qubit coupling lifts the degeneracy in each manifold by the Rabi frequency ω y . Transitions between adjacent manifolds (wavy arrows) correspond to absorption/emission of a photon from the dressed qubit system. The transition of qubit 1 at ω z 1 − ω y 1 and that of qubit 2 at ω z 2 + ω y 2 coincide when ω y 1 = ω y 2 = δ/2, putting the qubits on speaking terms.
FIG. 3: Protocol constructing U CNOT from a sequence of five "primitive" π/2 rotations. One-qubit σ x and σ y pulses have amplitude δ/8 and duration t sync 2 = 4π/δ; two-qubit pulses have amplitude δ/2. The σ z rotation occurs last, where it can be ignored if followed by measurement in the computational basis. The gate is completed at time t CNOT . (b) Description of the sequence in the Heisenberg picture [14]. The first and last columns (red, green) are connected by U CNOT .
The full U CNOT pulse sequence is constructed by concatenating the pulses generating each of the constituent rotations in expression (2) (see figure 3).
We must briefly comment on how the difference between the Larmor frequencies is dealt with. In the absence of irradiation the natural evolution of the system consists of continuous rotations of each qubit about the σ z axis, resulting in a time-dependent phase between the qubits that vanishes every t sync o = 2π/δ. This phase is unimportant when considering one-qubit gates, as compensatory σ z rotations may be realized through simple waiting periods [13]. However, it must be taken into account when doing two-qubit rotations by initiating two-qubit pulses only at t sync m = mt sync o for integer m, i.e. when the qubits are in synchrony. This condition can be met by using one-qubit pulse amplitudes such that the one-qubit σ x and σ y rotations last t sync m . For the above chosen parameters, an amplitude of δ/8 for π/2 pulses is convenient (finer rotations may be generated by weaker pulses with the same duration), corresponding to t sync 2 = 4π/δ. The associated timing grid is shown with dashed vertical lines in figure 3. This scheme is generalizable to multiqubit registers (see below).
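The timing bookkeeping is pure arithmetic: a π/2 rotation driven at Rabi amplitude δ/8 lasts (π/2)/(δ/8) = 4π/δ, i.e. exactly two synchronization periods t sync o = 2π/δ, so one-qubit pulses always end on the synchrony grid. A one-line check:

```python
import math

delta = 1.0                             # detuning sets the time unit
t_sync = 2 * math.pi / delta            # qubit phases realign every t_sync
t_pulse = (math.pi / 2) / (delta / 8)   # pi/2 pulse at amplitude delta/8

print(t_pulse / t_sync)                 # 2.0 -> pulse ends exactly on the grid
```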
We have simulated the pulse sequence of figure 3 by numerically solving a set of fifteen coupled differential equations describing each component of the two-qubit density operator [16]. The simulation technique is exact in the sense that it does not rely on any approximations or perturbative expansions of the time-dependent Hamiltonian. Figure 4 shows the results of a simulation of two representative evolutions. The simultaneous vanishing of each component of the two reduced density operators indicates the generation of a fully entangled two-qubit state [12].
FIG. 4: Evolution of sample input states during the U CNOT sequence. Components of the reduced density operators ρ 1 = Tr 2 ρ (row 1) and ρ 2 = Tr 1 ρ (row 2). ρ 1 and ρ 2 are plotted in reference frames rotating at ω z 1 and ω z 2 , respectively. The dashed vertical line denotes t CNOT . The error visible at t CNOT is due to the Bloch-Siegert shift and the effect of the coupling during one-qubit rotations [17].
What level of gate fidelity can we expect from this scheme? We first discuss the error in one-qubit gates. Choosing the one-qubit pulse time to be t sync 2 means that the coupling term ω xx will perform a parasitic rotation in this time by an angle arccos(ω xx t sync 2 ). This rotation can be nullified altogether using dynamic decoupling schemes, as is done in NMR [13]. In the present system, this would be done by performing appropriately timed π rotations about σ y , which anticommutes with the coupling term σ x 1 σ x 2 . However, for the range of practical circuit parameters δ ≲ 0.1ω z o and ω xx ≲ 0.01ω z o , the one-qubit gate error rate resulting from this parasitic rotation is already below 10 −3 , or two orders of magnitude better than the fidelity of presently available readout schemes [7], and the correction is an easily dispensable luxury. Also, though our simulations have used only square pulses, realistic pulse shapes should not cause a significant further loss of fidelity in the one-qubit operations, since, as is commonplace in NMR, pulse shapes requiring far less bandwidth could be used [13].
The two-qubit gate error rate will likely be dominated by errors resulting from imperfect RF pulses. The strength of the qubit-qubit interaction depends strongly on the amplitude of the simultaneous RF signals, so entangling gates will be sensitive to jitter or ringing in the pulse amplitudes. This problem could be minimized by using slowly-rising pulses which require less bandwidth rather than trying to approximate square pulses. The pulses could be constructed so that intended one-qubit rotations are implemented during the rise time.
Nonetheless, there is still some error present in our simulations of two-qubit gates, even though we have used ideal pulses. We have verified that this can be attributed to the counter-rotating term in the rotating wave framework [18], as the qubits are irradiated with strong fields for many Larmor periods during a two-qubit gate. This error can be reduced by choosing a stronger coupling ω xx , thereby reducing the time required to generate entanglement, or by reducing the detuning δ, which reduces the required field strength. Since these changes would increase the one-qubit gate error rate due to the fixed coupling, an NMR-style decoupling scheme will likely be needed once we require a gate error rate below 10 −3 .
We believe a main advantage of the gate scheme presented in this paper is that it can be directly generalized to multiqubit registers with minimal extra hardware. A fixed linear coupling reactance between all pairs of qubits could easily be achieved by coupling each qubit to a common superconducting island, loop or cavity. Selective one-qubit gates would be realized by applying an RF signal at just the target qubit transition frequency, while the protocol generating D or the universal two-qubit rotation (X 1 X 2 ) 1/2 could be realized on any pair by simultaneously applying RF signals at the resonant frequencies of the two targeted qubits. Since each qubit in the register would be detuned from all the others, all these write pulses could be multiplexed on a single RF control line, a decisive advantage in seeking to limit stray couplings to the environment or crosstalk between qubits. Applying D to several qubits in a pairwise fashion would allow the direct production of multiqubit entangled states of the form |GHZ⟩ = (|0⟩ ⊗n + e iφ |1⟩ ⊗n )/√2, which, for n > 2, can display maximal violations of Bell-type inequalities [19]. | 2017-10-20T09:34:00.076Z | 2004-12-01T00:00:00.000 | {
"year": 2004,
"sha1": "028c24850e919aa8f11174b1d230fade318d64c3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/0412009",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "028c24850e919aa8f11174b1d230fade318d64c3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
90058008 | pes2o/s2orc | v3-fos-license | Isolation, identification of Phytophthora nicotianae var. parasitica and screening of tomato parental lines for buckeye rot resistance
Buckeye rot of tomato, one of the most devastating diseases of the tomato crop, is caused by the soil-borne fungus Phytophthora nicotianae var. parasitica. In the present study, the pathogen was isolated and morphologically identified, and its pathogenicity was proved on the susceptible commercial variety Solan Lalima and the resistant line EC-251649 of tomato. Of the two media tested, viz. Potato Dextrose Agar (PDA) and Corn Meal Agar (CMA), isolation of the pathogen from infected tomato fruit was achieved on CMA. Fungal inoculum was prepared in Corn Meal broth. Inoculation with 10 ml of inoculum was found optimum for the plant parts tested, namely stem, leaves and fruits. The parental lines were inoculated to test their disease reaction to buckeye rot. Symptoms of infection appeared on leaves and fruits only. Solan Lalima was found to be highly susceptible, with a disease severity of 92% and a disease incidence of 100%, while EC-251649 was found moderately resistant, on the basis of 16% disease severity and 10% disease incidence. After confirmation of resistance and susceptibility, the parental lines were surveyed for polymorphism using 42 primers, of which 32 were recorded to be polymorphic, revealing that the differences are present at the DNA level as well. This is the first study to evaluate parental lines for buckeye rot disease reaction on a morphological as well as a molecular basis. These lines will be further used for quantitative trait loci (QTL) analysis/gene tagging for buckeye rot and for marker-assisted selection to provide improved varieties to farmers.
INTRODUCTION
Tomato (Solanum lycopersicum L.) is one of the most economically important and widely grown vegetable crops in the family Solanaceae. It is the second most consumed vegetable crop after potato, with a production of 163.4 million tonnes. China accounted for 31% of the total, followed by India, the United States and Turkey (FAOSTAT, 2015). One of the problems of tomato cultivation is the damage caused by pathogens, including viruses such as Tomato mosaic virus (ToMV), Tomato spotted wilt virus and Potato virus Y; bacteria such as Xanthomonas campestris, Clavibacter michiganensis and Pseudomonas syringae; nematodes; and fungi such as Alternaria solani, Alternaria tenuis, Phytophthora infestans, Phytophthora nicotianae var. parasitica, Colletotrichum phomoides and Fusarium sp. One of the most devastating diseases is buckeye rot, which causes 30-40 per cent crop loss, a figure that may rise with the prevalence and severity of the disease under favourable weather conditions (Gupta et al., 2005). Sherbakoff in 1917 reported buckeye rot for the very first time, from Florida (Wani, 2011). In India, this disease was reported for the first time from the Solan area of Himachal Pradesh (Jain et al., 1961). Buckeye rot of tomato fruit is caused by the soil-borne fungus Phytophthora nicotianae var. parasitica. Initial symptoms consist of a brownish, water-soaked circular spot that usually appears near the blossom end, or at the point of contact between the fruit and soil (Tiwari et al., 2014). The disease most commonly occurs under prolonged warm and wet conditions. The buckeye rot fungus may be introduced through infected seeds or transplants, by contact with infested soil, or through plants from the previous crop. Temperatures between 23 °C and 30 °C are ideal for disease development. Spores can germinate in soil or on decaying debris. Splashing rain and surface water can disperse spores onto healthy plants. Susceptible tomato plants can be killed within three weeks of transplantation into infected soil. The fungus is also seed-borne and may be spread by contaminated seed, leading to failure of germination (AVRDC, 2004; Lu et al., 2013). As no variety with resistance to this disease is available to date, the only management options are cultural and chemical control (Bijaya et al., 2002; Olanya and Larkin, 2006). In cases of severe infection by the pathogen, however, these control measures do not prove very effective. Thus, another way to combat this disease is the development of resistant varieties, by identifying a susceptible commercial variety and a source of resistance. The wild cherry tomatoes are thought to carry resistance genes for this disease. The susceptible and resistant parents can then be used for breeding purposes. The present study was designed to fulfil the following objectives: isolation of Phytophthora nicotianae var. parasitica, morphological identification of the fungus, and morphological and molecular screening of susceptible and resistant varieties.
MATERIALS AND METHODS
Isolation and morphological identification of the pathogen: For isolation of the pathogen, a buckeye rot-infected tomato fruit showing the brownish pattern of concentric rings, obtained from the fields of the Department of Plant Pathology, Dr YS Parmar University of Horticulture and Forestry, Nauni, Solan (HP), was used (Fig. 1). The fruit was washed with autoclaved distilled water, and the part of the fruit around the infected portion was used for culturing on two different media, viz. Corn Meal Agar (CMA) and Potato Dextrose Agar (PDA). For proper growth of the fungal isolate, the culture room temperature was maintained at 25 °C (Fig. 2). Morphological characterization of the fungus (Fig. 3) was then carried out, covering colony morphology and the microscopic structures of the fungus. The mycelium was hyaline and coenocytic and produced sporangiophores. The sporangiophores were sympodially branched, had swellings at the nodes and produced lemon-shaped, papillate sporangia. Maintenance of pure culture and preparation of fungal inoculum for screening: The culture of P. nicotianae var. parasitica was maintained on CMA medium in petri plates by subculturing a single bit of previously grown culture to obtain a pure culture of the pathogen. The culture was incubated at 25 °C for 10 days until uniform fluffy growth was obtained. Thereafter, the culture plates with the pathogen were covered properly and kept at low temperature (4 °C) to stop further growth. After morphological confirmation, a dilute suspension of fungal cells was prepared in Corn Meal broth (CM) (Fig. 4). After ten days of mycelial growth, the density of the fungal inoculum was standardized using a haemocytometer for inoculating fruits. An optimum density of 15-20 hyphae/ml in the haemocytometer was obtained by mixing 1 g of fungal hyphae in 80 ml distilled water. The fungal inoculum was then used to infect tomato fruits at different volumes, viz. 2 ml, 5 ml, 10 ml and 15 ml. Inoculation with 10 ml of inoculum was found most effective: volumes that were too low, i.e. 2 ml and 5 ml, did not cause much damage, while a volume that was too high, i.e. 15 ml, caused early and complete damage of the fruit. Pathogenicity test of parents using fungal inoculum: The pathogenicity test was conducted on different parts of the tomato plant, viz. stem, leaves and fruits, by injecting 10 ml of inoculum and incubating in a humid chamber for 10 days. To conduct the pathogenicity test, the two parental lines 'Solan Lalima' (susceptible) and 'EC-251649' (resistant) to buckeye rot were used. Infected material was observed periodically for the appearance of symptoms such as the formation of a brownish spot and a pattern of concentric rings of brown bands on the fruits. In the case of fruits, screening was done using two methods: the detached fruit method and the intact fruit method. In the case of leaves, disease was measured using the 0-5 scale adopted by Dodan et al. (1995), and disease severity was calculated using the formula: Disease severity (%) = [Σ(scale rating × number of leaves in that rating)/(total number of leaves assessed × maximum rating)] × 100. After calculating disease severity, the scale given in Table 1 was used for assessing disease reaction on leaves, on the basis of which the leaves were grouped into different categories. The disease incidence in the case of fruits was calculated using the following formula: Disease incidence (%) = (number of infected fruits/total number of fruits observed) × 100. After calculation of disease incidence, the scale given in Table 2 was used for assessing disease reaction on fruits, on the basis of which the fruits of each plant were grouped into different categories. Molecular screening of parental lines: Molecular screening of the parental lines was done to detect polymorphism at the genomic level.
To carry out polymorphism studies among the parental lines, DNA from leaves of seedlings was isolated using the cetyl trimethyl ammonium bromide (CTAB) method (Doyle and Doyle, 1987). For isolation, an extraction buffer was prepared which contained 1.5% CTAB, 20 mM EDTA (pH 8.0), 1.4 M NaCl, 100 mM Tris-HCl (pH 8.0), 1% polyvinylpyrrolidone (PVP) and 0.2% β-mercaptoethanol. Isolated DNA was purified following treatments with RNase (10 μg/μl), chloroform-phenol, 3 M sodium acetate and absolute ethanol, followed by suspension and precipitation using absolute alcohol to obtain the pure DNA pellet. The pellet was dissolved in Tris-EDTA (TE) buffer (10 mM Tris-HCl and 1 mM EDTA, pH 8) and stored at 4 °C until used. PCR amplification of genomic DNA: Isolated DNA was subjected to PCR for amplification using primers (Table 3). A reaction mixture of 20 μl for PCR analysis was prepared using 1X PCR buffer, 2 mM MgCl 2 , 1 mM dNTPs, 20 picomoles of each primer (forward and reverse), 1 U Taq DNA polymerase and 50 ng template DNA, following a thermal profile of 5 min of initial denaturation at 95 °C, followed by 40 cycles of 1 min denaturation at 94 °C, annealing for 1 min (temperature varied with the Tm of each primer) and extension for 2 min at 72 °C, further followed by a final extension of 5 min at 72 °C (Kaur et al., 2015). The amplified DNA was mixed thoroughly with 6X loading dye (0.25% bromophenol blue, 40% sucrose), followed by electrophoresis in a 3.5% agarose gel supplemented with 0.5 μg/ml ethidium bromide in 1X TAE buffer (40 mM Tris-acetate, 1.0 mM EDTA). The gel was run at a constant voltage of 5 V/cm for about 3 hours. For PCR amplification, three different types of molecular markers, viz. ISSR, genomic SSR and EST-SSR, were used.
RESULTS AND DISCUSSION
Isolation and morphological identification of the pathogen: Isolation of the fungus was successfully achieved on CMA (Fig. 2). The fungal mycelium was hyaline and coenocytic, with sympodially branched sporangiophores that had swellings at the nodes and produced lemon-shaped, papillate sporangia (Fig. 3). Pathogenicity test: A dilute suspension of fungal cells in Corn Meal broth was prepared as standardized (Fig. 4) and then inoculated at 10 ml into different plant parts. After injection of 10 ml of inoculum into the stem and fruits and spraying onto the leaves, it was found that no symptoms of infection appeared on the stem, leading to the conclusion that this fungus has no deleterious effect on the stem portion (Fig. 5 a, b). Symptoms appeared as dead brown areas on leaves and as brown concentric rings on fruits. Data were therefore recorded for disease severity on leaves and disease incidence on fruits of tomato. In the case of fruits, inoculation was also conducted under in vivo conditions. Data on disease severity and incidence revealed that Solan Lalima is highly susceptible to buckeye rot (Fig. 6a, 7a, 8b), while EC-251649 was found moderately resistant to this disease (Fig. 6b, 7b, 8a and Table 3). Molecular screening of parental lines: A total of 45 primers, comprising 15 each of ISSR, genomic SSR and EST-SSR primers, were used to conduct the parental polymorphism survey. Of these, 32 primers were found polymorphic between the parents (overall 71.11% polymorphism), including 13 ISSRs, 7 genomic SSRs and 12 EST-SSRs, giving percentages of 86.66, 56.66 and 80.0, respectively (Table 4). This study confirmed the presence of variation between the parents at the genomic level. These molecular marker results confirmed that the two parents, which were proved contrasting at the phenotypic level for disease resistance and susceptibility, are also contrasting at the genomic level (Fig. 9). In studies conducted earlier, various media were evaluated, including CMA, lima bean agar (LBA), modified lima bean agar (MLBA), malt extract agar (MEA), oat meal agar (OMA) and PDA, giving 90.0 mm, 31.2 mm, 49.2 mm, 51.0 mm, 90.0 mm and 79.3 mm of mycelial growth, respectively, after seven days of culturing, and CMA was recorded as one of the best media for Phytophthora nicotianae growth (Bowers and Locke, 2004; Flores et al., 2013). The identifying features, i.e. hyaline, coenocytic mycelium with finely branched sporangiophores, matched the description of Ribeiro (1978). Screening against buckeye rot disease under laboratory conditions has been done earlier by Oliva-Risco (1983) following different methods, viz. fruit dip followed by inoculation (50%), fruit dip-injury-inoculation (100%) and injury-fruit dip-inoculation (66.68%). The data on disease incidence were recorded for ten days, by which time 100% disease had appeared in one of the treatments. Among these methods, injury-fruit dip-inoculation was found most effective, with a value of 66.68% disease incidence. In the present study, inoculation using a syringe was found successful for injecting inoculum into fruits and stems, whereas in the case of leaves a spray of fungal inoculum was used successfully to establish infection. The screening results on fruits and leaves were congruent with each other, categorizing the parental lines as highly susceptible, with 92.16% disease severity and 100% disease incidence, and moderately resistant, with 16.00% disease severity and 10.00% disease incidence.
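Assuming the standard forms of the severity and incidence formulas given in the methods, figures such as those above can be reproduced with a short helper; the counts below are made-up examples, not data from this study.

```python
def disease_severity(counts_by_grade, max_grade=5):
    """Percent severity on a 0-5 scale; counts_by_grade maps each grade to
    the number of leaves assigned that grade."""
    total = sum(counts_by_grade.values())
    weighted = sum(grade * n for grade, n in counts_by_grade.items())
    return 100.0 * weighted / (total * max_grade)

def disease_incidence(infected, total):
    """Percent of observed fruits showing infection."""
    return 100.0 * infected / total

# Made-up example counts:
print(disease_severity({0: 2, 1: 3, 2: 5, 3: 4, 4: 4, 5: 2}))  # 51.0
print(disease_incidence(infected=9, total=10))                  # 90.0
```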
It is noteworthy that the highly susceptible parent 'Solan Lalima' is a promising variety with high yield, while the moderately resistant parent 'EC-251649' is a small-fruited line. Mehta (2004) also reported some level of resistance, with 15 to 30% disease incidence, in small-fruited lines, viz. EC 174041, EC 141887 and FT5-5. The difference between the two parents was also revealed by three different types of molecular markers, viz. ISSR, genomic SSR and EST-SSR, with 86.66, 56.66 and 80.00% polymorphism, respectively. The high polymorphism indicated that the variations present at the morphological level also exist at the DNA level and are not merely due to environmental conditions. Three different molecular marker systems were used to produce more authenticated results. This is the first study in which infection caused by the fungus P. nicotianae var. parasitica was reported on leaves. Although the fruit remains the plant part most prone to infection, infection also appears on leaves.
There are no earlier reports on the use of molecular markers for studying polymorphism at the DNA level between parental lines either susceptible or resistant to buckeye rot.
Conclusion
Isolation and conservation of fungi of the genus Phytophthora are very laborious. Thus, the search for better options for both mycelial growth and sporulation of different isolates of Phytophthora is a continuous process. To carry out any kind of study related to | 2019-04-02T13:13:55.114Z | 2017-03-01T00:00:00.000 | {
"year": 2017,
"sha1": "41400134d977c383dce59efcd0b02cb3b9a5e651",
"oa_license": "CCBYNC",
"oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/1230/1181",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "93b46e9cf5d78a515bad088b5359a038f8cec66e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
78007396 | pes2o/s2orc | v3-fos-license | Development of Recombinant Domains of Protective Antigen of Bacillus anthracis and Evaluation of their Immune Response in Mouse Model for Use as Vaccine Candidates for Anthrax
Bacillus anthracis, the causative agent of anthrax, is considered the most important biological warfare agent. This Gram-positive, spore-forming bacterium has three modes of infection in humans, i.e. cutaneous, inhalational and gastrointestinal. The principal virulence factors of this bacterium consist of an anti-phagocytic capsule composed of poly-D-glutamic acid and a secreted tripartite bacterial toxin composed of protective antigen (PA), lethal factor (LF) and edema factor (EF). PA is the pivotal protein of the anthrax toxin complex, and the immune response to PA is central to protection against B. anthracis. In this study, overlapping portions of the four different domains of PA were cloned and expressed. The recombinant proteins were purified and used for immunization in mice. The ELISA results showed that all the domains elicited high antibody titres in vaccinated animals, with domain PAD3-4 showing the highest immune response against PA. Among the IgG subtypes, the IgG1 response was predominant in all the immunized groups, followed by IgG2. This indicated the induction of Th2-type immune responses against all the recombinant protein vaccine candidates. The study showed that the individual domains also have potential as vaccine candidates for anthrax.
Introduction
Anthrax, primarily a disease of herbivores and domestic livestock, is caused by the Gram-positive, spore-forming bacterium Bacillus anthracis. In humans, there are three modes of infection: cutaneous, gastrointestinal and inhalational. The virulence of the bacterium is due to the capsule and a tripartite toxin produced by B. anthracis [1]. Two plasmids, pXO1 and pXO2, carry the genes encoding the toxin and the capsule, respectively. The anthrax toxin, a tripartite toxin, is comprised of protective antigen (PA), lethal factor (LF) and edema factor (EF). PA is the common protein which facilitates the entry of LF or EF into the host cell [2,3]. PA is an 83 kDa protein that initially binds to ubiquitously expressed cell surface receptors [4][5][6]. This binding is followed by cleavage of PA by cell-associated furin-like proteases, releasing a 20-kDa fragment to produce the activated form, PA63 [7,8]. The next steps are the formation of a heptamer of PA63 molecules and the binding of LF (or EF) to PA63 [9,10]. The PA63-LF (or PA63-EF) complexes are internalized, likely via a lipid raft-mediated process, and within the acidic environment of the endosomes, LF and EF are translocated into the target cell cytoplasm [11,12], where they exert their toxic effects [13][14][15].
PA contains four domains, and each domain has a specific role in the intoxication process. Individual domains can exist separately from the full protein while retaining their structural and functional integrity [16]. Domain 1, comprising amino acids (aa) 1-258, contains the furin recognition site RKKR, which is cleaved to release the N-terminal PA20 (aa 1-167) fragment. The remaining protein (PA63) is heptamerized through monomeric interactions at the cell surface [17]. Domain 1 also has Ca 2+ binding sites, which provide stability to PA [18]. Domain 2 (aa 259 to 487) and Domain 3 (aa 488 to 595) contribute toward heptamerization and the internalization of LF or EF through receptor-mediated endocytosis into the cell [9,19]. Domain 2 forms the heptameric pore through which the effector molecules traverse to enter the cytosol. Domain 4 (residues 596 to 735) mediates binding to the host cell receptor [19,20].
PA, being the central moiety of the anthrax toxin, has been a major target for the development of anthrax vaccines. However, the currently FDA-approved anthrax vaccine, 'anthrax vaccine adsorbed' (AVA) or BioThrax, is prepared by adsorbing filtered culture supernatants of an attenuated strain (V770-NP1-R) to aluminum hydroxide (Alhydrogel) as an adjuvant [21]. AVA was developed in the early 1950s, when purified components of B. anthracis were not available; however, its major demonstrable protective component is still the PA protein [22]. Now that there is a complete understanding of the molecular mechanism of anthrax pathogenesis and the individual protective components can be produced easily, new-generation anthrax vaccines are being developed whose major component is a purified preparation of recombinant PA.
The role of full-length PA in protection has been well demonstrated. In this study, however, we have examined the potential of recombinant PA domains. The overlapping gene fragments of the individual domains of PA were cloned and expressed in E. coli. The proteins were purified to homogeneity and used for immunization in mice. The immune response, in terms of specific IgGs and their isotypes, against the individual domains and against whole PA was determined.
Cloning, expression and purification of PA domain proteins
The DNA fragments encoding the different domains were amplified by PCR using B. anthracis Sterne DNA. The primers used for amplification of these truncated portions of PA are listed in Table 1. The PA Domain 1 (PAD1), PA Domain 1-2 (PAD1-2), PA Domain 2-3 (PAD2-3) and PA Domain 3-4 (PAD3-4) fragments consisted of 702, 681, 621 and 543 nucleotides, respectively. Thus, the amplified fragments consisted of overlapping regions of the domains instead of complete individual domains. The amplified DNA fragments were cloned into the expression vector pQE30-UA (Qiagen) in accordance with the manufacturer's instructions. The ligated vectors were then transformed into E. coli SG13009 cells, and transformants were selected on media plates containing ampicillin (100 µg/ml) and kanamycin (25 µg/ml). The presence of inserts was confirmed by sequencing (data not shown). Recombinant proteins were expressed at 37 °C after IPTG induction and purified under denaturing conditions on Ni-NTA columns (Qiagen) as described by the manufacturer. The purified recombinant proteins were dialyzed and quantified by the bicinchoninic acid (BCA) method, using BCA from Sigma, USA. Purified proteins (2 µg/well) were separated by 12% SDS-PAGE. Samples were electrophoresed in two gels; one was stained with Coomassie brilliant blue G-250 and the other was electroblotted onto a transfer membrane (PVDF). Since the pQE-30UA vector provides an N-terminal His-tag to the recombinant proteins, the presence of the proteins was confirmed by western blotting with an anti-His antibody.
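As a quick consistency check on the constructs, the fragment sizes translate into the expected protein lengths (three nucleotides per codon; vector-encoded His-tag residues and any stop codon are ignored here, and 0.11 kDa per residue is a rough average):

```python
fragments_nt = {"PAD1": 702, "PAD1-2": 681, "PAD2-3": 621, "PAD3-4": 543}
for name, nt in fragments_nt.items():
    aa = nt // 3                              # three nucleotides per codon
    print(f"{name}: {nt} nt -> {aa} aa (~{aa * 0.11:.0f} kDa)")
```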
Animal immunization
All animal procedures adhered strictly to the guidelines of the Institutional Animal Ethics Committee (IAEC). Groups of six female BALB/c mice, each weighing 20-25 g, received three doses of the PA domain antigens subcutaneously (s.c.) at intervals of 2 weeks. PA or its domain antigens (20 µg) were administered with Freund's adjuvant in 10 mM phosphate buffer, pH 7.4. The control group was immunized with PBS only. Mice were immunized on day 0 and boosted two more times on a 2-week schedule. One week after the final immunization, mice were bled, and serum was separated and stored at −20 °C until used.
Measurement of antigen specific immunoglobulins
The serum IgG antibody titres to PA and its domains were determined by enzyme-linked immunosorbent assay (ELISA). Maxisorp flat-bottom 96-well microtiter plates (Nalge Nunc International, Denmark) were coated with 100 µL per well of the different PA proteins (2 µg/ml) in carbonate buffer, pH 9.6, overnight at 4 °C. The antigen-coated plates were washed three times with wash buffer (PBS containing 0.1% Tween 20) using an ELx50 microplate washer (BioTek Instruments Inc., USA). The wells were blocked with 300 µL of blocking buffer (5% skimmed milk in PBS) for 1 h at 37 °C. After washing, the plate was blotted dry on a paper towel. The test and control sera were diluted from 1:2000 to 1:512000 in PBS containing 1% skimmed milk, pH 7.4. The final volume in each well was 100 µL, and the plates were incubated for 60 min at 37 °C. The plate was then washed three times with wash buffer and blotted dry on a paper towel. Horseradish peroxidase (HRP)-conjugated goat anti-mouse IgG (Sigma Aldrich, USA), diluted 1:10000 in PBS containing 1% skimmed milk (100 µL/well), was added and incubated for 60 min to detect the bound anti-PA IgG. The plate was again washed three times with wash buffer, and binding was detected colorimetrically using 100 µL per well of TMB (3,3',5,5'-tetramethylbenzidine) (Sigma Aldrich, USA) incubated at room temperature for 10 min. Colour development was stopped with 100 µL of 1 N H 2 SO 4 , and the absorbance was read at 450 nm using an ELISA plate reader (BioTek Instruments Inc., USA). All the tests were performed in duplicate. Antibody titres were expressed as the reciprocal of the end-point dilution.
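End-point titres of this kind are conventionally taken as the reciprocal of the highest serum dilution whose absorbance exceeds a cutoff. The exact cutoff rule is not stated here, so the sketch below assumes a common 'mean of negative control + 3 SD' convention, with made-up OD readings.

```python
import statistics

def endpoint_titre(dilutions, test_od, control_ods, k_sd=3.0):
    """Reciprocal of the highest dilution with OD above the cutoff.
    dilutions: reciprocal dilution factors, ascending (e.g. 2000 ... 512000).
    test_od: OD450 of the test serum at each dilution (assumed monotone).
    control_ods: OD450 replicates of the negative-control serum."""
    cutoff = statistics.mean(control_ods) + k_sd * statistics.pstdev(control_ods)
    titre = 0
    for d, od in zip(dilutions, test_od):
        if od > cutoff:
            titre = d
    return titre

dils = [2000 * 2 ** i for i in range(9)]                  # 1:2000 ... 1:512000
ods = [1.9, 1.7, 1.4, 1.0, 0.7, 0.45, 0.30, 0.18, 0.09]   # made-up readings
print(endpoint_titre(dils, ods, control_ods=[0.08, 0.10, 0.09]))  # 256000
```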
Measurement of PA specific immunoglobulins
Sera were examined for the level of PA-specific IgGs by ELISA. PA, obtained from Alpha Diagnostic International, USA (1 µg/ml, 100 µL), was coated onto 96-well plates (Nunc, Denmark) in duplicate. The ELISA was performed as described above using the test and control sera from the various mouse groups.
Detection of isotype specific antibodies to PA
ELISA was used to determine the isotypes and subclass specificities of antibodies to PA. Plates were coated with the rPA domain proteins, and serial two-fold dilutions of sera from immunized and control mice were added. Isotype-specific antibodies (goat anti-mouse IgG1, IgG2a, IgG2b, IgG3, IgM and IgA from BioRad, USA, at 1:1000 dilutions) were added in duplicate, and detection of the bound isotype-specific antibodies was performed with HRP-conjugated rabbit anti-goat IgG (1:5000) and TMB substrate. The absorbance was read at 450 nm. Antibody titres were expressed as the reciprocal of the end-point dilution.
Purification of recombinant PA domain proteins
All the recombinant proteins, bearing a histidine fusion tag, were purified by Ni-NTA chromatography under denaturing conditions. All the proteins were found to be present in the insoluble fraction of the cell lysate, as revealed by SDS-PAGE. Proteins were eluted using denaturing elution buffer, as per the manufacturer's instructions. All the recombinant proteins were checked by SDS-PAGE (Figure 1A). The proteins were quantified using the BCA method and then stored at −70 °C until used. The various clones yielded 6.0 mg (PAD1), 12.9 mg (PAD1-2), 22.3 mg (PAD2-3) and 19.2 mg (PAD3-4) of protein from one litre of shake-flask culture. The western blot of the purified recombinant proteins is shown in Figure 1B.
Antibody titers of individual domain proteins by plate ELISA
An indirect IgG ELISA was performed to determine the antibody response elicited by each recombinant domain antigen in mice. Figure 2 shows the antibody titres of pooled sera from 6 animals against the individual domain antigens. Domains PAD3-4 and PAD1-2 elicited an excellent antibody titre (1:512000), equivalent to the antibody titre produced by full PA (Figure 2). The other two domains, i.e. PAD1 and PAD2-3, also showed a good antibody response in the immunized mice. Our results indicated that the domain PAD3-4 and PAD1-2 proteins are the immunodominant antigens in magnitude of response, with a titre of 1:512000 (Figure 2); the other PA domains also showed good antibody titres of 1:256000.
Antibody response of individual domain proteins against full PA
In order to determine the response of the vaccinated sera against the full PA83 protein, a separate ELISA was performed with full PA as the coating antigen. The highest antibody response against PA83 (full PA) was exhibited by domain PAD3-4 (1:256000), followed by domains PAD2-3 and PAD1-2 (Figure 3).
Antibody isotyping
The mice sera were evaluated for specific IgM, IgG subclass, and IgA antibodies to PA. Among the IgG subtypes, the IgG1 response was predominant in all the immunized groups, followed by IgG2b and IgG2a, with respect to pre-immune sera (Figure 4). The IgG1 responses of PAD2-3, PAD3-4 and full PA were identical. For domain PAD2-3, no significant difference was observed between IgG2b and IgG2a (Figure 4).
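A common shorthand for reading such isotype panels is the IgG1:IgG2a titre ratio, with values above 1 taken to indicate a Th2-skewed response; the titres in the example below are placeholders, not measurements from this study.

```python
def th_bias(igg1_titre, igg2a_titre):
    """Crude Th1/Th2 indicator from end-point titres."""
    ratio = igg1_titre / igg2a_titre
    return ("Th2-skewed" if ratio > 1.0 else "Th1-skewed"), ratio

print(th_bias(igg1_titre=256000, igg2a_titre=32000))   # ('Th2-skewed', 8.0)
```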
Discussion
Anthrax is an important disease of biodefense concern [23]. Besides, it is also a public health problem in countries where agriculture is the major occupation. India has the largest livestock population in the world, and many regions in India are still enzootic for anthrax [24]. In some states, such as Orissa and Andhra Pradesh, anthrax is endemic and a public health problem in many areas [25,26]. The immune response generated during infection is generally effective in eliminating the foreign agent and comprises many different efficient antimicrobial activities. However, for a disease like anthrax, the development of an effective vaccine is the need of the day.
Antibodies generated during bacterial infection may play an important role by binding to the bacterial surface, and they have potent effector functions that can lead to bacterial lysis via complement or facilitate phagocytosis by immune cells via Fc receptors (FcRs). Antibody-mediated removal of bacterial pathogens can require any one, or a combination, of these activities. For example, bacteria in the lungs can be unaffected by antibodies in the absence of complement components or FcRs, indicating that a complex combination of Fc-associated effector functions is required for bacterial clearance [27]. Respiratory bacterial pathogens have been shown to induce immunity that is dependent on T cells or on specific antibody effector functions, and can even be dependent on the combination of both antibody effector functions and T cells [28].
Considering that B. anthracis has mechanisms to affect T-cell functions that could disrupt the generation of various immune functions, it is important to understand the mechanism of anamnestic immunity to this pathogen. Antibodies are necessary for protective immunity to B. anthracis. In this study, various domains of PA were generated to study their antibody response in a mouse model. Each domain plays a critical role in toxin action, whether it be effector binding (domain 1) [9], participation in oligomer formation (domain 3) [9,19], or receptor binding (domain 4) [19,20]; for that reason, the domain-specific antibody levels generated by immunization of mice with the PA domain antigens were assessed. The results showed that, among all the domains, PAD3-4 and PAD1-2 showed the highest antibody response (1:256000), followed by PAD2-3 and PAD1. Interestingly, the maximum immune response against full PA was also shown by domain PAD3-4, followed by PAD1-2, PAD2-3 and PAD1. A significantly high antibody titre, with a predominance of the IgG1 isotype along with an elevated level of IgG2b, was observed for PAD3-4, followed by PAD1-2, PAD2-3 and PAD1 (Figure 4). There are different subclasses of IgG immunoglobulins, such as IgG1, IgG2a, IgG2b and IgG3, that provide immunity to most infectious agents. This isotype switch is controlled by T cells and their cytokines. In mice, IL-4 generally switches activated B cells to the IgG1 isotype (Th2-type immune response) [29]. It appears from these findings that immunization with the domain antigens induces a Th2-type immune response in mice that may provide protective immunity. The study shows that these recombinant domain proteins can be used as improved vaccine candidates against anthrax in the future. | 2019-03-15T13:14:53.327Z | 2016-06-30T00:00:00.000 | {
"year": 2016,
"sha1": "8cfc9b862a42810cae52316b7fd0a58d68af7bf4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2157-2526.1000147",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d1103dcc21f7b5b16b7d8ddaedde8fa385d2957d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195874212 | pes2o/s2orc | v3-fos-license | ABSense: Sensing Electromagnetic Waves on Metasurfaces via Ambient Compilation of Full Absorption
Metasurfaces constitute effective media for manipulating and transforming impinging EM waves. Related studies have explored a series of impactful MS capabilities and applications in sectors such as wireless communications, medical imaging and energy harvesting. A key gap in the existing body of work is that the attributes of the EM waves to be controlled (e.g., direction, polarity, phase) are assumed to be known in advance. The present work proposes a practical solution to the EM wave sensing problem using the intelligent and networked MS counterparts, the HyperSurfaces (HSFs), without requiring dedicated field sensors. A nano-network embedded within the HSF iterates over the possible MS configurations, finding the one that fully absorbs the impinging EM wave, hence maximizing the energy dissipated within the HSF. Using a distributed consensus approach, the nano-network then matches the found configuration to the most probable EM wave traits, via a static lookup table that can be created during HSF manufacturing. Realistic simulations demonstrate the potential of the proposed scheme. Moreover, we show that the proposed workflow is the first-of-its-kind embedded EM compiler, i.e., an autonomic HSF that can translate high-level EM behavior objectives to the corresponding, low-level EM actuation commands.
I. INTRODUCTION
Metasurfaces (MS) are highly efficient media for the manipulation of electromagnetic (EM) waves in custom and even unnatural ways. Wave attributes such as the direction of reflection, polarization [1], and the amount of absorption [2], among others, can be controlled with unprecedented accuracy. The operating principles of MSs have already been proven in the microwave, Terahertz (THz), and optical regimes [3], finding their way into a plethora of applications [4]-[6].
The basis for the MS operation is the Huygens principle, which states that any EM wavefront can be traced back to a planar distribution of currents [6]. In that sense, a MS acts as a canvas of currents comprising: i) passive elements acting as sub-wavelength antennas receiving impinging waves, i.e., inductive current sources, and ii) active elements such as PIN diodes acting as current flow manipulators. The active elements can be externally-biased, transforming the currents to follow a planar distribution that corresponds to a required wavefront. Thus, any overall manipulation of the impinging EM waves can be attained, e.g., anomalous steering, polarization and phase alteration, partial attenuation or even full absorption. HSFs are the intelligent, networked variation of MSes, which include an embedded nano-network and a gateway [7]. The nano-network collectively controls the biasing of the active elements in a distributed, autonomic manner, while the gateway connects the nano-network to the HSF-external world via a standard protocol (e.g., WiFi, Bluetooth, etc.).
It is a widespread practice to design MSes (and HSFs) assuming that the impinging wave attributes are known [6]. Nevertheless, this is rarely the case in a broad set of applications, where the availability of EM sensing systems becomes a necessity. Currently, the wave attributes can be sensed via HSF-external systems and devices (e.g., d-dot sensors) [8]. However, this approach is not space-granular, since the EM field quantities are sensed i) on average and ii) at one point near or on the HSF gateway. Incorporating multiple such sensors in the HSF adds to its assembly complexity and overall cost. Moreover, d-dot field sensors are currently sizeable, posing miniaturization challenges.
The present study proposes a wave sensing approach that operates without specialized sensory hardware, exploiting the HSF networking capabilities instead. The key idea is to sense the attributes of an impinging wave by fully absorbing it. Full absorption of impinging EM waves is a well-studied MS capability [6]. When the surface impedance of the HSF is matched perfectly to the impinging wave, its power dissipates on the passive and active elements, at the same time serving the dual purpose of identifying the condition for perfect absorption. Thus, we use the embedded controller network within the HSF to intelligently iterate over the active element states and obtain the one yielding maximal dissipated power across the active elements. Then, we employ a static lookup table (provided by the HSF manufacturer) that contains the actuator states achieving full absorption for specific impinging wave cases. The best-matching entry is picked as the estimate of the impinging wave attributes. Apart from acting as a wave sensing scheme, the proposed approach is also a form of an ambient EM Compiler [9], since the nano-nodes collectively tune the HSF to attain a macroscopic functionality, i.e., the full absorption of an unknown EM wave.
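In pseudocode form, ABSense therefore amounts to an argmax search over actuator configurations followed by a nearest-entry table lookup. The sketch below is schematic: the binary configuration space, the dissipated-power model and the lookup entries are placeholders standing in for the HSF-specific hardware and the manufacturer-provided table.

```python
import itertools

# Toy stand-ins: 4 binary actuators; the "true" full-absorption configuration
# dissipates the most power. A real HSF would report measured power instead.
TRUE_CONFIG = (1, 0, 1, 1)

def measure_dissipated_power(config):
    # Placeholder model: power peaks at the matched configuration.
    return -sum(x != y for x, y in zip(config, TRUE_CONFIG))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def absense(configs, lookup):
    best = max(configs, key=measure_dissipated_power)      # absorption search
    nearest = min(lookup, key=lambda c: hamming(c, best))  # closest table entry
    return lookup[nearest]

# Manufacturer lookup table (illustrative): config -> (phi, theta, polarization)
lookup = {(1, 0, 1, 1): (30.0, 45.0, "TE"), (0, 1, 1, 0): (120.0, 60.0, "TM")}
configs = itertools.product([0, 1], repeat=4)
print(absense(configs, lookup))    # -> (30.0, 45.0, 'TE')
```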
The remainder of the paper is organized as follows. Section II surveys the related studies. Section III details the proposed scheme, and an evaluation via simulations follows in Section IV. The paper is concluded in Section V.
II. RELATED STUDIES
Metasurfaces are highly effective systems for controlling different aspects of EM waves (wavefront [10], collimation [11], polarization [12], dispersion [13], controllable absorption [14]), especially if they are tunable, thus offering the ability to perform multiple functionalities and switch between them at will [3]. In recent years, interest has mainly focused on incorporating voltage-controlled actuation elements inside each unit cell, thus providing a means of dynamically tuning the surface impedance that the metasurface presents to external EM waves. This is because voltage-controlled elements can be readily driven by an external controller, such as a Field Programmable Gate Array (FPGA), thus allowing for centrally controlling the metasurface response. The commonly integrated elements are PIN switch [15] or varactor [16] diodes, allowing for a broad range of functionalities ranging from wavefront manipulation and beam splitting to polarization control. However, switch diodes allow for obtaining only two discrete states of the surface impedance; this can be extended to 2^N states when N unit cells are clustered together to allow for N-bit encoding, but N is typically limited to 2 or 3 [15], since the resulting supercell must remain subwavelength to stay in the metasurface regime. On the other hand, varactor diodes allow for continuous control, but only over the reactive part of the surface impedance, thus providing the ability to tune the phase of the impinging wave but not its amplitude.
Recently, it has been shown that the ability to continuously control both the reactive and the resistive part of the complex surface impedance leads to maximum freedom over the available functionalities [17]. This can be achieved by integrating elaborate controller chips in each unit cell that can provide a complex input impedance (R + jX). Further enhancing this concept by forming a nano-network of the controllers defines the HyperSurface (HSF) paradigm [7], [18]. In a nutshell, the HyperSurface entails the passive meta-atom enhanced with an actuation module, a computation and communication module for exchanging data with other cells, and a gateway for communicating with HSF-external entities, e.g., for receiving cell actuation commands and diffusing them for propagation within the inter-cell network. Prominent examples of the applications enabled by HSFs are the employment of mitigation strategies for fault tolerance [19] and the deployment of programmable wireless environments [5], [18], [20], [21].
Here, we demonstrate a different possibility enabled by the rich HSF capabilities. Specifically, we demonstrate sensing of the characteristics of the impinging wave, i.e., its direction of incidence (measured by the spherical coordinate angles φ and θ) and polarization (TE or TM). Similar functionalities have been demonstrated in plasmonic metasurfaces incorporating pixelated photodetectors, strictly for polarization detection [22], and for orbital angular momentum detection enabled through the decomposition of the transmitted waves impinging on a properly designed metasurface [23].

Fig. 1. The generic HyperSurface structure comprising passive elements (conductive patches and insulating substrate) and active elements (tunable impedance elements and their networked nano-controllers). The attributes of the impinging planar wave to be sensed are its direction of arrival (azimuth and elevation angles expressed in the illustrated coordinate system) and polarization.

In our approach we utilize the controllers integrated in the HSF configuration. In particular, we rely on the fact that by fully absorbing the incident wave, the power dissipated on the controllers is maximized. More specifically, we monitor the dissipated power on the actuation elements themselves, thus avoiding extra sensing circuitry which would increase the complexity. Moreover, the proposed sensory system is autonomic, not requiring HSF-external computing or communication elements [8]. Additionally, to the best of the authors' knowledge, the present work also constitutes the first application of distributed consensus algorithms [24], [25] as the enabler for the autonomic and collective operation of nano-networks. In this aspect, we note that related works have studied controlled flooding [26], [27], peer-to-peer [26], [28]-[31] and nature-inspired alternatives [32].
It is noted that impressive further capabilities of metasurfaces have recently been demonstrated, promising novel expansions of the HyperSurface concept. Estakhri et al. have showcased metastructure design processes able to serve as analog solvers of integral problems [33]. La Spada et al. have proposed design workflows for curvilinear metasurfaces exerting arbitrary control over surface EM currents [34], as well as near-zero-index wires [35]. Finally, graphene has also been extensively studied for frequency-tunable THz metasurfaces, since its conductivity can be dynamically modulated via electrical, magnetic and optical means [36].
Fig. 2. The workflow of the proposed ABSense scheme.
III. THE PROPOSED SCHEME
We consider a generic HSF architecture as shown in Fig. 1. From its physical aspect, it comprises the common passive and active elements of a metasurface. Additionally, it includes a wireless nano-network embedded within the HSF material, with each nano-node being responsible for the control of one active element. Notably, the HSF material also constitutes the wireless channel medium for the communicating nano-nodes [37], [38]. A distributed communications protocol is employed by the nodes at the application layer, as described later in this Section. The nano-network is supplied with power via an energy pulse generator, which also serves as a rudimentary clock, as discussed later. The power pulses can be of any spectrum, such as microwaves, visible light, etc. [39]. A planar wave impinges upon the HSF, with a specific direction and polarization, as illustrated in Fig. 1. The objective is to employ the nano-network capabilities in order to detect these characteristics of the wave, i.e., direction and polarization. The proposed process executed by the HSF as a whole is denoted as ABSense and formulated in pseudo-code as Algorithm 1.
The goal of the ABSense scheme is to sense the impinging wave attributes by: i) detecting the active element configuration that leads to the full absorption of the wave, and ii) deducing the best matching wave attributes by reversing a hash-map provided by the HSF manufacturer. This hash-map, denoted as L, maps any parameterized EM function supported by the HSF to the active element state per node e that achieves it. For instance, an entry of L can be written as a mapping from a full-absorption function for a specific wave direction and polarization, tagged with an integer EM function identifier f_id, to the per-node active element states realizing it (rel. (1)). Treating L and the impinging wave as inputs, the ABSense workflow is as follows. Signaled and synchronized by the pulse generator, each nano-node iterates over its possible active element states. All nodes move along their iterations in lockstep (detailed below), resulting each time in a uniform surface impedance across the HSF, which will eventually match the impinging wave. For each state z, a node e obtains a corresponding measurement of the power flowing via its active element, P_e(z). Susceptibility to noise and errors is taken into account. The z*_e state that yields the maximum P_e is communicated to the nano-network as a whole via a distributed consensus approach, and an average value, E_e[z*_e], is obtained. Finally, using the L map in reverse, each nano-node obtains the best matching parameterized function and, hence, the best matching wave attributes as well. We proceed to detail the operation of each nano-node, shown in Fig. 2. It comprises two phases: the iteration over local active element states (a set of impedance values {z_e : R + jX}) to detect the one (z*_e) yielding the maximum power flow, and the consensus phase to obtain E_e[z*_e]. From another point of view, this phase is also an ambient compilation of an EM functionality, since the nano-nodes self-tune the HSF to perform a full EM wave absorption [9]. The iteration phase is initialized by resetting the z*_e variable. Subsequently, the nodes iterate over all their impedance values, orchestrated by the pulse/power generator (considered to be physically decoupled from the inter-nano-node packet exchange process). We assume that the pulse generator emits its energy in the form of pulse sequences representing integer identifiers of the nodes' impedance values. This pulse emission and orchestration via impedance value identifier broadcast is a continuous process. The identifiers are broadcast given enough time for the impedance value to be set up and the power measurement to be obtained (TTM: time to measure), accounting for clock drifts and signal processing variations.
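To make the data flow concrete, the following Python sketch illustrates one nano-node's iteration phase and the final reverse lookup over L. All names and table values here (lookup_table, measure_power, the placeholder loads) are illustrative assumptions, not artifacts from the paper.

```python
import numpy as np

# L keyed by (elevation theta in deg, azimuth phi in deg, polarization),
# mapping to the complex load z = R + jX that fully absorbs that wave.
lookup_table = {
    (0.0, 0.0, "TE"): complex(1.15, -1 / (2 * np.pi * 5e9 * 0.99e-12)),
    (75.0, 0.0, "TE"): complex(2.4, -30.0),  # placeholder values
    (75.0, 0.0, "TM"): complex(1.8, -28.0),  # placeholder values
}

def iteration_phase(states, measure_power):
    """Iterate over the candidate impedance states z_e (in lockstep with the
    pulse generator) and keep the one maximizing the dissipated power."""
    best_z, best_p = None, -np.inf
    for z in states:          # each z is a complex R + jX value
        p = measure_power(z)  # noisy local power measurement P_e(z)
        if p > best_p:
            best_z, best_p = z, p
    return best_z             # z*_e, fed into the consensus phase

def reverse_lookup(z_consensus):
    """Pick the entry of L whose load is closest to the consensus value;
    its key is the estimate of the impinging wave attributes."""
    return min(lookup_table, key=lambda k: abs(lookup_table[k] - z_consensus))
```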
The consensus phase then operates as follows. Each node initializes its consensus value as E_e[z*_e] ← z*_e : R + jX, and broadcasts it to all nano-nodes in its vicinity. A random delay (RD) is applied to minimize packet collisions. The broadcast data has the form of a packet structured as follows:

SENDER ID (8 bits) | R (32 bits) | X (32 bits)

where the sender ID is an integer identifier of the sending node. During a fixed time interval (TTW: time to wait), each receiving node collects incoming packets while also keeping a log of the average reception power of each one. This information is kept in a hash-map, with sender IDs as keys, as shown in Fig. 2. Subsequently, each node obtains a first estimate of E_e[z*_e] as:

E_e[z*_e] ← w_e · E_e[z*_e] + Σ_{SENDER ID} w_{SENDER ID} · E_{SENDER ID}[z*],

where w_e ∈ (0, 1) is the personal weight that the node gives to its local value, and w_{SENDER ID} is the weight assigned to the incoming consensus packets. We consider that the w_{SENDER ID} values are proportional to the reception power of the corresponding incoming packets, and are normalized to comply with the condition:

w_e + Σ_{SENDER ID} w_{SENDER ID} = 1.

The consensus process converges iteratively, allowing each node to obtain the actual E_e[z*_e] value [25]. The consensus process is allowed to run for a maximum number of send/receive packet cycles (max_cycles), upon which it yields the estimated E_e[z*_e]. The process then concludes by reversing L to detect the full EM absorption parameterized function that best matches the estimated E_e[z*_e] and, subsequently, it returns the corresponding EM function parameters as the most probable EM wave attributes.
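A minimal Python sketch of one power-weighted consensus cycle follows; the function names and the packet representation (a dict from sender ID to a value and its reception power) are our assumptions, with w_local = 0.5 mirroring the weighting used later in the evaluation.

```python
import numpy as np

def consensus_step(local_z, packets, w_local=0.5):
    """One receive cycle of the power-weighted averaging consensus.
    `packets` maps sender IDs to (z_value, rx_power) pairs collected during
    TTW; incoming weights are proportional to reception power and are
    normalized so that all weights, including w_local, sum to one."""
    if not packets:
        return local_z
    rx_power = np.array([p for _, p in packets.values()], dtype=float)
    w_in = (1.0 - w_local) * rx_power / rx_power.sum()
    z_in = np.array([z for z, _ in packets.values()], dtype=complex)
    return w_local * local_z + np.sum(w_in * z_in)

def run_consensus(local_z, exchange, max_cycles=10):
    """Broadcast the current estimate, collect neighbor packets via the
    hypothetical `exchange` callable, and average, for max_cycles rounds."""
    z = complex(local_z)
    for _ in range(max_cycles):
        z = consensus_step(z, exchange(z))
    return z  # estimate of E_e[z*_e]
```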
The process as a whole can be immediately restarted, should the HSF tile be operating in a dedicated sensory role, i.e., to sense EM waves and inform other HSF units to adapt accordingly. Alternatively, having obtained the sensory information, the same HSF tile can autonomically apply a different EM manipulation function, such as steering the sensed wave towards a direction. The mode of operation can be adapted to the application scenario at hand.
IV. EVALUATION
We evaluate the proposed scheme using simulations. The evaluation combines full-wave simulations conducted in CST [40] and nano-network simulations conducted in the AnyLogic platform [41]. The full-wave simulations produce a dataset of power flow values at each active element in a specific HSF design (shown in Fig. 3), for a variety of impinging wave directions and polarizations, and for every active element state. This dataset is then passed on to the nano-network simulator, which follows the process described in Section III.
The HSF used in this work consists of an array of 30 × 30 square metallic patches over a thin metal-backed dielectric (Fig. 3 illustrates a 3 × 3 sub-part for ease of presentation). The unit cell includes two lumped elements modeled as complex-valued RC (resistive and capacitive) loads that connect neighbouring metallic patches along the x- and y-directions. The corresponding unit cell, framed with a white border, is designed for 5 GHz operation and its main parameters are: w_uc = 10 mm, w_p = 4.5 mm, g = 0.5 mm, t_m = 0.02 mm, t_d = 0.5 mm, ε_r = 2.2 and σ = 5.8 × 10^7 S/m. For normal incidence, perfect absorption of x- and y-polarized plane waves is achieved when R_x = R_y ≈ 1.15 Ohm and C_x = C_y ≈ 0.99 pF.

Fig. 3. Metasurface used for ABSense in this work, with annotation of the main unit-cell parameters and the coordinate systems, Cartesian (xyz) and spherical (θ, ϕ). Incident wave polarization is termed TE (or TM) when the E-field vector is perpendicular (or parallel) to the plane of incidence, defined by the z-axis and an azimuth angle ϕ.
Sample visual hash-maps that are used to link the RC_xy values required for perfect absorption with incidence direction and polarization are depicted in Fig. 4: panel (a) shows the required RC_y/x for TE/TM polarized incidence as the elevation angle θ varies, for azimuth angle ϕ = 0, i.e., when xz is the plane of incidence; panels (b)-(d) show the absorption coefficient for waves of three different incidence directions and polarizations as the RC values are varied; the white circles mark the regions where the absorption coefficient is higher than 0.9 (corresponding to a reflection coefficient of −10 dB or lower), and the white crosses mark the RC values leading to perfect absorption. Note that only the lumped elements oriented parallel to the impinging E-field affect the resonance of the unit cell; for instance, when (θ, ϕ) = (75°, 0°), the TE (perpendicular) polarization is only affected by the RC_y elements and the TM (parallel) polarization is only affected by the RC_x elements. Finally, our algorithm inherently assumes that the power in the TE and TM polarizations is known (or can be measured), so that the absorption coefficient can be translated to absorbed power. In the context of the consensus workflow, we will assume that the possible RC_y/x values are discretized in a 10 × 10 grid covering uniformly the axes span of Fig. 4.
The present physical implementation of the HSF can ABSense the direction of any linear polarization in the principal planes (xz and yz, i.e., ϕ = 0° or 90°, respectively), where the x- and y-oriented lumped loads are decoupled and directly correspond to the TE or TM polarizations. ABSensing the direction of incidence of pure TE or TM (but not both) polarizations when ϕ ∈ (0°, 90°) is also possible, but requires more careful cross-polarization coupling considerations, as both loads (RC_x/y) affect both polarizations (TE/TM) simultaneously. The general case of ABSensing elliptical polarizations in a broad frequency range and over the entire hemisphere, ϕ ∈ (0°, 360°), is a topic for future work, requiring more complicated 'anisotropic' unit-cell designs (optimized for higher resolution in ϕθ vs. RC measurement) and, additionally, the ability to measure the current (or voltage) phase on the lumped elements.
Regarding the nano-node workflow parameters, we consider one nano-node per HSF active element. Each nano-node is located exactly below the corresponding element center, at a depth of t_d/2 within the substrate. In order to simulate the inter-nano-node communication channel, we employ the model of [27], [42], [43] using the same physical-layer parameter values (frequency 100 GHz, noise level 0 dBnW, SINR threshold −10 dB, guard interval 0.1 nsec). Assuming a bitrate of 100 Gbps, we consider a consensus data packet duration of approximately 1 nsec (i.e., the 72 bits of the consensus packet plus preamble overheads, rounding up to 100 bits total). The transmission power is set to 30 dBnW, yielding approximately 20 nodes within connectivity range.
Regarding the consensus process parameters, we set a TTM equal to 50 times the HSF operating period (5 GHz → 0.2 nsec), to accommodate any transient EM phenomena (typically lasting 2−3 periods) and obtain dependable average power flow values. The RD is picked at random in the range of 0 to 10 packet durations to minimize collisions. The TTW is set to a marginally larger value than the maximum RD, i.e., 12 times the single packet duration. Finally, we assign a 50% consensus weight to the local optimum of each node and an equal, 50% aggregate weight to all incoming consensus values.
In Fig. 5 we consider the impinging EM wave of Fig. 4(b). Additionally, we assume that the power measurements of each node contain a fault expressed as a random percentage over the actual value. We are interested in deducing the minimal max_cycles value that eliminates the error via the consensus value averaging. The consensus process is shown to be very robust to such measurement errors. For errors up to ~82%, even one consensus cycle is enough, while ~12 cycles can eliminate even the highest measurement errors.
In Figs. 6-7 we focus on the 90% measurement error case and set max_cycles to 10. Fig. 6 plots the consensus (i.e., sensory) value progression versus time. For ease of exposition, we employ the EM function identifier (cf. rel. (1)) to denote the correct impinging wave attribute (f_id = 10). As shown, the consensus process converges rapidly, even for nodes with completely erroneous initial measurements. Thus, the consensus process is economic in terms of required packet transmissions. The corresponding packet statistics are shown in Fig. 7-top: each node received 10 packets (one per cycle) from each of its ∼20 neighbors, subject to some losses due to collisions. The transmissions per node are strictly bounded and fully defined by the max_cycles value. Finally, as shown in Fig. 7-bottom, the measurement phase lasts 10 µsec (i.e., 10 × 10 RC values to iterate over, times the time to obtain a single measurement), while the consensus phase lasts 1 µsec (i.e., max_cycles times the RD and TTW). At an aggregate EM wave sensing time of 11 µsec, ABSense shows promise for application in real-time-adapting HSFs.
V. CONCLUSION AND FUTURE WORK
The manipulation and re-shaping of EM waves via intelligent metasurfaces constitutes a key enabler of exotic capabilities in energy harvesting, medical imaging and wireless communications. Nonetheless, accurate EM manipulation requires the accurate sensing of the attributes (direction, polarization, phase) of the wave impinging on a metasurface. Filling a gap in the related research, this study proposed a novel sensory scheme that exploits the ambient intelligence and communication capabilities of HyperSurfaces, a novel metasurface variant. An embedded nano-network automatically tunes the HyperSurface to fully absorb the impinging wave, as exhibited by the increase of the energy dissipated within it, hence indirectly estimating the wave's attributes. The proposed scheme, validated via extensive EM simulations, can also constitute the basis for distributed EM compiler processes, where the nano-network will auto-tune the HyperSurface status to obtain any high-level objective in its macroscopic EM behavior.
In the future, the authors plan to extend the proposed scheme towards sensing non-planar, complex EM waves, while also advancing the nano-network consensus process into an always-on process that will run in parallel with EM manipulation objectives. | 2019-07-09T15:01:33.000Z | 2019-07-09T00:00:00.000 | {
"year": 2019,
"sha1": "4bed4cf047ee1d63eb32455d75b7ce0bc07b3159",
"oa_license": null,
"oa_url": "https://upcommons.upc.edu/bitstream/2117/340291/3/1907.04811.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d7f07be4d6f4b77c9c780491ed28b7e5ec98da83",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
10633824 | pes2o/s2orc | v3-fos-license | Nonparametric variational inference
Variational methods are widely used for approximate posterior inference. However, their use is typically limited to families of distributions that enjoy particular conjugacy properties. To circumvent this limitation, we propose a family of variational approximations inspired by nonparametric kernel density estimation. The locations of these kernels and their bandwidth are treated as variational parameters and optimized to improve an approximate lower bound on the marginal likelihood of the data. Using multiple kernels allows the approximation to capture multiple modes of the posterior, unlike most other variational approximations. We demonstrate the efficacy of the nonparametric approximation with a hierarchical logistic regression model and a nonlinear matrix factorization model. We obtain predictive performance as good as or better than more specialized variational methods and sample-based approximations. The method is easy to apply to more general graphical models for which standard variational methods are difficult to derive.
Introduction
Approximate posterior inference, estimating the conditional distribution of hidden variables given some observations, is an important problem in many settings. In this paper, we develop a new variational inference algorithm for complex probabilistic models. Compared to traditional variational methods, our method can capture more expressive distributions and be applied to a wider class of models.
Variational inference methods define some restricted family of distributions over the hidden variables θ and try to find the member of that family that is closest to the posterior. The family is chosen so that the problem of finding the distribution q that best approximates the posterior becomes a tractable optimization problem.
Variational methods are effective and widely used. These methods usually find a unimodal approximation of the posterior, especially when the variational family is the commonly chosen mean-field family (Jordan et al., 1999;Beal, 2003). Such an approximation is inadequate when the posterior is multimodal. Furthermore, variational inference algorithms are challenging to derive for models that lack conditional distributions in tractable exponential families (i.e., for models without conditional conjugacy).
We develop a variational inference method for continuous hidden variables that captures multimodality and can be applied to many non-conjugate models. The variational family is a mixture of Gaussians, where the variational parameters are the locations and variances of each mixture component. This family of distributions resembles classical kernel density estimators from nonparametric statistics (Silverman, 1986). To approximate the variational objective function, we use Taylor series approximations of the log joint distribution and a bound on the entropy. We call this method nonparametric variational inference (NPV). In contrast to traditional unimodal variational distributions, the multiple components of the mixture can capture different aspects of the posterior.
While mixture approximations have been studied in the variational inference literature (Bishop et al., 1998; Jaakkola & Jordan, 1998), we develop this idea into a more generally applicable framework. (We discuss other approaches to mixture approximations in Section 4.) NPV is "general" in the sense that it is not tailored to a specific model, only requiring that the first and second derivatives of the log joint probability log p(θ, y) be computable. Thus, it can be used in non-conjugate settings, i.e., where conditionals of the individual hidden variables cannot be computed, such as in Bayesian models with non-conjugate priors. While previous methods for variational inference in non-conjugate models rely on mathematics tailored to the problem at hand, NPV is easily adapted to many settings.
In the following sections, we describe the variational objective function using this family and a generalpurpose algorithm to approximately optimize it. We illustrate its performance on two models. First, we show that it performs as well in Bayesian logistic regression as the method of Jaakkola & Jordan (2000), which is tailored to that specific model. Second, we show that it outperforms several MCMC methods for a non-conjugate matrix factorization model of brain activity data (Gershman et al., 2011). Nonparametric variational inference is a promising strategy for approximating posterior distributions in complex probabilistic models.
Variational inference
We consider the problem of computing the posterior distribution of hidden variables θ ∈ R^D given observed data y,

p(θ|y) = p(y|θ)p(θ) / ∫ p(y|θ)p(θ) dθ.   (1)

This computation is analytically intractable for many models of interest because the denominator is difficult to compute.
The idea behind variational methods is to approximate p(θ|y) with a distribution q(θ) that belongs to a constrained family of distributions, indexed by a variational parameter (Jordan et al., 1999; Beal, 2003). The goal is to choose the member of that family that is "closest" to the posterior. In variational inference, closeness is measured by the Kullback-Leibler (KL) divergence,

KL[q(θ)||p(θ|y)] = E_q[log q(θ) − log p(θ|y)].   (2)

Thus, inference becomes an optimization problem: we choose the variational parameter to minimize the KL divergence. The family of distributions is chosen to make this optimization tractable.
The KL divergence is difficult to optimize because it requires knowing the distribution that we are trying to approximate. In variational inference, we maximize an objective that is equal to the negative KL divergence plus a constant. Recall that KL[q(θ)||p(θ|y)] ≥ 0. We define a lower bound on the log marginal likelihood (evidence) log p(y) through the relation

log p(y) = F[q] + KL[q(θ)||p(θ|y)] ≥ F[q],   (3)

where

F[q] = H[q] + E_q[f(θ)]   (4)

is the negative free energy, also known as the evidence lower bound (ELBO). Here H[q] is the entropy of q and f(θ) = log p(y, θ). The ELBO is equal to the negative KL divergence plus the log marginal probability of the observations, which is constant with respect to the family q. It therefore reaches a maximum when p(θ|y) = q(θ), where the KL is zero. Note that this is only attainable when the target posterior p(θ|y) is in the variational family, which it usually is not. Typically, q will be constrained to a family of simpler distributions, and F[q] is optimized to find the distribution in this family that is closest (in KL) to the true posterior.
The most commonly used variational inference algorithm is mean-field variational inference. Mean-field methods find q in the family of factorized posteriors, q(θ) = ∏_i q_i(θ_i), where it is often convenient to choose q_i(θ_i) to have the same functional form as the conditional distribution p(θ_i|θ_−i, y). When p(θ_i) is chosen to be conjugate to p(y|θ), the calculus of variations leads to closed-form coordinate ascent updates that converge to a local maximum of F[q] (Beal, 2003).
Despite the computational convenience of the mean-field approximation, it can be overly restrictive if there are strong dependencies between the hidden variables in the posterior distribution. Moreover, the closed-form updates are only available when using conjugate priors; many likelihood models of interest, such as logistic regression and the multilayer perceptron, cannot be paired with conjugate priors, making the application of mean-field methods more difficult.
Nonparametric variational inference
We now consider a flexible family of variational approximations that admits an efficient inference algorithm. Our algorithm is appropriate for models with continuous-valued hidden random variables, and does not require conjugacy between pairs of variables.
We choose the distribution q(θ) to be a uniformly weighted Gaussian mixture with isotropic covariances,

q(θ) = (1/N) Σ_{n=1}^{N} N(θ; µ_n, σ_n² I),   (5)

where µ_n is the mean of the nth Gaussian component and σ_n² is its variance. We call this a "nonparametric" family: we are making a weak set of assumptions about the shape of the posterior, since the Gaussian mixture family can approximate arbitrarily complex posteriors given a sufficient number of components. Further, this family resembles kernel density estimators used in classical nonparametric statistics (Silverman, 1986), with µ_n playing the role of a kernel center and σ_n² playing the role of a bandwidth parameter.
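As a concrete reference, here is a minimal Python sketch of this variational family, evaluating the density of Eq. 5 and drawing samples from it; the function names are ours.

```python
import numpy as np
from scipy.stats import multivariate_normal

def q_pdf(theta, mu, sigma2):
    """Density of the uniformly weighted isotropic mixture of Eq. 5;
    mu has shape (N, D) and sigma2 has shape (N,)."""
    N, D = mu.shape
    return np.mean([
        multivariate_normal.pdf(theta, mean=mu[n], cov=sigma2[n] * np.eye(D))
        for n in range(N)
    ])

def q_sample(mu, sigma2, size, rng=None):
    """Draw samples: pick a component uniformly, then sample its Gaussian."""
    rng = rng or np.random.default_rng()
    N, D = mu.shape
    idx = rng.integers(N, size=size)
    return mu[idx] + np.sqrt(sigma2[idx])[:, None] * rng.standard_normal((size, D))
```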
The Evidence Lower Bound
If q is in the family defined by Eq. 5, we cannot compute the ELBO F[q]; in general there is no closed-form expression either for the expectation of a nonlinear function under a Gaussian distribution or for the entropy of a mixture of Gaussians. However, we can approximate the ELBO and optimize this approximation (see Lawrence, 2000; Honkela et al., 2007, for other approaches to this problem). First, we lower bound the entropy term H[q]. Then, we approximate the expected log joint E_q[log p(y, θ)].
We lower bound the entropy (the first term in Eq. 4) using Jensen's inequality (Huber et al., 2008),

H[q] = −E_q[log q(θ)] ≥ −(1/N) Σ_{n=1}^{N} log ∫ q(θ) N(θ; µ_n, σ_n² I) dθ.   (6)

Each integral in Eq. 6 is the sum of N convolved Gaussians, each component convolved with the nth. We obtain the final bound by using the fact that the convolution of two Gaussians is another Gaussian,

H[q] ≥ −(1/N) Σ_{n=1}^{N} log q_n,   (7)

where q_n = (1/N) Σ_{j=1}^{N} N(µ_n; µ_j, (σ_n² + σ_j²) I). We now turn to the expected log joint f(θ), which is the second term in Eq. 4,

E_q[f(θ)] = (1/N) Σ_{n=1}^{N} E_{N(θ; µ_n, σ_n² I)}[f(θ)].

We approximate each term in this sum with a second-order Taylor series expansion of f(θ) around µ_n,

f(θ) ≈ f(µ_n) + ∇f(µ_n)ᵀ(θ − µ_n) + (1/2)(θ − µ_n)ᵀ H_n (θ − µ_n),   (8)

where H_n = ∇²_θ f(θ) |_{θ=µ_n} is the Hessian matrix of second derivatives evaluated at µ_n. The approximate expectation is then

E_{N(θ; µ_n, σ_n² I)}[f(θ)] ≈ f(µ_n) + (σ_n²/2) Tr(H_n).   (9)

This approximation is known as the multivariate delta method for moments (Bickel & Doksum, 2007), and is often used within variational inference schemes for models that cannot exploit conjugacy (e.g., Braun & McAuliffe, 2010).
Finally, we add the bound in Eq. 7 to the approximation in Eq. 9. This gives the approximate ELBO

L₂[q] = (1/N) Σ_{n=1}^{N} [ f(µ_n) + (σ_n²/2) Tr(H_n) − log q_n ].   (10)

Intuitively, the likelihood term, f(µ_n), encourages placing samples in areas of high probability density, while the entropy term, −log q_n, penalizes "overcrowded" locations (i.e., where many samples are near each other). The Hessian term captures the local curvature of the posterior, discouraging the algorithm from placing samples in areas with high probability density but low volume (and therefore low mass).
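The approximate ELBO of Eq. 10 is straightforward to compute given callables for f and the Hessian diagonal. The following sketch (our own naming; a reference implementation under those assumptions, not the authors' code) loops over components, combining the entropy bound of Eq. 7 with the delta-method term of Eq. 9.

```python
import numpy as np
from scipy.stats import multivariate_normal

def approx_elbo(mu, sigma2, f, hess_diag):
    """Second-order approximate ELBO (Eq. 10): the average over components
    of f(mu_n) + (sigma2_n / 2) Tr(H_n) - log q_n, with log q_n taken from
    the entropy lower bound (Eq. 7). `f` and `hess_diag` are user-supplied
    callables for log p(y, theta) and the diagonal of its Hessian."""
    N, D = mu.shape
    total = 0.0
    for n in range(N):
        # q_n = (1/N) sum_j N(mu_n; mu_j, (sigma_n^2 + sigma_j^2) I)
        q_n = np.mean([
            multivariate_normal.pdf(mu[n], mean=mu[j],
                                    cov=(sigma2[n] + sigma2[j]) * np.eye(D))
            for j in range(N)
        ])
        delta = 0.5 * sigma2[n] * np.sum(hess_diag(mu[n]))  # (s^2/2) Tr(H_n)
        total += f(mu[n]) + delta - np.log(q_n)
    return total / N
```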
We note two attractive properties of the approximate ELBO in Eq. 10. First, we have made no conjugacy assumptions; our only requirement is that the log joint f (θ) = log p(θ, y) is twice differentiable (or thrice differentiable if one wishes to use gradient ascent; but see below). Second, although the objective function involves a Hessian term, it only requires the calculation of the diagonal components; the cost of computing the diagonal of the Hessian is comparable to the cost of computing the gradient.
Optimizing the ELBO
Eq. 10 is a tractable approximation of the ELBO in Eq. 4. Our goal is now to maximize Eq. 10 with respect to the variational parameters µ_n and σ_n. One option is to use a gradient-based solver. However, there is a serious computational problem with this approach: computing the gradient of Eq. 10 requires computing a matrix of third derivatives, since we must compute the gradient of the Hessian trace Tr(H_n). This leads to a cost that is quadratic in the number of parameters.

Algorithm 1 Nonparametric variational inference. Input: data y, number of components N. Initialize θ_{1:N} randomly.
To avoid the calculation of third derivatives, we use both first- and second-order approximations of the ELBO. The first-order approximation is

L₁[q] = (1/N) Σ_{n=1}^{N} [ f(µ_n) − log q_n ].   (11)

This is obtained in the same way as Eq. 10, but using a first-order approximation of f(θ) rather than the second-order approximation in Eq. 8. We iterate between optimizing the variances σ using the second-order approximation in Eq. 10 and optimizing the means µ using Eq. 11. Each optimization is done using L-BFGS. We found that it is more efficient to optimize L₁[q] with respect to one mean at a time, holding the others fixed, and iterating over components. This coordinate ascent procedure converges faster than batch optimization of µ_{1:N}, but coordinate and batch optimization produce similar results. Our algorithm is summarized in Algorithm 1.
Both L₁[q] and L₂[q] are approximations of F[q]. Splitting the optimization problem into these two steps allows us to avoid the cost of calculating the gradient of (σ_n²/2) Tr(H_n) with respect to the means µ. In our experiments, 3 iterations typically proved sufficient to achieve convergence. Although the first-order approximation may appear drastic, it still achieves our main goal: placing kernels in areas of high probability mass. Further simulation work is needed to assess the tradeoffs involved in this approximation.
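A compact sketch of Algorithm 1 follows, reusing approx_elbo from the previous snippet. We flag two simplifying assumptions: the per-mean objective below only accounts for the component's own log q_n term (moving µ_n also perturbs the other components' terms), and L-BFGS is run with numerical gradients for brevity, whereas analytic gradients would be supplied in practice.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def l1_contrib(m, n, mu, sigma2, f):
    """Contribution of component n to the first-order bound L1 (Eq. 11),
    viewed as a function of its own mean only (a simplification)."""
    mu_try = mu.copy()
    mu_try[n] = m
    D = mu.shape[1]
    q_n = np.mean([multivariate_normal.pdf(m, mean=mu_try[j],
                                           cov=(sigma2[n] + sigma2[j]) * np.eye(D))
                   for j in range(len(mu_try))])
    return f(m) - np.log(q_n)

def fit_npv(mu0, sigma2_0, f, hess_diag, iters=3):
    """Alternate coordinate L-BFGS updates of the means under L1 and a
    joint update of the (log) variances under L2 (via approx_elbo)."""
    mu, sigma2 = mu0.copy(), sigma2_0.copy()
    for _ in range(iters):
        for n in range(len(mu)):  # coordinate ascent over component means
            res = minimize(lambda m: -l1_contrib(m, n, mu, sigma2, f),
                           mu[n], method="L-BFGS-B")
            mu[n] = res.x
        res = minimize(lambda ls: -approx_elbo(mu, np.exp(ls), f, hess_diag),
                       np.log(sigma2), method="L-BFGS-B")
        sigma2 = np.exp(res.x)  # log-variance keeps sigma2 positive
    return mu, sigma2
```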
As an illustration, we constructed a synthetic multimodal "posterior" f (θ) using a mixture of skewed bivariate t-distributions. Figure 1 shows f (θ) alongside the NPV approximation with several settings of N .
With N = 1, the approximation is only able to capture a single mode, but with N = 2 it is able to capture the two modes with high fidelity, though it cannot capture the true covariance structure or the heavy tails. With N = 10, the approximation better captures the skew by placing several low-variance components along the diagonal. This illustration demonstrates some strengths and weaknesses of the NPV approximation: it can capture multi-modality, but the isotropic covariance of the components makes it difficult to capture skew in the posterior. This problem can be ameliorated by using more components.
Note that the number of parameters that need to be fit with NPV increases linearly with N (the number of components in the mixture). This may pose challenges for models with a large number of hidden variables. On the other hand, it may only be necessary to use a small number of components (e.g., fewer than 10) to capture the major aspects of the posterior (as suggested by Figure 1). We note also that the KL divergence between the mixture distribution q and the true posterior decreases at best logarithmically in the number of mixture components N, suggesting that there may be diminishing returns to using very large values of N (Jaakkola & Jordan, 1998).
Relationship to other algorithms
The NPV objective relates to several other methods. When there is one component (N = 1), the entropy term log q_1 does not depend on the mean µ_1, and when σ_1² becomes sufficiently small, the Hessian term of Eq. 10 goes to 0. Consequently, the NPV objective when N = 1 and σ_1 → 0 is L[q] = log p(y, µ) + const. = log p(θ = µ|y) + const.
The maximum of this function is the maximum a posteriori (MAP) solution.
When N = 1 and σ_1² is allowed to vary, we obtain a Gaussian approximation centered around the MAP solution. This can be understood as a diagonalized Laplace approximation (MacKay, 1995), i.e., one where we ignore correlations between the dimensions of θ. The Laplace approximation has drawbacks: for example, it is not invariant to reparameterization, it performs badly when the mean and mode of the posterior are far apart, and it cannot capture multiple modes (Beal, 2003).
When N > 1 and σ_n² → 0, we obtain a quasi-Monte Carlo approximation of the posterior, q(θ) = (1/N) Σ_{n=1}^{N} δ_{µ_n}(θ), where δ_{µ_n}(·) is a Dirac point mass located at µ_n. Thus one way to look at the NPV algorithm is as a deterministic sampling method.
Related work
Approximate inference for non-conjugate models is an active area of research. Some authors have used numerical or Monte Carlo methods to approximate intractable integrals. For example, Lawrence et al. (2004) used importance sampling to approximate the expectations required for inference in a Bayesian model of microarray images. Ihler et al. (2009) generalized particle filtering for approximate inference in factor graphs with continuous variables. Honkela et al. (2007) used numerical quadrature to approximate expectations in a nonlinear factor analysis model. These techniques are useful, but may fail in high dimensions.
Several researchers use specialized approximations for certain classes of models, such as those with logistic nonlinearities (e.g., Jaakkola & Jordan, 2000;Khan et al., 2010). In contrast, our goal is to develop an algorithm for inference in general non-conjugate models with continuous hidden variables.
Closely related to our method is the mixture mean-field (MMF) method (Bishop et al., 1998; Jaakkola & Jordan, 1998; Lawrence, 2000), which models the posterior as a mixture of mean-field approximations. Recently, Bouchard & Zoeter (2009) revisited this approach using soft-binning functions. NPV can be viewed as a special case of MMF, because each component factorizes into a collection of one-dimensional Gaussian sub-components (due to the isotropic covariances). Our innovation is that we exploited the functional form of the Gaussian mixture to derive an efficient approximate inference algorithm. NPV requires no user input beyond specifying the joint likelihood function, its gradient, optionally the diagonal of its Hessian, and the number of components. These modest requirements give NPV a practical advantage in situations where it is difficult to derive the MMF updates.
Applications
In this section, we apply the NPV algorithm to several probabilistic models and compare its performance to other widely-used methods.
Logistic regression
In this section, we ask whether NPV produces reasonable approximations for models where closed-form updates can be applied. We focus on a hierarchical logistic regression model and compare its accuracy to a standard variational treatment (Jaakkola & Jordan, 2000, henceforth "JJ").
Generative model. The observed data y = {c, X} consist of T binary class labels, c_t ∈ {−1, 1}, and K covariates for each datapoint, x_t ∈ R^K. The hidden variables θ = {w, α} consist of K regression coefficients w_k ∈ R and a precision parameter α ∈ R⁺. We assume the following model (MacKay, 1995):

α ∼ Gamma(a, b),
w_k ∼ N(0, α⁻¹),   k = 1, ..., K,
p(c_t = 1 | x_t, w) = (1 + exp(−wᵀx_t))⁻¹.

Here a and b are hyperparameters (shape and inverse scale, respectively) that we assume to be fixed.
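For reference, here is a minimal sketch of the log joint f(θ) = log p(y, θ), the only model-specific quantity NPV needs (besides its derivatives). The log-α reparameterization, used to keep the precision positive, and the numerically stable log-sigmoid are our implementation choices, not prescriptions from the paper.

```python
import numpy as np
from scipy.special import gammaln

def log_joint(theta, X, c, a=1.0, b=0.01):
    """log p(y, theta) for the hierarchical logistic model above, with
    theta = (w_1, ..., w_K, log alpha)."""
    w, log_alpha = theta[:-1], theta[-1]
    alpha = np.exp(log_alpha)
    K = w.size
    # Log-likelihood: sum_t log sigmoid(c_t * w^T x_t), computed stably.
    margins = c * (X @ w)
    log_lik = -np.sum(np.logaddexp(0.0, -margins))
    # Prior on weights: w_k ~ N(0, 1/alpha).
    log_prior_w = 0.5 * K * np.log(alpha / (2 * np.pi)) - 0.5 * alpha * (w @ w)
    # Gamma(a, b) prior on alpha, plus log|d alpha / d log alpha| = log_alpha.
    log_prior_a = (a * np.log(b) - gammaln(a)
                   + (a - 1) * log_alpha - b * alpha + log_alpha)
    return log_lik + log_prior_w + log_prior_a
```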
Results.
We evaluated NPV and JJ on 13 binary classification data sets compiled by Mika et al. (1999). The number of covariates in these data sets ranges from 2 to 60, and the number of observations ranges from 24 to 7400. We used split-half training/testing. We used the following hyperparameter settings: a = 1, b = 0.01, N = 5 (similar results were obtained with N = 10).
The predictive distribution for NPV was approximated using a Monte Carlo estimate. We drew 1000 samples from the fitted variational mixture of Gaussians and estimated the log-likelihood of the test data as an average of the log-likelihoods under each sample. Figure 2 (top) compares the log-likelihood of the test data under the NPV and JJ approximations. NPV and JJ achieve statistically indistinguishable accuracy. Figure 2 (bottom) shows the same comparison for the ELBO, confirming that NPV closely mimics the JJ approximation. We emphasize that JJ exploits special properties of the generative model (i.e., a clever lower bound on the logistic sigmoid function), whereas NPV only uses the derivatives of the joint distribution.
We also fit the model using Hamiltonian (or Hybrid) Monte Carlo (HMC; Neal, 2011), an MCMC algorithm that takes the same inputs as NPV (the log joint probability and its gradient). HMC uses the gradient of f(θ) to efficiently explore the posterior, making it one of the most effective samplers for models with continuous variables. With 1000 samples, we found that this algorithm predicts held-out data significantly worse (p < 0.00001, Wilcoxon signed-rank test) than NPV and JJ. Presumably the inferior performance of HMC could be improved by running the sampler for longer, but this would incur greater computational overhead.
Topographic latent source analysis
We now study our method with a more complicated model, for which standard variational algorithms are inapplicable. We apply the NPV approximation to a nonlinear latent variable model of functional magnetic resonance imaging (fMRI) data. Data from fMRI experiments contain measurements of brain activity that are collected while a subject performs a task, such as labeling images. The goal of these experiments is to understand the relationship between cognitive processes and brain activity. One reason this problem is complicated is that fMRI data are spatial: brain activity is measured in 3D brain-space (a grid of "voxels"), and measurements made on nearby voxels are dependent. Gershman et al. (2011) developed a factorization model of spatial patterns in fMRI data, topographic latent source analysis (TLSA). TLSA decomposes voxel activations into a set of spatial functions (topographic latent sources). These functions are related to task and cognitive variables (called "covariates") through a weight matrix that is also inferred from the data. We can evaluate the quality of a fitted model by using it to predict held-out brain data, conditional on covariates. Unlike traditional probabilistic matrix factorization models, TLSA is not conditionally conjugate and closed-form mean-field inference is not available. Gershman et al. (2011) approximated the posterior with MCMC, but their method was too slow to analyze large data sets.
Generative model. Each datapoint t in an fMRI experiment consists of a vector of V voxel activations, u_t ∈ R^V, and a vector of C covariates, x_t ∈ R^C. The intuition behind TLSA is that the spatial organization of voxel activations arises from a small number of anatomically localized brain regions involved in processing the task. Formally, TLSA decomposes the voxel activations into a covariate-dependent superposition of K latent sources:

u_tv = Σ_{k=1}^{K} Σ_{c=1}^{C} x_tc w_ck g_kv + ε_tv,

where ε_tv ∼ N(0, τ⁻¹) is a Gaussian noise term, w_ck is a weight that specifies how covariate c influences source k, and g_kv is the activation of source k in voxel v. This generative process (illustrated in Figure 3) can be viewed as a probabilistic matrix factorization model where {g_k} are basis images that are combined to produce the observed neural activity.
Each basis image is constructed by evaluating a parameterized spatial basis function at each voxel location. Following Gershman et al. (2011), we chose this function to be a radial basis function with parameters ω_k = {r̄_k, λ_k}:

g_kv = exp( −||r_v − r̄_k||² / λ_k ),

where r̄_k ∈ [0, 1]^M is the source center (in normalized coordinates), λ_k ∈ R⁺ is a width parameter, and r_v ∈ [0, 1]^M is the location of voxel v. In the notation of Section 2, the observed variables are y = {X, U, R} and the hidden variables are θ = {W, G}.
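A small sketch of the TLSA forward map under the equations above; the array shapes and function names are our own conventions.

```python
import numpy as np

def rbf_images(r_bar, lam, r_vox):
    """g[k, v] = exp(-||r_v - r_bar_k||^2 / lam_k): the K basis images
    evaluated at the V voxel locations. r_bar: (K, M), lam: (K,),
    r_vox: (V, M)."""
    d2 = ((r_vox[None, :, :] - r_bar[:, None, :]) ** 2).sum(-1)  # (K, V)
    return np.exp(-d2 / lam[:, None])

def reconstruct(X, W, G):
    """Mean voxel activations u[t, v] = sum_k (sum_c x_tc w_ck) g_kv,
    i.e., a plain matrix product: (T, C) @ (C, K) @ (K, V) -> (T, V)."""
    return X @ W @ G
```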
To complete the generative model, we placed priors on the parameters: zero-mean Gaussian priors with variance σ_w² on the weights w_ck, and priors governed by the hyperparameter ρ on the source parameters ω_k (see Gershman et al., 2011, for the exact forms). In all our analyses, we used the following hyperparameter settings: τ = 1, σ_w² = 5, ρ = 1.

Results. We fit TLSA to data collected by Mason and Just (unpublished), involving subjects viewing words. Each word was either the name of a type of tool or of a type of building (i.e., there were 2 classes), and the subject's task was to think about the word and its properties. There were a total of 84 trials per subject (see Gershman et al., 2011, for more details). We restricted our analysis to 1,323 voxels (a single slice of the brain activity data) from a single subject. We trained the model on one half and then generated predictions of the neural data for the other half, conditioning on the test covariates. For NPV, we approximated the predictive distribution using a Monte Carlo estimate, as described in the previous section. As an illustration of the model fits, Figure 4 shows that different components of the approximation capture idiosyncrasies that may be difficult to extract using a single point estimate. In other words, the NPV approximation captures several local maxima of the posterior; we next show that this translates into better predictive accuracy.

Fig. 5. The nonparametric variational approximation improves TLSA predictions of held-out data. The Y-axis represents the negative log-likelihood of predictions for held-out neural data, conditional on the covariates. In all cases K = 20 sources were used. Standard error bars are smaller than the markers.
We evaluated the quality of the reconstruction by calculating the mean-squared reconstruction error of held-out neural data, a quantity proportional to the negative log-likelihood of the held-out data. We also fit TLSA using HMC (see above); we collected 5000 samples, keeping the last 200 for the predictive distribution. We repeated this procedure for the Metropolis-Hastings (MH) sampler used in the original TLSA paper (Gershman et al., 2011). The results are shown in Figure 5. NPV works well with a varying number of components (though best when N > 3), substantially outperforming the MAP and MCMC estimators.
We re-emphasize here that TLSA is non-conjugate, and hence MMF cannot be applied without using specially-tailored approximations (Lawrence, 2000). Note that while both MH and HMC are asymptotically guaranteed to perfectly approximate the posterior, these algorithms require tuning and are often slow to converge. In our experiments, NPV was about 3 times faster than HMC.
Discussion
We developed an approximate inference method for posteriors that do not necessarily enjoy the conjugacy properties that make common variational approximations (e.g., mean-field) possible. Our algorithm is easy to apply to new probabilistic models; all that is required is the likelihood function and its gradient (a requirement shared by many other algorithms, including MAP estimation and HMC). When applied to a hierarchical logistic regression model, we found that NPV incurs little loss in accuracy compared to a more specialized variational algorithm (Jaakkola & Jordan, 2000). We further showed, using a nonlinear latent variable model of fMRI data, that NPV can find an approximation of the posterior that improves predictive performance over MAP estimation and MCMC.
NPV has limitations. First, it assumes a simple approximating family. This could be improved by introducing a full covariance matrix into the component distributions or by allowing the components to be nonuniformly weighted. Further, NPV only applies to continuous variables. We plan to extend it to models with discrete hidden variables.
In summary, NPV is a posterior inference algorithm that is a step towards generically applicable variational approximations. The need for such approximations is increasing, as researchers begin to explore more and more complicated probabilistic models to cope with the increasing complexity of large data sets. Our hope is that by employing generic inference algorithms, the hard work of inference can proceed "invisibly," and researchers can devote more time to testing and refining the assumptions of their models. | 2012-06-18T08:32:05.000Z | 2012-06-18T00:00:00.000 | {
"year": 2012,
"sha1": "6ba0491f9dde8ea042ea4a49df34838b345f23c2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6ba0491f9dde8ea042ea4a49df34838b345f23c2",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
16566740 | pes2o/s2orc | v3-fos-license | Altered Co-contraction of Cervical Muscles in Young Adults with Chronic Neck Pain during Voluntary Neck Motions
[Purpose] Muscle co-contraction is important in stabilizing the spine. The aim of this study was to compare cervical muscle co-contraction in adults with and without chronic neck pain during voluntary movements. [Subjects and Methods] Surface electromyography of three paired cervical muscles was measured in fifteen young healthy subjects and fifteen patients with chronic neck pain. The subjects performed voluntary neck movements in the sagittal and coronal planes at slow speed. The co-contraction ratio was defined as the normalized integration of the antagonistic electromyographic activity divided by that of the total muscle activity. [Results] The results showed that, compared with the controls, the co-contraction ratio of the patients was greater during flexion, lower during extension, slightly greater during right lateral bending, and slightly lower during left lateral bending. [Conclusion] The results suggest that neck pain patients exhibit greater antagonistic muscle activity during flexion and dominant-side bending movements to augment spinal stability, while neuromuscular control provides relatively less protection in the opposite movements. This study helps to specify the changes in the stiffness of the cervical spine in neck pain patients and provides a useful tool and references for the clinical assessment of neck disorders.
INTRODUCTION
Neck pain is a common disorder in the aged population. In young adults, the cumulative 1-year incidence of neck pain was estimated to be 16.4% 1) . Various physical, psychosocial, and sociodemographic factors have been reported to be associated with chronic neck pain 2) . Middle-aged subjects with chronic neck pain have shown dysfunction of kinesthetic sensibility, characterized by increased movement irregularities 3) and movement errors 4) during reposition tasks. Subjects with chronic neck pain have also shown abnormal cervical muscle recruitment patterns during dynamic and work-related tasks 5) . To prevent the progression of pain with age, early detection of neck control problems in young patients with neck pain would be worthy of study.
Muscle co-contraction, the simultaneous activation of agonistic and antagonistic muscles, contributes to maintenance of spinal stability 6) . Since muscle stiffness increases with increased muscle activation associated with increasing effort, it is believed that co-contraction of muscles helps to stiffen and stabilize the spine 7) . Previous studies have shown that muscles exhibit higher activations and generate greater force during eccentric contractions than during concentric contractions 8) . Thus, co-contraction of antagonistic muscles is important to augment the stiffness of spine. Clarification of muscle co-contraction patterns can be helpful in understanding the control strategy of the central nervous system under different movement conditions and its links to neck disorders 9) .
Co-contraction of the extremities and trunk has been extensively studied [10][11][12] . Muscle activity associated with voluntary co-contraction has been shown to increase joint stiffness. Research has also indicated that muscle co-contraction can be affected by internal 13,14) and external 15) postural disturbances, as well as by movement speed 16,17) . The regulation of co-contraction is presumably an efficient adjustment mechanism for spinal stability. Nevertheless, previous studies on the co-contraction of neck muscles were only conducted under isometric contraction in a neutral posture 18) , and there are no studies on the co-contraction patterns of patients with chronic neck pain.
The purpose of this study was to compare cervical muscle co-contraction during voluntary movements in healthy adults to that in patients with chronic neck pain. The co-contraction patterns were quantified by the electromyography-based (EMG-based) co-contraction ratio (CCR). Comparisons of the CCR between the two groups would reveal the characteristics of the neuromuscular control strategies and could help to facilitate a specific training program for the treatment of neck disorders.
SUBJECTS AND METHODS
Fifteen subjects with a history of nontraumatic neck pain and fifteen asymptomatic age-matched subjects were recruited in this study. The subjects in the neck pain group (four males and eleven females) were between 20 and 28 years of age and had suffered from neck pain for at least six months (mean 41.4 ± 43.8 months). Subjects were excluded if they had either undergone cervical spine surgery or complained of any neurological signs. Asymptomatic subjects (six males and nine females) were between 19 and 28 years of age and were excluded if they had any history of neck pain or neck orthopedic disorders. All subjects were right-hand dominant. This study was approved by the institutional medical research ethics committee. Sufficient explanation about the experiment was given, and the experiment was conducted only with those who had consented to participation.
The average intensity of neck pain was measured by a 0-10 numerical rating scale (NRS), with 0 meaning "no pain" and 10 meaning "the worst possible pain imaginable." Patients also completed a self-administered questionnaire to determine their level of impairments resulting from neck pain by the neck disability index (NDI, total score of 50). Table 1 shows the duration of pain, average intensity of pain rated on the NRS, and perceived level of disability measured by the NDI for the neck pain group. The presence of cervical pain and dysfunction in all patients was examined by the same trained physiotherapist.
Surface EMG activities of the bilateral sternocleidomastoid (SCM), splenius capitis (SPL), and semispinalis capitis (SSC) were measured (Trigno Wireless, Delsys, Boston, MA, USA). The SCM muscles are the main neck flexors, and the SPL and SSC muscles are the main neck extensors in maintaining neck dynamic stability 19) . These muscles are also the major muscles for lateral bending motion. The skin surface was shaved of hair and cleaned with alcohol swabs, and wireless EMG electrodes were applied. For the detailed placements of the electrodes on the SCM, SPL, and SSC muscles, please refer to our previous study 20) . The Trigno Wireless system does not use a reference electrode.
An electrogoniometer (CXTLA02, Crossbow Technology, Inc., San Jose, CA, USA) attached at the top of the head was used to record the range of motion of the head synchronously with the EMG. The electrogoniometer traces the inclination to the gravity line and offers fast-response and high-resolution measurement (0.1° over the range of ±90°).
The recorded EMG signals were digitally band-pass filtered between 20 Hz and 450 Hz, full-wave rectified, and smoothed with a low-pass filter (time constant of 100 ms; Butterworth 4th-order). The high-pass cutoff frequency at 20 Hz reduced the noise sources from motion artifacts and ECG artifacts 21) .
The subjects were asked to sit on a chair with their head positioned in a neutral position. They were instructed to perform two sessions of tasks sequentially. In the first session, the subject performed maximal voluntary isometric contraction (MVC) of the cervical muscles against a fixed surface for 3 s in the anterior, posterior, left, and right directions, respectively. There was a rest period of 2 min between repetitions to minimize the effect of fatigue. In the second session, the subject performed voluntary movements in the same four directions at slow movement speed. Each movement direction included two phases, with the neutral position as the benchmark: from the neutral to the terminal position, holding for 3 s, and then from the terminal back to the neutral position. The terminal range of motion was reached when the subject felt mild resistance. Each movement was performed with a constant movement period of around 10 s to represent slow neck movement, which was selected to minimize inertial interactions. Three trials of all the tasks were recorded and analyzed. To assure repeatability, all measurements were collected by the same trained physiotherapist.
For EMG analysis, the central 1-s EMG activities of the three MVC trials were averaged as the reference activity for data normalization. The averaged EMG data were expressed as the normalized average integration of EMG activity (%MVC). The six muscles in this study were classified as either agonists or antagonists. The antagonists were the bilateral SPL and SSC muscles in flexion, the bilateral SCM muscles in extension, and the contralateral muscles in lateral bending (i.e., the right-side SCM, SPL, and SSC muscles during left lateral bending, and the left-side SCM, SPL, and SSC muscles during right lateral bending). The CCR was then calculated by the following equation:
CCR = Σ NAIEMG_Anta / Σ NAIEMG_Total
The subscript "Anta" indicates the antagonists, and "Total" indicates all muscles. Details of the algorithm were described in our previous study 22) .
The one-sample Kolmogorov-Smirnov normality test was used to verify whether each measurement was normally distributed. The independent t-test was used to examine the CCR measurements. The significance level was set to 0.05.
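For illustration, the reported statistical workflow might look like the following scipy sketch; standardizing before the one-sample K-S test is our choice (strictly, estimating the moments from the sample makes the test approximate), and the variable names are hypothetical.

```python
from scipy import stats

def compare_groups(ccr_pain, ccr_ctrl, alpha=0.05):
    """One-sample K-S normality check on each (standardized) sample, then
    an independent t-test at the alpha = 0.05 significance level."""
    normal = all(
        stats.kstest((s - s.mean()) / s.std(ddof=1), "norm").pvalue > alpha
        for s in (ccr_pain, ccr_ctrl)
    )
    t, p = stats.ttest_ind(ccr_pain, ccr_ctrl)
    return {"normal": normal, "t": t, "p": p, "significant": p < alpha}
```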
RESULTS
The mean age, body height, and weight were not significantly different between groups. The recruited patients presented mild to moderate neck pain (numerical rating scale: 2-6) and mild disability (neck disability index: 3-14) (Table 1). Comparison of the numerical rating scale scores recorded before and after the tests showed that no subjects complained of augmented pain. The asymptotic significances of the Kolmogorov-Smirnov test for the measurements (p>0.05) indicated that all measurements were normally distributed.
The CCRs were highest during motion from the neutral to the flexed position (ranging from 0.79 to 0.88) and lowest during motion from the flexed to the neutral position (ranging from 0.11 to 0.18) during sagittal plane motion in both groups. Compared with the control group (0.79±0.07 and 0.62±0.13 for motion from the neutral to flexed position and motion from the extended to neutral position, respectively), the patients showed greater CCRs (0.88±0.05 and 0.71±0.08) during the flexion movement (all p<0.025). In addition, the patients showed lower CCRs (0.19±0.05 and 0.11±0.05 for motion from the neutral to extended position and motion from the flexed to neutral position, respectively) during the extension movement compared with those of the controls (0.27±0.09 and 0.18±0.06, all p<0.004) (Table 2).
The CCRs generally ranged from 0.41 to 0.55 during coronal plane motion in both groups. Compared with the control group (0.52±0.07 and 0.42±0.07 for motion from the neutral to right end position and motion from the left end to neutral position, respectively), the patients showed marginally significantly greater CCRs (0.55±0.08 and 0.47±0.08, p = 0.08 and p = 0.06, respectively) during the right lateral bending movement. The patients also showed slightly lower CCRs (0.49±0.07 and 0.40±0.08 for motion from the neutral to left end position and motion from the right end to neutral position, respectively) during the left lateral bending movement compared with those of the controls (0.52±0.07 and 0.41±0.08, p = 0.08 and p = 0.10, respectively) (Table 2).
DISCUSSION
The major findings for the young adults with chronic neck pain were that 1) patients demonstrated higher CCRs during flexion and right lateral bending than the asymptomatic controls and 2) patients demonstrated lower CCRs during extension and left lateral bending than the control group. The different CCR patterns between the two groups indicate how neuromuscular control changes the stiffness of the cervical spine in response to chronic neck disorder. The rationale for the altered co-contraction of cervical muscles in patients with chronic neck pain is discussed below.
This study showed that the CCRs of the patients during flexion and right lateral bending were greater than those of the healthy adults. This suggests that the spinal stiffness of the patients derives predominantly from muscle guarding as well as from possible proprioceptive deficits. Sjolander et al. found that jerky and irregular cervical movements are characteristic sensorimotor symptoms in chronic neck pain 3) . Our previous study also showed that neck pain patients have poor position sense acuity 20) . Thus, the greater CCRs during these movements could be attributed to higher activation of the antagonists to augment the steadiness of the spinal movement. However, this heightened muscular contraction comes at the cost of greater muscle fatigability 23) and higher spinal loads 24) . Endurance exercises designed for these muscles are therefore suggested to reduce the myoelectric manifestations of fatigue in the cervical muscles and to improve pain and function in subjects with chronic neck pain 25,26) .
The second finding of this study was that the patients demonstrated lower CCRs during extension and left lateral bending than the control group. This indicates that their neck flexors, as well as the muscles on the dominant side, were not sufficiently activated. Together with the above finding that the antagonists were highly activated during flexion and right lateral bending, there appears to be a considerable imbalance in the neck muscle activations of chronic neck pain patients between flexion and extension as well as between right and left lateral bending. The results imply that neuromuscular control provides relatively little protection for neck pain patients, especially during neck extension and left lateral bending. Abnormal activation of the neck muscles, or neuromuscular control errors, could decrease the stiffness of the cervical spine and expose it to a less stable situation, and may be a reason why the chronic pain persists. It is therefore suggested that strengthening the neck flexors, as well as the muscles on the dominant side, to maintain a normal level of cervical co-contraction could be important for the prevention of neck disorders. As discussed above, the CCR evaluation in the two groups aided in the identification of a risk factor for chronic neck pain, namely inadequate agonist/antagonist coordination. The low-load craniocervical flexion exercise, which is designed to flex the upper cervical spine, has been reported to improve muscle coordination 27) . Several studies have also reported a reduction in neck pain with strengthening and endurance exercises of the cervical muscles 28,29) . Further studies are needed to verify the effect of training protocols.
Several methodological considerations for this study should be addressed. First, only the superficial muscle groups were examined. Indwelling needle electrodes, which would be required to record from small, deep muscles, were not used, since such a method may induce anxiety in neck pain subjects or hinder head movements. Second, the subjects were instructed to perform the movements over a constant duration to ensure the consistency of the neck movements. Since this study focused on the investigation of voluntary movements, the subjects were not constrained to constant angular velocities with an isokinetic dynamometer. Finally, the number of subjects was relatively small, so caution is warranted in generalizing the findings.
In conclusion, the results showed that young adults with chronic neck pain exhibited altered muscle responses. This could be due to proprioceptive deficits that result in greater antagonistic muscle activity to augment spinal stability. Meanwhile, certain muscles also demonstrated insufficient activation, exposing the patients to a less stable situation. Future studies providing more insight into these mechanisms are suggested, as they could lead to better evaluation of neck pain and the development of rehabilitative exercise programs. | 2018-04-03T01:33:43.113Z | 2014-04-01T00:00:00.000 | {
"year": 2014,
"sha1": "bddc5f3543be4d8db6f3be48db5005b81e609db5",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jpts/26/4/26_jpts-2013-454/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bddc5f3543be4d8db6f3be48db5005b81e609db5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216413039 | pes2o/s2orc | v3-fos-license | A didactic perspective on negotiations and collaborations between different actors within the Swedish support system: children with autism spectrum disorders included in community-based preschool settings
ABSTRACT In the present study, a didactic perspective was used to examine collaborations and negotiations between preschools and habilitation centres concerning intensive behavioural interventions for children with autism spectrum disorders in inclusive settings in Swedish preschools. The didactic triangle was used as the theoretical tool to analyse information derived from a qualitative case-study in two preschools exemplifying ‘high quality practice’. Direct content analysis was used to analyse data with a focus on the child, the pedagogue, and the subject. Data were collected through multiple sources during a 12-month period, including observations and interviews. A model of aspects of the collaboration between preschools, habilitation centres, and families was conceptualized based on the didactic triangle: the ‘pedagogue cornerstone’ encompassed competence, attitudes, and collaborations; the ‘child cornerstone’ encompassed learning in relation to specific goals; the ‘subject cornerstone’ encompassed both subjects shared with typically developing peers and subjects related to the specific challenges. In addition, the preschool principals were described as important. Different factors in relation to tensions and collaborations between organizations concerning inclusive education were elaborated. Implications for preschools, inter-organizational collaboration, and future research are discussed.
Introduction
In this article, a didactic perspective is used to examine collaborations and negotiations between different actors within the Swedish support system when preschool-aged children with autism spectrum disorders (ASD) receive intensive behavioural intervention (IBI) in community-based Swedish preschools. Sweden has agreed to support the principles of inclusion regarding children with special needs (UNESCO, 1994; United Nations, 2006). Approximately 95% of children between 4 and 5 years of age attend preschool (Swedish National Agency for Education, 2018a), and this is also the case for children with ASD. This does not necessarily mean that children with ASD receive support that meets their needs within inclusive preschool settings. Nilholm and Göransson (2017) identified four categories defining inclusion in educational research: 'placement definitions' define inclusion as the placement of pupils in need of special support in regular schools; 'specified individualized definitions' as meeting the needs of the specific pupils in need of special support; 'general individualized definitions' as meeting the needs of all pupils; and 'community definitions' as creating inclusive communities. Placement in regular preschools can be seen as a necessary but insufficient prerequisite for meeting the needs of children with ASD.
The didactic triangle
In the 17th century, Comenius (1657/1999) described didactics as the art of teaching. As early as around the 12th century in Europe, the core of didactics was described as encompassing three disciplines, all with a focus on 'orderly' approaches: the order of knowledge, the order of teaching, and the students' orderly approach to learning (for a review, see Hopmann, 2007). Hopmann (2007) notes that, according to this definition, teaching requires preparation and is not the same as everyday-life learning. Comenius (1657/1999) stated that to develop teaching, pedagogues need to ask: what, when, how, and whereby can challenges be met?
The core of didactic theory builds on three cornerstones: student, teacher, and subject. These have been depicted as interacting to form the didactic triangle, which can be used as a conceptual model for planning and reflecting on teaching (Gidlund & Boström, 2017; Zierer, 2015). The student cornerstone includes the child's learning, the teacher cornerstone includes the teacher's qualities, and the subject cornerstone includes the knowledge content (Gidlund & Boström, 2017). Interactions among the cornerstones are perceived as ongoing over time; however, the focus of research has shifted between the cornerstones (Klette, 2007; Zierer, 2015). Although widespread, the didactic triangle has not previously been used to understand the implementation of IBI in relation to preschool children with ASD.
The didactic triangle may be a fruitful tool for understanding factors affecting the implementation of IBI in preschools, despite the different philosophical origins of the didactic triangle and IBI. The didactic triangle, on the one hand, stems from an emphasis on teachers' methodological freedom (Hopmann, 2007). IBI, on the other hand, has its basis in applied behavioural analysis, pragmatism, and evidence-based approaches (see below). Yet, both the didactic triangle and IBI share a focus on questions such as: What do the children need to learn? How can they learn it? Who can teach the children?
Children with autism spectrum disorders
ASD is a lifelong neurodevelopmental disorder that starts in early childhood (Magiati, Tay, & Howlin, 2014). It is defined in DSM-5 and ICD-11 by impairments in reciprocal social interaction and communication and by restricted, inflexible, and repetitive behaviours and interests (American Psychiatric Association, 2013; World Health Organization, 2019). The characteristics often also include difficulties in learning 'naturally' from experiences, unusual sensitivity to sensory impressions, and resistance to change (American Psychiatric Association, 2013; Bölte, 2014; Daniels & Mandell, 2014). The prevalence of persons with an ASD diagnosis has increased over the last few decades (Centers for Disease Control and Prevention, 2018; Stockholm County Council, 2017). For example, in 2016 the prevalence of ASD diagnoses in Sweden's capital Stockholm was about two percent among boys under the age of 12 years, four percent among teenage boys, one percent among girls under the age of 12 years, and two percent among teenage girls (Stockholm County Council, 2017).
Intensive Behavioural Intervention (IBI)
IBI originated in the 1970s; in 1987, Ivar Lovaas at UCLA published a groundbreaking study demonstrating the effectiveness of early IBI (Lovaas, 1987). Since then, the practices have been developed further and include a variety of teaching procedures (discrete trial, incidental, and natural environment teaching). Research suggests that when skilled professionals use IBI, children with ASD can make significant gains (e.g., in adaptive behaviour, intellectual ability, and socio-communicative skills); however, outcomes in 'real-life' community-based settings have been reported to be smaller than those within controlled set-ups (e.g., Eikeseth, Klintwall, Jahr, & Karlsson, 2012; Eldevik, Berg Titlestad, Aarlie, & Tønnesen, 2019; Matson & Konst, 2014; Reichow, Barton, Boyd, & Hume, 2014).
Common elements of IBI interventions are that they are individualized, skills are built step-by-step, practices are both based on the child's initiatives and adult-directed, positive reinforcement is used, generalization of skills is planned, interactions with typically developing peers are included (one-to-one practices are also used), and progress is measured in the natural environment (e.g., Eldevik et al., 2019; Leaf, Taubman, McEachin, Leaf, & Tsuji, 2011). When IBI is provided in Swedish preschools, pedagogues are expected to work with these elements. In addition, the children preferably start participating in IBI before the age of four, they spend approximately 20-40 hours a week in structured activities, and the pedagogues need appropriate knowledge and supervision (e.g., Eldevik et al., 2019; McGee, Morrier, & Ala'i-Rosales, 2019). Tailoring IBI to individual children requires an understanding of ASD, applied behaviour analysis, child development, the individual child, and the context (e.g., the preschool setting), and the adults need to be structured yet flexible (e.g., Leaf et al., 2016; McGee et al., 2019).
The Swedish support system
On average, each group within a Swedish preschool consists of 13-15 children (with and without special needs), with a pedagogue-to-child ratio of about 1:5. When children have ASD, the preschools usually obtain funding to employ additional personnel (Eikeseth et al., 2012). Approximately 39% of the pedagogues in Swedish preschools have a preschool teacher degree, 2% have a similar degree (e.g., a teaching degree for other ages), and 20% have completed a secondary school education directed towards working with children (Swedish National Agency for Education, 2018a).
The formal provision of support to children with ASD occurs mainly within the educational system in the municipalities (preschools) and within the healthcare sector (habilitation centres, HCs). Preschools and HCs thus belong to different sectors. The preschools are obliged to follow their guidelines, including the Swedish Educational Act and the Curriculum for the Preschool (Swedish National Agency for Education, 2010, 1998/2019), and HCs are obliged to follow their recommendations and the Health and Medical Service Act (1982:763). Both preschools and HCs share a common recommendation to base practices on scientific evidence and best practice. The Curriculum for the Preschool emphasizes that preschools should be enjoyable, prepare children for lifelong learning, and provide a rich learning environment. The Swedish Educational Act states that preschool principals are responsible for ensuring that children who need special support for their development receive the support required and that the children's guardians have the opportunity to participate in the planning of that support. The Swedish guideline for HCs (Bromark & Granat, 2012) is more specific, recommending early intensive interventions based on applied behaviour analysis for children with ASD. Accordingly, in the case of IBI, HCs provide supervision to preschool staff.
Aim
The purpose of this study is to use the three cornerstones of the didactic triangle (Gidlund & Boström, 2017) to examine negotiations and collaborations between organizations in relation to learning when children with ASD receive IBI in mainstream Swedish preschools. We want to contribute to a deeper understanding of the implementation of IBI. As IBI for children with ASD in preschools focuses on the children's learning, the theoretical structure provided by the didactic triangle seems likely to help researchers reveal and further understand the implementation of IBI in relation to learning in preschool. This article is based on a case-study of two community-based preschools, each enrolling a child with ASD together with typically developing peers. Our hypothesis is that, by using the didactic triangle, we will contribute to a more in-depth understanding of how collaborations between preschools and habilitation centres can be organized for children with ASD.
Methodological considerations
A qualitative case-study approach was used (Yin, 2009). Yin (2009) recommends a case-study when the goal is to expand understanding and to contextually explore real life in depth. Data were collected through observations and interviews. A previous article (Roll-Pettersson, Olsson, & Ala'i-Rosales, 2016) analysed different data from the same case-study using a grounded theory approach to conceptualize a theoretical model of implementation, which expanded the 'Active Implementation Formula' used by Metz (2016). In the present study, material from the case-study is re-analysed to explore inter-organizational collaboration and children's learning using the didactic triangle as a theoretical model. The following pre-determined categories were used to analyse the data: student, teacher, and subject.
Procedures, participants, and analyses
Two HCs with adjacent municipalities were chosen based on their long history of implementing IBI. The HCs forwarded the authors' written information to parents concerning the purpose and methods of the project, the rights of the participants, and the authors' contact information (to be used in case of questions and if they wanted to participate). The HCs were asked to send the information only to families with a child diagnosed with ASD who had at least one year's experience of preschools exemplifying 'high quality' practice of IBI. This included actively involved parents, supportive preschool principals, competent pedagogues, and ongoing supervision from HCs. Following parental approval, contact was made with the preschools. Parents and staff interested in participating were also given written and oral information about the project and their rights as participants and signed written consent forms. The cases revolved around two 5-year-old boys, one at each preschool, who had received an ASD diagnosis when they were 2 to 3 years old. As shared by the participants and noted in the field notes, both children had clear symptoms in line with the criteria for an ASD diagnosis (see above). The practices we observed included common elements of IBI: the pedagogues at the preschools often based interventions on the children's own interests/initiatives and used procedures such as positive reinforcement, prompting, and engaging other children in activities. Furthermore, the pedagogues prepared the working materials and planned the activities. The pedagogues employed to work directly with the children with ASD had both relevant university degrees and experience of working with children with ASD and IBI.
Data were collected during a 12-month period through participant observations, direct observations, a focus group interview, and semi-structured individual interviews. Participant observations took place at the two preschools, covering about 20 hours per site. Direct observations were conducted at two supervision meetings at the HC with pedagogues, parents, and HC supervisors, and at an introductory course at a HC for pedagogues at preschools. A focus group interview was conducted at a HC with HC staff: a behavioural psychologist, a behavioural speech language therapist, and a social worker (in this article, these participants will be referred to as HC staff and the persons from HCs providing support to preschools will be referred to as HC supervisors). Semistructured individual interviews were conducted with two pedagogues (para-professionals at preschools), two parents, two municipality-based special educators, one district level special education administrator responsible for granting resources and goal-setting, one behavioural special education specialist, and one senior supervisor behavioural psychologist. The two pedagogues were employed as preschool paraprofessionals by the communities, and each of them worked mostly with the child in focus at the preschool. Municipality staff (including pedagogues) and HC staff were also asked questions about the general support provided to children with ASD in preschools within the region. The participants have been given fictitious names for anonymity purposes. Participant observations at the preschools included taking part in daily activities and conversing with staff. Field notes were taken during and after the observations. Interviews were recorded and transcribed verbatim.
The interviews were analysed by line-by-line coding. Interview transcripts and field notes were combined and analysed using direct content analysis (Hsieh & Shannon, 2005). As stated by Hsieh and Shannon (2005, p. 1281): 'The goal of a directed approach to content analysis is to validate or extend conceptually a theoretical framework or theory.' Direct content analysis allows an existing theory to be used deductively to guide the coding of the categories (Mayring, 2000).
Preschool settings
Preschool 1 is located outside of a large city in Sweden and has more than 100 children between one and five years old. Anton (5 years old) was diagnosed with ASD 2-3 years before the current study. He belonged to a group of about 20 typically developing peers. Anton lived with both parents. His mother Amelia participated in the interviews. The pedagogue Agnes had worked with Anton for over a year and with IBI for approximately 10 years with supervision from the HC. There was a senior level supervisor at the HC, Alice, who had contact with the preschool, Anton, and Anton's parents, and an intermediate level supervisor, Amanda, who was also affiliated with the case. Agnes visited Alice at the HC for supervision and follow-up together with Anton's parents every 4-6 weeks. Agnes and Amelia also had meetings at the preschool once every month to discuss the process, materials, and goals.
Preschool 2 is a small preschool in a large city in Sweden. Ben (5 years old) was diagnosed with ASD 2-3 years before the current study. He belonged to the preschool, together with typically developing peers. Ben lived with both parents. His mother Bianca participated in the interviews. The pedagogue Barbara had worked with Ben for about three years and with children with ASD for more than five years. Ben's parents moved him from another preschool to the participating preschool when he was diagnosed with ASD. There was a senior supervisor at the HC, Beatrice. Beatrice met with Ben, Ben's parents, and Barbara once every month. There was also a municipality-based behavioural special educator, Britta. Britta was proficient in IBI, and she had a position similar to an intermediate level supervisor at HC. Britta's position, as a municipality-based supervisor, is uncommon in Sweden. She participated at the meetings at the HC. She also provided onsite weekly support and coaching to Barbara at the preschool and contacted the HC supervisor for advice when needed.
Ethics
Ethical principles and the code of conduct for research of the American Psychological Association (2018) and the ethical standards described in the 1964 Declaration of Helsinki and its later amendments were followed when applicable, including informing participants in writing and orally about the purpose of the project, the methods, and their rights. Participants signed consent forms. The parents of the other children in the preschool groups were given basic information about the project. The project was ethically approved through the Department of Child and Youth Studies, Stockholm University, Sweden. We provided assurances that no names would be reported. Pring (2006) maintains that respect for the confidentiality and dignity of informants is an important ethical principle. To respect the confidentiality of the participants in this study, descriptions pertaining to them were kept to a minimum, and all names, places, etcetera were omitted or changed. Information perceived as possibly being sensitive was only used if both the parents and the professionals gave similar information.
Results
The results section is organized according to the cornerstones of the didactic triangle. Figure 1 summarizes the findings. To be consistent with the terminology used by the participants, the student cornerstone is here referred to as the child cornerstone and the teacher cornerstone as the pedagogue cornerstone. As can be seen in Figure 1, depending on which cornerstone is in focus, different aspects of the collaboration between the preschools and the HCs emerge.
Child
When analysing the data with the child in focus, a picture of consensus in beliefs between preschools, HCs, and parents emerged concerning both the prerequisites of the child and the practice of choice. The children with ASD were described as active learners with specific challenges associated with the ASD diagnosis. As highlighted in interviews by parents, pedagogues, and HC staff, it was difficult for the children to learn using practices that are common for typically developing children, such as free-play. Instead, the practices were adapted to their prerequisites within the context of the learning environment. HCs were involved in defining the child's prerequisites for learning. The practices could sometimes be done together with typically developing peers but also separately from the other children or with a few peers. All expressed positive effects of IBI on the child with ASD. Preschool principals were described by HC staff, pedagogues, and parents as having the power to arrange (or not arrange) the learning situation in preschools for children with ASD.
Two types of inter-organizational tensions seemed to hamper the quality of the teaching. The first tension discussed in interviews concerned which organization is responsible for adapting practices to meet the children's needs as well as to the contextual environment in the preschools. HCs gave advice to preschools about practices based on prevailing evidence as defined in their guidelines, but to adapt the practice to the child and the preschool setting was viewed as the responsibility of the preschools. However, as noted in interviews, pedagogues in general usually lack the necessary experience and skills to adapt the advice to preschool settings. This puts focus on what qualifications are needed in order to be a pedagogue for children with ASD and where the expertise of the pedagogues concerning the child is expected to begin and to end.
The second tension concerned the responsibility for evaluating the learning process. According to the curriculum for the preschool (Swedish National Agency for Education, 1998/2019), children's learning in preschool shall be planned and documented. Compared with what is usually the case within Swedish preschools, these children with ASD had measurable individual goals that were evaluated regularly. Furthermore, HC supervisors took part in the evaluations, as these were done within the inter-organizational collaboration. Although follow-up meetings were used to plan daily activities and to evaluate the learning, evaluations could be a challenge:

She [HC supervisor] has set up a great advanced program on how to get rid of temper tantrums, but we have not really been able to follow it up. (Bianca, mother)

Lack of time was said to be one of the reasons for not following up, but the participants also expressed uncertainty about why evaluating goals was difficult. One possible solution discussed in the focus group interview was a team approach in which meetings are arranged at the preschool or in the home rather than at the HC.
If we [HC supervisors] are at the home, then the pedagogues could also come to the home … we think about ourselves as one team. All the time, we think that we are one network. (Focus group)

Seeing other things, such as the physical learning environment, materials, and other children, was an additional benefit noted by the HC staff, made possible by meeting in places other than the HCs, which contributed to a better understanding of the learning context in the preschool. Meeting in different places also distributes power more evenly between HC supervisors, pedagogues, and parents.
The physical environments at these preschools were adapted to reduce sensory stimuli that might interfere with learning while at the same time supporting the children in becoming accustomed to frequent disturbances such as sounds.
Anton works in a separate room. The room is open so that Agnes and Anton can work by themselves, but he still gets used to sounds and noises and then he is not isolated and other children can join in. (Field note)

This highlights the importance of pedagogues having knowledge concerning how to adapt the learning environment to meet specific children's needs. Supervision by the HC supervisors was described as an important factor. Also, even though inclusion, defined as being placed with typically developing peers, was positively valued, the presence of 'too many' children was mentioned as challenging for some children with ASD.
Pedagogue
In the current study, the pedagogue refers to the pedagogue at the preschool responsible for working with the child. The pedagogues used their own expertise to make informed decisions. According to Zierer (2015), ability, knowledge, will, and judgement, i.e. competence, are key components of expertise. The pedagogues acquired competence through in-service training and expert supervision by the HC supervisors. HC staff described in-service training and supervision as empowering components:

They [pedagogues who participated in in-service training] wanted to learn strategies. Now, they feel that they can make a difference for the children. (Focus group)

One aim of the supervision from the HC was to support pedagogues in 'how to obtain specific goals', which sometimes led to inter-organizational tensions between the HCs and the preschools. According to discussions in the focus group interview, providing external supervision sometimes lowered the pedagogues' self-determination. However, when successful, supervision led to improvements in the preschool learning environment. Pedagogues need to obtain knowledge in order to be competent and to make independent and informed decisions. Thus, even though we focus here on the pedagogues, the HC supervisors were also involved in their education.
Both HC staff and pedagogues suggested that their knowledge could be used more by elementary schools. For example, participants noted that the inter-organizational collaborations during preschool did not continue when children transitioned to elementary school. Pedagogues and HC staff pointed out that the gap between the practices in preschools and in preschool classes (the first year of elementary school) led to great disadvantages for many children with ASD, resulting in parents of older children with ASD not placing their children in regular preschool class settings. Preschools were described as being more amenable than elementary schools to adapting practices to children with ASD and to collaborating with HCs. One pedagogue (Barbara) emphasized the importance of pedagogues within preschools providing their knowledge to HC supervisors. She highlighted that pedagogues have unique knowledge about how to understand and communicate with specific children as well as personal knowledge about the children's cultural background.
The parents expressed that they wanted support from the preschool and the HC (cf. Olsson, Hagekull, & Bremberg, 2006). Bianca (mother) described the importance of having meetings without the child present in order to talk about things that are sensitive for the child, such as how to manage aggressive behaviour, and to discuss topics without needing to explain them to the child. The pedagogues in this study collaborated to a great extent with the parents and gave them advice.
Parents and HC staff discussed tensions experienced when parents suspect that a preschool is not following through on the HC supervisors' recommendations. While this was not the case at the preschools in focus in this case-study, the participants described their experiences from other preschools. One strategy mentioned for handling parental dissatisfaction was that parents chose to move their child to another preschool where the principals were more positive towards collaborations with the HC and the pedagogues were more knowledgeable. The HC supervisors could also ask the preschool staff to reflect on their work and ask them whether they needed support, thus empowering preschools:

We felt that we did not really get the preschool staff 'onboard.' We did not think they really had enough knowledge to do the job and definitely did not have the time, that is, they did not have enough resources to work as planned. And we began to feel that it was wrong to let the parents think that the preschool will provide 'intensive' support because they did not. (Focus group)

It would have been difficult to tell them [preschool pedagogues] that because the contact between the preschools and habilitation centres is already difficult, sensitive. They [preschools] have been very clear with: '-You cannot decide in our business.' That is how it is. We must accept that. (Focus group)

The HC staff and the pedagogues highlighted the preschool principals as imperative for enabling pedagogues to educate themselves on ASD and IBI. The principals were also crucial for implementing IBI by stating (or not stating) to all of the preschool staff that the implementation of IBI is important and by making it possible for the pedagogues to adapt the learning context in preschools to children with ASD.
Subject
Few tensions were noted concerning decisions about subjects. Subjects were related to specific challenges for the child, including socio-communicative difficulties (e.g., expressing wishes), selective eating, or strong reactions to environmental stimuli such as sounds. There were also examples of subjects shared with typically developing peers, with the child with ASD practising a skill more frequently or through more structured teaching practices than was the case for typically developing children. For example, the importance of teaching one of the children with ASD not to bite other children was noted; the pedagogue used laminated pictures with texts about not biting to teach the child to self-regulate strong reactions. Another example was learning how to play with peers. The findings support previous results that children with ASD benefit from interacting with typically developing peers and need guidance regarding play (see McGee et al., 2019; Syrjämäki, Pihlaja, & Sajaniemi, 2018). Pedagogues described the importance of peer interactions, but they did not always know how to implement them. Without instructions, prompts, and structured play situations, the children sometimes risked being excluded. The HC supervisors provided guidance concerning peer mediation. The pedagogues continuously observed and provided instructions during play. Peers were asked to join the play based on their own interests or on their capacity to interact with the child with ASD. The pedagogues sometimes scaffolded peers on how to engage the child with ASD in play activities. Criteria for progress were that the child with ASD played with other children for a longer period or in a way more similar to the other children.
The pedagogues, HC supervisors, and parents collaborated by suggesting and agreeing on goals during the supervisory meetings at the HC. Goals were transformed into subjects by pedagogues with support from the HC supervisors. HC supervisors provided guidance to pedagogues concerning how to teach the child with ASD different subjects based on the principles of applied behaviour analysis. For example, the parents of one child asked for support in toilet training, thereby a toilet training intervention with step-by-step goals was collaboratively devised to which both the pedagogue and parents adhered.
Several subjects were taught simultaneously. For example, as observed, when practicing basic skills such as drawing, the pedagogue encouraged the child to make his or her own choices and to collaborate with peers. There is also research indicating that choice reduces problem behaviour induced in demand situations among children with ASD (Carter, 2001). Social validity was stressed as important when considering subjects. The findings show that how a subject is defined has implications for how and when children with ASD interact and share subjects with typically developing peers.
Discussion
In this case-study, we examined negotiations and collaborations between organizations in relation to learning when children with ASD receive IBI in mainstream Swedish preschools. By using the didactic triangle (Gidlund & Boström, 2017), we detected several previously undescribed aspects of the implementation of IBI in preschools. When focusing on the child, these aspects mainly concerned learning in relation to specific challenges and the contextual learning environment. When focusing on the pedagogue, these aspects included the pedagogues' expertise, attitudes, and collaborations. Moreover, when focusing on the subject, both common and specific aspects were discernible as important. Thus, the findings contribute to the understanding of how collaborations between preschools and HCs can be conceptualized for children with ASD and support the didactic triangle as a useful theoretical tool. To the best of our knowledge, this has not been shown before. The focus on the cornerstones highlighted that HC supervisors were clearly involved in the teaching within the case-study preschools. In order to work with IBI, the HC supervisors were seen as a resource for preschools when defining subjects (skills to teach the child), deciding upon methods for teaching, and understanding the specific child's approach to learning (cf. Hopmann, 2007). The preschools' principals and pedagogues are responsible for teaching in preschools, but the input from the HC supervisors was viewed by the preschools as important.
While the case-study method cannot demonstrate cause-and-effect relationships, the results indicate that preschools, HCs, and parents actively collaborate, contributing to how the situation for the children in preschool was arranged. Their collaboration was described in interviews as decisive for the children's learning. As far as we know, such a large influence on preschool didactics from another organization has not been reported for either typically developing children or children with other disabilities in Swedish preschools. The focus on the child cornerstone demonstrates that preschools, HCs, and parents agreed that the children had the specific challenges included in the ASD diagnosis (e.g., American Psychiatric Association, 2013). Even though the present study cannot tell us about effects, there was agreement that children with these types of difficulties do not learn as much with the 'common' didactic practices in preschools as they do with IBI, thus supporting previous research (e.g., Eikeseth et al., 2012; Eldevik et al., 2019). As stated by Leaf et al. (2016), Lovaas was quoted as saying, 'If a child cannot learn in the way we teach, then we must teach in the way the child can learn' (p. 722). The HC supervisors were involved in evaluating what challenges the child had. As previously noted, the cases were chosen because they were examples of 'high quality' IBI, which very likely influenced our findings. We suggest that future research investigate the implementation of IBI in preschools with different prerequisites, for example, with less inter-organizational collaboration.
The most commonly used definition of inclusive education within research is inclusion by placement, without a requirement that the practices meet the needs of each child (cf. Nilholm & Göransson, 2017). As Nilholm and Göransson (2017, p. 239) put it, 'The fact that children with disabilities attend ordinary classes is a necessary but insufficient condition for inclusion.' Given that IBI is effective for the learning of children with ASD (e.g., Eikeseth et al., 2012; Eldevik et al., 2019; Matson & Konst, 2014), the children in the current case-study are included in the sense of receiving education that meets their individual needs. Yet, the participants in the current study described that many other Swedish preschools (that is, not the ones included in the case-study) do not meet the needs of children with ASD. Those preschools may be seen as not being inclusive in any deeper sense than placement (cf. Nilholm & Göransson, 2017). An argument for providing IBI in regular preschool settings is that the children benefit from interactions with typically developing children (see McGee et al., 2019). Sjödin (2015) found that Swedish schools favour abilities and characteristics of children that are seen as 'normal.' More knowledge is needed about which practices are most commonly used in Swedish preschools and how these can be developed to address the needs of children with ASD. As pointed out by Nilholm and Alm (2010), there is a risk that inclusion stops at physically placing children with special needs in the regular classroom without adjusting the didactic practices. Research needs to address whether this is also evident in preschools.
An important component of IBI is following up and evaluating the progress of the child (cf. Eldevik et al., 2019). We found that time constraints negatively affected inter-organizational collaboration regarding follow-ups; this is an aspect deserving further attention. Additional research is needed on the suggestion that follow-ups in different places can lead both to ecologically meaningful evaluations and to a more even distribution of power. In conclusion, a focus on the child cornerstone showed that the children with ASD were seen as learners with special needs. In order to learn as much as possible, children with ASD were seen as needing adapted teaching strategies.
The findings when focusing on the pedagogue reinforce previous results that a competent pedagogue is a prerequisite for learning (Leaf et al., 2016; McGee et al., 2019) and that expertise can be acquired through education and supervision (Denne, Hastings, Hughes, Bovell, & Redford, 2011; Roll-Pettersson, Ala'i-Rosales, Keenan, & Dillenburger, 2010). Based on our findings, inter-organizational collaborations can contribute to the latter. However, a conclusion from the current study is that a less favourable outcome of HC supervisors providing knowledge to pedagogues can be that the pedagogues follow instructions instead of making informed decisions of their own (cf. Hopmann, 2007). In this study, the pedagogues showed competence in IBI. A question remains as to what the effects of inter-organizational collaboration are on pedagogues who lack knowledge or relevant formal education (cf. Långh, Hammar, Klintwall, & Bölte, 2017). The finding that knowledge from HC supervisors empowers pedagogues also deserves further investigation.
In this study, different aspects of the role of the preschool principals in supporting and arranging for collaboration between preschools and HCs have become visible. The principals are formally responsible for ensuring that children with ASD are given support that is tailored to their needs (Swedish National Agency for Education, 2010). The current findings suggest that they are also important for the implementation of IBI, by supporting the implementation and by adapting the learning context in preschools to the child with ASD. More research is needed on the role of preschool principals in inclusive practices.
The findings suggest that elementary school staff could learn more from preschool staff. Barriers between preschool and elementary school lead to a loss of learning in elementary school for children with ASD. Swedish preschools shall collaborate with elementary schools to support children's development in a long-term perspective and create continuity (Swedish National Agency for Education, 2018b). How the knowledge from preschool and HC could be used in the transition process between the preschool and the elementary school deserves further attention.
The parents of the children with ASD interacted regularly with the pedagogues concerning, for example, goal setting and follow-ups. The findings underscore that expertise among preschool staff is valued by parents (e.g., Olsson et al., 2006). In conclusion, the current research suggests that pedagogues contribute to the learning of children with ASD and that the inter-organizational collaboration with HCs is important for what the pedagogues know.
The focus on the subject cornerstone made visible that, in line with previous findings (e.g., McGee et al., 2019), children with ASD were taught subjects that were shared with most other children in preschool but also more unique subjects. The findings might be explained by previous research (e.g., Sjödin, 2015), suggesting that how a child 'should be' influences the teaching of children with ASD. Social validity was important when considering what subjects the children were taught in preschool (cf. National Autism Center, 2014).
Pedagogues, HC supervisors, and parents all suggested subjects based on what they considered useful skills. Few inter-organizational tensions were noticed concerning this. This finding suggests that HCs and parents have influence over subject choices within preschools. This also deserves further attention, for example, to understand how to achieve inclusive education when the practices for some children are under the influence of another organization. In conclusion, the subjects in focus were well-defined skills that were either shared with typically developing peers or targeted at the child with ASD.
If it is indeed the case that IBI promotes positive development (as suggested by a number of researchers, e.g., Eldevik et al., 2019), then our findings underscore that inter-organizational collaboration is necessary for this to happen when support for children with ASD is arranged as in Sweden. It should be noted that we have not compared different solutions or tested effectiveness, and we recommend that future research compare different models of support. Furthermore, comparing models of support across different countries might prove useful.
It was difficult to separate the cornerstones of the didactic triangle. For example, our findings suggest that in order for the child to learn, the physical environment needs to suit the specific child and pedagogues need knowledge about how to achieve this; however, how important the arrangement of the physical environment is depends on the child. Over time, research has shifted focus between the cornerstones (see Klette, 2007), and more research is recommended on how such focus shifts affect the implementation of IBI in preschools.
In the present article, we used the didactic triangle as a theoretical and analytical tool to obtain a deeper understanding of the implementation of IBI in Swedish preschools. Despite the differences in the theoretical roots (for a review, see Hopmann, 2007;National Autism Center, 2014), we found this innovative approach to be fruitful.
We recommend that future research deepen the understanding of the implementation of IBI in relation to dominant interpretations of didactics in northern Europe, which include methodological freedom and the 'Bildung' of children (i.e., the unfolding of their sociability and individuality; Hopmann, 2007). As noted in previous research conducted in Sweden (e.g., Långh et al., 2017) and mentioned by participants in the current study, pedagogues may lack the knowledge to make informed decisions in relation to IBI for children with ASD. In the current cases, the pedagogues were knowledgeable. Yet, the findings raise the question of when the pedagogues and when the HC supervisors are seen as the experts. Additionally, some preschool children may have difficulties in learning basic skills (speaking, etc.) through 'everyday life learning', and more research is needed on how they can achieve 'Bildung' through teaching.
The findings described in this article are based on a small-scale case-study and more research is needed to investigate these conclusions in a larger sample. We recommend that future research address how the benefits from the inter-organizational collaboration, including education of pedagogues, could be reached in a broader range of set-ups.
Though the children have 'inclusive' preschool placements, their education can be described as often (though not always) taking place somewhat apart from the rest of the group. The present study supports the view that the didactic triangle can be used by practitioners and researchers when planning for and studying preschool activities for children supported by multiple organizations in inclusive settings.
Summative conclusions
This qualitative case-study used the didactic triangle (Gidlund & Boström, 2017) to provide an understanding of inter-organizational collaborations and negotiations in relation to children with ASD who receive IBI in inclusive preschool settings. A picture emerged of practices that take place within preschools but depend on collaborations between preschools, habilitation centres (HCs), and parents. The pedagogue cornerstone encompassed the pedagogues' competence, attitudes, and collaborations. HC supervisors were described as important for increasing the knowledge of the pedagogues. Furthermore, as several agents were involved in the teaching (i.e., pedagogues, HC supervisors, and parents), the findings point to a need to understand which professions can be included in the pedagogue cornerstone in the didactic triangle. We suggest that the results be used to develop knowledge on how to increase inclusion and learning of preschool children with ASD. | 2020-03-05T10:27:48.600Z | 2020-01-02T00:00:00.000 | {
"year": 2020,
"sha1": "53d0bb379f54cbe6be2ca2ceec7fa64ab777d694",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20020317.2020.1711561?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "8d7a8bad8a6743e865a3ae0cb106788fe10e0621",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
219984216 | pes2o/s2orc | v3-fos-license | Superusers’ Engagement in Asthma Online Communities: Asynchronous Web-Based Interview Study
Background: Superusers, defined as the 1% of users who write a large number of posts, play critical roles in online health communities (OHCs), catalyzing engagement and influencing other users' self-care. Their unique online behavior is key to sustaining activity in OHCs and making them flourish. Our previous work showed the presence of 20 to 30 superusers active on a weekly basis among 3345 users in the nationwide Asthma UK OHC and that the community would disintegrate if superusers were removed. Recruiting these highly skilled individuals for research purposes can be challenging, and little is known about superusers. Objective: This study aimed to explore superusers' motivation to actively engage in OHCs, the difficulties they may face, and their interactions with health care professionals (HCPs). Methods: An asynchronous web-based structured interview study was conducted. Superusers of the Asthma UK OHC and Facebook groups were recruited. Conclusions: Superusers from a UK-wide online community are highly motivated, altruistic, and mostly female individuals who exhibit judgment about the complexity of coping with asthma and the limits of their advice. Engagement with OHCs satisfies their psychosocial needs. Future research should explore how to address their unmet needs, their interactions with HCPs, and the potential integration of OHCs in traditional healthcare. (J Med Internet Res 2020;22(6):e18185) doi: 10.2196/18185
Introduction
Background
Recent work has suggested that taking part in online communities for people with long-term conditions (LTCs) improves illness self-management [1] and adherence to treatment [2], produces positive health-related outcomes [3-5], facilitates shared decision making with health care professionals (HCPs) [6-8], and may even reduce mortality [9]. There is also evidence that self-management support interventions can reduce health service utilization [10,11]. Participation in online health communities (OHCs) for patients with LTCs can take up part of the health care service demand and indirectly improve access to health care [12]. However, much of this evidence comes from qualitative and observational studies [6,13,14]. Despite a lack of definitive evidence, policymakers are starting to see the potential of OHCs, for example, the Big White Wall, an OHC commissioned by some mental health services in the United Kingdom, Canada, and New Zealand [15]. The Irish health system is piloting closed Facebook groups for smoking cessation [16], whereas the Public Health England Stoptober smoking cessation campaign includes a Facebook group [17] among other initiatives. In the United Kingdom, Facebook is also being piloted by the National Health Service Digital (NHS Digital) to improve cancer screening rates, with promising results [18]. This increasing attention to health social media calls for elucidating the mechanisms that make OHC engagement successful in terms of improving self-management [12,19]. Indeed, although some OHCs flourish, many suffer from little or no traffic [20]. The emerging literature investigating mechanisms of effective OHC engagement shows that superusers (ie, users who are in the top 1%-5% in terms of messages posted to the OHC) are key to success. They generate the majority of traffic and create value, so their recruitment and retention are imperative for the long-term success of OHCs [21]. A previous analysis of an online community for people with drinking problems found that common themes in superusers' engagement included introductions, greetings, general supportive statements, suggested strategies, success stories, and discussion of difficulties [22], showing that superusers reassuringly offer peer support toward behavioral and emotional self-management tasks [23], appropriately leaving medical self-management tasks to HCPs.
To fully understand the unique mechanisms of behavior change through internet-based interventions, collaboration and knowledge transfer between researchers, nonprofit organizations, and private organizations have been recommended [24]. Our network study of peer support in the Asthma UK and British Lung Foundation (BLF) OHCs [25] in collaboration with the platform provider HealthUnlocked has highlighted the key role of superusers. Superusers are distinct from community moderators formally appointed by the platform provider; the number of moderators among the highly active users was negligible [25]. Superusers are a naturally available resource and are responsible for holding successful OHCs together, engage with users with low posting activity, and indirectly contribute to the formation of ties between users. As users become more active within the community, they become more likely to reply to posts than to ask questions. This suggests that superusers gradually become experts, providing others with advice and support [2,7,26].
Online superusers could be considered allies of the health care workforce [27,28]. Our work has inspired the development of a new network-based theory of social medical capital, broadly defined in terms of the advantages that any user (patient or caregiver) can gain from participation in OHCs [29].
In this context, strategies to increase superusers' participation can improve engagement with OHCs [30]. This is attracting growing interest from academics, HCPs, and policymakers. Despite the evidence of superusers' key role in successful OHCs, little is known about this small but critical population, what motivates them to contribute to the community and stay active over time [22,25], whether they encounter any challenges, how their contribution could be supported in any way, and what would make OHCs safer and more effective. This study set out to understand what motivates superusers to adopt this role and the reciprocal value it offers them. Although the research was pragmatically driven, our interpretation draws on self-determination theory [31], a framework for differentiating intrinsic and extrinsic forms of motivation, to help link the insights to recommendations for health organizations seeking to engage with and utilize the value offered by superusers in their online communities.
Objectives
Here, we undertook an asynchronous web-based structured interview study of UK superusers previously quantitatively characterized [25], in collaboration with the charity Asthma UK. We additionally aimed to explore superusers' interfaces with HCPs and their views on HCPs' potential role in promoting engagement with OHCs and on HCPs' engagement as OHC participants themselves.
Interview Schedule Development
The interview schedule was developed with questions based on the extant literature, recent work on OHCs [25,32], and informal discussions held between 2015 and 2018 with 2 superusers, 1 from a previously studied stroke OHC [33] and 1 from the Asthma UK OHC.
Piloting Phase
Piloting was undertaken with 6 superusers recruited through the Asthma UK research operating officer (JP) and OHC moderator. In November 2018, JP emailed the weblink to the study questions, attaching to the same email a Microsoft Word document with the interview questions. Comments and suggestions were received by JP between November 2018 and February 2019. Superusers' suggestions improved the clarity of the introductory text and the queries asked. Some questions initially part of the same query were split to make replying easier (ie, questions 3-5 and 7-8), whereas new questions were suggested (ie, questions 10, 13-15, 21, and 24). This process resulted in 10 additional questions. The wording of some questions was also adjusted to make it more neutral to participants.
Inclusion Criteria
The inclusion criteria were as follows:
• Living with asthma or caring for somebody with asthma
• Having posted to an online asthma community at least one message per week for at least four weeks
As there is no evidence yet about whether superusers' posting activity over time is regular or occurring in bursts (eg, when off work due to illness), we opted to be nonspecific about the 4-week period. Therefore, posting activity over any 4 weeks, consecutive or not, at any point in time would qualify participants as superusers.
In this asynchronous web-based structured interview, the definition of superusers is different from the retrospective one used in the study by Joglekar et al [25] (ie, the top 1% of users, characterized by the largest number of posts posted in the community over the entire observation period of 10 years). That previous study showed that only about 20 to 30 superusers had been active on a weekly basis since 2015. The inclusion criteria for superusers were agreed upon by the coauthors and the superusers who took part in the pilot phase. Due to the hypothesized small sample of potential participants available for recruitment [25], no saturation criteria were used to determine the study sample size.
Participant Recruitment
Of the 17 participants, 16 were recruited by the Asthma UK research operating officer (JP) by email and through an Asthma UK monthly email bulletin to take part anonymously through a SurveyMonkey link [34]. Responses were collected between March and April 2019. A superuser (OF) who is a member of the Asthma UK Centre for Applied Research Patient and Public Involvement (PPI) group [35] was invited and recruited by AD.
Ethical Approval
The study was approved by the Queen Mary University Research Ethics Committee (ref QMREC2205a). To address the issue of confidentiality around patient information and to avoid this information being known to the research team, superusers were approached only by the Asthma UK staff (JP) and invited to participate. The research team did not have access to personally identifiable information apart from the AUKAR PPI member and coauthor (OF).
Analysis
We analyzed the text from open questions using inductive content analysis as described by Elo and Kyngas [36]. Two authors (AS and AD) read all responses to familiarize themselves with the data. An initial coding framework with themes and subthemes was developed, which was adjusted as new data were added. This was done for the first 10 individuals and, subsequently, for the additional 7 individuals. Coding was then performed independently by 2 authors (AS and AD) on all data. Coding was discussed until agreement was reached, and the themes were revised as well.
Characteristics of the Participants
A total of 17 participants were included in the study (Table 1): 14 were people living with asthma, whereas 3 were mothers of children with asthma. The majority were female (15/17), with an age range of 18 to 75 years (3 out of 17 participants were aged 66-75 years).
Of the 17 users, 10 participated in 2 or more OHCs: 15 out of 17 in Asthma UK HealthUnlocked community and 10 out of 15 in Facebook groups. HealthUnlocked is the platform provider of the Asthma UK online community.
Before taking part in the study, they had been active in OHCs for 1 to 6 years and spent between 1 and 20 hours/week reading posts (11 out of 17 participants spent ≥2 hours/week) and between 1 and 3 hours/week writing posts (7 out of 17 participants spent ≥1 hour/week).
Self-reported participation increased over time for 14 out of 17 superusers and was linked to wanting to know more about asthma and its treatment in the context of deterioration of asthma or change in medical treatment. Other factors contributing to participation were increased familiarity and interest toward OHC members and improved awareness and knowledge of asthma.
Themes
Themes and subthemes were generated through content analysis of the open-ended questions and are shown in Table 2. Our findings are articulated into 4 themes: 1. Motivation to engage: Motivation for active participation in OHCs included personal advantage and the desire to help others/being altruistic. Engagement with OHCs promoted superusers' sense of personal control, agency (ie, the actual ability to deal with a task or situation), and self-efficacy (ie, the perceived ability to deal with a task or situation) over their illness, particularly when they adopted the informal role of wise mentors or supporters to other users. An important reason for people taking part in asthma OHCs was the reward felt by being helpful to other members.
Seeking Information/Support
Motivation to engage with OHCs was linked to personal advantage through gaining knowledge and support for asthma and its treatment:
To learn from others who actually know what it can be like and to learn from their experiences. [N.4]
Validation of own experiences in the context of asthma and the feeling of being less isolated were also important factors: To get validation from others with the same symptoms. [N.17]
Having the opportunity to talk with people who live with asthma was considered important: ...many people find comfort and support in such communities that cannot be offered by family members and/or friends that have not experienced the day-to-day living of conditions. [N.14]
Reading other users' conversations was described as a positive experience that increased engagement: I enjoy the chats with others and reading the dialogue between others, many of whom I've got to know. [N.5]
Helping Others
Altruism and the benefit of feeling in a position to help others were significant factors sustaining the motivation to take part regularly in OHCs.
Some even mentioned the potential to save lives. Using their knowledge to clear up any confusion about asthma and medications was relevant, as was making sure people with asthma took their disease seriously and did not rely on social media for queries that needed HCPs' input: Trying to make people take their asthma more seriously and not rely on social media for the answers which often don't come and then they end up in hospital. [N.15] Interestingly, a participant mentioned that part of the motivation was to disseminate proper scientific information: To be helpful and disseminate information, especially scientific information. [N.8]
Rewards for Online Health Communities Engagement
Participants found helping others a positive experience for themselves. By providing replies to other users' queries, superusers increasingly acquired confidence and were recognized for their role as community experts, which in turn boosted their motivation to engage further in OHCs. For some participants, who were unable to work due to ill health or were retired, taking part in OHCs could work as a replacement role.
Financial or Social Recognition Not Important
A question addressed whether the contribution superusers make could be recognized in any way (socially or financially), considering it might help other patients to manage their illness better. Of the 17 participants, 13 replied negatively, 3 were unsure, and only 1 replied positively.
The main reason behind the "no" answers was that reward should come from the awareness of helping others and the fact that social interaction is actually enjoyable.
The motivation to ensure that all users felt equally important to the whole community also played a role: There is no guarantee that a superuser is any better informed than any other user. [Superuser]
Decisions on Posts to Respond to
When asked what determines their decision to reply to certain posts, participants showed a reassuring awareness of the type of self-management support they were able to offer (ie, emotional and behavioral but not medical tasks). Some mentioned that they posted replies when they felt they were able to provide a different/unrepresented point of view with respect to the ones already given, which in turn could help others make decisions:
...there may not be another voice in that comment section giving the view I feel, so I may choose to add it. [N.16]
Participants' aim was to empower patients and carers through their own experiences: I will post a reply from my own experience of asthma gained over 50 years. [N.11]
Types of Support
The type of support most frequently provided by our respondents was mainly behavioral and emotional. In addition, most participants also mentioned their role in signposting users to sources of information and support.
Medical Self-Management Needing Health Care Professionals' Input
Medical self-management was unanimously agreed upon as something that required consultation with HCPs, and all superusers had prior experience of referring other community members to their HCPs:
Problems and Difficulties
Of the 17 superusers, 9 described problems and difficulties associated with their role in the OHCs (2 were unsure about it, 4 replied no, and 2 did not reply).
The main difficulty described by superusers was the worry they felt regarding other community members who were not successfully managing their asthma and not seeking appropriate medical help: Members who put their health at risk by not realising how dangerous a situation they are in. [N.17] Other problems described included dealing with misunderstandings, spam, or posts promoting miracle cures or dangerous ideas (eg, buying medicine over the internet). Of the 17 participants, 9 had experience of reporting such posts to moderators:
Spam e-mails, folks responding who've not understood my posts, prolonged communication. [N.2]
People offering "miracle" cures; people not being supportive; going off topic of the original post. [N.14] Some users found it difficult to deal with the negative tone of some conversations, when the underlying aim was to complain:
Some people don't want to take advice and will just complain constantly no matter what you suggest. [N.15]
Only 1 user mentioned being trolled once in the past and this being a negative experience. Asthma UK HealthUnlocked community was described as a good forum.
Posts Causing Superusers Worries/Stress
Posts causing superusers worry concerned religion-based advocacy and derogatory or emotionally challenging stories. Posts offering bad advice or indicating that users had little knowledge of asthma and its gravity also caused worry and stress, revealing superusers' sense of responsibility to reply to posts and the moral pressure they felt toward other OHC users.
Moreover, superusers worried about posts from users who were struggling or acutely unwell and subsequently stopped posting, or from users who had been chronically struggling with their asthma without seeking professional help.
Suggestions for Policies and Guidance
Of the 17 participants, 8 believed that more policies and guidance should be available for asthma OHCs (2 did not, 4 were unsure, and 3 did not answer the question). In particular, they felt that additional policies and guidance should be introduced on the rules for safe engagement with asthma OHCs and for clarifying when emergency medical advice is needed. Some participants did acknowledge that such policies were already in place, though not all users seemed to be aware of them. A suggestion was made for new users to be encouraged to passively engage and read posts before active engagement: New members should be encouraged to read without contributing at first. I think all members do this instinctively anyway...joining a social group is rarely instant. Good sites include few risks, made safe by the site rules, moderators and experienced users. [N.5] One participant recommended having policies and guidance about buying medications over the internet.
A number of participants highlighted the importance of quick removal of clearly bad advice so as to develop patient confidence in participation:
Health Care Professionals' Awareness of Engagement With Asthma Online Health Communities
Most participants' HCPs (10 out of 17) were not aware of superusers' engagement with asthma OHCs. Only 3 out of 17 participants reported that their involvement in OHCs was known by their HCPs, whereas 4 out of 17 participants were unsure.
Even when the HCPs were aware, this was because superusers mentioned their engagement with OHCs, though they did not discuss it any further:
The only person who knows is my husband. [N.3]
Of the 17 participants, 15 stated that they did not believe that HCPs would discourage participation in asthma OHCs. Only 1 participant reported being discouraged by their HCP from engaging with OHCs, and this was linked to concerns about patients becoming focused on illness rather than health and well-being: They seemed to feel that by engaging with other people in online health forums it focused people on the illness rather than on getting on with life. They seemed to feel that it made people more anxious about their illness rather than provide reassurance, information and support. It seems to me that they were worried that it reinforced an 'illness' mentality.
Health Care Professionals Promote Engagement With Asthma Online Health Communities
The majority of participants (11 out of 17) thought that HCPs should direct patients with LTCs to OHCs, provided they were appropriately moderated and trusted platforms. The remaining 6 out of 17 participants were unsure, though no one felt that HCPs should not promote engagement with OHCs: Any recommended communities would need to be appropriately vetted/ endorsed by medical professional to ensure their accuracy in terms of medical advice and to keep people safe. [N.10] Respondents offered several specific suggestions about how to promote engagement with OHCs: There could be posters up in the waiting rooms of relevant hospital departments and GP surgeries. Asthma nurses could inform patients. Ask people how they feel they have benefited from online communities.
Indeed, a range of advantages arising from the promotion of OHCs by HCPs included obtaining behavioral and emotional self-management support that HCPs may not be able to offer as easily: For support for people when they get diagnosed, have a really difficult time with their asthma and recovering. [N.11]
Suggestions to Reassure Health Care Professionals About Online Health Community Engagement
To reassure HCPs about the safety of OHCs, participants felt OHC providers should have clearer statements about contacting HCPs for medical self-management, place more emphasis on the fact that posts from peers come from people who are not medically qualified, and have readily accessible guidance about keeping safe on social media. Comments from moderators should be regular and nonintrusive, with strict rules regarding posts: Healthcare professionals may need to be reassured that any group they signpost is a medically sound one. However, they cannot dictate. It is about mutual respect for the role of the medical professional and the role of an on-line health community. [N.3] Participants felt that improving HCPs' knowledge and awareness of why patients engage with OHCs and of the benefits of peer support in LTCs would make them keener to promote OHCs. Evaluation of the impact of engagement in OHCs on patients was also suggested: [HCPs' awareness that] online communities are primarily useful for feeling more "normal" with your condition -connecting with others in the same situation. [N.10]
Health Care Professionals' Participation in Online Health Communities
When exploring whether HCPs should themselves take part in OHCs, 9 out of 17 participants replied positively, 5 were unsure, 2 were against it, and 1 did not answer the question.
The reasons behind perceiving HCPs' participation as beneficial were the opportunity to get worries and questions addressed. However, as this respondent notes, their participation may be mutually beneficial through learning more about the patient experience of illness: Not only could a lot of people's worries and questions be easily answered authoritatively, healthcare professionals could gain much knowledge from forums. [N.8] There was a mention of engagement in OHCs as an additional remunerated duty for HCPs: I think they [HCPs] should be paid to set aside time to monitor forums. [N.8] Most participants felt that HCPs' participation in OHCs was important as long as their identity was stated: They should include their medical specialisms in their profiles and understand that there are many viewpoints on some issues. [N.5] Difficulties making participants unsure about HCPs' participation were the potential scrutiny of all posts, the limitation of expression of different points of view, and the problem of not knowing the clinical details of users well enough before an appropriate answer could be given. Issues with HCPs' code of conduct and difficulties with HCPs being patients themselves were also expressed: Difficult. There is a place for it but I think it blurs the lines a little and their code of conduct with their registering body...I think as a healthcare professional who is also a patient they need to be aware of the blurred line between patient and healthcare worker. [N.15]
Principal Findings
This is the first study to provide evidence of superusers' motivations for engagement in a large nationwide OHC, the challenges they face when interacting with other users, and their interface with HCPs. As the use of social media in health care increases, these results, taken together with our previous network study [25], provide unprecedented insight into superusers, who are key to creating value, driving and sustaining user engagement, and contributing to the success of an OHC.
Superusers are both patients with asthma and carers of a wide age range, tend to take part in more than one OHC, and spend considerable time in a role sometimes similar to that of moderators [37]. Reassuringly, they showed awareness of the complexity of coping with asthma and the limits of their advice, provided emotional and behavioral self-management support, and at times had to direct users to HCPs for medical queries. This is an important point as much of the work exploring HCPs' views of OHCs suggests that they are concerned that inappropriate advice is commonly shared and that community members may not be skilled/reflective enough to realize it.
The superuser role appears to be acquired by users as they deepen their asthma-related knowledge and become accustomed to web-based communication and the dynamics of group-based anonymous interaction [26], turning into expert patients, acquiring some of the characteristics of the second generation of e-patients [8].
Although the superuser role could be stressful at times, most HCPs were unaware of superusers' engagement with OHCs and therefore unable to provide support. This is also in contrast with the general agreement among superusers that patient engagement with trusted and thriving OHCs should be promoted within health care. For some, being a superuser could work as a replacement role, as in the case of a retired HCP participant or a working-age participant who identified as an HCP off work due to asthma.
Superusers who were themselves HCPs raised the issue of the need to develop a code of conduct within their registering bodies to engage with users in OHCs.
It has been suggested that HCPs' engagement with OHCs could be remunerated as part of HCP duties.
Superusers' perspectives on what would make OHCs safer and more effective are of interest not only to OHC platform providers but also to policymakers who are increasingly considering leveraging OHCs for health care delivery.
Strengths and Limitations
There are a number of strengths and limitations to our work, which merit comment. The data we collected from superusers in this study came from an existing and thriving asthma OHC [25,27]. In our previous study, we uncovered the emergence of superusers (or hubs) as the OHC network grew larger. Our findings suggest that users with a disproportionate number of contacts started to emerge only when many users had already joined the network (about 1000). This has important implications for the size of our sample of superusers. Although the absolute number of superusers in this study may appear to be small, it takes a very large-scale network for these superusers to emerge. Thus, our sample size must be gauged jointly with the (large) size of the underlying network to which the superusers belong. Although no saturation criteria were used to determine the study sample size, our qualitative analysis revealed that saturation of emerging themes was reached.
The currently limited literature about superusers in OHCs, the lack of a formal identification of superuser status in OHCs, and the a posteriori definition of superusers (ie, superusers as the top 1% active users over a 10-year period [25]) make it difficult to judge the response rate in this study. Our previous work [25] showed the presence of 20 to 30 superusers active on a weekly basis. Although for obvious reasons we could not use the same definition to identify superusers in this study, based on our previous results, the superuser response rate would support the validity of the data presented here.
The study benefited from a superuser piloting phase that face-validated the questions and improved their focus, resulting in additional questions. The study was not designed to test the self-determination theory, which was used as an interpretive lens.
The Asthma UK and Facebook communities are established OHCs (the Asthma UK OHC has been operational since 2006) and are moderated and trusted; thus, the results may not extend to other OHCs. Although we cannot confirm that superusers' sharing of scientific information was always appropriate, it is likely that, as seen in this study, moderators and other superusers would intervene in such circumstances to provide rectification.
Moreover, the self-selective nature of recruitment may have introduced a subjective bias, as less altruistic superusers with different characteristics may not have responded to the invitation.
Comparison With Existing Literature
Only a handful of studies characterizing superusers in OHCs are present in the literature, and to our knowledge, this is the first direct account of superusers' motivation to engage in OHCs. Previous work focused on quantifying superusers [21] and their posting behavior [24] using a passive approach. Superusers have been described as mostly female [24], at times, assuming a role similar to moderators [37]. The desire for agency and mastery in asthma patients has been previously described [38]. Engagement with OHCs promoted superusers' sense of personal control/agency/self-efficacy over their illness, particularly when adopting the informal role of wise mentors or supporters to other users. Interestingly, a recent study indicated that patients gained empowerment through OHCs, which was positively related to patient commitment to the physician and to patient compliance with the proposed treatment [2]. Moreover, there is evidence that users who are high engagers (such as superusers who are themselves patients) exhibit the greatest improvement in patient activation measure (PAM; a measure that captures the extent to which people feel engaged and confident in taking care of their condition) in HealthUnlocked OHCs, even if the average change in PAM across all levels of engagement is not clinically meaningful [39].
Interpretation of Findings Through the Lens of Self-Determination Theory
Superusers display high intrinsic motivation to engage with OHCs (Figure 1) [31]. Intrinsically motivated behaviors are carried out for the sake of sheer interest or satisfaction derived from the task. Intrinsic motivation constitutes the most autonomous form of motivation and is highly evident in the participants of this study [31]. Through engagement with OHCs, they exhibited fulfillment of the 3 basic psychosocial needs: relatedness, competence, and autonomy. With respect to relatedness, superusers described a sense of belonging to the community and a feeling that they mattered to other users. Participants also expressed a sense of mastery (competence), believing in the effectiveness of their ongoing interactions with users within the OHCs. Their behavior is self-endorsed, reflecting autonomy: superusers are autonomous and wholeheartedly behind their engagement with OHCs. With such strong intrinsic motivation, extrinsic motivation, that is, behaviors carried out to obtain outcomes unrelated to the activity itself, such as financial rewards, is unsurprisingly not particularly relevant. Nevertheless, the moral pressure to monitor OHCs, answer requests for help, rectify any inappropriate advice, or address users not seeking medical help when appropriate comprised extrinsic motivation factors that at times felt difficult and stressful and needed to be internalized and integrated into the superuser role.
Figure 1. Superusers' self-determination theory, freely adapted from Ryan and Deci's theory. Intrinsic motivation constitutes the most autonomous form of motivation and is highly evident in superusers. Such motivation emerges from pure personal interest, curiosity, or enjoyment through engagement with online health communities. The transition from external to intrinsic regulation is promoted by superusers' fulfillment of the 3 basic psychosocial needs: relatedness, competence, and autonomy. Within the basic psychosocial needs, factors potentially undermining fulfillment and suggestions for improvement are listed. HCP: health care professional; OHC: online health community.
Clinical and Research Implications
There is a need to improve clinicians', researchers', and policymakers' awareness of superusers. Clinicians could inquire about OHC engagement during consultations with patients with LTCs and offer support to any potential superusers. The first step is establishing a definitive trial to determine whether a primary care intervention specifically aimed at promoting engagement with trusted and thriving OHCs improves the health and well-being of patients with LTCs.
If integration of OHCs proves to be beneficial, given superusers' potential (ie, 10 superusers can sustain a community of 1000 people) [25,27], campaigns to promote patients with LTCs actively engaging with disease-specific and trusted OHCs may be a way to tackle the demand for behavioral and emotional self-management support in LTCs. Through participation in OHCs, patients who were unable to work due to ill health or retired naturally acquire over time the role of superusers and become a resource to the community, as shown in this study. Further research should investigate the possible role of HCPs in OHCs based on their monitoring activity and contributions to web-based conversations. Indeed, the Big White Wall [15] and Health Service Executive Facebook [16] are already including HCPs in the delivery of health care.
Further studies of OHC superusers are needed, aimed at addressing their unmet needs and understanding their role as mentors, their learning potential, and how other users within the community learn from them. Using more explicitly self-determination theory approaches can inform the design of new theoretically informed strategies for planning, managing, and sustaining OHCs.
As with the UK NHS face-to-face peer supporters in mental health, the usefulness and development of potential training packages for superusers could be explored.
Superusers expressed the need to improve OHC moderation through quicker removal of harmful posts. This could be achieved by taking advantage of advances in artificial intelligence, which increasingly allow real-time monitoring of OHCs and the identification and quarantining of posts until review by moderators.
HCPs' registering bodies may need to develop a code of conduct for HCPs' participation in OHCs, especially when they take on a superuser role.
These results should be considered in the context of the current, increasingly wide uptake of digital skills across populations, with 95% of UK adults being on the internet [40]. In this context, OHCs assume growing potential as vehicles of health and social interventions [41], with the presence of superusers playing a key role in an OHC's success or failure. The rollout of the NHS app through the National Health Service [42] and initiatives such as the Online Centres Network [43] are working to tackle digital and social exclusion by providing people with the skills and confidence they need to access digital technology. Indeed, 70% of homeless people use social media [44], and the estimated penetration of broadband connection ownership and the tendency to be influenced by web-based content are greater among ethnic minorities [45].
This study offers a novel and fresh perspective on motivation, difficulties, and interaction with HCPs of superusers, a group of patients likely to be key players in the digital health social media landscape.
"year": 2020,
"sha1": "bedfa93484f4cbfcef1f167adbba355fdbc1468e",
"oa_license": "CCBY",
"oa_url": "https://www.jmir.org/2020/6/e18185/PDF",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1c40c5f10a296cbf2f9525172e2e160a31d1fd3f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Prediction based on conditional distributions of vine copulas
Vine copulas are a flexible tool for multivariate non-Gaussian distributions. For data from an observational study where the explanatory variables and response variables are measured together, a proposed vine copula regression method uses regular vines and handles mixed continuous and discrete variables. This method can efficiently compute the conditional distribution of the response variable given the explanatory variables. The performance of the proposed method is evaluated on simulated data sets and a real data set. The experiments demonstrate that the vine copula regression method is superior to linear regression in making inferences with conditional heteroscedasticity.
Introduction
In the context of an observational study, where the response variable Y and the explanatory variables X = (X 1 , . . . , X p ) are measured simultaneously, a natural approach is to fit a joint distribution to (X 1 , . . . , X p , Y ) assuming a random sample (x i1 , . . . , x ip , y i ) for i = 1, . . . , n, and then obtain the conditional distribution of Y given X for making predictions. Observational studies are studies where researchers observe subjects and measure several variables together, and inferences of interest are relationships among the measured variables, including the conditional distribution of Y given other variables when there is a variable Y that one may want to predict from the other variables. In contrast, in experimental studies, the explanatory variables (treatment factors) are controlled for by researchers, and the effect of the non-random explanatory variables is then observed on the experimental units. The inferences of interest may be different for experimental studies.
The conditional expectation E(Y|X = x) and conditional quantiles F^{-1}_{Y|X}(p|x) can be obtained from the conditional distribution for out-of-sample point estimates and prediction intervals. This becomes the usual multiple regression if the joint distribution of (X, Y) is multivariate Gaussian. Unlike multiple regression, the joint-distribution-based approach uses information on the distributions of the variables and does not specify a simple linear or polynomial equation for the conditional expectation.
When the explanatory variable is a scalar and continuous (p = 1), the joint distribution of (X, Y) can be modeled using a bivariate parametric copula family. Bernard & Czado [6] show how different copula families can lead to quite different shapes in the conditional mean function E(Y|X = x) and say that linearity of conditional quantiles is a pitfall of quantile regression. There are applications of bivariate or low-dimensional copulas for regression in Bouyé & Salmon [7], Noh et al. [25]. However, none of the previous papers link the shape of conditional quantiles to tail properties of the copula family.
For the multivariate distribution approach to work for moderate to large dimensions, there are two major questions to be addressed: (A) How to model the joint distribution of (X 1 , . . . , X p , Y ) when p is not small and some X j variables are continuous and others are discrete? (B) How to efficiently compute the conditional distribution of Y given X? For question (A), the vine copula or pair-copula construction is a flexible tool in high-dimensional dependence modeling [1,5,9,12,17].
The possibility of applying copulas for prediction and regression has been explored, but an algorithm is needed in general for (B) when some variables are continuous and others are discrete. Parsa & Klugman [26] use a multivariate Gaussian copula to model the joint distribution, and conditional distributions have closed-form expressions. However, Gaussian copulas do not handle tail dependence or tail asymmetry, so can lead to incorrect inferences in the joint tails. Vine copulas are used by Kraus & Czado [18], Schallhorn et al. [27] for quantile regression, but the vine structure is restricted to a boundary class of vines called the D-vine. A general regular-vine (R-vine) copula is adopted in Cooke et al. [11], for the case where the response variable and explanatory variables are continuous. Noh et al. [25] use a non-parametric kernel density approach for conditional expectations, but this can run into sparsity issues as the dimension increases.
In this paper, we propose a method, called vine copula regression, that uses R-vines and handles mixed continuous and discrete variables. That is, the predictor and response variables can be either continuous or discrete. As a result, we have a unified approach for regression and (ordinal) classification. The proposed approach is interpretable, and various shapes of conditional quantiles of y as a function of x can be obtained depending on how pair-copulas are chosen on the edges of the vine. Another contribution of the paper is a theoretical analysis of the asymptotic conditional cumulative distribution function (CDF) and quantile function for vine copula regression. This analysis sheds light on the flexible shapes of E(Y|X = x), as well as provides guidelines on choices of bivariate copulas on the vine to achieve different asymptotic behavior. For example, with the approach of adding polynomial terms to an equation in classical multiple regression, one cannot get monotone increasing E(Y|X = x) functions that flatten out for large values of predictor variables.
The remainder of the paper is organized as follows. Section 2 gives an overview of vine copulas. Section 3 describes the model fitting procedure and the prediction algorithm given a fitted vine regression model. Section 4 provides theoretical results on how the choices of bivariate copulas in a vine affect the asymptotic tail behaviors of the conditional CDF and quantile function. These results are more general than those given in Bernard & Czado [6] and provide insights into the possible tail behaviors for higher-dimensional copulas. Sections 5 and 6 present a simulation study and applications of vine regression. Section 7 concludes the paper. The supplementary materials include the code and data for Sections 5 and 6.
Vine copulas
In this section, we provide an overview of vine copulas. A d-dimensional copula C is a multivariate distribution on the unit hypercube [0, 1]^d with all univariate margins being U(0, 1). Sklar's theorem provides a decomposition of a d-dimensional distribution into two parts: the marginal distributions and the associated copula [29]. It states that for a d-dimensional random vector Y = (Y_1, Y_2, . . . , Y_d) following a joint distribution F with jth univariate margin F_j, the copula associated with F is a distribution function C : [0, 1]^d → [0, 1] satisfying F(y_1, . . . , y_d) = C(F_1(y_1), . . . , F_d(y_d)). If F is a continuous d-variate distribution function, then the copula C is unique; otherwise C is unique on the set Range(F_1) × · · · × Range(F_d).
Vine copulas use bivariate copulas as the basic building blocks along with vine graphs to specify the dependence structure. Sections 2.1 and 2.2 briefly review some results for bivariate copulas and vine copulas that are used subsequently.
Bivariate copulas
Let f_1, f_2 and f_{12} be the density functions of Y_1, Y_2 and (Y_1, Y_2) respectively, with respect to Lebesgue measure for continuous random variables or counting measure for discrete ones. Next is a result from Stöber et al. [30] and Section 3.9.5 of Joe [17]. Writing F_j^-(y_j) = lim_{t↑y_j} F_j(t) for left limits, the joint density function f_{12} can be decomposed as f_{12}(y_1, y_2) = c̃(y_1, y_2) f_1(y_1) f_2(y_2), where c̃ is defined case by case as follows (a computational sketch of the mixed discrete-continuous case follows the list).
• If both Y_1 and Y_2 are continuous random variables, then c̃(y_1, y_2) := c(F_1(y_1), F_2(y_2)).
• If Y_1 is a discrete random variable and Y_2 is continuous, then f_{12}(y_1, y_2) = [C_{1|2}(F_1(y_1)|F_2(y_2)) − C_{1|2}(F_1^-(y_1)|F_2(y_2))] f_2(y_2). In this case, c̃(y_1, y_2) := [C_{1|2}(F_1(y_1)|F_2(y_2)) − C_{1|2}(F_1^-(y_1)|F_2(y_2))]/f_1(y_1).
• If Y_1 is a continuous random variable and Y_2 is discrete, then f_{12}(y_1, y_2) = [C_{2|1}(F_2(y_2)|F_1(y_1)) − C_{2|1}(F_2^-(y_2)|F_1(y_1))] f_1(y_1). In this case, c̃(y_1, y_2) := [C_{2|1}(F_2(y_2)|F_1(y_1)) − C_{2|1}(F_2^-(y_2)|F_1(y_1))]/f_2(y_2).
• If both Y_1 and Y_2 are discrete random variables, then the density of (Y_1, Y_2) is f_{12}(y_1, y_2) = C(F_1(y_1), F_2(y_2)) − C(F_1^-(y_1), F_2(y_2)) − C(F_1(y_1), F_2^-(y_2)) + C(F_1^-(y_1), F_2^-(y_2)). In this case, c̃(y_1, y_2) := f_{12}(y_1, y_2)/[f_1(y_1) f_2(y_2)].
Vine structures
A regular vine (R-vine) in d variables is a nested set of d − 1 trees where the edges in the first tree are the nodes of the second tree, the edges of the second tree are the nodes of the third tree, etc. Vines and truncated vines provide a flexible approach to summarizing dependence in a multivariate distribution with edges in the first tree representing pairwise dependence and edges in subsequent trees representing conditional dependence. Vines extend Markov trees to allow for conditional dependence. A multivariate Gaussian distribution can be represented through vines when parameters on the edges of the vine are correlations in the first tree and partial correlations in subsequent trees; for tree ℓ (2 ≤ ℓ < d), the partial correlations are conditioned on ℓ − 1 variables.
In general, the first tree represents d variables as nodes and bivariate dependence of d − 1 pairs of variables as edges. The second tree represents conditional dependence of d − 2 pairs of variables conditioning on another variable; nodes are the edges in tree 1, and a pair of nodes could be connected if there is a common variable in the pair. The third tree represents conditional dependence of d − 3 pairs of variables conditioning on two other variables; nodes are the edges in tree 2, and a pair of nodes could be connected if there are two common conditioning variables in the pair. This continues until tree d−1 has one edge that represents the conditional dependence of two variables conditioning on the remaining d − 2 variables.
A formal definition, from Bedford & Cooke [5], is as follows: V = (T_1, . . . , T_{d−1}) is a regular vine on d elements if (i) T_1 is a tree with nodes N_1 = {1, . . . , d} and edge set E_1; (ii) for ℓ = 2, . . . , d − 1, T_ℓ is a tree with node set N_ℓ = E_{ℓ−1}; and (iii) (proximity condition) two nodes in T_ℓ can be joined by an edge only if the corresponding edges in T_{ℓ−1} share a common node. To get a vine copula or pair-copula construction, for each edge [jk|S] ∈ E(V) in the vine, there is a bivariate copula C_{jk;S} associated with it. Let c̃_{jk;S}(·; y_S) be as defined in Section 2.1 for C_{jk;S}(·; y_S) when the conditioning value is y_S, and let C_{j|k;S}(a|b; y_S) = ∂C_{jk;S}(a, b; y_S)/∂b and C_{k|j;S}(b|a; y_S) = ∂C_{jk;S}(a, b; y_S)/∂a; C_{j|k;S} and C_{k|j;S} are the conditional CDFs of the copula C_{jk;S}. The joint density of (Y_1, . . . , Y_d) can be decomposed according to the vine structure V as
f_{1...d}(y_1, . . . , y_d) = [∏_{j=1}^{d} f_j(y_j)] ∏_{[jk|S]∈E(V)} c̃_{jk;S}(y_j, y_k; y_S).  (2.2)
The above representation for the case of absolutely continuous random variables is derived in Bedford & Cooke [4]; its extension to include some discrete variables is in Section 3.9.5 of Joe [17]. For simplicity of notation, we denote F^+_{j|S} = F_{j|S}(y_j|y_S) and F^-_{j|S} = lim_{t↑y_j} F_{j|S}(t|y_S). If it is assumed that the copulas on edges of trees 2 to d − 1 do not depend on the values of the conditioning variables, then c_{jk;S} and c̃_{jk;S} in (2.2) do not depend on y_S; i.e., c_{jk;S}(·) = c_{jk;S}(·; y_S) and c̃_{jk;S}(·) = c̃_{jk;S}(·; y_S). This is called the simplifying assumption. With the simplifying assumption, we have the following definition of c̃_{jk;S}.
• If Y_j and Y_k are both continuous, then c̃_{jk;S}(y_j, y_k) := c_{jk;S}(F^+_{j|S}, F^+_{k|S}).
• If Y_j is continuous and Y_k is discrete, then c̃_{jk;S}(y_j, y_k) := [C_{k|j;S}(F^+_{k|S}|F^+_{j|S}) − C_{k|j;S}(F^-_{k|S}|F^+_{j|S})]/f_{k|S}(y_k|y_S).
• If Y_j is discrete and Y_k is continuous, then c̃_{jk;S}(y_j, y_k) := [C_{j|k;S}(F^+_{j|S}|F^+_{k|S}) − C_{j|k;S}(F^-_{j|S}|F^+_{k|S})]/f_{j|S}(y_j|y_S).
• If Y_j and Y_k are both discrete, then c̃_{jk;S}(y_j, y_k) := [C_{jk;S}(F^+_{j|S}, F^+_{k|S}) − C_{jk;S}(F^-_{j|S}, F^+_{k|S}) − C_{jk;S}(F^+_{j|S}, F^-_{k|S}) + C_{jk;S}(F^-_{j|S}, F^-_{k|S})]/[f_{j|S}(y_j|y_S) f_{k|S}(y_k|y_S)].
A t-truncated vine copula results if the copulas for trees T_{t+1}, . . . , T_{d−1} are all independence copulas, representing conditional independencies. A small worked example of decomposition (2.2) follows.
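As a worked instance of (2.2), assume three continuous variables and the vine with tree T_1 edges {12, 23} and tree T_2 edge {13|2} (this particular vine and the all-continuous assumption are illustrative choices):

```latex
f_{123}(y_1,y_2,y_3)
  = f_1(y_1)\, f_2(y_2)\, f_3(y_3)\,
    c_{12}\bigl(F_1(y_1),F_2(y_2)\bigr)\,
    c_{23}\bigl(F_2(y_2),F_3(y_3)\bigr)\,
    c_{13;2}\bigl(F_{1|2}(y_1\mid y_2),\,F_{3|2}(y_3\mid y_2)\bigr)
```

Here F_{1|2}(y_1|y_2) = C_{1|2}(F_1(y_1)|F_2(y_2)), and under the simplifying assumption c_{13;2} does not vary with y_2.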
Vine copula regression
Consider the data structure for multiple regression with p explanatory variables x_1, . . . , x_p and response variable y as a sample of size n; the data are (x_{i1}, . . . , x_{ip}, y_i) for i = 1, . . . , n, considered as independent realizations of a random vector (X_1, . . . , X_p, Y). If these data are considered as a sample in an observational study, then a natural approach is to fit a joint multivariate density to the variables x_1, . . . , x_p, y. This can be done using a flexible, parametric vine copula.
Researchers have applied D-vine copulas to quantile regression [18,27]. Their approach constructs a D-vine sequentially: it first links the predictor with the strongest dependence to y, then a second variable with the strongest conditional dependence to y given the first predictor; this procedure continues until an information criterion stops improving. This structure learning algorithm is similar to forward selection in multiple regression and is easiest to handle with a D-vine. However, it is known that forward selection does not usually produce an optimal solution. Compared to the existing D-vine-based methods, our proposed algorithm uses R-vines and is more flexible.
Furthermore, Schallhorn et al. [27] propose a method based on continuous convolution to handle discrete variables and estimate the vine copula non-parametrically. When variables are all monotonically related, the parametric approach that we are using can be simpler for interpretation and for checking the monotonicity of conditional quantiles.
The remainder of this section is organized as follows. Section 3.1 introduces the model fitting and assessment procedure. Section 3.2 describes an algorithm that calculates the conditional CDF of the response variable of a new observation, given a fitted vine copula regression model. The conditional CDF can be further used to calculate the conditional mean and quantile for regression problems, and the conditional probability mass function (PMF) for classification problems.
Model fitting and assessment
Due to the decomposition of a joint distribution to univariate marginal distributions and a dependence structure among variables, a two-stage estimation procedure can be adopted. Suppose the observed data are (z i1 , z i2 , . . . , z id ) = (x i1 , . . . , x ip , y i ), for i = 1, . . . , n with d = p + 1.
1. Estimate the univariate marginal distributions F̂_j, for j = 1, . . . , d, using parametric or non-parametric methods. The corresponding u-scores are obtained by applying the probability integral transform: û_{ij} = F̂_j(z_{ij}) (a code sketch of this transform is given after the list). 2. Fit a vine copula on the u-scores. There are two components: the vine structure and the bivariate copulas. Section 3.1.1 discusses how to choose a vine structure, and Section 3.1.2 presents a bivariate copula selection procedure. 3. Compute some conditional quantiles, with some predictors fixed and others varying, to check whether the monotonicity properties are interpretable.
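A minimal sketch of stage 1, assuming continuous margins estimated non-parametrically by ranks; the scaling by n + 1, which keeps u-scores strictly inside (0, 1), is a common convention rather than the paper's stated choice.

```python
import numpy as np

def u_scores(z):
    """Rank-based probability integral transform of an (n, d) data matrix."""
    n, d = z.shape
    u = np.empty_like(z, dtype=float)
    for j in range(d):
        ranks = np.argsort(np.argsort(z[:, j])) + 1  # ranks 1..n (no ties assumed)
        u[:, j] = ranks / (n + 1.0)                  # strictly inside (0, 1)
    return u

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4))   # placeholder data: columns (x1, x2, x3, y)
u = u_scores(z)                  # stage 1; stage 2 fits a vine copula on u
```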
Vine structure learning
In this section, we introduce methods for learning or choosing truncated R-vine structures. From Kurowicka & Joe [19], the total number of (untruncated) R-vines in d variables is d!/2 × 2^{(d−2)(d−3)/2}, which grows rapidly with d. For small d, one can enumerate all R-vines and find the best ℓ-truncated R-vine based on some objective functions such as those in Section 6.17 of Joe [17]. However, this is only feasible for d ≤ 8 in practice. Greedy algorithms [12] and metaheuristic algorithms [10] are commonly adopted to find a locally optimal ℓ-truncated vine. The development of vine structure learning algorithms is an active research topic; various algorithms have been proposed based on different heuristics. However, no heuristic method can be expected to be universally the best. The goal of vine copula regression is to find the conditional distribution of the response variable given the explanatory variables. In general, to calculate the conditional distribution from the joint distribution specified by a vine copula, computationally intensive multidimensional numerical integration is required. This can be avoided by enforcing a constraint on the vine structure such that the node containing the response variable as a conditioned variable is always a leaf node in T_ℓ, ℓ = 1, . . . , d − 1. When this constraint is satisfied, Algorithm 3.1 computes the conditional CDF without numerical integration.
To construct a truncated R-vine that satisfies the constraint, we can first find a locally optimal t-truncated R-vine using the explanatory variables x_1, . . . , x_p. Then from level 1 to level t, the response variable y is sequentially linked to the node that satisfies the proximity condition and has the largest absolute (normal scores) correlation with y. The idea of extending an existing R-vine is also explored by Bauer & Czado [3] for the construction of non-Gaussian conditional independence tests. Figures 1 and 2 demonstrate how to add a response variable to the R-vine of the explanatory variables, after each variable has been transformed to standard normal N(0, 1). Given the 2-truncated R-vine V = (T_1, T_2) in Figure 1, suppose the response variable is indexed by 6. The first step is to find the node that has the largest absolute correlation with it, i.e., arg max_{1≤i≤5} |ρ_{i6}|. Assuming ρ_{36} is the largest, node 3 and node 6 are linked, as shown in Figure 2. A code sketch of this tree-1 step follows.
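A sketch of the tree-1 linking step; the function and the data layout are hypothetical, and in trees 2 to t one would additionally restrict the candidates to nodes satisfying the proximity condition.

```python
import numpy as np
from scipy.stats import norm

def link_response_tree1(u):
    """u: (n, p+1) u-score matrix with the response in the last column.
    Returns the predictor index to join with y in tree T_1, chosen by the
    largest absolute normal-scores correlation."""
    z = norm.ppf(u)                        # transform u-scores to normal scores
    p = u.shape[1] - 1
    cors = [abs(np.corrcoef(z[:, i], z[:, p])[0, 1]) for i in range(p)]
    return int(np.argmax(cors))            # arg max_i |rho_{i,y}|
```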
Bivariate copula selection
After fitting the univariate margins and deciding on the vine structure, parametric bivariate copulas can be fitted sequentially from tree 1, tree 2, etc. The results in Section 4 can provide guidelines for the choice of bivariate copula families in order to match the expected behavior of conditional quantile functions in the extremes of the predictor space. With the simplifying assumption and parametric copula families, the log-likelihood of the copula with parameter θ_{jk} on edge [jk|S] is ℓ_{jk;S}(θ_{jk}) = Σ_{i=1}^{n} log c̃_{jk;S}(z_{ij}, z_{ik}; θ_{jk}). Commonly used model selection criteria include the Akaike information criterion (AIC) and the Bayesian information criterion (BIC):
AIC = −2 ℓ_{jk;S}(θ̂_MLE) + 2|θ_{jk}|,  BIC = −2 ℓ_{jk;S}(θ̂_MLE) + |θ_{jk}| log n,
where |θ_{jk}| refers to the number of copula parameters in c_{jk;S}. For each candidate bivariate copula family on an edge, we first find the parameters θ̂_MLE that maximize the log-likelihood. Then the copula family with the lowest AIC or BIC is selected. When all the variables are continuous, this approach to selecting the bivariate copulas is the standard approach in VineCopula [28] and was initially proposed and investigated by Brechmann [8].
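A sketch of per-edge family selection by AIC; the `fit_ll` callables are placeholders for maximized log-likelihoods of candidate parametric families, not a real library API.

```python
import numpy as np

def select_family(u, v, candidates):
    """candidates: dict mapping a family name to (fit_ll, n_params), where
    fit_ll(u, v) returns max_theta sum_i log c(u_i, v_i; theta)."""
    best_name, best_aic = None, np.inf
    for name, (fit_ll, n_params) in candidates.items():
        aic = -2.0 * fit_ll(u, v) + 2.0 * n_params   # AIC = -2 logL + 2|theta|
        if aic < best_aic:
            best_name, best_aic = name, aic
    return best_name, best_aic
```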
Prediction
This section describes how to predict the conditional distribution of the response variable of a new observation, given a fitted vine copula regression model. We first present an algorithm that computes the conditional CDF of the response variable. If the response variable is continuous, the conditional quantile and mean can be calculated by inverting the conditional CDF and integrating the quantile function. If the response variable is discrete, the conditional PMF can be easily derived from the conditional CDF via finite difference.
Based on ideas of the algorithms in Chapter 6 of Joe [17], Algorithm 3.1 can be applied to an R-vine with mixed continuous and discrete variables. The idea is that, given the structural constraint on the vine structure described in Section 3.1.1, conditional distributions are sequentially computed according to the vine structure, and the conditional distribution of the response variable given all the explanatory variables is obtained in the end. The input is a vine copula regression model with a vine array A = (a_{kj}), a vector of new explanatory variables x = (x_1, . . . , x_d)′, and a percentile u ∈ (0, 1). The vine array is an efficient and compact way to represent a vine structure; see Appendix A or Kurowicka & Joe [19] or Joe [17]. The R-vine matrices in the VineCopula package [28] are the vine arrays with backward indexing of rows and columns. The algorithm returns the conditional CDF of the response variable given the explanatory variables evaluated at u, that is, p(u|x) := P(F_Y(Y) ≤ u|X = x). It calculates the conditional distributions C_{j|a_{ℓj}; a_{1j},...,a_{ℓ−1,j}} and C_{a_{ℓj}|j; a_{1j},...,a_{ℓ−1,j}} for ℓ = 1, . . . , n_trunc and j = ℓ + 1, . . . , d, where n_trunc is the truncation level of the vine copula. For discrete variables, both the left-sided and right-sided limits of the conditional CDF are retained. In the end, p(u|x) is obtained. If the response variable Y is continuous, then the conditional mean and conditional quantile can be calculated using p(·|x): the α-quantile is F^{-1}_Y(p^{-1}(α|x)), and the conditional mean is
E(Y|X = x) = ∫_0^1 F^{-1}_Y(p^{-1}(α|x)) dα,
where p^{-1}(·|x) is calculated using the secant method, and the numerical integration is computed using Monte Carlo methods or numerical quadrature. If the response variable Y is ordinal, then it is a classification problem; we only need to focus on the support of Y, and the conditional CDF is fully specified by p(F_Y(y)|x) for y in the support of Y. If the response variable Y is nominal, then the proposed method does not apply. An alternative vine-copula-based method is to fit a vine copula model for each class separately and use Bayes' theorem to predict the class label. Specifically, for samples in class Y = k, we fit a vine copula density f̂_{X|Y}(x|k). Let π̂_k be the proportion of samples in class k in the training set. According to Bayes' theorem, the predicted probability that a sample belongs to class k is
P̂(Y = k|X = x) = π̂_k f̂_{X|Y}(x|k) / Σ_l π̂_l f̂_{X|Y}(x|l).
The classification rule has been utilized in Nagler & Czado [23] in an example involving vines with nonparametric pair copula estimation using kernels. Since the distribution of predictors is modeled separately for each class, this alternative method is more flexible but has a high computational cost, especially when the number of classes is large.
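The following sketch turns the conditional CDF p(u|x) of Algorithm 3.1 into point and interval predictions; `p_cond` and `F_y_inv` are placeholders for the algorithm's output and the fitted marginal quantile function, and Brent's method stands in here for the secant method mentioned above.

```python
import numpy as np
from scipy.optimize import brentq

def cond_quantile(alpha, x, p_cond, F_y_inv):
    """alpha-quantile of Y | X = x, computed as F_Y^{-1}(p^{-1}(alpha|x))."""
    u_star = brentq(lambda u: p_cond(u, x) - alpha, 1e-10, 1.0 - 1e-10)
    return F_y_inv(u_star)

def cond_mean(x, p_cond, F_y_inv, n_grid=200):
    """E(Y | X = x) approximated by a midpoint rule over quantile levels."""
    alphas = (np.arange(n_grid) + 0.5) / n_grid
    return float(np.mean([cond_quantile(a, x, p_cond, F_y_inv) for a in alphas]))
```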
Theoretical results on shapes of conditional quantile functions
From the properties of the multivariate normal distribution, if (X_1, . . . , X_p, Y) follows a multivariate normal distribution, then the conditional quantile function of Y given X_1, . . . , X_p has the linear form
F^{-1}_{Y|X}(α|x) = b_0 + Σ_{j=1}^{p} b_j x_j + Φ^{-1}(α) σ_Y (1 − ρ²)^{1/2},
where ρ = ρ_{Y;X_1,...,X_p} is the multiple correlation coefficient. Going beyond the normal distribution, we address the following question in this section: how does the choice of bivariate copulas on the edges of a vine affect the asymptotic behavior of the conditional CDF and quantile function?
[Algorithm 3.1 — Conditional CDF of the response variable given the explanatory variables with which to predict; based on steps from Algorithms 4, 7, 17, and 18 in Chapter 6 of Joe [17]. It computes the indicator array I = (I_{kj}) as in Algorithm 5 of Joe [17], with separate branches for discrete variables j and discrete conditioning variables a_{ℓ−1,j}; the full pseudocode is not recoverable from this extraction.]
We focus on a bivariate random vector (X, Y) with standard normal margins. Let C(u, v) be the copula; then the joint CDF is F_{X,Y}(x, y) = C(Φ(x), Φ(y)). Without loss of generality, we assume the copula C(u, v) has positive dependence. We are interested in the shape of the conditional CDF F_{Y|X}(y|x) and conditional quantile F^{-1}_{Y|X}(α|x) when x is extremely large or small and α ∈ (0, 1) is fixed. Bernard & Czado [6] study a few special cases for bivariate copulas. Our results are more extensive in relating the shape of asymptotic quantiles to the strength of dependence in the joint tail.
If the conditional distribution C_{V|U}(·|u) converges to a continuous distribution with support on [0, 1] as u → 0^+, then C^{-1}_{V|U}(α|0) > 0 for α ∈ (0, 1). Therefore, F^{-1}_{Y|X}(α|x) levels off as x → −∞; the same argument applies when x → +∞. That is, the conditional quantile function converges to a finite constant in that tail. If lim_{u→0^+} C_{V|U}(·|u) is degenerate at 0, then lim_{u→0^+} C^{-1}_{V|U}(α|u) = 0. To study the shape of F_{Y|X}(y|x) when x is very negative, we need to further investigate the rate at which C^{-1}_{V|U}(α|u) converges to 0. The next proposition, with proof in the supplementary material, summarizes the possibilities.
Proposition 4.1. Let (X, Y ) be a bivariate random vector with standard normal margins and a positively dependent copula C(u, v).
Here η indicates the strength of the relation between the two variables in the tail; a larger η value corresponds to a stronger relation. The strongest possible comonotonic dependence is when Y = X, and the conditional quantile function is F^{-1}_{Y|X}(α|x) = x, which is linear in x and does not depend on α; in this case, η = 1. The weakest possible positive dependence is when X and Y are independent, and F^{-1}_{Y|X}(α|x) = F^{-1}_Y(α) does not depend on x; in this case, η = 0. Based on the value of η, the asymptotic behavior of the conditional quantile function can be classified into the following categories (a numeric illustration follows the list):
1. Strongly linear: η = 1 and k_α = 1. F^{-1}_{Y|X}(α|x) goes to infinity linearly, and it does not depend on α. The dependence is stronger than bivariate normal.
2. Weakly linear: η = 1, k_α can depend on α, and 0 < k_α < 1. F^{-1}_{Y|X}(α|x) goes to infinity linearly, and it depends on α. The dependence is comparable to bivariate normal.
3. Sublinear: 0 < η < 1. F^{-1}_{Y|X}(α|x) goes to infinity sublinearly. The dependence is weaker than bivariate normal.
4. Asymptotically constant: η = 0. F^{-1}_{Y|X}(α|x) converges to a finite constant. Asymptotically, X and Y behave as if independent.
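A numeric sketch of these shapes using the MTCJ (Clayton) copula, whose conditional quantile has the closed form used in Example 4.1 below; on the normal scale the curves approach the diagonal as x → −∞, illustrating the strongly linear case.

```python
import numpy as np
from scipy.stats import norm

def clayton_cond_quantile(alpha, u, delta):
    """C^{-1}_{V|U}(alpha|u) for the MTCJ (Clayton) copula, delta > 0."""
    return (1.0 + u ** (-delta)
            * (alpha ** (-delta / (1.0 + delta)) - 1.0)) ** (-1.0 / delta)

x = np.linspace(-4.0, 4.0, 9)
u = norm.cdf(x)
for alpha in (0.25, 0.5, 0.75):
    y_q = norm.ppf(clayton_cond_quantile(alpha, u, delta=2.0))
    print(alpha, np.round(y_q, 2))   # each curve tracks y = x for very negative x
```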
Example 4.1. (MTCJ lower tail) The bivariate MTCJ (Clayton) copula CDF is C(u, v; δ) = (u^{−δ} + v^{−δ} − 1)^{−1/δ} for δ > 0. The conditional quantile function is C^{-1}_{V|U}(α|u; δ) = [1 + u^{−δ}(α^{−δ/(1+δ)} − 1)]^{−1/δ}. Taking the log of both sides, −log C^{-1}_{V|U}(α|u; δ) ∼ −log u as u → 0^+. By Proposition 4.1, we have η = 1 and k_α = 1, so the conditional quantile function is strongly linear in the lower tail. To apply the next proposition to get the same conclusion, note that the generator is the gamma Laplace transform ψ(s) = (1 + s)^{−1/δ}.
Example 4.2. (Gumbel lower tail) The bivariate Gumbel copula CDF is C(u, v; δ) = exp{−[(−log u)^δ + (−log v)^δ]^{1/δ}} for δ ≥ 1. The conditional CDF is C_{V|U}(v|u; δ) = u^{-1} C(u, v; δ) [1 + ((−log v)/(−log u))^δ]^{1/δ−1}. The conditional quantile function C^{-1}_{V|U}(α|u; δ) does not have a closed-form expression; to leading order, −log C^{-1}_{V|U}(α|u; δ) ∼ (−δ log α)^{1/δ}(−log u)^{1−1/δ} as u → 0^+, so by Proposition 4.1 the conditional quantile function is sublinear in the lower tail for δ > 1. To apply the next proposition to get the same conclusion, note that the generator is the positive stable Laplace transform ψ(s) = exp{−s^{1/δ}}.
For Archimedean and survival Archimedean copulas, the following proposition provides links between the tail behavior of the generator and the tail conditional distribution and quantile functions; its proof is included in the supplementary material.
Proposition 4.2. Suppose the Archimedean generator ψ satisfies the following tail conditions.
1. For the upper tail of ψ, as s → ∞, ψ(s) ∼ a_1 s^q exp{−a_2 s^r}, where a_1 > 0; r = 0 implies a_2 = 0 and q < 0, and r > 0 implies r ≤ 1 and q can be 0, negative or positive.
2. For the lower tail of ψ, as s → 0^+, there is M ∈ (k, k + 1) such that ψ(s) = Σ_{i=0}^{k} (−1)^i h_i s^i + (−1)^{k+1} h_{k+1} s^M + o(s^M), where h_0 = 1 and 0 < h_i < ∞ for i = 1, . . . , k + 1. If 0 < M < 1, then k = 0.
The proposition then gives the limits of the conditional CDF and quantile function in both tails: as u → 0^+ (lower tail) and as u → 1^− (upper tail), for fixed v ∈ (0, 1) and α ∈ (0, 1). Combined with Proposition 4.1, it states that, for the lower tail, the three cases r = 0, 0 < r < 1 and r = 1 correspond to strongly linear, sublinear and asymptotically constant conditional quantile functions, respectively; for the upper tail, the two cases 0 < M < 1 and M > 1 correspond to strongly linear and asymptotically constant conditional quantile functions, respectively.
For trivariate vine copula models, the asymptotic behavior of conditional quantile functions also takes the four shapes: strongly linear, weakly linear, sublinear and asymptotically constant. However, extending from the bivariate to the trivariate case is not trivial, since the asymptotic conditional quantile function depends on the direction in which the covariates go to infinity. The trivariate case provides insight into the type of asymptotic behavior in higher dimensions. See the supplementary material for a detailed analysis.
Simulation study
We demonstrate the flexibility and effectiveness of vine copula regression methods by visualizing the fitted models on simulated datasets. The simulated datasets have three variables: X_1 and X_2 are the explanatory variables and Y is the response variable; Y is simulated in three cases with varying conditional expectation and variance structures. Let U_1 = Φ(X_1) and U_2 = Φ(X_2), where Φ is the standard normal CDF, and let ε be a random error following a standard normal distribution and independent of X_1 and X_2. The three cases are as follows (a simulation sketch follows the list):
1. Linear and homoscedastic: Y = 10X_1 + 5X_2 + 10ε.
2. Linear and heteroscedastic: Y = 10X_1 + 5X_2 + 10(U_1 + U_2)ε.
3. Non-linear and heteroscedastic: Y = U_1 e^{1.8U_2} + 0.5(U_1 + U_2)ε.
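A sketch of the three data-generating processes; the joint distribution of (X_1, X_2) is assumed here to be independent standard normal, which is one choice consistent with U_j = Φ(X_j) but is an assumption, since the original specification of the dependence between X_1 and X_2 is not recoverable from this text.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)   # assumed independent N(0, 1)
u1, u2 = norm.cdf(x1), norm.cdf(x2)
eps = rng.normal(size=n)

y1 = 10 * x1 + 5 * x2 + 10 * eps                    # case 1: linear, homoscedastic
y2 = 10 * x1 + 5 * x2 + 10 * (u1 + u2) * eps        # case 2: linear, heteroscedastic
y3 = u1 * np.exp(1.8 * u2) + 0.5 * (u1 + u2) * eps  # case 3: non-linear, heterosc.
```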
We simulate samples of size 2000 in each case, with a random split into a training set and a test set of 1000 observations each. Five methods are considered in the simulation study: (1) linear regression, (2) linear regression with logarithmic transformation of the response variable, (3) quadratic regression, (4) Gaussian copula regression, and (5) vine copula regression. The Gaussian copula can be considered a special case of the vine copula, in which the bivariate copula families on the vine edges are all bivariate Gaussian. The models are trained on the training set and used to obtain the conditional expectations as point predictions and 95% prediction intervals on the test set. For the copula regressions, the upper and lower bounds of the 95% prediction interval are the conditional 97.5% and 2.5% quantiles, respectively. For the Gaussian and vine copulas, the marginal distribution of Y is fitted by the MLE of a normal distribution in case 1. In cases 2 and 3, the distributions of the response variable are skewed and unimodal but not too heavy-tailed; we therefore fit 3-parameter skew-normal distributions. For the vine copula regression, the candidate bivariate copula families include Student-t, MTCJ, Gumbel, Frank, Joe, BB1, BB6, BB7, BB8, and the corresponding survival copulas. The bivariate copulas are selected using the AIC as described in Section 3.1.2. The procedure is replicated 100 times and the average scores over the replicates are reported in Table 1. To evaluate the performance of a regression model, we apply the root-mean-square error (RMSE) and several scoring rules for probabilistic forecasts studied in Gneiting & Raftery [15], including the logarithmic score (LogS), quadratic score (QS), interval score (IS), and integrated Brier score (IBS). Note that the RMSE is not meaningful if there is heteroscedasticity in the conditional distributions; the LogS, QS, IS, and IBS assess predictive distributions with non-constant variance more effectively.
• The root-mean-square error (RMSE) measures a model's performance on point estimation: $\mathrm{RMSE}(M) = \sqrt{n^{-1} \sum_{i=1}^{n} (y_i - \hat{y}^M_i)^2}$, where $y_i$ is the response variable of the $i$-th sample in the test set, and $\hat{y}^M_i$ is the predictive conditional expectation of a fitted model $M$.
• The logarithmic score (LogS) is a scoring rule for probabilistic forecasts of continuous variables [15]; it is closely related to the generalization error in the machine learning literature (Chapter 7.2 in Hastie et al. [16]): $\mathrm{LogS}(M) = -n^{-1} \sum_{i=1}^{n} \log \hat{f}^M_{Y|X}(y_i|x_i)$, where $(x_i, y_i)$ is the $i$-th observation in the test set, and $\hat{f}^M_{Y|X}$ is the predictive conditional PDF of model $M$. For example, if $M$ is a linear regression, the predictive conditional distribution is a scaled and shifted t-distribution. If $M$ is a vine copula, the predictive conditional distribution can be calculated using the procedure described in Section 3.2.
• The quadratic score (QS) measures the predictive density, penalized by its $L_2$ norm [15]: $\mathrm{QS}(M) = n^{-1} \sum_{i=1}^{n} \big[ 2\hat{f}^M_{Y|X}(y_i|x_i) - \int \hat{f}^M_{Y|X}(t|x_i)^2 \, dt \big]$.
• The interval score (IS) is a scoring rule for quantile and interval forecasts [15]. In the case of the central $(1-\alpha) \times 100\%$ prediction interval, let $\hat{\ell}^M_i$ and $\hat{u}^M_i$ be the predictive quantiles at levels $\alpha/2$ and $1-\alpha/2$ from model $M$ for the $i$-th test sample. The interval score of model $M$ is $\mathrm{IS}(M) = n^{-1} \sum_{i=1}^{n} \big[ (\hat{u}^M_i - \hat{\ell}^M_i) + \tfrac{2}{\alpha}(\hat{\ell}^M_i - y_i)\mathbf{1}\{y_i < \hat{\ell}^M_i\} + \tfrac{2}{\alpha}(y_i - \hat{u}^M_i)\mathbf{1}\{y_i > \hat{u}^M_i\} \big]$. Smaller interval scores are better. A model is rewarded for narrow prediction intervals, and it incurs a penalty, the size of which depends on $\alpha$, if an observation misses the interval.
• The integrated Brier score (IBS) is a scoring rule defined in terms of predictive cumulative distribution functions [15]: $\mathrm{IBS}(M) = n^{-1} \sum_{i=1}^{n} \int \big( \hat{F}^M_{Y|X}(t|x_i) - \mathbf{1}\{y_i \leq t\} \big)^2 \, dt$, where $\hat{F}^M_{Y|X}$ is the predictive conditional CDF of model $M$. Smaller integrated Brier scores are better.
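As a concrete illustration of two of these metrics, the sketch below computes the RMSE and the interval score from arrays of test observations and model outputs (the array and function names are ours; the IS form is the negatively oriented one given above):

```python
import numpy as np

def rmse(y, y_hat):
    """Root-mean-square error of point predictions."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def interval_score(y, lower, upper, alpha=0.05):
    """Average interval score for central (1 - alpha) prediction intervals.

    Width plus penalties of size 2/alpha per unit of miss, so narrow
    intervals are rewarded and misses are punished; smaller is better.
    """
    width = upper - lower
    below = (2.0 / alpha) * (lower - y) * (y < lower)
    above = (2.0 / alpha) * (y - upper) * (y > upper)
    return np.mean(width + below + above)

# Example with made-up predictions for five test points.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(rmse(y, y + 0.1), interval_score(y, y - 1.0, y + 1.0))
```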
The first case serves as a sanity check; if the response variable is linear in the explanatory variables and the conditional variance is constant, the vine copula should behave like linear regression. Figure 4a plots the simulated data, the true conditional expectation surface and true 95% prediction interval surfaces. Figure 4b plots the corresponding predicted surfaces. All three surfaces truthfully reflect the linearity of the data. The first three lines of Table 1 show that the vine copula and linear regression have similar performance in terms of all five metrics.
Figure 4: (a) linear and homoscedastic data, the true surfaces; (b) predicted surfaces by a vine copula regression model.

The second case adds heteroscedasticity to the first case; that is, the variance of $Y$ increases as $X_1$ or $X_2$ increases while the linear relationship remains the same. We expect the conditional expectation surface to be linear. Figures 5a and 5b show the true and predicted surfaces, respectively. The conditional expectation surface is linear and the lengths of the prediction intervals increase with $X_1$ and $X_2$. The performance measures in Table 1 are also consistent with our expectation: the vine copula models have better LogS, QS, IS, and IBS, although the RMSE is slightly worse than for the linear regression model. The logarithmic transformation of the response variable does not seem to improve the performance.
Finally, the third case incorporates both non-linearity and heteroscedasticity. Since linear regression obviously cannot fit the non-linear trend, we compare our model to quadratic regression as well. Figure 6 shows the true surfaces and the predicted surfaces for the three models. Although the quadratic regression model captures the non-linear trend, it is not flexible enough to model heteroscedasticity. Another drawback of quadratic regression is that the conditional mean $\hat{y}$ is not always monotonically increasing with respect to $x_1$ and $x_2$, which contradicts the pattern in the data. The vine copula naturally fits the non-linearity and heteroscedasticity pattern. Quantitatively, the quadratic regression model has the best RMSE and IS, but the vine copula models have the best LogS, QS, and IBS, as shown in Table 1.
Figure 5: (a) linear and heteroscedastic data, the true surfaces; (b) predicted surfaces by a vine copula regression model.

Figure 6: (a) non-linear and heteroscedastic data, the true surfaces; (b) predicted surfaces by a linear regression model; (c) predicted surfaces by a quadratic regression model; (d) predicted surfaces by a vine copula regression model.

We have also conducted a similar simulation study with four explanatory variables. The response variable $Y$ is generated from three analogous cases; for example, the linear and homoscedastic case is $Y = 5(X_1 + X_2 + X_3 + X_4) + 20\epsilon$.
The results of the simulation study are shown in Table 2, the pattern of which is similar to that of Table 1.

6. Application
6.1. Abalone data set
In this section, we apply the vine copula regression method to a real data set: the Abalone data set [20]. The data set comes from an original (non-machine-learning) study [24]. It has 4177 cases, and the goal is to predict the age of abalone from physical measurements; the names of these measurements are in Figure 7. The age of an abalone is determined by counting the number of rings (Rings) through a microscope, which is a time-consuming task. Other physical measurements, which are easier to obtain, are used to predict the age. Rings can be regarded either as a continuous variable or as an ordinal one, so the problem can be treated either as a regression or as a classification problem. We focus on the subset of 1526 male samples (with two outliers removed). Figure 7 shows the pairwise scatter plots, marginal density functions and pairwise correlation coefficients. There is clear non-linearity and heteroscedasticity among the pairs of variables. We discuss the regression problem in Section 6.2, and Section 6.3 shows the results for the classification problem.
6.2. Regression
In this section, we compare the performance of vine copula and linear regression methods. Three vine regressions are considered:

• R-vine copula regression: the proposed method with the candidate bivariate copula families;
• Gaussian copula regression with R-vine partial correlation parametrization: the proposed method with bivariate Gaussian copulas only;
• D-vine copula regression: Kraus & Czado [18] with the candidate bivariate copula families.
The candidate bivariate copulas include Student-t, MTCJ, Gumbel, Frank, Joe, BB1, BB6, BB7, BB8, and the corresponding survival and reflected copulas. We perform 100 trials of 5-fold cross validation. Vine copula regressions and linear regression are fitted using the training set, and the test set is used for performance evaluation. All the univariate margins are fitted by skew-normal distributions. The conditional mean and 95% prediction interval are obtained for all models. For copula regressions, the upper and lower bounds of the 95% prediction interval are the conditional 97.5% and 2.5% quantiles respectively.
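A small sketch of the margin-fitting step (using SciPy's skew-normal; variable and function names are our own): each margin is fitted by maximum likelihood and then mapped to the copula scale by the probability integral transform.

```python
import numpy as np
from scipy.stats import skewnorm

def fit_margin_and_pit(x):
    """Fit a 3-parameter skew-normal margin by MLE and return the
    fitted parameters plus the PIT values u_i = F_hat(x_i) in (0, 1)."""
    a, loc, scale = skewnorm.fit(x)
    u = skewnorm.cdf(x, a, loc=loc, scale=scale)
    return (a, loc, scale), np.clip(u, 1e-10, 1 - 1e-10)

# Example: transform a skewed training column to uniform scores.
x = skewnorm.rvs(4.0, loc=10.0, scale=2.0, size=1000, random_state=0)
params, u = fit_margin_and_pit(x)
```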
We consider the out-of-sample performance measures used in Section 5: the root-mean-square error (RMSE), logarithmic score (LogS), quadratic score (QS), interval score (IS), and integrated Brier score (IBS). Table 3 shows the average performance measures from the 100 trials of cross validation. Compared with linear regression, our method has lower prediction errors and better predictive scores. The performance of the R-vine copula model is slightly better than that of the D-vine copula model in terms of all five scores. The vine array and the bivariate copulas on the edges of the R-vine fitted on the full dataset are shown in Table 4; the copulas linking to the response variable in trees 2 to 7 represent weak negative dependence.
The fitted D-vine regression model has path Diameter-VisceraWeight-WholeWeight-ShuckedWeight-ShellWeight-Rings in the first level of the D-vine structure.
We have also conducted monotonicity checks of the predicted conditional median based on the fitted R-vine model. Four of the linking copulas in trees 2 to 7 (last column of the right-hand side of Table 4) represent conditional negative dependence of the response variable given the previously linked variables. This means that the conditional median function is not always monotonically increasing in an explanatory variable when the others are held fixed. However, when all explanatory variables increase together (for larger abalone), the conditional median is increasing. This property is similar to classical Gaussian regression with positively correlated explanatory variables and some negative regression coefficients arising from negative partial correlations. Even with some negative conditional dependence, there is overall better out-of-sample prediction performance from keeping all of the explanatory variables in the model.
We also did some numerical checks on the conditional quantiles when one explanatory variable becomes extreme and the other variables are held fixed. The behavior appears to be close to asymptotically constant. From the linking copulas in Table 4 and the results in Section 4, we would not expect asymptotically linear behavior (and this is reasonable given the context of the variables). Figure 8 visualizes the prediction performance of the three methods on the full dataset. The plots show the residuals against the fitted values on the test set, together with the prediction intervals. Due to heteroscedasticity, there is more variation in the residuals as the fitted value increases. However, linear regression fails to capture the heteroscedasticity and its prediction intervals are roughly of the same length. Vine copula regression gives wider (narrower) prediction intervals when the fitted values are larger (smaller). This illustrates why our method overall has more precise prediction intervals.
In the supplementary material, further analysis is done to compare the four methods and show where they differ the most in terms of point predictions. The largest differences occur when samples are near the upper boundary of the predictor space; that is, when at least one of the predictor variables is above its 95th quantile. This is an indication that R-vine copula and D-vine copula models are more flexible than Gaussian copula and linear regression models in handling tail behaviors.
6.3. Classification
The response variable Rings is an ordinal variable ranging from 3 to 27, so this is a multiclass classification problem. Although our method can handle multiclass classification problems, we reduce it to a binary classification problem for easy comparison with commonly used methods, including logistic regression, support vector machines (SVM), and random forests (RF). The sample median of Rings is 10; if a sample's Rings is greater than 10, we label it as 'large', otherwise 'small'. All the predictor variables are fitted by skew-normal distributions, and we fit an empirical distribution to the response variable Rings.
The D-vine regression method [18] can only handle continuous variables and is not directly applicable to the classification problem. In order to compare our method with the D-vine based method, we first treat the binary response variable as a continuous variable (0 and 1) and use the D-vine regression method [18] to find a D-vine structure or an ordering of variables. Then an R-vine regression model is fitted on that D-vine structure using our method.
For binary classifiers, the performance can be demonstrated by a receiver operating characteristic (ROC) curve. The curve is created by plotting the true positive rate against the false positive rate at various threshold settings. The (0, 1) point corresponds to a perfect classification; a completely random guess would give a point along the diagonal line. An ROC curve is a two-dimensional depiction of classifier performance. To compare classifiers we may want to reduce ROC performance to a scalar value representing the expected performance. A common method is to calculate the area under the ROC curve, abbreviated AUC [14]. The AUC can also be interpreted as the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one. Therefore, larger AUC is better. Figure 9a shows sample ROC curves of different binary classifiers and the corresponding AUCs.
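For instance, with scikit-learn the ROC curve and AUC of a binary classifier can be computed from predicted scores; the sketch below uses synthetic data and a logistic regression as a stand-in for any of the classifiers above (the names and data are illustrative, not from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# X: predictor matrix; y: binary labels (e.g. 1 if Rings > 10).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]          # probability of the positive class
fpr, tpr, _ = roc_curve(y_te, scores)           # points of the ROC curve
print("AUC:", roc_auc_score(y_te, scores))
```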
Repeated 10-fold cross validations with random partitions are used to assess the performance. In each pass, a 10-fold cross validation is performed and the average AUC is recorded. Figure 9b shows a box plot of the average AUCs. The performance of vine copula regression is marginally better than that of the other methods. The average AUCs are: RVineReg = 0.835, DVineReg = 0.826, SVM = 0.825, LogisticReg = 0.814, RF = 0.811.
7. Conclusion
Our vine copula regression method uses R-vines and can fit mixed continuous and ordinal variables. The prediction algorithm can efficiently compute the conditional distribution given a fitted vine copula, without marginalizing the conditioning variables. The performance of the proposed method is evaluated on simulated data sets and the Abalone data set. The heteroscedasticity in the data is better captured by vine copula regression than by standard regression methods. One potential drawback of the proposed method is the computational cost for high-dimensional data, especially when the dimensionality is greater than the sample size. This paper is more of a proof of concept of using R-vine copula models for regression and classification problems; therefore, we evaluate the performance of the proposed methods on classical cases and compare with models such as linear regression. Another drawback is the constraint on the vine structure that the response variable is always a leaf node at each level. This constraint greatly reduces the computational complexity; without it, numerical integration would be required to compute the conditional CDF.
To relate how the choice of bivariate copula families in the vine can affect prediction, and to provide guidelines on which bivariate copula families to consider, we give a theoretical analysis of the asymptotic shape of conditional quantile functions. For bivariate copulas, the conditional quantile function of the response variable can be asymptotically linear, sublinear, or constant with respect to the explanatory variable. It turns out that the asymptotic conditional distribution can be quite complex in trivariate and higher-dimensional cases, and there are counter-intuitive examples. In practice, we recommend plotting the conditional quantile functions of the fitted vine copula to assess whether the monotonicity properties are reasonable.
One possible future research direction is the extension of the proposed regression method for survival outcomes with censored data. For example, Emura et al. [13] use bivariate copulas to predict time-to-death given time-to-cancer progression; Barthel et al. [2] apply vine copulas to multivariate right-censored event time data. They apply copulas to the joint survival function instead of the joint CDF to deal with right-censoring. These types of applications would require more numerical integration methods.
Another research direction is to handle variable selection and reduction when there are many explanatory variables, some of which might form clusters with strong dependence. Traditional variable selection methods for regression can also be applied, for example the forward selection approach. Moreover, recent papers have proposed methods for learning sparse vine copula models [21,22], which can potentially be used for variable selection in copula regression. | 2018-07-23T05:10:30.000Z | 2018-07-23T00:00:00.000 | {
"year": 2018,
"sha1": "5b0f3441bbe9a1991b788cd2a41bd02fd3b434bf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1807.08429",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5b0f3441bbe9a1991b788cd2a41bd02fd3b434bf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
248701487 | pes2o/s2orc | v3-fos-license | Assessing competence of mid-level providers delivering primary health care in India: a clinical vignette-based study in Chhattisgarh state
Background The global commitment to primary health care (PHC) has been reconfirmed in the declaration of Astana, 2018. India has also seen an upswing in national commitment to implement PHC. Health and wellness centres (HWCs) have been introduced, one at every 5000 population, with the fundamental purpose of bringing a comprehensive range of primary care services closer to where people live. The key addition in each HWC is of a mid-level healthcare provider (MLHP). Nurses were provided a 6-month training to play this role as community health officers (CHOs). But no assessments are available of the clinical competence of this newly inducted cadre for delivering primary care. The current study was aimed at providing an assessment of competence of CHOs in the Indian state of Chhattisgarh. Methods The assessment involved a comparison of CHOs with rural medical assistants (RMAs) and medical officers (MO), the two main existing clinical cadres providing primary care in Chhattisgarh. Standardized clinical vignettes were used to measure knowledge and clinical reasoning of providers. Ten ailments were included, based on primary care needs in Chhattisgarh. Each part of clinical vignettes was standardized using expert consultations and standard treatment guidelines. Sample size was adequate to detect 15% difference between scores of different cadres and the assessment covered 132 CHOs, 129 RMAs and 50 MOs. Results The overall mean scores of CHOs, RMAs and MOs were 50.1%, 63.1% and 68.1%, respectively. They were statistically different (p < 0.05). The adjusted model also confirmed the above pattern. CHOs performed well in clinical management of non-communicable diseases and malaria. CHOs also scored well in clinical knowledge for diagnosis. Around 80% of prescriptions written by CHOs for hypertension and diabetes were found correct. Conclusion The non-physician MLHP cadre of CHOs deployed in rural facilities under the current PHC initiative in India exhibited the potential to manage ambulatory care for illnesses. Continuous training inputs, treatment protocols and medicines are needed to improve performance of MLHPs. Making comprehensive primary care services available close to people is essential to PHC and well-trained mid-level providers will be crucial for making it a reality in developing countries. Supplementary Information The online version contains supplementary material available at 10.1186/s12960-022-00737-w.
Background
According to the World Health Organization (WHO), primary health care (PHC) is the best approach to ensure access to healthcare with equity and efficiency [1]. The global commitment to PHC has been reconfirmed in the Declaration of Astana, 2018. Its closeness to the community, its ability to cover the bulk of the healthcare needs of the greatest number of people at the least cost, and its emphasis on equity make PHC particularly essential for low- and low-to-middle-income countries (LLMICs) like India. An important component of PHC is ensuring access to curative care at the primary level, as it reduces mortality and the need for secondary and tertiary care [2,3].
India has also seen an upswing in national commitment to implement PHC. In 2015, a task force was set up by the central ministry of health to provide recommendations for rolling out comprehensive PHC in India [4]. It was also a key proposal in India's national health policy, 2017 [5]. In order to deliver PHC, an architectural correction was conceptualized in the design of the public health system in India. It involved introducing health and wellness centres (HWCs), one at every 5000 population, as the hub for PHC [5]. The emergence of HWCs marks a very significant development for PHC in India for the following reasons [5][6][7][8]:

a. They bring primary curative services closer to people. An HWC can be reached within half an hour by any of the families it covers.
b. Earlier, the formal services available close to people were selective, largely limited to immunization and ante-natal care. As a result, for many people the unqualified informal providers were the closest option for seeking treatment when they fell sick. Now, HWCs provide a wide range of services to address primary care needs comprehensively.
c. HWCs also represent a response to the epidemiological transition in India, as they emphasize adequate coverage of non-communicable diseases (NCDs) at the primary level.
d. The HWC design is based on a population health approach and on ensuring a continuum of care. HWCs aim to undertake activities for promotive and preventive health in the community, detect illnesses in time, treat the simpler conditions, refer complicated cases to higher facilities, and ensure the necessary follow-up and home-based care for chronic disease cases.
Pilots on HWCs, funded by the national health mission, were started in 2017 [9]. The central government declared HWCs a flagship national programme in 2018 [6]. The target was to operationalize 115 000 HWCs across the country by the year 2022, by upgrading existing facilities serving 5000 population (known as sub-centres). By November 2021, around 51 500 of them had been made functional [10].
In order to operationalize HWCs, existing sub-centres were upgraded and services were expanded by adding infrastructure, supplies and, most importantly, human resources. The key staffing addition at each HWC is a mid-level healthcare provider (MLHP). The MLHP leads the primary care team working at the HWC, which consists of the MLHP, 1-2 paramedical staff and 5-10 community health workers [8]. MLHPs are also the main providers of healthcare to individuals coming for treatment to HWCs [8].
In order to produce and recruit MLHPs for HWCs, a policy was introduced to select nursing graduates and train them further through a 6-month bridge course. The bridge course was designed specifically to cover the role of the MLHP in HWCs [8]. The cadre trained to play the MLHP role in HWCs was designated 'community health officers' (CHOs).
CHOs now constitute one of the largest cadres of clinical care providers at the primary level in India. But no assessments are available of the performance of CHOs, including their clinical competence to detect and treat the illnesses they come across in HWCs. Provider competence can have a huge bearing on the range and quality of services the HWCs are able to provide and on the amount of credibility these new institutions gain among the communities they serve. Also, if the providers are not confident in diagnosing and treating, they can end up referring most of the patients to higher facilities. That can defeat the fundamental purpose of HWCs, i.e., to bring a comprehensive range of primary care services closer to where people live [7]. The success of the HWC policy thus depends to a large extent on the competence of CHOs. The current study was therefore aimed at providing an assessment of the clinical competence of CHOs. The aspect of PHC the current study focuses on is curative services at the primary level.
Keywords: Primary health care, Provider competence, Mid-level providers, Quality, Non-physician clinician, India, Non-communicable diseases, Clinical vignette

Globally, several models have been tried to create cadres of non-physician healthcare providers suited to deliver PHC, especially in rural areas [11,12]. These cadres have a shorter training than physicians but are deployed to provide clinical care. The available global evidence suggests that MLHPs are performing several clinical functions that were traditionally handled by physicians [11,[13][14][15][16][17][18][19][20]. A study from India has concluded that nurses can be trained to play a clinical role in the management of NCDs [21]. A systematic review has reported that when non-physicians were permitted to prescribe drugs, they were able to treat patients effectively by following protocols [12]. Assessments of the performance of such non-physician healthcare providers can also be useful to inform global policies for organizing PHC [11,12]. The current study was conducted in the state of Chhattisgarh. This state was a pioneer in starting a 3-year diploma course to create a cadre of non-physician primary care providers for public facilities in rural areas of the state [22]. This cadre now has more than a decade's experience and is known as rural medical assistants (RMAs). The RMA cadre has been an important reference in the design of non-physician healthcare provider cadres in India, including the CHO cadre [4,8]. The simultaneous presence of these cadres in Chhattisgarh offered an opportunity to compare CHOs with RMAs. Another useful comparison for assessing CHOs is with medical officers (MOs) serving in primary care roles in rural areas. MOs are physicians, i.e. doctors with an undergraduate degree in medicine. A 2009 study of the clinical competence of primary care providers in India also relied on comparisons between different provider cadres [11].
Study setting
Chhattisgarh is one of the poorest states in India. The state had a population of around 29 million, 77% of which lived in rural areas in 2020. The vulnerable indigenous communities, called Scheduled Tribes, constitute 31% of its population. In 2018-2019, the state had a density of 2.9 MOs per 10 000 population, poorer than the national average of 7.6 per 10 000 [23].
The state has 837 primary health centres, one per 30 000 population, and 5206 sub-centres, each covering around 5000 population and providing reproductive and child health services. By September 2020, the state had converted 1895 of its sub-centres into HWCs [24].
Selection of providers and HWCs
The study was aimed at assessing the in-service competence of CHOs, RMAs and MOs working at the primary care level. Therefore, individuals in the above cadres working at primary health centres or HWCs for 6 months or longer were included in the study. There were 1110 CHOs, 1212 RMAs and 396 MOs fulfilling the above criteria in September 2020, when the data collection was started [23]. The minimum sample size was calculated to detect a 15% difference in mean competence scores between the groups with 90% power and a confidence level of 95%. According to the above calculation, a minimum of 50 providers of each type needed to be covered. It was decided that around 10% of the eligible individuals in each cadre would be covered while ensuring the above minimum required sample size. The above sample size for each cadre was increased by 25% to account for non-response. Thus, a sample of 139 CHOs, 152 RMAs and 63 MOs was to be selected. The list of all 1110 CHOs working in the state was arranged district-wise from north to south. From the above list, 139 CHOs were selected using systematic random sampling. The required sample of 152 RMAs and 63 MOs was also selected through a similar procedure. The study was able to assess 132 CHOs, 129 RMAs and 50 MOs. The response rate for CHOs, RMAs and MOs was 95%, 85% and 79%, respectively.
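As an illustration, a power calculation of this kind can be reproduced with statsmodels; the effect size below is an assumption of ours (a 15-point difference with a score SD of roughly 23 points gives Cohen's d of about 0.65), not a figure reported by the study:

```python
from statsmodels.stats.power import TTestIndPower

# Two-sided two-sample t-test, alpha = 0.05, power = 0.90.
# Assumed: 15-point difference, SD ~ 23 points -> d = 15/23 ~ 0.65.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=15 / 23, alpha=0.05, power=0.90)
print(round(n_per_group))  # roughly 50 providers per cadre
```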
Study tools
Provider competence was assessed in terms of clinical knowledge for specific primary care services by using clinical vignettes. Clinical vignettes are a form of simulated clinical case structure, used primarily to measure the knowledge and clinical reasoning of a healthcare provider [11,[25][26][27]. A key advantage of using clinical vignettes is that the case-mix is the same for all the providers being assessed. This allows a valid comparison of their scores.
Similar to earlier studies, each clinical vignette was structured in the following stages: history taking, examination and investigations, diagnosis, treatment (prescription), and follow-up [11,27]. Under each of the above stages, a set of relevant elements was added based on standard treatment guidelines and inputs from clinical experts.
In the form of clinical vignettes used in this study, one of the interviewers played the part of the patient and started by describing the main complaint (e.g. 'I am a 30-year-old woman with fever'), and the provider was asked to proceed with the simulated consultation by asking questions related to history, examination and investigations. The provider was aware that the patient was imaginary and that it was a simulated conversation with the interviewer. Whenever the provider asked any relevant question related to history, examination or investigation, the surveyor gave a standard response. After the history, examination and investigation sections, the provider was asked to state the diagnosis, treatment and follow-up [11,27].
Each part of the vignette was standardized: (a) the elements expected to be covered by the provider in history, examination and investigation; (b) the responses to be given by the interviewer to any relevant question by the provider; and (c) the correct diagnosis, treatment (prescription) and follow-up care against which the providers' responses were to be judged. The standardization was done using standard guidelines and the advice of experts from the All India Institute of Medical Sciences, Raipur. The vignettes were pretested with a few CHOs, RMAs and MOs before being finalized.
Case selection
The clinical vignettes were developed for 10 tracer conditions that cover the illnesses commonly seen at primary care level in Chhattisgarh. They were selected based on consultations with experts and practising clinicians. The clinical vignettes were on the following conditions: diarrhoea with severe dehydration, pneumonia, malaria, hypertension, diabetes, vulvo-vaginal candidiasis, preeclampsia, scabies, poisoning and sickle cell disease.
Some of the other important diseases in the state, like tuberculosis and leprosy, were not included as they are not expected to be diagnosed or treated at HWCs. Though HWCs do refer presumptive cases of the above diseases to higher facilities, that role does not involve clinical care.
Scoring of clinical vignettes
The maximum score for each vignette was 100 marks. The 100 marks for each vignette were divided across the relevant elements in proportion to the relative importance of each element in ensuring the best clinical care. The element-wise distribution of marks for each vignette was decided by a set of experts and validated by another set of clinical experts. Other studies have used a similar approach for deciding the marks for different parts of a clinical vignette [27][28][29][30][31].
Data collection and analysis
The data collection was managed by the State Health Resource Centre, an autonomous body providing technical support to the department of health in Chhattisgarh. Each interviewer deployed for the data collection had an undergraduate degree in health sciences and a master's degree in public health. Data collection for the study was done from October 2020 to February 2021. Apart from the vignettes, data were collected on the number of persons treated by the concerned provider (CHO/RMA/MO) for various kinds of ailments. Means and medians with 95% confidence intervals were calculated for scores in the different sections and vignettes. For statistical significance, one-way ANOVA was used to compare the mean scores of the three provider types.
A multivariate linear regression model was applied to confirm the differences in clinical scores between the three cadres. The outcome variable was the competence score achieved by the providers. We did not expect any extreme values for this variable. Five of the six independent variables included in the adjusted model were based on an existing study in Chhattisgarh [11]: cadre, age, sex, distance of the posting place from the district headquarters and type of area (tribal/non-tribal). We expected years of experience to be a relevant variable and therefore included it in the model. Data analysis was done using IBM SPSS version 20.0 software.
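Although the study used SPSS, the adjusted model can be sketched in Python with statsmodels; the column names below are placeholders of ours, not from the study's dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per provider with these illustrative columns:
# score, cadre ('CHO'/'RMA'/'MO'), age, sex, dist_km, tribal (0/1), experience.
def fit_adjusted_model(df: pd.DataFrame):
    """OLS of competence score on cadre plus the five covariates,
    with CHO as the reference category for cadre."""
    model = smf.ols(
        "score ~ C(cadre, Treatment('CHO')) + age + C(sex) "
        "+ dist_km + C(tribal) + experience",
        data=df,
    )
    return model.fit()

# res = fit_adjusted_model(df); print(res.summary())
```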
Ethical consideration
Ethics approval for the current study was provided by the Institutional Ethics Committee of State Health Resource Centre, Chhattisgarh, India. Informed written consent was obtained from each participant.
Results
The socio-demographic and other characteristics of the three cadres (CHOs, RMAs and MOs) are given in Table 1. The proportion of women among the CHOs was 83%, compared with 39% among RMAs and 26% among MOs. Around one-third of CHOs belonged to the vulnerable communities of the Scheduled Tribes, whereas their representation among the RMAs and MOs was minuscule. The mean age of CHOs was lower than that of RMAs and MOs. The average distance of the workplace, i.e. the place of posting, from their native place was shortest for CHOs and farthest for MOs.
In terms of experience, CHOs were the least experienced. The average experience in primary care was around a year for CHOs, whereas it was close to 10 years for RMAs.
The number of persons treated by the concerned provider (CHO/RMA/MO) for various kinds of ailments is given in Additional file 1: Table S1. It shows that a variety of illnesses were being treated in the health and wellness centres (HWCs) of Chhattisgarh around the end of 2019. On average, around 359 patients received treatment per HWC in a month. HWCs managed a variety of ailments whether they were staffed with an MO, an RMA or a CHO.
Competence scores
The overall competence scores of the three cadres for all the vignettes put together are given in Table 2. Overall, CHOs scored less than RMAs, and MOs scored higher than both MLHP cadres (p < 0.05). The median scores are given in Additional file 2: Table S2 and show a similar pattern.
The scores in different vignettes/diseases are given in Table 3. CHOs scored well on hypertension, diabetes and malaria and their scores for these diseases were close to what RMAs and MOs scored. CHOs scored poorly (mean score < 50%) for diarrhoea, vulvo-vaginal candidiasis and pre-eclampsia.
The mean scores of RMAs were above 50% for all ten conditions, but their scores for diarrhoea, vulvo-vaginal candidiasis and pre-eclampsia were poorer relative to the other conditions. MOs also scored above 50% for all ten conditions, but their scores for diarrhoea and pre-eclampsia were poorer relative to the other diseases. Table 4 gives the scores according to the component of clinical care. Among the components of clinical care, CHOs did well in diagnosis, with a score of 65%. Their mean scores were close to 50% for the rest of the components.
For RMAs and MOs, the relatively weaker areas were history taking and physical examination/investigations. Around 80% of the prescriptions written by CHOs for hypertension and diabetes were found to be correct (Additional file 3: Table S3). For malaria and pneumonia, around two-thirds of the prescriptions written by CHOs were correct. The CHOs did not score well on prescriptions for pre-eclampsia. The differences in the overall mean scores of CHOs, RMAs and MOs on the various components of clinical care were statistically significant (Table 4). MOs scored better than RMAs and CHOs. The median scores are given in Additional file 2: Table S2 and show a similar pattern.
The adjusted multivariate model for determinants of the overall score is given in Table 5. It showed that, while controlling for experience and other potentially relevant variables, CHOs scored less than RMAs and MOs. The rest of the variables were not significantly associated with the scores.
Discussion
The current study provides an assessment of the clinical competence of mid-level cadres in the context of PHC being expanded and organized through HWCs in India. It found that the overall competence scores of CHOs were lower than those of MOs and RMAs. However, the CHOs scored well in managing common NCDs and malaria. This is not surprising, considering that their 6-month training programme for the MLHP role placed greater emphasis on NCDs. The programme activities in the HWCs and the system for their monitoring were also focused on NCDs. The above results indicate the potential this cadre holds for expanding access to primary care for important diseases. They underscore the need to provide CHOs further training to enable them to manage a greater range of illnesses competently, and they suggest the need to modify the monitoring design to include a wider range of diseases. The current study found several areas in which the technical skills of CHOs need to be strengthened: pre-eclampsia, reproductive tract infections, poisoning, severe dehydration and sickle cell disease. The adjusted model showed that type of cadre was significantly associated with competence scores. Equity can suffer if difficult and underprivileged districts do not get enough MOs and have to depend mainly on mid-level cadres. This suggests that all cadres, including MOs, should be distributed equitably between different areas (tribal/non-tribal).
Internationally, several studies have reported that non-physician prescribing is acceptable, especially when other patient-centric attributes are also present [18,32]. Researchers have argued that prescription is a necessary part of patient-centric care and that allowing non-physicians to prescribe can help in its expansion [33]. Studies have shown that prescription by non-physicians helped the timeliness of care and saved costs [19,34,35]. Studies have also reported that significant barriers exist in deploying MLHPs or non-physicians in clinical roles. Restrictions on prescribing are common [12]. Rigidity of boundaries, established hierarchies and relationships of power between the various medical professions have often been reported as key barriers [34,36]. There are often shortages of required medicines in settings where MLHPs are deployed [12]. In India, governments have made a few attempts to promote non-physician cadres, though such policies have met with a significant amount of resistance from physicians [37]. The RMA experiment remained limited to a couple of states and their numbers remained below 2000 [11,37,38]. The Bachelor of Rural Health Care (BRHC) course was proposed by the central ministry of health in 2010, but it could not be launched due to sustained opposition [39]. The course was redesigned as Bachelor of Science (Community Health) and got the central government's approval in 2013 [39,40]. Yet it remained largely unimplemented and only one or two states could make a start [41]. In comparison to the earlier attempts, the CHO cadre has gone the farthest. Many countries seem to be shifting to nurse-based MLHPs. A similar shift can be seen in Indian policy, from non-nurse MLHPs like RMAs to the nurse-based cadre of CHOs. There are more than 50 000 CHOs already working and their number is likely to cross 100 000 soon. CHOs are perhaps facing less opposition as they are posted in small rural facilities (sub-centres). It is important to note that the policy does not allow CHOs to practise outside HWCs. While earlier attempts to promote MLHPs were stand-alone in nature, CHOs are part of the comprehensively designed mechanism of HWCs. This suggests that system-wide amendments, like the introduction of HWCs, may be necessary for such cadres to get established. Another factor that seems to have facilitated the fast roll-out of the CHO cadre is the increased production of nurses in India and the availability of surplus nurses.
HWCs are emerging as a suitable vehicle to implement PHC in India, as they aim to bring comprehensive services closer to rural people, at a population of 5000. There is no way physicians can be placed at such a grassroots level in the foreseeable future [42]. Still, there can be a tendency to restrict the role of HWCs in conducting diagnosis and treatment of illnesses and to confine their involvement to screening, referral and follow-up. Such a tendency is likely to have an adverse effect on PHC. The patients referred by HWCs to higher facilities may have to travel large distances, which can involve a lot of difficulty and uncertainty in most parts of India. Therefore, if HWCs are to fulfil their role in PHC, MLHPs need to treat a large share of the patients approaching them and limit referrals mainly to complicated illnesses. A recent policy brief in the Indian context has also recommended expanding the prescription rights of nurses [43]. The development of the CHO cadre and their acceptability in a curative ambulatory care role can perhaps help improve the status of nurses in the Indian health system. CHOs can also help correct some of the gender imbalance in clinical cadres in India.
While a long-standing debate exists on how far to rely on non-physician cadres for clinical roles, the advantages they offer should no longer be ignored, especially in LLMIC situations like India's. The results of the current study bode well for the CHO cadre deployed in HWCs. The question that requires attention now is how to enable such cadres to ensure the optimal delivery of PHC. This underscores the need for adequate in-service training and continuous skill building for such cadres. Earlier studies have shown that continuous training is necessary and effective in improving the clinical performance of non-physician cadres in PHC [31,44]. It is significant, therefore, that the national health mission in India has recently initiated programmes for in-service capacity building of CHOs [45]. The development of standard treatment protocols has also been recommended to enable non-physician cadres to provide better quality care [12]. A specific recommendation has been to develop simplified protocols for NCD management in facilities including the HWCs [46]. Chhattisgarh has also developed standard treatment protocols for use by CHOs, and they form an important aid for building greater clinical competence [47].
The current study has several strengths. It was the first study on clinical competence of CHOs and it provides important guidance for policies to expand PHC in rural areas. The study covered a wide range of ambulatory care competencies and was not limited to a single disease. The study identifies areas in which the technical capacities of CHOs need to be enhanced. There will be a need to conduct similar studies of CHOs in other Indian states. A qualitative study is recommended to understand how the CHO cadre got developed. Further research is recommended on health outcomes achieved by MLHPs in HWCs, the cost-effectiveness of care provided by them and impact of their introduction on the wider health system.
Limitations
The limitations of clinical vignette-based assessments apply: (a) performance on the vignettes can differ from what providers do in practice; (b) the assessment does not include demonstrating the skills needed to perform the clinical tasks necessary to diagnose and care for a patient. It is difficult to directly compare the scores of MLHPs in this study with those in other studies because the vignettes and their scoring templates vary across studies.
Conclusion
The non-physician MLHP cadre of CHOs deployed in rural facilities under the current PHC initiative in India exhibited the potential to manage ambulatory care for illnesses. They should be trained further so that they can treat a large share of patients coming to the health and wellness centres. Their training should also equip them for appropriate referrals to higher facilities based on assessment of likely complications.
Continuous training inputs, treatment protocols and medicines need to be made available to MLHPs to improve their performance. Making a wide range of primary care services available close to people is essential to PHC and well-trained mid-level providers will be crucial for making it a reality. | 2022-05-12T14:00:41.938Z | 2022-05-12T00:00:00.000 | {
"year": 2022,
"sha1": "db8f154858308fccb74592cee655fc434251fed0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "db8f154858308fccb74592cee655fc434251fed0",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251159553 | pes2o/s2orc | v3-fos-license | Role of sarcopenia in the frailty transitions in older adults: a population‐based cohort study
Abstract Background Frailty and sarcopenia are age-associated syndromes that usually coexist and have been associated with the risk of several adverse events, mainly functional decline and death. However, the potential role of one of them (sarcopenia) in modulating some of the adverse events associated with the other (frailty) has not been explored. The aim of this work is to assess the role of sarcopenia in the frailty transitions and mortality in older people. Methods Data from the Toledo Study of Healthy Aging (TSHA) were used. TSHA is a cohort of community-dwelling older adults ≥65. Frailty was assessed according to the Frailty Phenotype (FP) and the Frailty Trait Scale-5 (FTS5) at baseline and at follow-up. Basal sarcopenia status was measured with the standardized Foundation for the National Institutes of Health criteria. Fisher's exact test and logistic regression models were used to determine whether sarcopenia modified the transitions between frailty states (median follow-up of 2.99 years), and a Cox proportional hazards model was used for assessing mortality. Results There were 1538 participants (74.73 ± 5.73; 45.51% men) included. Transitions from robustness to prefrailty and frailty according to the FP were more frequent in sarcopenic than in non-sarcopenic participants (32.37% vs. 15.18%, P ≤ 0.001; 5.76% vs. 1.12%, P ≤ 0.001, respectively), as were transitions from prefrailty to frailty (12.68% vs. 4.27%; P = 0.0026). Improvement from prefrail to robust and remaining robust were more frequent in non-sarcopenic participants (52.56% vs. 33.80%, P ≤ 0.001; 80.18% vs. 61.15%, P ≤ 0.001, respectively). When classified by FTS5, this was also the case for the transition from non-frail to frail (25.91% vs. 4.47%, P ≤ 0.001) and for remaining stable as non-frail (91.25% vs. 70.98%, P ≤ 0.001). Sarcopenia was associated with an increased risk of progression from robustness to prefrailty [odds ratio (OR) 2.34, 95% confidence interval (CI) (1.51, 3.63); P ≤ 0.001] and from prefrailty to frailty [OR (95% CI) 2.50 (1.08, 5.79); P = 0.033] (FP), and from non-frail to frail [OR (95% CI) 4.73 (2.94, 7.62); P ≤ 0.001] (FTS5). Sarcopenia did not seem to modify the risk of death associated with a poor frailty status (hazard ratios (HR), P > 0.05). Conclusions Transitions within frailty status, but not the risk of death associated with frailty, are modulated by the presence of sarcopenia.
Introduction
In old age, pathways leading to disability can be accelerated by certain conditions such as frailty and sarcopenia. [1][2][3] Frailty is an age-associated biological syndrome characterized by decreased biological reserves and is associated with adverse outcomes (i.e. disability, institutionalization, death and hospitalization). 4 In 2006, Gill and colleagues showed that frailty is a dynamic state that changes over time, mainly worsening but also improving. 5 These results, confirmed in other longitudinal studies [6][7][8] and in a recent meta-analysis, 9 opened an ample opportunity for the prevention of frailty and its consequences. 5 Moreover, some studies have suggested that different factors influence the frailty transitions (such as older age, previous diseases, 8,10 physical activity and mobility levels, 6 socio-economic and clinical factors, 6,11 vitamin D levels 11 or hospitalizations 8 ). A better knowledge of these factors and of how they could modulate the frailty transitions would help both to refine the prognosis of frailty and to develop effective strategies for the prevention and reversal of frailty, improving the quality of life of older people. 4 Sarcopenia, the loss of lean muscle mass and muscle strength and/or function, 2 has been proposed as the biological substrate of frailty. 12 Although frailty and sarcopenia can coexist and have both been described as states of increased vulnerability due to degradation in multiple systems 2,[13][14][15] and physical function impairment, 16 the two entities have been clearly distinguished. 17 We previously described that only a minority of people with sarcopenia have frailty and, conversely, that between a third and a quarter of frail people do not have sarcopenia, suggesting an association between sarcopenia and frailty beyond the simple coexistence of the two entities. 17 More recently, another group, using data from the Hertfordshire cohort study, reported similar findings in a cross-sectional study. 18 This opens the possibility of different risks for the outcomes depending on the coexistence of sarcopenia in frail patients, raising the chance of the existence of different clinical phenotypes, 17 but to date, no study has addressed this hypothesis.
Methods
Participants' data were taken from the Toledo Study of Healthy Aging (TSHA). The study conformed to the ethical standards defined in the 1964 Declaration of Helsinki and was approved by the Clinical Research Ethics Committee of the Toledo Hospital, Spain. Participants signed an informed consent form prior to recruitment.
As detailed elsewhere, 19 TSHA is a longitudinal cohort aimed at studying different aging phenotypes through socio-demographic, clinical and genetic variables and their relationship with physical and neuropsychological assessments and lifestyle components, such as physical activity, diet, tobacco and alcohol consumption. TSHA was designed to study frailty prevalence and its underlying causes in rural and urban community-dwelling older adults aged 65 years or older. Subjects were selected by two-stage random sampling of the municipal census of the province of Toledo. Sampling was conducted within census sections in six strata according to sex, age and town-size groups, recruiting 24% of the population aged 65 and older in the Toledo province.
For the purposes of the current study, data from basal (2011-2013) and follow-up (2014-2017; median time of 2.99 years, range 2.0-5.4) face-to-face visits were analysed.
Study variables
Frailty status was assessed according to two established instruments, the Frailty Phenotype 1 and the Frailty Trait Scale-5 (FTS5), 20 at baseline and at follow-up.
Frailty status
Fried scale
The Frailty Phenotype (FP) was assessed according to its five criteria, fitted to the Spanish population 21 :

1. Weight loss: self-reported unintentional weight loss of ≥4.5 kg in the last year.
2. Exhaustion: measured by self-report using two questions ('How many days during the last week have you felt that anything you did was a big effort?' and 'How many times during the last week have you felt that you could not keep on doing things?'). The criterion was met when the participant scored 2 or higher (range 0-4).
3. Weakness: handgrip strength was measured using a JAMAR hydraulic hand dynamometer (Sammons Preston Rolyan, Bolingbrook, IL). The best peak strength of three trials was selected, following international standard procedures. 22 At least 1 min of rest was allowed between trials. Results were adjusted for sex and body mass index; the criterion was met when grip strength was at or below the 20th percentile.
4. Slowness: defined using the 3 m walking test at usual pace, according to the standard protocol. The best time of two trials was chosen, with cut-offs adjusted by sex and height.
5. Low physical activity: defined as being in the lowest quintile of the Physical Activity Scale for the Elderly (PASE), 23 stratified by gender.

Participants scoring 0 were classified as robust (not frail), those scoring 1 or 2 as pre-frail, and those meeting three or more criteria as frail; a sketch of this scoring rule follows. Differences between the cut-off points of the frailty criteria in the TSHA and Fried's originals are shown in Supporting Information S1.
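A minimal sketch of the FP scoring rule, assuming the five criteria are already available as binary indicators (the function and argument names are ours):

```python
def frailty_phenotype(weight_loss, exhaustion, weakness, slowness, low_activity):
    """Classify a participant from the five binary Fried criteria."""
    score = sum([weight_loss, exhaustion, weakness, slowness, low_activity])
    if score == 0:
        return "robust"
    if score <= 2:
        return "pre-frail"
    return "frail"

print(frailty_phenotype(True, False, True, True, False))  # 'frail'
```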
Frailty Trait Scale 5 (FTS5)
FTS5 20 is a recent tool developed and validated in the TSHA. It appears to improve on the accuracy of the Frailty Phenotype in predicting adverse events (death, hospitalization, incident frailty and disability) 20 in older adults, performing even better than the two most widely used classical frailty tools 24 : the Frailty Phenotype 1 and the Frailty Index. 25 In addition, it allows continuous assessment of frailty levels and is sensitive to small changes that have been shown to relate to the risk of different adverse events such as disability, hospitalization and mortality, potentially overcoming several of the pitfalls of previous frailty assessments. 26 It is composed of five domains: gait speed, grip strength, physical activity, body mass index (BMI) and balance.
Gait speed, handgrip strength and physical activity were measured as detailed above and scored according to the rules of this scale (Supporting Information S2).
BMI was calculated as body weight in kilograms (measured to the nearest 0.1 kg) divided by height in metres squared. Height was measured using a stadiometer at head level to the nearest centimetre.
Balance was evaluated using the progressive Romberg test. 27 This battery consists of testing the balance of the participant in three positions (side-by-side, semi-tandem and full-tandem), each more challenging than the previous one, with the goal of maintaining balance for at least 10 s. The FTS5 score ranges from 0 to 50, with 0 the lowest frailty score and 50 the highest; each of the five domains scores from 0 to 10. Scores of 25.25 or higher were classified as frail and lower scores as non-frail.
Sarcopenia
Sarcopenia was measured at baseline and defined according to the Foundation for the National Institutes of Health (FNIH) criteria, fitted to the cut-off points of our population (standardized FNIH [sFNIH]). 17 An individual was classified as sarcopenic if he or she had low muscle mass in addition to low gait speed and grip strength below the cut-off points.
Appendicular lean mass (ALM), derived as the sum of the muscle mass of the arms and legs, was adjusted by BMI (ALM/BMI). According to the sFNIH diagnostic algorithm, low muscle mass was present in men and women when ALM/BMI was below 0.65 and 0.54, respectively.
The gait speed and handgrip strength measurement methodology has been explained above. The gait speed cut-off point was <0.8 m/s, and the handgrip strength cut-off points were <25.51 kg for men and <19.19 kg for women.
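Putting the pieces together, a minimal sketch of the sFNIH classification rule as described above (the function and argument names are ours):

```python
def sfnih_sarcopenia(alm_kg, bmi, gait_speed_ms, grip_kg, male):
    """sFNIH rule: low ALM/BMI plus slow gait plus weak grip."""
    low_mass = (alm_kg / bmi) < (0.65 if male else 0.54)
    slow_gait = gait_speed_ms < 0.8
    weak_grip = grip_kg < (25.51 if male else 19.19)
    return low_mass and slow_gait and weak_grip

print(sfnih_sarcopenia(alm_kg=18.0, bmi=31.0, gait_speed_ms=0.6,
                       grip_kg=20.0, male=True))  # True
```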
Mortality
Vital status was ascertained from the Spanish National Death Index (Ministry of Health, Consumer Affairs and Social Welfare), hospital records and phone contact during the study follow-up. Mortality follow-up time was right-censored at 4 years. Median follow-up time was 2.64 years (range 0.60 to 4.00).
Co-morbidity Charlson index 28 was used to assess co-morbidity.
Nutritional status
The Mini-Nutritional Assessment 29 was used to assess nutritional status. Participants were categorized according to their score as well-nourished (≥24), at risk of malnutrition (17-23.5), or undernourished (<17). Due to the small number of undernourished subjects, this category was merged with the at-risk-of-malnutrition category.
Cognitive status
The Mini-Mental State Examination 30 was used to evaluate the cognitive status. Participants were classified into two categories (normal cognitive status and cognitive impairment) according to their cut-off point based on their educational level adjusted to the Spanish population. 31
Statistical analysis
Characteristics of the subjects at baseline were stratified according to frailty status and presence, or absence, of sarcopenia.
Descriptive statistics are shown as mean (standard deviation, SD) and number (N, %). Differences between sarcopenic and non-sarcopenic participants were tested using the Mann-Whitney and χ² tests.
We used Fisher's exact test to assess whether transitions between frailty states were modified by the presence of sarcopenia.
The associations of sarcopenia and baseline frailty status with the outcomes were assessed using a Cox proportional hazards model for death and logistic regression models for improvement, maintenance and worsening in frailty category (robust, prefrail and frail in FP; not frail and frail in FTS5). We used two models: Model 1 was the univariate model; Model 2 additionally adjusted for age, gender and Charlson index. Additionally, in a sensitivity analysis, cognitive or nutritional status was added to Model 2 as a confounder.
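The study fits these models in R; purely as an illustrative analogue, the same specifications could be coded in Python with the lifelines and statsmodels packages. The file name and all column names below ('time', 'death', 'sarcopenia', 'age', 'sex', 'charlson', 'worsened') are hypothetical placeholders, with binary variables assumed 0/1-coded:

```python
# Illustrative Python analogue of the models described above (the study
# itself used R); all data and names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

# Model 2 Cox proportional hazards for death: sarcopenia adjusted for age,
# gender and Charlson index (Model 1 would drop the three adjusters).
cph = CoxPHFitter()
cph.fit(df[["time", "death", "sarcopenia", "age", "sex", "charlson"]],
        duration_col="time", event_col="death")
cph.print_summary()

# Logistic regression for worsening in frailty category, same adjustment.
X = sm.add_constant(df[["sarcopenia", "age", "sex", "charlson"]])
print(sm.Logit(df["worsened"], X).fit().summary())
```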
Analyses were performed using R version 3.6.1 for Windows (R Foundation for Statistical Computing, Vienna, Austria). Statistical significance was set at P < 0.05.
Study population
There were 1538 participants (700 men) included in the analysis, with a mean age of 74.73 (SD 5.73) years. Three hundred and forty-eight met the sarcopenia criteria according to the sFNIH. Participants' characteristics are shown in Table 1. Sarcopenia rates were statistically higher in individuals with frailty: while 77.64% (FTS5) and 56.82% (FP) of subjects with frailty were sarcopenic, only 16.19% (FTS5) and 15.02% (FP) of non-frail participants met the sarcopenia criteria. Moreover, frailty rates within sarcopenic individuals were statistically higher than in non-sarcopenic individuals (Supporting Information S3). The status of the participants was successfully assessed during follow-up in 1349 subjects (87.71%). Participants lost to follow-up did not show any difference in their baseline characteristics regarding age (P-value 0.904), gender (P-value 0.348), Charlson index (P-value 0.792), frailty (FP: P-value 0.718; FTS5: P-value 0.775), and sarcopenia (P-value 0.331) strata.
Frailty transitions
Transitions in frailty status and mortality according to the presence or absence of sarcopenia at baseline are included in Table 2. Subjects who met the sarcopenia criteria showed a higher probability of worsening their frailty status, regardless of the tool used to assess their condition. When the FP was used, people with sarcopenia showed a higher percentage of transitions from robust to prefrail (32.37% vs. 15.18%; P ≤ 0.001) and to frail (5.76% vs. 1.13%; P ≤ 0.001), and from prefrail to frail (12.68% vs. 4.27%; P = 0.003). When FTS5 was used, the percentage of those progressing from non-frail to frail reached 25.91% in those with sarcopenia, compared with 4.47% in those without (P ≤ 0.001). When we looked at those remaining in the same frailty status, we observed the same trend: those without sarcopenia had a higher probability of remaining robust than their sarcopenic counterparts (FP: 80.18% vs. 61.15%, P ≤ 0.001; FTS5: 91.25% vs. 70.98%, P ≤ 0.001).
Although most of these spontaneous transitions were toward a worse state of frailty (Figure 1), some individuals improved, especially those who did not meet the criteria for sarcopenia. In this line, fewer prefrail sarcopenic persons (according to the FP) improved their frailty status (29.27% vs. 45.22%; P ≤ 0.001).
The risk of worsening, maintaining or improving relative to the baseline frailty status according to the presence or absence of sarcopenia is also shown. In frail participants, people without sarcopenia had a higher probability of improving their frailty status.
Mortality
Even though frailty was associated with higher mortality (FP: robust 2.74%, prefrail 7.57%, frail 22.73%, P < 0.001; FTS5: not frail 3.63%, frail 15.53%, P < 0.001), there were no significant differences in mortality between sarcopenic and non-sarcopenic participants within the same frailty status (Table 2). In line with this finding, there were no statistically significant differences in the risk of mortality between sarcopenic and non-sarcopenic older adults within the same frailty category (Table 4), as expected. These findings were not modified when nutritional and cognitive status were added to the model (Supporting Information S5).
Discussion
This study directly addresses the relationship between sarcopenia and frailty with regard to the prognosis of frailty. We show that sarcopenia seems to be a modulator of transitions in frailty status: sarcopenic individuals had more than twice the risk of non-sarcopenic individuals of worsening across the frailty continuum. Within this continuum, sarcopenia was an independent predictor of frailty but not of mortality. These findings reinforce the hypothesis of the existence of different frailty phenotypes, in which sarcopenia could be one of the major risk factors for developing functional decline along the frailty spectrum, with a less relevant role, if any, in the mortality associated with frailty. One of the strengths of this study is the large community-based population used. Moreover, the frailty tools and sarcopenia diagnostic criteria used in this study were standardized and adjusted to the study population prior to this analysis, showing better predictive ability. 17,21 Furthermore, muscle mass was determined by DEXA, which is considered the gold standard for assessing body composition. Finally, to avoid biases linked to the tool used, frailty was evaluated with two different tools, and no relevant differences in the findings emerged depending on the tool, reinforcing the strength of the results. In addition, our study presents a reliable ascertainment of mortality: using the Spanish National Death Index ensures that all deaths occurring in the cohort are registered.
Some weaknesses can be found in our study, but they do not seem to significantly bias the results or the conclusions. Although grip strength and gait speed are used to assess both frailty and sarcopenia, a fact that could explain some of the overlap between the two entities, the cut-off points used for qualifying participants as sarcopenic or frail differ, being higher for sarcopenia.
Another limitation concerns the low prevalence of frailty when the FP is used. This prevalence is lower than that found in the whole TSHA cohort and is probably explained by the lower number of frail subjects who attended the DEXA examination. The consequence of this low power is a decreased ability to detect some differences, rendering some of our findings non-significant. In our study, this potential source of bias could account for the outcomes in frail subjects, a category met by only 44 individuals at baseline when using the Frailty Phenotype. The lack of significant differences affects both changes in frailty status and the risk of death. Regarding death, although the percentage of deaths increases as frailty status worsens from robust to prefrail and frail, there are no differences within each frailty category. Having measured frailty with two different tools makes a power artefact unlikely, taking into account that the number of people qualified as frail using FTS5 is not so low (n = 161) and that the number of events is high enough to make our results stable. The findings of our study are quite consistent and do not change depending upon the method of assessing frailty.
It must be highlighted that the design of our study suffers from a ceiling effect for those who are frail: as we assessed only frailty status, and not disability, it was impossible to capture any further functional decline toward disability among those who were already frail.
Although sarcopenia is usually mentioned as a key factor in frailty and has been proposed as a biomarker to confirm frailty, 17 its role in transitions of frailty status has been broadly neglected. In fact, we have not found studies addressing how the presence of sarcopenia influences changes in frailty status over time in terms of worsening, improvement or maintenance of that status. However, a study assessing the role of muscle mass (determined through the phase angle) in frailty transitions found a direct relationship between muscle mass and improvement in frailty status, 32 a finding that supports our results.
Although frailty generally increases with age, there is high variability between subjects, and it does not necessarily increase over time. 9,26 According to the FP, 57% of the total participants remained in their baseline frailty status. These results are similar to those found in other studies 5,7 and meta-analyses. 9 Moreover, among those who started prefrail, 23% improved their frailty status and 18.2% worsened, with twice the risk of improvement in prefrail subjects without sarcopenia and twice the risk of worsening in sarcopenic ones. A similarly high prevalence of improvement among those who started prefrail was found by Lee and colleagues after 2 years of follow-up. 8 They showed that 23.4% and 26.6% of prefrail men and women, respectively, improved their frailty status, while 11.1% and 6.6% worsened. Also, as we reassessed subjects at 3 years, there may be transitions in frailty status that we did not capture because of the time between visits. Several studies in the literature have assessed transitions in frailty status, with different observation intervals and controversial results, which does not allow the best time for reassessment to be established. 6,33,34
Unanswered questions
According to our results, it seems that the presence of sarcopenia influences the spontaneous transition of frailty over time, suggesting the existence of different clinical phenotypes (sarcopenic vs. non-sarcopenic) with different prognoses within the framework of frailty. 17 The existence of other frailty phenotypes with different origins and courses has recently been proposed by other authors. 35-37 These different phenotypes raise the need to study the different pathogenic pathways leading to each of them, but also the need for further research to identify risk 8,11 and protective 11 factors and to look for different diagnostic and therapeutic approaches according to these phenotypes, with the final aim of providing a more personalized and accurate clinical framework for the prevention, detection and intervention of frailty, 38 thus contributing to healthy aging.
Conclusions
These results show sarcopenia to be a modulator of frailty, suggesting the existence of two different clinical phenotypes of frailty (with and without sarcopenia) associated with different prognoses. This raises the need to assess sarcopenia as a second step after diagnosing frailty in the daily clinical management of this condition. | 2022-07-30T06:16:50.700Z | 2022-07-28T00:00:00.000 | {
"year": 2022,
"sha1": "4ced07c4f87cd6b61b7ba906a9c146cf5cb7079f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "7eea3917afbfd40d5671387f855e6aeeb1d0a100",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119429508 | pes2o/s2orc | v3-fos-license | Curious Variables Experiment (CURVE). Superhump Period Change Pattern in KS UMa and Other Dwarf Novae
We report extensive photometry of the dwarf nova KS UMa throughout its 2003 superoutburst and until quiescence. During the superoutburst the star displayed clear superhumps with a mean period of Psh = 0.070092(23) days. In the middle stage of the superoutburst the period was increasing at a rate of $\dot P/P = (21\pm12)\times 10^{-5}$ and later was decreasing at a rate of $\dot P/P = -(21\pm8)\times 10^{-5}$. At the end of the superoutburst and during the first dozen days of quiescence the star showed late superhumps with a mean period of Plate = 0.06926(2) days. This phenomenon was observed even 30 days after the beginning of the superoutburst. In quiescence the star shows quasi-periodic modulations with amplitudes reaching 0.5 mag. The most common structure observed during this stage was a sinusoidal wave characterized by a period of about 0.1 days. Comparing KS UMa with other SU UMa stars, we conclude that this group of dwarf novae shows decreasing superhump periods at the beginning and end of superoutburst but an increasing period in the middle phase.
Introduction
Ten cataclysmic variables were identified among the stars of the Second Byurakan Sky Survey (Markarian and Stepanian 1983). Seven of these variables were already known, but three were new. One of the new objects was KS UMa (SBS 1017+533). In 1998 the star was observed in a bright state. The detection of superhumps with a period of 0.0697 day during this event by T. Vanmunster 1 proved that KS UMa belongs to the group of SU UMa-type dwarf novae.
A historical light curve of the star, based on the Harvard College Observatory photographic plates, was obtained by Hazen and Garnavich (1999).
KS UMa most probably coincides with the ROSAT X-ray source J1020.4+5304 (Snowden et al. 1995), located only 10 arcsec away from the variable.
On 2003 February 18/19, KS UMa went into superoutburst again, as announced by Eddy Muyllaert and Gary Poyner. 2 Because of its excellent visibility, and because no superoutburst of the star had been entirely covered in the past, we performed extensive CCD photometry both in superoutburst and in quiescence.
Observations
Observations of KS UMa reported in the present paper were obtained during 23 nights, from 2003 February 21 to April 1, at the Ostrowik station of the Warsaw University Observatory. They were collected using the 60-cm Cassegrain telescope equipped with a Tektronix TK512CB back-illuminated CCD camera. The scale of the camera was 0.76"/pixel, providing a 6.5′ × 6.5′ field of view. A full description of the telescope and camera was given by Udalski and Pych (1992).
We monitored the star in "white light". This was due to the lack of an autoguiding system, not yet implemented after the recent telescope renovation; we therefore did not use any filter, in order to shorten the exposures and thus minimize guiding errors.
The exposure times were from 30 to 90 seconds during the bright state and from 150 to 300 seconds in the minimum light.
A full journal of our CCD observations of KS UMa is given in Table 1. In total, we monitored the star during 110.18 hours and obtained 3150 exposures.
Data Reduction
All data reductions were performed using a standard procedure based on the IRAF 3 package, and the profile photometry was derived using the DAOphotII package (Stetson 1987).
Relative unfiltered magnitudes of KS UMa were determined as the difference between the magnitude of the variable and that of the comparison star GSC 3815:610 (R.A. = 10h 20m 28.25s, Decl. = +53° 06′ 36.6″), located 2.2′ to the north of the variable. This comparison star is marked in the chart displayed in Fig. 1.
Note that a faint companion of magnitude V ≈ 18, located a few arcseconds to the east of the variable, does not affect our profile photometry in any substantial way.
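As a small numerical illustration of the differential photometry just described, the sketch below converts instrumental fluxes of the variable and the comparison star into a relative magnitude; the flux values are invented placeholders:

```python
# The relative magnitude is the magnitude difference between KS UMa and the
# comparison star, computed here from (invented) instrumental fluxes.
import numpy as np

flux_var = np.array([1520.0, 1498.0, 1465.0])      # counts of the variable
flux_comp = np.array([2050.0, 2046.0, 2051.0])     # counts of the comparison

delta_mag = -2.5 * np.log10(flux_var / flux_comp)  # variable minus comparison
print(delta_mag)                                   # positive = fainter star
```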
The typical accuracy of our measurements varied between 0.001 and 0.012 mag in the bright state and between 0.006 and 0.055 mag at minimum light. The median values of the photometry errors were 0.005 and 0.014 mag, respectively.
1 VSNET alert no. 1448
2 VSNET-superoutburst 1919 alert
3 IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation.
Our observations started on the fourth night of the superoutburst and the third night of the presence of superhumps. We observed the superoutburst during seven consecutive nights, from Feb. 21/22 to Feb. 27/28. During this time the star faded by 0.85 mag, giving a mean decline of 0.13 mag per day. Our observing run of Mar. 05/06 shows that the star was still 0.5 mag brighter than in quiescence; during the night of Mar. 06/07 this difference was only 0.1 mag. Thus we conclude that the superoutburst lasted until Mar. 06/07, i.e., 16 days.
Amplitude
In a typical SU UMa star, the amplitude of the superhumps reaches its maximum around the third day of the superoutburst; from this moment the amplitude decreases monotonically. A few days later the superhumps evolve from a tooth-shaped light curve to a more complicated shape showing more scatter and secondary maxima called interpulses. Around the end of the superoutburst, the low-amplitude superhumps switch into late superhumps characterized by the same period but a phase shift of ∼0.5 cycles.
As described on the VSNET e-mail list, 7 the peak-to-peak amplitude of the superhumps in KS UMa reached a maximum of 0.30 mag on Feb. 20. During the first night of our run, i.e., on Feb. 21/22, the amplitude was 0.21 mag, as is clearly visible in our Fig. 6, where we show the mean superhump profiles for each night. These profiles were obtained by phasing our observations with the superhump period for each night and averaging them in 0.02-0.05 phase bins.
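A minimal sketch of the phase-folding and binning procedure used to build such mean profiles is given below; the implementation details (bin handling, array names) are assumptions, not the authors' actual code:

```python
# Observations are phased with the nightly superhump period and averaged in
# fixed phase bins (0.02-0.05 wide). Inputs are assumed to be numpy arrays.
import numpy as np

def mean_profile(hjd, mag, period, t0=0.0, bin_width=0.04):
    """Phase-fold (hjd, mag) on `period` and return bin centres and means."""
    phase = ((hjd - t0) / period) % 1.0
    nbins = int(round(1.0 / bin_width))
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.minimum(np.digitize(phase, edges) - 1, nbins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([mag[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(nbins)])
    return centres, means
```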
Our longest run, obtained on Feb. 22/23 and lasting almost 12 hours, shows a gradual change in the superhump profile. During this interval the amplitude of the superhumps decreased from 0.20 mag (first four humps) to 0.16 mag (last 3 humps). Additionally, the main superhump maxima became weaker, while other peaks became stronger.
During the night of Feb. 23/24 the light curve was quite noisy, with a modulation amplitude of 0.13 mag. Surprisingly, on Feb. 24/25 the amplitude increased to 0.17 mag and the shape of the superhumps became sharp and smooth again, as during the first days of the superoutburst.
The increase in the amplitude of the superhumps continued until Feb. 25/26, when it reached a local maximum of 0.18 mag. However, the shape of the light curve during this night changed markedly in comparison with Feb. 24/25. The interpulses and other secondary humps were also clearly visible in the light curves from Feb. 26/27 and 27/28, when the amplitude was 0.16 and 0.12 mag, respectively.
7 VSNET-superoutburst 1929
Period
From each light curve of KS UMa in superoutburst we removed a first- or second-order polynomial trend and then analyzed the residuals using the ANOVA statistic with a two-harmonic Fourier series (Schwarzenberg-Czerny 1996). The resulting periodogram is shown in Fig. 7. The most prominent peak is found at a frequency of f = 14.263 ± 0.010 c/d, which corresponds to the period Psh = 0.07011(5) days (100.96 ± 0.07 min). The peak visible at 7.13 c/d is a ghost of the main frequency, arising from the use of two harmonics; the harmonic peak at 28.53 c/d appears to be real. The inset in Fig. 7 shows a magnification of the power spectrum around the main frequency. Apart from this main peak and its aliases, the inset shows no other significant periodicities. For the nights from Feb. 21 to 27 we determined 27 times of superhump maxima, which are shown in Table 2 together with their cycle numbers E. A least-squares linear fit to the data from Table 2 gives a linear ephemeris of the form HJDmax = T0 + Psh · E (hereafter ephemeris (1)), indicating that the mean value of the superhump period Psh is 0.070087(26) days (100.93 ± 0.04 min). This is in good agreement with the value obtained from the power-spectrum analysis. Combining both determinations gives a mean superhump period of Psh = 0.070092(23) days.
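To illustrate the linear-ephemeris step, the sketch below fits HJDmax = T0 + Psh·E by least squares and forms the O−C residuals; the cycle numbers and times of maxima are fabricated stand-ins for Table 2, not the real data:

```python
# Least-squares straight line through (cycle number, time of maximum) pairs,
# followed by the O-C residuals relative to that linear ephemeris.
import numpy as np

E = np.array([0.0, 14.0, 28.0, 43.0, 57.0])    # hypothetical cycle numbers
hjd_max = 2452692.0 + 0.0701 * E               # hypothetical times of maxima

psh, t0 = np.polyfit(E, hjd_max, 1)            # slope = Psh, intercept = T0
oc = hjd_max - (t0 + psh * E)                  # O-C residuals in days
print(f"Psh = {psh:.6f} d, max |O-C| = {np.abs(oc).max():.2e} d")
```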
The O−C departures from ephemeris (1) are also given in Table 2 and shown in the lower panel of Fig. 8. The best fit to these data, shown as a solid line, corresponds to a second-order (quadratic) ephemeris.
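The period derivatives quoted in this paper follow from such a quadratic ephemeris: writing Tmax(E) = T0 + Psh·E + (1/2)·Psh·Ṗ·E², the fitted E² coefficient a gives the dimensionless rate Ṗ/P = 2a/Psh². A sketch with synthetic times of maxima (not the KS UMa data) is shown below:

```python
# Recovering Pdot/P from a quadratic fit to synthetic times of maxima; times
# are kept relative to an arbitrary epoch for numerical stability.
import numpy as np

psh_true, rate_true = 0.0701, -2.0e-4          # assumed Psh and Pdot/Psh
E = np.arange(0.0, 60.0, 4.0)
t_max = psh_true * E + 0.5 * psh_true * (rate_true * psh_true) * E**2

a, p, t0 = np.polyfit(E, t_max, 2)             # quadratic, linear, constant
print(f"recovered Pdot/P = {2.0 * a / p**2:.2e}")  # ~ -2.0e-04, as injected
```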
Is KS UMa untypical?
The upper panel of Fig. 8 shows the evolution of the peak-to-peak amplitude of the superhumps during the 2003 superoutburst of KS UMa. It resembles in all details the evolution of the superhump period shown in the lower panel. Both the amplitude and the superhump-period behavior look untypical for SU UMa stars. As we wrote earlier, the amplitude of the superhumps usually decreases monotonically during the superoutburst. Until the mid-1990s, all members of the SU UMa group seemed to show only negative superhump period derivatives (Warner 1985, 1995; Patterson et al. 1993). This was interpreted as a result of disk shrinkage during the superoutburst and the resulting lengthening of its precession rate (Lubow 1992). This picture became more complicated when the first stars with Ṗ > 0 were discovered; positive period derivatives were observed only in stars with short superhump periods, close to the minimum orbital period for hydrogen-rich secondaries. The diversity of Ṗ behavior is well represented in the Ṗ/P versus Psh diagram shown in Fig. 9. This diagram is taken from Kato et al. (2003a), with an additional point for V1141 Aql (Olech 2003). It also shows the position of the period gap and the minimum period for dwarf nova systems with hydrogen-rich secondaries (Paczyński 1981). The outliers, such as V485 Cen, 1RXS J232953.9+062814, KK Tel and TU Men, are also marked.
Recently, Nogami et al. (2003) reported observations of Var73 Dra, a new SU UMa dwarf nova in the period gap. They found that the star, with a mean superhump period of Psh = 0.10623(16) days, showed a period change rate of Ṗ/P = −(1.7 ± 0.2) × 10⁻³, which is one order of magnitude larger than the largest values previously known. For clarity we do not plot the position of Var73 Dra in our Fig. 9.
There are, however, a few dwarf novae in which the superhump period derivative is not constant and changes its sign. The first case of such complex period behavior was observed during the 1995 superoutburst of AL Com (Howell et al. 1996), when during the first stage of the superoutburst the period was increasing at a rate of Ṗ/P = 2.1 × 10⁻⁵ and later decreased quite rapidly.
Complex behavior of the period and amplitude of the superhumps was also observed recently in ER UMa (Kato et al. 2003b) and V1028 Cyg (Baba et al. 2000). In the first stage of the outburst, the superhump period of ER UMa was increasing, and around the 5th day after the superoutburst maximum the ordinary superhumps switched into late superhumps, which was connected with a change in the amplitude of the modulation and in the sign of the period derivative. In the case of KS UMa the superhump period was also increasing and changed its derivative between the third and fourth nights of our observations (i.e., around the fifth day after maximum light, as in ER UMa). But, as is clearly visible in our Fig. 8, the change of sign of the period derivative in KS UMa was not connected with a transition to late superhumps, because we did not observe the ∼0.5 phase shift in the superhump maxima. The period and amplitude changes of KS UMa closely resemble those of V1028 Cyg during its 1995 superoutburst (Baba et al. 2000). In that case the superhumps were fully developed on 1995 July 31. During the next six days the amplitude decreased from 0.25 to 0.05 mag and the period was increasing at a rate of Ṗ/P = 8.7 × 10⁻⁵. Starting from 1995 Aug. 6 the amplitude of the superhumps was larger again and the period was decreasing. The O−C diagram shown by Baba et al. (2000) shows no sign of a 0.5 phase shift around Aug. 6; thus the periodic light-curve modulations observed after this date are still ordinary, not late, superhumps.
According to Baba et al. (2000), V1028 Cyg may be a link between ordinary SU UMa stars and the WZ Sge subgroup of these variables. WZ Sge stars are characterized by very long supercycles, large superoutburst amplitudes and short orbital periods (close to the 80-min boundary). V1028 Cyg, with an orbital period of around 87 min, a supercycle of over 400 days and a superoutburst amplitude of around 6 mag, is placed exactly between ordinary SU UMa stars and WZ Sge variables. On the other hand, this is not the case for KS UMa, whose supercycle is around one year, orbital period around 98 min and outburst amplitude around 4 mag.
A possible explanation of the untypical behavior of KS UMa is to assume that it is in fact quite typical. Had we started our observations on the night of Feb. 23/24, not two days earlier, we would have concluded that the period of the superhumps was decreasing at a rate of Ṗ/P = −(20 ± 8) × 10⁻⁵ (as shown by the dotted line in Fig. 8 and the filled circle in Fig. 9).
Recent progress in the development of cheap but quite sensitive CCD detectors has allowed amateur astronomers to observe outbursts, detect superhumps of many dwarf novae and collaborate with professional astronomers. Excellent examples of such fruitful collaboration are the Center for Backyard Astrophysics (CBA), run by Joseph Patterson of Columbia University, and VSNET, run by Taichi Kato and Daisaku Nogami. Thus, in recent years we have usually had very good coverage of the superoutbursts of interesting objects, and we have started to discover such "peculiarities" as in the cases of KS UMa, V1028 Cyg, ER UMa and AL Com. The question is when the number of such "peculiar" objects will become so large that we begin to consider such behavior typical.
To check this hypothesis we reviewed the literature in search of reported period variations in stars with Psh close to the period of KS UMa. The results of our search are summarized in Table 3, where we show the designation of each star, its mean superhump period, its period derivative in units of 10⁻⁵ and the corresponding reference. The O−C diagrams for stars from the papers listed in Table 3 are shown in Fig. 10. The cycle numbers E were renumbered so that E ≈ 0 corresponds approximately to the moment of birth of the superhumps in the light curve of each star.
What can we learn from Fig. 10? Most interestingly, this figure forces us to revise previous statements about superhump period behavior. Warner (1985, 1995) and Patterson et al. (1993) argued that the period derivative in SU UMa stars has a rather common negative value of Ṗ/P ∼ −5 × 10⁻⁵. From our Figs. 8 and 10 we can clearly see that this is not true. Recently, Kato et al. (2001b, 2003a) indicated that most long-period systems show a "textbook" decrease of the superhump period, but that short-period systems, or infrequently outbursting SU UMa-type systems, predominantly show an increase in the superhump period. The transition between short- and long-period systems is around a period of 0.062 day; thus V1028 Cyg, with Psh = 0.06154 day, was a short-period system and its Ṗ/P was positive, while V1159 Ori, with Psh = 0.0642 day, was included in the group of long-period systems with negative Ṗ/P (as shown in Table 3).
Fortunately, the observational coverage of the superoutbursts of V1028 Cyg (Baba et al. 2000) and V1159 Ori was excellent, and comparison of both O−C diagrams shown in Fig. 10 indicates that, in fact, at the beginning of the superoutburst the superhump period was decreasing, in the middle phase of the superoutburst it was increasing, and in the third and longest phase it was again decreasing. Baba et al. (2000) selected the middle phase as representative of the whole superoutburst of V1028 Cyg and obtained a positive Ṗ/P. On the other hand, Patterson et al. (1995) simply fitted a parabola to all determined maxima of V1159 Ori and therefore obtained a negative value of Ṗ/P.
The final conclusion of this section is that most probably all SU UMa stars, both short- and long-period, show a decreasing superhump period at the beginning and end of the superoutburst but an increasing period in the middle phase. Our Fig. 10 proves this for medium- and long-period systems.
Recent observations of short-period stars such as WZ Sge point toward similar behavior. In the case of KS UMa, during the first four days of our observations (days 4-7 of the superoutburst) the period of the superhumps was increasing at a rate of Ṗ/P = (21 ± 12) × 10⁻⁵ and later (days 7-12 of the superoutburst) was decreasing at a rate of Ṗ/P = −(21 ± 8) × 10⁻⁵.
Late superhumps
The 2003 superoutburst of KS UMa lasted until March 06/07, but modulations with a period close to Psh were observed even until March 22/23 (see Fig. 4). The shape and amplitude of these modulations changed very quickly, sometimes from cycle to cycle; thus, to search for periodicities we decided to use an ordinary Fourier transform. The power spectrum for the period from March 05/06 to 21/22 is shown in Fig. 11. The highest peak is found at a frequency of 14.38 ± 0.02 c/d, which corresponds to a period of 0.0695 ± 0.0001 days. In the case of these late superhumps the minima were clearer and sharper than the maxima and thus much better suited for O−C analysis. Finally, in the period Mar. 06-22 we determined 11 times of minima, which are listed in Tables 4 and 5 together with cycle numbers E and O−C departures computed according to ephemeris (1); the O−C values are shifted in phase by 0.5. The O−C departures for ordinary and late superhumps are shown in Fig. 12. Provided our cycle count is correct, we can conclude that in the second stage of the superoutburst and during the late-superhump phase the superhump period decreased at a rate of Ṗ/P = −(6.0 ± 1.1) × 10⁻⁵, i.e., a value quite typical for medium- and long-period SU UMa stars.
Quiescence
As displayed in Fig. 5, KS UMa in quiescence shows quasi-periodic modulations with an amplitude reaching 0.5 mag. The most characteristic feature observed at this stage was a sinusoidal wave with a period of around 0.1 days, clearly visible during the late-March nights.
The Fourier power spectrum for the nights Mar. 24/25 - Apr. 01/02 is shown in Fig. 13. Before the calculation, the light curves were prewhitened using a second-order polynomial.
The periodogram yields no unique frequency for these modulations. The highest peak found in Fig. 13 corresponds to a frequency of 10.20 ± 0.02 c/d, i.e., to a period of 0.0980 ± 0.0002 days, and only marginally exceeds competing features.
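As an illustration of this quiescence analysis, the sketch below prewhitens a synthetic light curve with a second-order polynomial and computes a periodogram. The paper uses an ordinary Fourier transform; a Lomb-Scargle periodogram, a related tool for unevenly sampled data, stands in here, and all input data are invented:

```python
# Prewhitening with a second-order polynomial, then a periodogram; the fake
# signal is injected at 0.098 d so the peak lands near 10.2 c/d.
import numpy as np
from astropy.timeseries import LombScargle

np.random.seed(0)
t = np.sort(np.random.uniform(0.0, 8.0, 600))          # fake HJD offsets (d)
mag = (0.2 * np.sin(2 * np.pi * t / 0.098)             # 0.098-d modulation
       + 0.03 * np.random.randn(t.size) + 0.01 * t)    # noise plus a trend

mag_pw = mag - np.polyval(np.polyfit(t, mag, 2), t)    # prewhitening step
freq, power = LombScargle(t, mag_pw).autopower(minimum_frequency=2.0,
                                               maximum_frequency=30.0)
print(f"highest peak at {freq[np.argmax(power)]:.2f} c/d")
```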
Summary
We reported extensive photometry of the dwarf nova KS UMa in its 2003 superoutburst and in quiescence. The amplitude of the superoutburst was 3.9 mag; the maximal brightness of the star was 12.3 mag and the mean magnitude in quiescence was 16.2 mag. In the middle stage of the superoutburst the period of the superhumps was increasing at a rate of Ṗ/P = (21 ± 12) × 10⁻⁵ and later was decreasing at a rate of Ṗ/P = −(21 ± 8) × 10⁻⁵. Comparing KS UMa with other SU UMa stars, we concluded that this group of dwarf novae shows decreasing superhump periods at the beginning and end of the superoutburst but an increasing period in the middle phase. This is contrary to the original suggestion of Warner (1985, 1995) and Patterson et al. (1993) that superhump periods usually decrease at a rate of around −5 × 10⁻⁵, and also contrasts with the recent investigation of Kato et al. (2001a), who concluded that short-period systems show increasing periods while long-period SU UMa stars are characterized by decreasing periods. At the end of the superoutburst and during the first dozen days of quiescence the star showed late superhumps with a mean period of Plate = 0.06926(2) days. This phenomenon was observed even 30 days after the beginning of the superoutburst.
In quiescence the star shows quasi-periodic modulations with an amplitude reaching 0.5 mag. The most common structure observed during this stage was a sinusoidal wave characterized by a period of 0.098 days. | 2019-04-14T01:37:52.141Z | 2003-06-13T00:00:00.000 | {
"year": 2003,
"sha1": "f9157e01e76262799ef7bb01e7edf1e48031d975",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c65fb96d7dde4c4cfc1bbaf993dc2f77fd1e1d65",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225805269 | pes2o/s2orc | v3-fos-license | Better than nothing? A review and critique of child sponsorship
The aim of this paper is to review and synthesize research focused on child sponsorship (CS) and, in doing so, to present a critique grounded in conceptualizations of justice, solidarity, ethical relationships, and international development education. As discussed in this paper, a review of the literature yields eight motivations for becoming involved in child sponsorship: personal connection; altruism; guilt; small win; part of something bigger; distrust of government; not faceless; advancing development. Following the research synthesis and discussion of these motivations, a critique is constructed by viewing these motivations through three theoretical lenses: conceptualizations of the good citizen, the complex audience member and, finally, a pedagogical tool and framework referred to as HEADS UP. The paper concludes with questions centring on power, poverty, responsibility, complicity, justice and peace, and, ultimately, provides a response to the question of "is it better than nothing?" The argument put forth in this paper is that, in its noted absence of a more critical examination of the root causes of poverty and global injustices, child sponsorship is, in fact, not better than nothing.
Introduction
In 1985, the New Internationalist wrote that "[h]owever well-intentioned [child sponsorship] may be, the kernel is the creation of a paternalistic relationship which is unnecessary and potentially harmful" (NI, 1985, p. 150). According to that NI issue, child sponsorship "plays on Western individualism and the donor's desire to visualise and obtain feedback from the recipient of the aid" (p. 149). Now, thirty-five years later, the number of sponsored children is "estimated to be between 8 to 12 million children across the world" (Noh, 2019, p. 1420), with more than CAD 2 billion raised in Canada alone. In 2018, one child sponsorship organization, World Vision Canada, reported sponsoring 415,113 children during the year, at $39 per month each (Charity Intelligence Canada, 2020). What is clear from these few statistics is that child sponsorship has become a global fundraising machine. Yet, as I argue here, its kernel remains intact: child sponsorship is an example of a charitable act which "target[s] symptoms and short-term fixes, not root causes, thus promoting band-aid solutions to complex systemic problems" (Saskatchewan Council for International Cooperation (SCIC), 2017, p. 3). In this paper, my aim is to review and synthesize research literature focused on child sponsorship; to view that literature through key theoretical lenses; and to establish and articulate a clear and specific position on child sponsorship. That is, in this paper, I make an argument against child sponsorship (CS).
As I reviewed literature and shared with family, friends, and acquaintances the fact that I was engaged in writing this critique, I was frequently confronted with one question: "... but isn't it better than nothing?" In other words, the complex issues and motivations for becoming involved in CS were being reduced to an implicit binary-based question of "should I sponsor a child or do nothing?" The argument put forth in this paper is that, in its noted absence of a more critical examination of the root causes of poverty and global injustices, child sponsorship is, in fact, not better than nothing.
The paper begins by presenting working definitions and assumptions about CS, including how/if this action aligns with the bigger picture of justice, global citizenship, and development education. Following this, the results of the review are presented in the form of a collection of reasons or motivations for becoming involved in CS, as teased out of the research literature. These reasons or motivations are then viewed through key theoretical lenses, always with an eye focused on whether these motivations, and the non-governmental organizations (NGOs) promoting CS, reflect the tenets of higher levels of justice, solidarity, and ethical relationships. In other words, I focus on producing a summary of how/why people become involved in CS programs and formulating a critique of these reasons based on conceptualizations of the good citizen (Westheimer & Kahne, 2004) and the complex audience member (education student) (Andreotti, 2016). With these discussions forming the groundwork, I then move into a deeper analysis and critique of child sponsorship by drawing on a pedagogical tool and framework referred to as HEADS UP (Alasuutari & Andreotti, 2015; Andreotti, 2012b, 2016).
Contexts, Caveats, and Confessions (Some Background)
In addition to outlining here what this paper aims to do, I also find it valuable to discuss what the paper does not aim to do. In this text, I do not endeavour to provide a history of the development of CS programs; several others have done this elsewhere (Fieldston, 2014; Watson, 2015). It is significant to note, however, that even though CS programs have recently become large-scale fundraising machines, this situation is not reflective of how or why they were originally developed in the 1920s. According to Watson (2015), "a key feature of the early child sponsorship programme seems to have become lost, namely, that it was designed for the short-term support of undernourished children, primarily within family or institutional settings at times of chronic food shortages" (p. 877). This short-term, less widespread support soon evolved into much larger-scale, longer-term international programs, presumably "because of their usefulness as a marketing tool for mobilizing resources in rich countries to reduce poverty in poor countries" (Wydick, Glewwe, & Rutledge, 2013, p. 400), or so they were marketed. In this text, I also do not aim to discuss the specific arrangements of different child sponsorship organizations, nor whether evidence exists for how/if they are doing what they say they do and the corresponding impacts. Wydick et al. (2013) set out to study the impact of one organization, Compassion International, by conducting interviews with previously sponsored children who are now adults. These authors state that "[g]iven the number of individuals involved in child sponsorship relationships and the billions of dollars committed to them, it is surprising that almost no research exists that evaluates the impacts of these programs" (pp. 397-398). Watson and Clarke (2014) also point to how the topic is under-researched, and "[t]hat so few scholars and industry insiders have sought to interrogate the emergence, evolution and contribution of CS INGOs makes it difficult to evaluate their legitimacy" (p. 3). These authors claim that available information is primarily in the form of journalistic/newspaper-type stories or in-house publications from the NGOs themselves, with only a small quantity of "fragmented scholarly literature" (p. 3).
Some Foreground on Child Sponsorship
This paper begins from the premise that child sponsorship (CS) is a form of charity, not justice. A clear distinction between the two is provided by the Saskatchewan Council for International Cooperation (SCIC) in stating that "... charity is aid given to those in need; justice is fairness, equitable distribution of wealth, resources and power among all members of society" (SCIC, 2017, p. 3). While charity may be defined very simply by these few words ("aid given to those in need"), Rabbitts (2012) reminds us of the messiness of charity, that it "is ethically and practically embedded in everyday life [through] a constellation of decisions, values, strategies and practices" (p. 926). According to SCIC (2017), "[o]ne of the biggest risks of doing charity only work is that charity often satisfies people's impulse for change. If they feel like they've already 'done their part' or created change, then they may move on without actually having made any long-term difference" (p. 18). This paper seeks to carefully and critically examine one (ubiquitous) form of charity known as child sponsorship: "an attractive charitable scheme for people in the Global North that has enjoyed enduring, indeed increasing, popularity since its inception in the 1930s" (Rabbitts, 2012, p. 926).
While this paper presents an argument against CS, I feel the need to begin by acknowledging the complexity of the issue and to note that I am not suggesting child sponsors have been duped into participating or that they are always unaware of where CS 'fits' in the big scheme of things with respect to justice, global citizenship, and development goals. Ove (2018) found in his study that many sponsors "described sponsorship as a way to help that was relatively minor but something they could manage" (p. 112), that "it was easily something they could do" (p. 115) and it was perceived by them as providing "valuable and straightforward benefits to the child" (p. 11). However, in spite of unintentional and misrecognized outcomes on the part of child sponsors, it is critical to raise awareness that "well-intended interventions might circularly reproduce the very patterns that they seek to transform" (Andreotti et al., 2018, p. 14).
I position myself in this text in a similar manner to Yuen (2008), believing that child sponsorship reflects "well-intentioned but misguided acts of charity" (p. 2). Thus, I am careful not to approach this task of reviewing and critiquing with disdain for those involved, since I recognize that "sponsors of children donate out of a genuine concern for underprivileged 'others'" (Yuen, 2008, p. 2). However, as Andreotti (2012a) aptly points out with regard to Northern initiatives aimed at alleviating poverty in the global South, we must become more aware of "the reproduction of historical harm through the solutions we propose" (p. 21), and comprehend the connection between our 'good' intentions to stop harm and our complicity in doing harm in relation to poverty interventions. In other words, Andreotti (2012a) argues that "if we understand the problems and the reasons behind them in simplistic ways, we may do more harm than good" (p. 25).
What is Child Sponsorship?
As defined by Yuen (2008), child sponsorship "entails a personal relationship between a sponsor and child, with monthly payments being sent by the sponsor in exchange for a picture of the child, letter exchanges, an annual report on how the child is progressing, and a general sense of connection" (p. 3). Simply stated, child sponsorship programs "are based on the concept of a one-to-one relationship between a donor in a developed country and a child in a developing country" (Noh, 2019, p. 1420). CS is, by far, "the most successful fundraising tool of all time" (Smillie, 2017, p. 116).
For example, Smillie (2017) reports that, in 2014, World Vision "raised CAD 270 million in cash donations, of which almost 83 percent were in the form of child sponsorship" (p. 116).
That is close to CAD 225 million raised because of CS (because is italicized here since organizations confirm that, while CS may be the tool to raise the funds, not all of the funds raised through this tool are actually used for CS).
Descriptions of child sponsorship programs, including their purpose, activities, audience and effectiveness vary somewhat across contexts, making it important to carefully outline a few working definitions and assumptions for this paper. Firstly, child sponsorship (CS) is an activity of many, but not all, international non-governmental organizations (INGOs or just NGOs). These NGOs are typically, but not always, associated with a church/denomination and, in Canada, include (among several others) World Vision Canada, Plan International Canada, Compassion Canada, Chalice, and Canadian Feed the Children.
While these organizations exist and operate in Canada, they are generally part of larger multinational (globally based) organizations. As noted by Watson and Clarke (2014), the CS activities of NGOs share "a number of common characteristics including a historic emphasis on regular giving, the motivation of donating to benefit individuals, and the provision of regular updates for the benefit of sponsors" (p. 2). In addition, of course, the one common characteristic across all contexts is the focus on children, though the focus has not always been on children in the global South but on short-term support throughout the world in times of food shortages (as briefly discussed in the previous section).
Connections between child sponsorship, justice, global citizenship and development education
As offered earlier, justice can be described as "fairness, equitable distribution of wealth, resources and power among all members of society" (SCIC, 2017, p. 3). According to social theorist Nancy Fraser (2007), parity of participation is "the most general meaning of justice" (p. 20), where "justice requires social arrangements that permit all to participate as peers in social life" (p. 20). Westheimer and Kahne (2004), in their work on educating for democracy, ask what kind of citizens support such national and global goals. Noh (2019) defines 'global citizens' as people "who have critical understanding of interconnectedness, share values of responsibility, respect for differences, and commit themselves to actions" (p. 1422). Notwithstanding the tendency of NGOs to "associate child sponsorship with the booming concept of the 'global citizen'" (Noh, 2019, p. 1422), such an association is, it seems, a far cry from the truth. This paper draws attention to why acts of charity (in the form of child sponsorship or other), initiated to help the 'other', do not construct/educate global citizens. In fact, MacQueen and Ferguson-Patrick (2015) refer to charity as a "reflexive response" which should be avoided, since it positions those in the global South as the less fortunate 'other' and does not achieve lasting change (p. 115).
Educating to construct global citizens takes "a more reflective and critical pedagogy".
While it has already been noted that there is a paucity of research overall on child sponsorship, including its effectiveness and its impact, perhaps the greatest deficiency of information is with respect to how CS advertising impacts young people in the North (Tallon & Watson, 2015, p. 298). Tallon and Watson (2014) support, overall, the promotion of child sponsorship in schools; however, they also admit that even a well-intentioned teacher's use of CS in development education "is potentially complex and can reinforce binary divisions regarding the world... [and] this may undermine effective development education for young people whose depth of understanding and motivation for informed engagement will impact North-South interactions in the future" (p. 298). Hennick et al. (2012) claim "there is little empirical research to understand how those at the centre of development practice define and implement programmes that promote empowerment as a route towards development and poverty reduction" (p. 204). This paper argues that, "in the absence of a pedagogically sound development education curriculum" (Tallon & Watson, 2014, p. 299), advertising, promoting, and facilitating CS in schools is not an empowering route toward development. Addressing the root causes of poverty and inequality in the South (which unavoidably implicate those in the North), along with "advancing student understanding of poverty, exclusion, geographic disadvantage, unfair trade, colonial legacies and a range of related issues" (Tallon & Watson, 2014, p. 299), are essential components of development education. Such issues, however, challenge many aspects of privileged lifestyles in the North and so tend to be only superficially discussed, or avoided altogether. In fact, Andreotti's (2016) research on international development and global citizenship education in higher educational contexts focuses "on the difficulties of starting important conversations about social historical processes that systemically reproduce material, discursive and political inequalities" (p. 101).
What the Literature on Child Sponsorship Says
In this section, I present the results of my review in the form of a collection of reasons, or motivations, for becoming involved in child sponsorship as a sponsor, as synthesized from the research literature. For consistency of presentation, I will use the word motivation, rather than reason, in this discussion. Motivation can be defined "in terms of drives, urges, hopes, or aspirations that trigger a progression of events leading to a behavior" (Prendergast & Maggie, 2013, p. 131).
A careful reading and synthesis of the literature on child sponsorship revealed eight (8) different motivations for becoming involved in child sponsorship. In essence, the motivations serve to foreground, in this paper, why/how CS has been so 'successful' before moving into the critique. In reality, the motivations overlap, intersect, contradict, and, at times, even erase each other, though the way in which they are presented and discussed here does not accurately reflect those complexities and intersections. The eight motivations discussed are: personal connection; altruism; guilt; small win; part of something bigger; distrust of government; not faceless; advancing development.
Personal connection
According to Rabbitts (2012), the "popularity of child sponsorship within the landscape of global charity rests on its offer of felt personal connection and dialogue with specific others" (p. 934). Wydick et al. (2013) suggest that those who market child sponsorship programs realize that "contact with an individual child creates a commitment device to help donors contribute a fraction of their monthly income to alleviating child poverty in developing countries via a relationship with a particular child living in poverty" (p. 400). That is, "international sponsorship programs mobilize resources by drawing on the psychological and moral instincts people possess to care for their own children" (p. 400-401).
Other research supports this; for example, based on interviews with their child sponsor participants, Prendergast and Maggie (2013) recommend that, from a marketing perspective, child sponsorship organizations would be wise to capitalize on the sponsors' reported importance of feeling like their sponsored child is a friend or family member; that such a personal and familial link ("birds of a feather" phenomenon (p. 135)) further drives the commitment and leads to satisfaction. Similarly, Rabbitts (2012) highlights the importance of the ordinary, everyday contexts in noting that "child sponsorship is experienced and made meaningful through the familiar, and particularly the familial" (p. 930). This author contends that "[c]entral to the appeal of child sponsorship is the promise of personal connection... [and] the satisfying feeling of making a difference" (p. 929).
Altruism
Some research points to the action/behavior of charity (charitable donations) as being driven by a combination of four motives: altruism, egoism, accountability and guilt (Prendergast & Maggie, 2013, p. 131). These four motives are, however, generally associated with one-time charitable giving behavior and so do not fully represent an understanding and untangling of the motivations behind the more sustained, long-term charitable action of child sponsorship.
Altruism can be defined as the belief in, and practice of, unselfish concern for the well-being of others, even at the expense of it being risky or costly to the giver. Merely defining charity, and correspondingly child sponsorship, as altruistic acts done for public benefit fails to acknowledge "that charity is a socially situated practice, inseparable from wider relational contexts, as well as more intimate geographies within bodies, minds, hearts and souls" (Rabbitts, 2012, p. 927). It could be said that "[d]espite the common association of charity with altruism... charitable ethics are irreducible to it" (Rabbitts, 2012, p. 929) since charitable "gifts are shown to be inextricably bound up in webs of reciprocity and relations of power" (p. 929), demonstrated in, for example, child sponsorship letter-writing correspondence between donor and child.
Prendergast and Maggie (2013) elaborate on altruistic motives by explaining them in terms of wanting to enhance the lives of those who are considered disadvantaged and also in connection with humanitarian goals and emotions of simply wanting to help others (p. 131).
Like Rabbitts (2012), these authors also suggest that "reciprocity has been linked with a wide variety of ostensibly altruistic behaviors," as well as with the expectation of "some future return" (p. 131). Wydick et al. (2013) claim that "[i]nternational sponsorship programs arose because of their usefulness as a marketing tool for mobilizing resources in rich countries to reduce poverty in poor countries" (p. 400).
Guilt
Prendergast and Maggie (2013) position guilt, specifically existential guilt, as central among the motives for becoming involved (p. 131). Equating existential guilt with social responsibility guilt, these authors state that such feelings "arise when one feels guilty about being more fortunate than other people" (p. 131). In Ove's (2018) study, one research participant (a child sponsor) was quoted as saying: "I feel almost guilty living in the lap of luxury in this beautiful part of the world that it eases my conscience somewhat that I am contributing, even though it is in a minute way, to a child of the Third World" (p. 118).
Small win
It has been suggested that "[p]eople often define social problems in ways that overwhelm their ability to do anything about them" (Weick, 1984, as cited in Mittelman & Neilson, 2011). Accordingly, Weick (1984) "recommends a strategy of 'small wins', where people identify a series of smaller, less overwhelming actions which can lead to visible results" (as cited in Mittelman & Neilson, 2011, pp. 385-386). In other words, child sponsorship is seen by the public (potential donors) as a 'small win' in the face of so many overwhelming problems that seem out of the realm of any control or influence. As will be discussed later, Andreotti (2016) suggests that focusing on the small win that child sponsorship offers serves to highlight a person's need to be affirmed as doing good and making a difference without the risk of "paralysing and alienating" the person (p. 106). Many CS organizations highlight this need in their marketing, such as Canadian Feed the Children's statement: "Sponsoring a child is recognized more and more as a terrific way to make a difference in the world" (Canadian Feed the Children, 2020). The language of "make a difference", "help", "personally rescue" and "save a life" appears throughout child sponsorship promotional materials and was drawn on extensively by sponsors and sponsorship organization staff in Ove's (2018) study. However, as noted by Ove (2018), these promotional materials seldom, if ever, attempt to educate the donor on the issues; that is, they fail to raise awareness of the deeper injustices and inequities, or of the role played by the North in producing and reproducing them. Unfortunately, the 'small win' here, which is designed to make sponsors feel good and to avoid paralysing or alienating them, often comes at a huge ethical cost: leading sponsors to believe that they are involved in a valuable development intervention, while failing to understand that child sponsorship "is primarily a fund raising tool even though it is routinely described as something else" (Ove, 2018, p. 68). That is, "the ethical value of child sponsorship comes disproportionately from its misrecognition as something other than an effective way to raise money" (Ove, 2018, p. 68).
... but it's also part of something bigger
Following the previous motivation for becoming involved in CS (that is, because CS is seen to represent a small win), Rabbitts (2012) offers that such a small win can also provide "a sense of being 'part of something bigger'" (p. 934). In fact, CS organizations are strategic about interweaving their presence into the everyday lives and concerns of donors. Rabbitts (2012) offers an interesting metaphor for why CS is so successful in describing how it works "through familiar faces, languages and practices" (p. 934) to define a collectivity (or 'community of the faithful') which can be seen to combat "the aches and pains of poverty through the generous movements of its healthier limbs" (p. 934). In other words, the metaphor depicts how CS organizations will often seek to enfold their fund-raising efforts into familiar, already-existing networks and spaces of potential sponsors, and (some would say) none any better than churches, "a context where hearts and minds are (in theory) predisposed to care about and through charity" (Rabbitts, 2012, p. 934). In fact, several of Rabbitts' (2012) research participants were significantly influenced by "Biblically-based frameworks for self-development" (p. 929) and, for them, "sponsorship becomes a performance of individual obedience to God" (p. 929). Some of these participants expressed excitement at the thought that their performance of obedience could lead to evangelism through charity. Based on her research interviews, Rabbitts (2012) suggests that "[f]or many Christians, enabling evangelism through their giving is as important as fighting poverty, or even more so" (p. 930).
The 'something bigger' is found in the interweaving of "Christian moral landscapes with landscapes of charity" (Rabbitts, 2012, p. 934), serving to demonstrate "how charities seek strategically - even evangelistically - to enter into familiar networks and spaces of supporter lives... Churches can become key nodes in webs of advocacy, forming networks of encouragement to responsible action and providing both involvement opportunities and a culture in which charity is highly valued" (Rabbitts, 2012, p. 933). O'Neill (2013) writes of the "evangelical Christian imperative" that arose in the 1980s which "ultimately put a premium on those charitable organizations that could deliver bite-sized bits of caritas to the masses" (p. 209), which child sponsorship delivered "in spades" (p. 209).
Distrust of foreign aid and government programs
Some argue that CS emerged at a time when "a climate of public disillusion and distrust surrounded foreign aid programs" (Mittelman & Neilson, 2011, p. 372), leading to "a turning point in international development efforts focused on children" (p. 371). As noted by Noh (2019), "[w]hile support for foreign aid tends to wane due to some negative images of developing countries with growing concern about security... children are perceived as innocent victims of chronic poverty and civil wars" (p. 1421). Similarly, Fieldston (2014) offers that "[c]hild sponsorship programs promoted a new understanding of world affairs that transformed foreign relations from the realm of politicians and diplomats into the province of ordinary men, women and children" (p. 240). This, however, raises the question of whether distrust in government aid has been replaced by a naïve trust in aid through non-governmental organizations. Perhaps the new concern with development aid is that "today it goes largely unquestioned that its purpose is to help meet the basic needs of the less-fortunate who are unable to meet those needs for themselves… In reality, however, basic needs - as defined today - are a modern fiction" (Esteva, Babones & Babcicky, 2013, p. 17). Wydick et al. (2013) consider CS programs to be "among the most effective means of mobilizing resources to benefit children in developing countries", and even in harsh economic times, CS survives quite well compared to another "large, well-intentioned - yet relatively faceless - nonprofit organization" (p. 401). In these "faceless" charitable situations, donors do not generally expect or anticipate personal, direct contact with the beneficiaries of their giving. While donors may be driven to donate by the highly influential motives of altruism, egoism, accountability, and/or guilt, it is the faceless, nameless, and impersonal nature of their commitment which enables giving to end with much less pull on the heart strings.
Child sponsorship is not faceless
In the study by Prendergast and Maggie (2013), a key finding was that because sponsors had a close knowledge of and relationship with the child, and strongly believed that the money was directly impacting the lives of the children, the sponsors expressed concern "about the impact on the children if sponsorship was withdrawn" (p. 138). In some cases, the sponsors admitted to sustaining their sponsorship because they felt guilty and did not want to damage or lose the close relationship with the child. Prendergast and Maggie (2013) share that "[e]ven though some sponsors may face financial problems and think of giving up, they will be reluctant to stop because they have already established a close relationship with their sponsored children and do not want to let the children down and damage the current relationship or the child's living conditions" (p. 134). Eekelen (2013) confirms the advantage for NGOs of keeping a name and a face for child sponsors, as the strategy makes sponsors feel important by "tell[ing] each sponsor that much depends on his or her monthly contributions as nobody else sponsors this child" (p. 471). The website of Canadian Feed the Children states: "Remember: there is no such thing as a selfish reason! Children will benefit whatever your reason" (Canadian Feed the Children, 2020, emphasis in original). Whether the motivation is fed by loyalty, guilt, or something else is unknown, but it does make "sponsorship an unusually lengthy and stable source of NGO income" (Eekelen, 2013, p. 472).
It is key to note that even though some NGOs have transitioned to a community-based model, their fund-raising strategies still involve the selection of a specific child to sponsor.
Perhaps this makes sense in light of Eekelen's (2013) comment that "people find it easier to empathise with an individual than with a group, and are thus attracted to programmes in which their contributions benefit a needy person with a face and a name" (p. 469). In other words, Eekelen (2013) refers to "the 'empathetic telescope' effect: by nature, people are most easily persuaded to assist when they hear a cry for help from a single individual" (p. 471).
Charitable organizations that fundraise through child sponsorship know "the central tenet of marketing: understanding the needs and wants of their customers (donors)" (Prendergast & Maggie, 2013, p. 130).
To feel that one is advancing the project of 'development'
Eekelen (2013) proposes that CS "provides the sponsors with a window into the lives of people in a developing country [which] may lead to more active interest in international development efforts" (p. 472). Ziai (2013), on the other hand, advocates for abandoning the discourse of 'development,' calling it an "all-too-vague concept with dubious implications" (p. 133), including the (re)production of a less/more dichotomy (further discussed in section 6.2). In later work, Ziai (2017) refers to the present day as the post-development period, 25 years after The Development Dictionary (Sachs, 1992). Through metaphors of obituaries, corpses, and zombies, Ziai (2017) (and others; see NI, 2020) draws on the work of several post-development scholars to assess the condition of development, wondering whether it is "alive and well, rotting away or already undead?" (p. 2555). He notes that "[w]hat is clear is that the problems often referred to under the heading of 'underdeveloped' - misery and inequality, violence and hunger, to name but a few - have not disappeared" (p. 2555). Escobar (2012), who traces critiques of development discourses back to the 1960s and 1970s, points out that "the term underdeveloped - linked from a certain vantage point to equality and the prospects of liberation through development - can be seen in part as a response to more openly racist conceptions of 'the primitive' and 'the savage'" (p. 227). Escobar (2012) is optimistic, even hopeful, in offering that "a growing number of researchers, activists, and intellectuals outside of the academy are heeding the urge to provide alternative understandings of the world, including of development" (p. xi). He refers to these as "complex conversations" (p. xi) and, in my assessment, they are becoming audible in the conversations and debates across the field focused on rethinking and renaming international development (Büscher, 2019; Fischer, 2019; Horner & Hulme, 2019). In fact, Horner & Hulme (2019) advocate for a more holistic understanding by "[m]oving from international to global development [as] a recognition that we live in 'one world' - albeit with major inequalities - and not in a 'North' or 'South' or in First and Third Worlds" (p. 368). When it comes to development and child sponsorship, New Internationalist has some valuable advice: "Whenever you encounter the word 'development' try and substitute another word or phrase that makes it clearer what is meant," offering "that the best substitute word in this case would be 'justice'" (NI, 1992).
On the topic of development education for sponsors, Clarke and Watson (2014) note that "the 'marketing' of development has seemingly overtaken development education" (p. 326). Even these authors, who are strong advocates for CS, acknowledge a lack of deeper engagement and call for sponsors to receive "information that considers larger issues of inequity, power imbalances, national security, et cetera" as a way to "strengthen development education and decouple, to a large extent, knowledge transfer of development from further fundraising appeals and campaigns" (p. 326).
Theoretical Perspectives of Critique
Having presented a collection of several motivations for people becoming involved in CS programs, as teased out of the research literature, in this section I focus on formulating a critique of these motivations grounded in, primarily, conceptualizations of the good citizen (Westheimer & Kahne, 2004) and the complex audience member (education student) (Andreotti, 2016). I draw on these conceptualizations due to my strong identification with the school classroom and the education of future teachers as global citizens; others, however, have conducted more specific citizenship education research with in-service and pre-service teachers (see, for example, Buchanan & Varadharajan, 2018; Tupper & Cappello, 2012). With these two conceptualizations forming the groundwork, I then move into a deeper analysis and critique of child sponsorship by drawing on the pedagogical tool and framework HEADS UP (Alasuutari & Andreotti, 2015; Andreotti, 2011, 2012b, 2016). In the following three subsections, I provide an overview of each of these conceptualizations, referring (where relevant) to illustrations of their application.
Through the lens of the good citizen
Westheimer and Kahne's (2004) framework includes three conceptions of citizenship: personally responsible, participatory, and justice-oriented. A poignant illustration of the distinction between the three kinds of citizens is offered in terms of the actions each citizen would take to address, for example, the issue of hunger in a local community: the personally responsible citizen would donate food, the participatory citizen would likely organize the food drive, and the justice-oriented citizen would be "asking why people are hungry and acting on what they discover" (p. 242).
In relation to schools and citizenship education, Westheimer and Kahne (2004) note that these three types of citizens "embody significantly different beliefs" and "carry significantly different implications for pedagogy, curriculum, evaluation, and educational policy" (p. 263). These authors claim that, by definition, the personally responsible citizen has a "focus on individual acts of compassion and kindness, not on collective social action and the pursuit of social justice" (Westheimer & Kahne, 2004, p. 244). Their "focus is conservative and individualistic in that it emphasizes charity, personal morality, and the efforts of individuals rather than working to alter institutional structures through collective action" (p. 266). Considering these three types of citizens, informed by the literature reviewed on child sponsorship, my claim here is that the child sponsor is an example of the personally responsible citizen.
To address the question of whether people will normally shift, or evolve, through the different levels of citizenship as they become more engaged and educated, Westheimer and Kahne (2004) claim "initiatives that support the development of personally responsible citizens may not be effective in increasing participation in local or national affairs" and "programs that champion participation do not necessarily develop students' abilities to analyze and critique root causes of social problems" (p. 264). In fact, these authors even suggest "there are some indications that curriculum and education policies designed to foster personal responsibility undermine efforts to prepare both participatory and justice-oriented citizens" (p. 264). Consider, for example, one particular organization operating throughout Catholic schools in Canada (Chalice); in effect, their promotion of child sponsorship programs falls into the category of championing the personally responsible citizen which, I would claim, comes at a cost of neglecting the deeper issues and concerns of the justice-oriented citizen.
Through the lens of the complex audience member
Andreotti (2016) describes her challenges as an educator and educational researcher in the areas of global citizenship and international development. With some notable parallels to Westheimer and Kahne's (2004) three conceptions of citizenship, Andreotti offers a four audience-orientation conceptualization which "reflect different levels of willingness to engage with [international development] issues in depth" (pp. 105-106). For her, the audience is primarily university students; however, the applicability of the conceptualization, I would argue, extends beyond that specific audience and into the realm of the general public.
The four audience-orientations are (p. 106): seeking awareness for inspiration; problem solving for personal affirmation; circular criticality; and education for existence otherwise. Students in the first two audience orientations are generally described as those willing to pay attention to an issue as long as practical solutions are readily available and the issue (or solution) does not threaten their existing investments or privilege. In other words, there is a need "to feel, to look, and to be seen as doing 'good'" (p. 106). The third audience, which might be considered comparable to Westheimer and Kahne's (2004) justice-oriented citizen, includes students who are open to deeper critiques of injustices and who can even begin to recognize their own complicity in historical asymmetries and structural harms.
However, it is only the fourth audience that appreciates the full complexity and uncertainty involved in reframing and re-centring the modern subject, vocabulary, and institution. Andreotti (2016) suggests that the majority of her students are situated in the second audience. With respect to the topic at hand in this paper, child sponsorship can be seen to fit into the "feel, look and be seen as doing good" characterization of the first or second audience, with critiques of injustices and/or awareness of complicity in (re)producing injustices being mostly absent from the CS discourse. Alasuutari and Andreotti (2015) describe these questions as "the kinds of questions that could be asked [of an initiative] in the process of supporting Northern development workers to interrupt problematic patterns of representation and engagement with Southern communities" (p. 84). In asking the question, I turn to the topic of CS and offer important perspectives for arguing that CS does not interrupt problematic patterns but, instead, is more likely to be implicated in reinforcing the "seven problematic patterns of representations and engagements commonly found in narratives about development, poverty, wealth, global change, particularly in North-South engagements, as well as engagements with local structurally marginalised populations" (Andreotti et al., 2018, p. 15).
Hegemony
Andreotti (2012b) defines the problematic of hegemony as "justifying superiority and supporting domination" (p. 2). In essence, a hegemonic practice is one that lies unchallenged, while reinforcing and justifying the status quo (Andreotti et al., 2018). An important characteristic of hegemony lies in how a well-intentioned action escapes questioning with regard to how it might be complicit in the reproduction of the problematic. Tallon and Watson (2014), in their application of HEADS UP to child sponsorship, pose the question corresponding to the problematic of hegemony as "How can an initiative like CS support or counter the idea that the Global North is superior?" (p. 309). Their response is, in my analysis, sketchy at best.
There is little chance that child sponsorship as it is presently conceived and operationalized can interrupt the hegemony problematic since, as discussed earlier in this paper, motivations for becoming a child sponsor centre on feelings of helping Others from a privileged and perceived superior (economically or otherwise) position. Not only are the standards of the West viewed as superior and as conditions to aim for, but a belief is also perpetuated that those from the West have the power or capability to change and enrich the lives of Others.
As claimed by Chalice (a Catholic-based child sponsorship organization): "Since its inception, Chalice has been enriching lives, while restoring hope and dignity to people in developing countries through our sponsorship program" (Chalice, 2019).
As noted earlier, existential and social responsibility guilt are key in motivating and sustaining the charitable act of child sponsorship. SCIC (2017) states, however, that along with charity "it is imperative that we dig deeper to identify and understand the root causes of poverty. This can be done, in part, through a justice and solidarity approach to global poverty" (p. 3). Such an approach, however, can represent a crisis for those whose own privilege risks being placed under a microscope. Taylor's (2011) work analyzes "the ways students resist crises of implication and difficult knowledge as well as moments in which they sit in the crisis in attempt to respond and self-position in exploratory, ethical ways" (p. 181). The problem, according to Taylor (2011), when pedagogical practices "offer consolation rather than critical and ethical tools to respond" (p. 181) is that "these practices operate to close down the anxious, violent crisis of learning selves exposed to the overwhelming, disorienting call to recognize and revise their habitual and hegemonic relationship to global Others, a closure wrought through the restoration of their moral superiority and authority" (p. 181).
Ethnocentrism
In Andreotti (2012b), ethnocentrism is defined as "projecting one view as universal" (p. 2), with this one view often seen (by those who have it) as superior to all other perspectives. The fact that other voices, or perspectives, are not heard or valued contributes to the (re)production of dangerous and simplistic binaries, such as us/them and have/have not.
To clarify the problematic of ethnocentrism, Andreotti (2012b) asks if the initiative implies "that anyone who disagrees with what is proposed is completely wrong or immoral" (p. 2).
About this problematic, Tallon and Watson (2014) ask: "How can CS address ethnocentrism and seek to portray a more complex notion of 'going forward' and alternative futures that include a range of voices?" (p. 309).
As noted in the discussion on motivations for becoming involved in CS, one common guilt-producing message used in CS fundraising is to imply that the sponsor is fortunate to be on the 'right' side of the have/have not binary and that if they choose not to sponsor (or to not continue sponsoring) the child, then no one else will. Instead of embracing greater complexity and encouraging a range of voices to be included, CS fundraising initiatives generally make a point of emphasizing how, in the face of overwhelming complexity, CS can be seen as manageable, as a 'small win,' and a means of doing one's part for development.
Arguing for dismissing or replacing the concept of development, Ziai (2013) states that the concept has "Eurocentric, depoliticising, and authoritarian implications" (p. 127).
These implications include framing the 'Other' as lacking, backward, and inferior, and in need of social change "therapy" that will move them closer to the standards of the West, including being "more modern, more productive, more secular, more democratic, etc." (p. 128). On the idea of 'more,' Ziai (2013) proposes that if one is drawn to "measure the qualities of different ways of living and compare them" (p. 134) for the purposes of deciding who has a more 'developed' or 'better' life, then perhaps we should be sure to include in our measurement data reports on "incidences of suicide and violent crime, racism and sexism, the propensity to conduct wars, the relation to nature and other societies, and therefore the pressing question to what extent a certain way of living depends on the subordination of other economics and ecologies (their resources, their labour power) for its consumption patterns or on the production of exclusion and inequality" (p. 134). In proposing a radical manifesto for the future of development, Esteva et al. (2013) offer: "It is impossible and illegitimate to compare different notions of living well and to declare one of them better or worse than the others" (p. 20).
In citing international development scholars, Andreotti (2016) indicates that "to justify interventions and continuous exploitation that benefitted the 'First World', the 'Third World' was necessarily produced as 'backward, irrational, poor, terroristic, weak, exotic, fundamentalist, passive, etc. [so that the West could be produced as] civilised, rational, scientific, rich, strong, secular, active, etc.'" (Kapoor, 2014, p. 1127). From a psychoanalytical perspective, Kapoor (2014) shows: … exposing the production of these historical hierarchical dichotomies is not enough to change them because our attachments to these hierarchies are not only cognitive or conscious... we are libidinally bound to the pleasures of this uneven global imaginary and its by-products (nationalism, exceptionalism, consumerism, materialism and individualism) as we enjoy the (false) sense of stability, fulfilment and satisfaction that they provide (belonging, community, togetherness, prestige, heroism and pride). (Kapoor, 2014, as cited in Andreotti, 2016)
Decolonial scholars Mignolo and Walsh (2018) offer that "decoloniality seeks to make visible, open up, and advance radically distinct perspectives and positionalities that displace Western rationality as the only framework and possibility of existence, analysis, and thought" (p. 17). Ove (2018), who presents a "critique of child sponsorship by attempting to locate it within the broader networks of power and knowledge referred to as the discourse of development" (p. 151), describes how the key "underlying tension surrounding the value of child sponsorship… comes from an ethical dilemma at the core of the idea of development: what defines a good life, who gets to decide this, and how can people best be allowed to achieve it?" (p. 151). In line with this critique of the discourse of development, Esteva et al. (2013) state: … the entire development literature… frames the issue of development from the standpoint of those who haven't needing to catch up with those who have. The US and Europe are ahead; the rest of the world is behind. The task of development theory and practice is to guide 'the rest' toward catch-up with the West (pp. 28-29).
Ahistoricism
Andreotti et al. (2018) define ahistorical thinking as "forgetting the role of historical legacies and complicities in shaping current problems" (p. 15). A question posed from this point of analysis might be: "[D]oes this initiative introduce a problem in the present without reference to why this problem exists and how 'we' are connected to the making of that?" (Andreotti, 2012b, p. 2). Framed in the language of CS, the problematic named ahistoricism asks if CS introduces the problem of child poverty without reference to why child poverty exists and how the Global North (including the sponsors themselves) is connected to, and implicated in, the problem of child poverty in the Global South.
Instead of offering "a complex historical analysis of the issue" (Andreotti, 2012b, p. 2), the literature in support of CS suggests that CS will often lead to greater interest and involvement in development education. However, at the same time, other literature points to the fact that sponsors are generally uninterested in further in-depth analysis of the issues connected with CS, such as global poverty (Ove, 2018). I return now to theory introduced earlier in relation to the kinds of citizens (Westheimer & Kahne, 2004) and the four audience members (Andreotti, 2016). Involvement as a child sponsor can be seen to demonstrate the first level of citizen: the personally responsible citizen who has a "focus on individual acts of compassion and kindness, not on collective social action and the pursuit of social justice" (Westheimer & Kahne, 2004, p. 244); their "focus is conservative and individualistic in that it emphasizes charity, personal morality, and the efforts of individuals rather than working to alter institutional structures through collective action" (p. 266). In the language of the audience member, child sponsorship depicts level one or two, where the sponsor is inspired toward charity or awareness-raising initiatives as long as their self-image and existing investments/privileges are not threatened (Andreotti, 2016).
A possible response to this discussion on citizens could include rationalizing that these different levels of citizens and corresponding involvement will always exist and that there is a place in society for the personally responsible citizen just as there is for the justice-oriented. Rationalizing in this way, however, avoids digging deeper into the root causes of injustices and examining how one's own positioning and perspective demand a shift. According to Ove (2018), the shift in perspective "comes down to not seeing raising money as the key element to combating global poverty and inequality" (p. 149). What is necessary is a clear and honest portrayal of "the deeper issues involved in global inequality" (Ove, 2018, p. 149), which is seldom part of child sponsorship promotional and educational materials. Instead, CS educational and promotional materials advertise that "for little effort on the part of sponsors, they can make a profound impact in the world and on themselves… sponsors do not just get to feel good about themselves temporarily, but they become better people" (Ove, 2018, p. 145). Ove (2018) continues: More than anything, this ridiculous ease with which we are invited to throw off history and injustice and to consume our individual portion of the liberal pie is what makes child sponsorship problematic. As part of a movement that sees people doing good by enjoying or improving themselves, child sponsorship and its advertising helps reposition what it means to live ethically in a terribly unequal and unjust world (pp. 145-146).
Depoliticization
In the HEADS UP framework, depoliticization is characterized as "disregarding power inequalities and ideological roots of analyses and proposals" (Andreotti, 2012b, p. 2). A key question associated with this problematic is to ask "[w]hat analyses of unequal power relations between the parties involved has been performed?" (Alasuutari & Andreotti, 2015, p. 86). The parties involved, in this case, could be the CS organizations and the sponsors or the sponsors and the sponsored children. Tallon and Watson (2014) express concern with the fact that CS "has been criticized in the past for being apolitical" (p. 309) and so they ask how such a criticism "can be addressed without confusing and alienating people, or dominating the debate with simplistic or idealistic solutions" (p. 309). The irony in this question is that, for the most part, CS organizations actually work to keep much of 'the truth' hidden (or at least not obviously visible) from sponsors, in matters ranging from how the funds are used through to issues of Northern complicity in global poverty and inequality.
With respect to the use of funds, Ove (2018) offers that "[w]hile no sponsorship organization is explicitly fraudulent in their marketing about where sponsorship money goes - it is always in the fine print that this money does not go directly to the child - they are not terribly forthright about it either" (Ove, 2018, p. 148). In conveying the message that the money raised is either directly the solution, or serves to fund the solution, to the problem of global poverty, Northern sponsorship programs "create the perceived situation in which the more funds raised, the more 'development' done" (Ove, 2018, p. 68). Ove (2018) argues for a development philosophy which "should not blame the poor or exonerate the wealthy, should express a reliance on securing political will as much as promoting education, and should argue the necessity for change in the North as much as in the South" (p. 150). Esteva et al. (2013) simply and meaningfully declare: "People who are seriously interested in alleviating other people's suffering should begin by asking themselves if they are directly or indirectly contributing to that suffering" (p. 19).
Self-congratulatory and self-serving
Self-congratulatory and self-serving (also referred to as salvationism in Andreotti's earlier work) are the terms used in the framework associated with being "invested in self-congratulatory heroism" (Andreotti et al., 2018, p. 15), "oriented toward self-affirmation / CV building" (Andreotti, 2016, p. 108), and "framing help as the burden of the fittest" (Andreotti, 2012b, p. 2). The inter-connected questions posed by Andreotti in an attempt to interrupt this problematic are: "How are marginalised peoples represented? How are those... who intervene represented? How is the relationship between these two groups represented?" (Andreotti, 2016, p. 108). In referring to this problematic as salvationism, Alasuutari and Andreotti (2015) criticize how "marginalised peoples [are] presented as helpless and those who intervene as benevolent, innocent, heroic and/or indispensable global leaders" (p. 86).
This problematic ties in so closely with several of the motivations previously discussed (personal connection, guilt, altruism, and child sponsorship is not faceless) that it begs the question of whether this one problematic, and its question about how marginalised people are represented, stands out among the other six as signifying the face and body of this child sponsorship critique. As noted in a 1982 issue of New Internationalist: "It is hardly surprising that the sponsorship agencies choose children to be the focus of attention. Young children produce instant sympathy and a ready response" (NI, 1982). However, in response to the question of how marginalised people are represented, one need only look at how children are (re)presented to potential sponsors. An article written by Mittelman and Neilson (2011) describes how the origins of child sponsorship strongly objectified/commodified children through the use of photos, a practice that continues today. Despite the fact that many organizations have altered this practice to ensure that the photos of children are no longer considered "development porn" (Mittelman & Neilson, 2011), organizations know the emotional draw of a child and this is played upon such that child selection is akin to catalogue shopping. In fact, it is common for CS organizations to "allow donors to choose from an array of 'profiles', which present information about individual children with a recent photograph" (Rabbitts, 2012, p. 930). As an example of such profiling, one of the participants in Rabbitts' (2012) study was quoted as saying: "There were so many beautiful children on the table we couldn't choose" (p. 930, italics added). In Li's (2017) analysis of the "consumption-oriented philanthropy" practices of World Vision Canada, she writes: The World Vision Canada gift catalogue is a prime example of idealizing the transformative power of consumption - that is, the basic premise of the charity gift catalogue is that donors can "shop for change"... the catalogue is so invested in the idea that consumption is the most attractive and convenient type of action for donors that it does not shy away from representing child victims as the "Product." (p. 460) Yuen (2008) synthesizes research which is critical of 'using' the child: "to represent the innocence of youth isolated from the political and religious turmoil often affecting their home countries" (p. 8); to be "emotionally manipulative" (p. 8), serving to either extract money from wallets or elicit despair and guilt for not taking care of "our future"; and to render the child as both consumer subject and object, with the latter reflected in the ability to shop in a child catalogue where sponsors can select a preferred country, age, and gender of the child.
In writing this text on Treaty 4 land (in the province of Saskatchewan, Canada), and mindful of the reference to objectification/commodification above, I find it challenging not to draw connections to the "Sixties Scoop," which one source (Spencer, 2017) describes in terms of "how government authorities, often with no evidence of neglect required, took thousands of Indigenous children away from their families based on the widespread belief that Indigenous families were unfit to raise children" (p. 58). Bendo et al. (2018) claim "... the Sixties Scoop was predicated on child welfare that presented a positive facade" (p. 400) when it was really a cultural eliminationist strategy achieved through "forced removal and adoption under the child-saving guise" (p. 401). Spencer (2017) adds that "removing indigenous children was viewed at this time as a public 'feel good' practice" (p. 58).
Bendo et al. (2018) describe a newspaper initiative (entitled Today's Child, written by Helen Allen) which began as a way to deal with the "hard-to-adopt" Indigenous children: In 1964, The Telegram began running the Today's Child [column] written by reporter Helen Allen. Allen's column targeted would-be parents and advertised the child's characteristics, including appearance, race, sex and disability. Each edition featured a photo of at least one child along with an address to contact Allen if the reader was interested in adopting (p. 402).
As an example of settler colonialism, the Sixties Scoop worked to dominate the 'prior' Indigenous inhabitants (and by 'prior', Spencer (2017) refers to "an ancient past that is incommensurate with the nation, with its promise of progress and civilization" (p. 60)) through many "techniques, including more direct forms of genocide as well as deceptive approaches such as the normalisation and enculturation of Indigenous people into the dominant settler ways of life" (p. 400). In this paper, I am drawn to connect these attitudes and actions of the Sixties Scoop to a form of colonialism identified in the actions of CS: a colonialism that lays claim to a "feel good practice" aimed at addressing the neglect experienced by children of the Global South and, at the same time, promising the "progress and civilization" associated with the Global North. Martin and Pirbhai-Illich (2015) "argue that colonial ways of knowing and being, prevalent during the spread of imperialism, are still privileged in relations between the Global North and the Global South today" (p. 136).
This discussion leads to important points about ethics and the construction of the ethical subject. Ove (2018) offers: …the most important implication of the way the practice of sponsorship constructs (or facilitates the co-construction of) sponsors as ethical individuals is that it aligns with, and not against, the processes that structure the modern world in all its violence and inequality. In other words, far from being a definitive solution to the problems of world poverty, sponsorship is yet another way that contemporary relations of power are expressed (p. 110).
Embedded in these relations of power are the relative privileged/non-privileged positions of sponsor/sponsored; in fact, sponsors "are constantly reminded that their comparatively minor donations have miraculous consequences in the lives of Others" (p. 110). Thus, according to Ove (2018), "sponsorship not only plays a prominent role in the ethical identity of the sponsor but also serves as a mechanism that helps reproduce the categories (such as race, nation, gender, class) that structure our lives" (p. 110).
Un-complicated solutions
In defining this sixth problematic in the HEADS UP acronym as "offering easy and simple solutions that do not require systemic change" (Andreotti, 2012b, p. 2), Andreotti poses the question of whether the "initiative offer[s] simplistic analyses and answers that do not invite people to engage with complexity or think more deeply" (p. 2). In later work (Andreotti, 2016), she rewords the question slightly, asking: "Has the urge to 'make a difference' weighted more in decisions than critical systemic thinking about origins and implications of 'solutions'?" (p. 108).
The phrase 'make a difference' comes across in many child sponsor testimonials (Ove, 2018), though what is troubling is the belief that one can make a difference (a small win) without damaging one's own privileged position. This is what Andreotti (2016) refers to as the first or second audience orientation and Westheimer and Kahne (2004) as the first level (personally responsible) citizen. According to Andreotti (2016), focusing on the small win that CS brings with it is akin to being a member of (at best) the second audience-orientation, where there is a need to be affirmed as doing good and making a difference without the risk of "paralysing and alienating" (p. 106). At the same time that CS can be seen as a small win, it is also very clear that it represents a small loss for those who have (and want to keep) privilege.
In essence, I argue here that child sponsorship has been so successful because it is operationalized (intentionally) in such a way as to not threaten the sponsor's existing privilege.
Even Tallon and Watson (2014), self-claimed proponents of CS, state that the minor commitment associated with monthly donations has been a key reason for much criticism.
They call for CS organizations to "move people beyond just a 'donate now' option to a deeper engagement with complex issues" (p. 309). However, the key message being conveyed here in this critique is that if child sponsors were compelled to pursue deeper engagement with the complex issues of poverty, power imbalance, inequity, etc., they would soon realize that child sponsorship not only reflects an overly simplistic and uncomplicated solution to a complex problem, but they might also begin to see themselves and their privileged positions reflected in the actual (re)production of the problem. In other words, their "well-intended interventions might circularly reproduce the very patterns that they seek to transform" (Andreotti et al., 2018, p. 14).
As Ove (2018) points out, there is "a complex relationship between Southern poverty and Northern lifestyles" (p. 74), but child sponsorship organizations tend to maintain "a separation, or what might be more descriptively referred to as the 'Othering' of poverty" (p. 74). According to Ove (2018), if "the causes and direct solutions to problems of global poverty are disconnected from the North, [then] the role of Northerners is logically limited to charitable donations that are typically assumed to be generous for this very reason" (p. 74). In other words, "for charitable donations to be charitable there is often assumed to be no connection between, for example, one person's poverty and another's wealth" (p. 74); "systemic explanations of global poverty" (pp. 74-75) which strongly implicate the North are excluded from "the mainstream discourse on development" (p. 75), and this exclusion serves to ensure that "fundraising is perceived as an appropriate and sufficient Northern response to global poverty" (p. 75).
Paternalism
Andreotti (2012b) defines paternalism in the HEADS UP framework as "seeking affirmation of authority/superiority through the provision of help and the infantilization of recipients" (p. 2). To identify whether an initiative reproduces this problematic pattern, Andreotti (2012b) poses the question of whether the "initiative portray[s] people in need as people who lack education, resources, maturity or civilization and who would and should be very grateful" (p. 2) for the help. For this final problematic, it seems fitting that I have come full circle to the introduction of this paper, where I cited a New Internationalist article as suggesting that the kernel of child sponsorship "is the creation of a paternalistic relationship which is unnecessary and potentially harmful" (NI, 1985, p. 150). The article in that NI issue went on to offer: One-to-one sponsorship does not create genuine personal bonds between donors and foster children. It can, however, distort the recipients' vision of an unjust economic order and create aspirations far removed from the reality of their lives. Children and their families may be permanently marked by psychological and material dependence on their 'padrino' from the North (NI, 1985, p. 150).
Once again, as with the self-congratulatory and self-serving problematic, it is difficult not to draw parallels between child sponsorship and the Sixties Scoop in relation to the issue of paternalism. Canada-wide, settlers have forgotten the legacy of the Sixties Scoop and fail, in most cases, to notice the parallels between that practice of Indigenous adoption (removal from homes and parents) and child sponsorship, especially with respect to its representation as the saviour of the marginalised, a representation that highlights this paternalistic problematic at work. Child sponsorship, according to Ove (2018), "reproduces colonial relations of power and knowledge, and it allows for the deterioration of conditions in the South despite the appearance of enormous efforts in the North" (p. 146).
In concluding this HEADS UP analysis and critique, I would argue that the discussion and analysis provided here strongly support a claim that child sponsorship "reproduces these seven problematic historical patterns of thinking and relationships" (Andreotti, 2012b, p. 2) and, in the words of Ove (2018), "if one accepts [the critique presented here], there is really no salvation for child sponsorship" (p. 147).
Closing Thoughts on Moving Forward with Justice
To close, I revisit the introduction to this paper, and the one question I was frequently asked as I shared with others that I was researching and writing this critique: "... but isn't it better than nothing?" Ove (2018), in his critique of child sponsorship, also draws on the phrase "better than nothing," though in his research he was drawn to conclude that child sponsorship is better than nothing, even though, he admits, that is about as strong a conclusion as he is willing to make. Reflecting back on this paper and the complexity behind the motivations, coupled with a critical reflection on descriptions of the good citizen, the critical audience member, and the HEADS UP pedagogical tool, exposes how overly simplistic this binary-based question really is. Given my argument above that CS is implicated in reinforcing the "seven problematic patterns of representations and engagements commonly found in narratives about development, poverty, wealth, global change, particularly in North-South engagements, as well as engagements with local structurally marginalised populations" (Andreotti et al., 2018, p. 15), I am left with the insurmountable task of responding to 'now what?' In other words, the question looms (even on the minds of readers), "if not this, then what?" If, as a reader, your response to each of the above questions is no, then you probably support the argument and final response of this paper. No, child sponsorship is NOT better than nothing. | 2020-07-09T09:04:58.682Z | 2020-06-20T00:00:00.000 | {
"year": 2020,
"sha1": "567693b8f8f4ade4fd0f64dff6efe7c6109c65d0",
"oa_license": "CCBY",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/5574/4370",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fd1f9a32fd36aee31b943e4c52957787ec175cb5",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
250698517 | pes2o/s2orc | v3-fos-license | Conditioned media of pancreatic cancer cells and pancreatic stellate cells induce myeloid-derived suppressor cells differentiation and lymphocytes suppression
As pancreatic cancer cells (PCCs) and pancreatic stellate cells (PSCs) are the two major cell types that comprise the immunosuppressive tumor microenvironment of pancreatic cancer, we aimed to investigate the role of conditioned medium derived from PCC and PSC co-culture on the viability of lymphocytes. The conditioned medium (CM) collected from PCCs and/or PSCs was used to treat peripheral blood mononuclear cells (PBMCs) to determine the ability of CM to reduce the lymphocyte population. A proteomic analysis was performed on the CM to investigate the differentially expressed proteins (DEPs) secreted by two PCC lines established from different tumor stages. Subsequently, we investigated whether the reduction of lymphocytes was caused directly by the CM or indirectly via CM-induced MDSCs. This was achieved by isolating lymphocyte subtypes and treating them with CM and with CM-induced MDSCs. Both PCCs and PSCs were important in suppressing lymphocytes, and the PCCs derived from a metastatic tumor appeared to have a stronger suppressive effect than the PCCs derived from a primary tumor. According to the proteomic profiles of the CM, 416 secreted proteins were detected, and 13 DEPs were identified between PANC10.05 and SW1990. However, the CM was unable to reduce lymphocyte viability through a direct pathway. In contrast, CM containing proteins secreted by PCCs and/or PSCs appeared immunogenic, as it increased the viability of lymphocyte subtypes. Lymphocyte subtypes treated with CM-induced MDSCs showed reduced viability of T helper 1 (Th1), T helper 2 (Th2), and T regulatory (Treg) cells, but not of CD8+ T cells and B cells. In conclusion, the interplay between PCCs and PSCs is important, as their co-culture displays a different trend in lymphocyte suppression; hence, their co-culture should be included in future studies to better mimic the tumor microenvironment.
Abbreviations: GM-CSF, granulocyte-macrophage colony-stimulating factor; VEGF, vascular endothelial growth factor; 7AAD, 7-aminoactinomycin D; CM, conditioned medium; Th1, T helper 1 cell; Th2, T helper 2 cell; Treg, T regulatory cell; TGF-β, tumor growth factor β.
Peripheral blood mononuclear cell (PBMC) isolation. Whole blood was donated by volunteers and collected in Vacutainer® blood collection tubes with anticoagulant (EDTA or heparin). The blood was then layered on top of Histopaque-1077 (Sigma-Aldrich, USA) at a 1:1 ratio and centrifuged for 30 min at 400×g. After centrifugation, the opaque interface containing the mononuclear cells was aspirated and washed three times with phosphate-buffered saline (PBS). After the last wash, the supernatant was discarded, and the pellet was resuspended in 1 mL of culture medium.
Conditioned medium (CM) collection. Groups C1–C4 were seeded at a total density of 1.5 × 10⁵ cells/well in 6-well culture plates (Eppendorf, Germany). Cells were incubated for 3 days, and the CM was collected and stored at −80 °C.
Phase I: treatment of PBMCs with CM. In phase I, PBMCs were treated with CM collected from groups C1–C3, and their total lymphocyte population was assessed using flow cytometry. PBMCs (2 × 10⁶ cells/well) were seeded in 6-well culture plates, and CM was added to achieve a concentration of 10% (total volume per well = 3 mL). As each cell line had a different growth rate, normalization was performed (formula shown below) to adjust the final volume of CM used to treat the PBMCs. This avoided potential bias due to differences in the concentration of secreted proteins in the CM (CM from groups with a lower cell number would have a lower concentration of proteins secreted by PCCs and/or PSCs).
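The normalization formula referenced above did not survive extraction. A plausible reconstruction, assuming the intent was to scale the CM volume inversely with each group's final cell count so that every treatment delivered secreted protein from an equivalent number of cells (the symbols below are assumptions, not the authors' notation):

$$V_{\mathrm{CM,\,adjusted}} = V_{\mathrm{CM,\,base}} \times \frac{N_{\mathrm{reference}}}{N_{\mathrm{group}}}$$

where \(V_{\mathrm{CM,\,base}}\) is the volume corresponding to 10% CM (0.3 mL of a 3 mL well), \(N_{\mathrm{group}}\) is the final cell count of the CM-producing group, and \(N_{\mathrm{reference}}\) is the cell count of the reference (e.g., fastest-growing) group.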
Cells were cultured for 7 days with medium changed on day 3. After 7 days, the cells were collected and processed for flow cytometry analysis.
Flow cytometry. After 7 days of treatment with CM, PBMCs were analyzed by flow cytometry, and the total lymphocyte population was identified based on cell size and granularity. The viability dye 7-aminoactinomycin D (7AAD) was used to exclude dead cells that may lead to false-positive results.
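To make the gating logic concrete, the sketch below shows how a size/granularity gate combined with 7AAD dead-cell exclusion might be expressed on exported event data. All channel names, gate boundaries, and the simulated data are illustrative assumptions; they are not the instrument settings used in the study.

```python
import numpy as np

def gate_lymphocytes(fsc, ssc, aad7,
                     fsc_range=(40_000, 120_000),  # assumed size gate
                     ssc_max=60_000,               # assumed granularity gate
                     aad7_max=1_000):              # assumed 7AAD cutoff
    """Boolean mask for viable lymphocytes: sized in range, low granularity, 7AAD-negative."""
    in_size = (fsc >= fsc_range[0]) & (fsc <= fsc_range[1])
    low_granularity = ssc <= ssc_max
    viable = aad7 <= aad7_max  # 7AAD-positive events are dead cells
    return in_size & low_granularity & viable

# Simulated events standing in for an FCS export.
rng = np.random.default_rng(0)
n_events = 10_000
fsc = rng.normal(80_000, 25_000, n_events)   # forward scatter ~ cell size
ssc = rng.normal(50_000, 20_000, n_events)   # side scatter ~ granularity
aad7 = rng.exponential(800, n_events)        # 7AAD fluorescence

mask = gate_lymphocytes(fsc, ssc, aad7)
print(f"Viable lymphocytes: {100 * mask.mean():.1f}% of events")
```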
Proteomic analysis. Conditioned media collection and concentration. Both PCC lines were seeded at a total density of 1.5 × 10⁵ cells/well in 6-well culture plates (Eppendorf, Germany) and incubated for 48 h. Next, the media were replaced with serum-free media. After 24 h, the CM were collected and concentrated using the Pierce™ Protein Concentrator with PES membrane and a molecular weight cut-off of 3 kDa (ThermoFisher Scientific, USA).
Protein digestion and LC-MS/MS. The concentrated CM were subjected to protein digestion with MS-grade trypsin (Merck, Germany), using dithiothreitol (DTT) (Sigma-Aldrich, USA) as the reducing agent and iodoacetamide (IAM) (Merck, Germany) as the alkylating agent. After protein digestion, the samples were desalted with Pierce™ C18 Tips (ThermoFisher Scientific, USA) prior to drying in a vacuum concentrator (Eppendorf, Germany). The lyophilized samples were reconstituted in 0.1% formic acid in H2O and run on an Agilent 1200 HPLC coupled to an Agilent 6550 iFunnel Q-TOF LC/MS (Agilent Technologies, USA).
Differentially expressed proteins (DEPs) identification and GO terms analysis. Raw data were processed with PEAKS X+, and DEPs were identified using RStudio v.2022.02.2+485 and DEBrowser v.1.22.5. Proteins with a false discovery rate (FDR) < 0.05 and fold change (FC) ≥ 2 were classified as significantly differentially expressed. Gene ontology (GO) enrichment analysis was performed on the DEPs using DEBrowser; GO terms with FDR < 0.05 were considered significantly enriched.
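As a minimal sketch of the cut-off logic, the snippet below filters a toy intensity table by Benjamini-Hochberg FDR and fold change. The column names, p-value source, and data are placeholders; only the thresholds (FDR < 0.05, FC ≥ 2) come from the text, and the actual analysis was performed in DEBrowser.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multitest import multipletests

# Toy secreted-protein intensities; real input would come from PEAKS X+ output.
df = pd.DataFrame({
    "protein": ["P1", "P2", "P3", "P4"],
    "mean_panc1005": [500.0, 80.0, 300.0, 45.0],
    "mean_sw1990": [120.0, 400.0, 290.0, 50.0],
    "p_value": [0.001, 0.004, 0.60, 0.72],  # placeholder per-protein p-values
})

df["log2_fc"] = np.log2(df["mean_panc1005"] / df["mean_sw1990"])
# Benjamini-Hochberg adjustment yields the FDR used for the cut-off.
df["fdr"] = multipletests(df["p_value"], method="fdr_bh")[1]

# FDR < 0.05 and |FC| >= 2 (i.e., |log2 FC| >= 1) in either direction.
deps = df[(df["fdr"] < 0.05) & (df["log2_fc"].abs() >= 1)]
print(deps[["protein", "log2_fc", "fdr"]])
```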
Phase II: treatment of CD4+, CD8+ T cells, and B cells with CM and CM-induced MDSCs. CD4+ T cell isolation and differentiation. Naïve CD4+ T cells were isolated from PBMCs using an immunomagnetic negative selection kit (Stemcell Technologies, Canada). The isolation was performed according to the manufacturer's protocol. At the end of isolation, the naïve CD4+ T cell fraction was divided into 3 portions. Each portion was cultured in ImmunoCult™-XF T Cell Expansion Medium supplemented with ImmunoCult™ Human CD3/CD28/CD2 T Cell Activator and the differentiation supplement cocktail for T helper 1 cells (Th1), T helper 2 cells (Th2), or T regulatory cells (Treg), respectively (Stemcell Technologies, Canada). Th1 and Treg cells were incubated for 7 days for activation, whereas Th2 cells were incubated for 14 days. Medium was changed every 2–3 days with the density maintained at 1 × 10⁶ cells/mL. All cultures were maintained in a 37 °C incubator with 5% CO2.
CD8+ T cell isolation. CD8+ T cells were isolated directly from whole blood using an immunomagnetic negative selection kit (Stemcell Technologies, Canada) according to the manufacturer's protocol. The isolated CD8+ cells were cultured in ImmunoCult™-XF T Cell Expansion Medium supplemented with ImmunoCult™ Human CD3/CD28/CD2 T Cell Activator for 9 days, with medium changed every 2–3 days and density adjusted according to the manufacturer's recommendations (Stemcell Technologies, Canada).
B cell isolation. B cells were isolated directly from whole blood using an immunomagnetic negative selection kit (Stemcell Technologies, Canada) according to the manufacturer's protocol. The isolated B cells were then seeded into a 96-well plate in complete DMEM/Ham's F12 medium.
MDSC isolation. CM collected from culture groups C1, C3, and C4 was used to treat isolated PBMCs for 7 days to induce MDSC differentiation. As a control, PBMCs were also seeded without CM treatment to assess the suppressive properties of uninduced MDSCs. On day 7, the uninduced and CM-induced MDSCs were isolated using an immunomagnetic positive selection kit (Stemcell Technologies, Canada). Isolated MDSCs were then seeded in a 96-well plate at a density of 0.25 × 10⁴ cells per well and incubated overnight.
Treatment of isolated lymphocytes with CM and MDSCs. In phase II, lymphocyte subtypes were treated with CM (direct pathway) or CM-induced MDSCs (indirect pathway) using the CM collected from groups C1, C3, and C4, and cell viability was assessed using a microplate reader. For the lymphocytes treated with CM, activated CD4+ and CD8+ T cells and B cells were seeded into 96-well plates and treated with medium conditioned by groups C1, C3, and C4 at concentrations of 10%, 20%, and 30%. Untreated lymphocyte subtypes were seeded as controls. Cell viability was then assessed at 48 h using the CellTiter-Glo® Luminescent Cell Viability Assay (Promega, USA).
As for the lymphocytes treated with MDSCs, activated Th1, Th2, Treg, CD8+ T cells, and B cells were seeded into the wells with MDSCs at a density of 0.25 × 10⁴ cells per well. Untreated lymphocyte subtypes and untreated MDSCs (both uninduced and CM-induced) were also seeded as controls. All groups were incubated for 48 h, and at the end of incubation, the CellTiter-Glo® Luminescent Cell Viability Assay was used to measure viability (Promega, USA). The viability of lymphocyte subtypes after treatment was calculated according to the following formula.
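The formula itself is missing from the extracted text. A plausible reconstruction, assuming the MDSC-only signal was subtracted from the co-culture signal before normalizing to the untreated lymphocyte control (the symbols are assumptions, not the authors' stated notation):

$$\mathrm{Viability\ (\%)} = \frac{L_{\mathrm{co\text{-}culture}} - L_{\mathrm{MDSC\ only}}}{L_{\mathrm{untreated\ lymphocytes}}} \times 100$$

where \(L\) denotes the CellTiter-Glo luminescence reading; such a correction would explain why both MDSC-only and untreated-lymphocyte wells were seeded as controls.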
Statistical analysis.
All experiments were performed in triplicate, and statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) software (version 25). Analysis of variance (ANOVA) was carried out, followed by Duncan's post-hoc test to analyze differences among groups. A p-value less than or equal to 0.05 was considered significant.
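For illustration, the ANOVA stage of this analysis can be reproduced in Python as below. The viability values are made up, and SciPy does not ship Duncan's post-hoc test (the authors ran both steps in SPSS), so this sketch covers only the one-way ANOVA.

```python
from scipy import stats

# Hypothetical viability readings (% of untreated control), three replicates
# per group, mirroring the triplicate design; values are illustrative only.
untreated = [100.0, 97.5, 102.1]
cm_c1 = [68.2, 71.5, 65.9]
cm_c3 = [54.1, 57.8, 52.3]

f_stat, p_value = stats.f_oneway(untreated, cm_c1, cm_c3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("At least one group mean differs (alpha = 0.05); a post-hoc test "
          "such as Duncan's would then localize the differences.")
```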
Ethics approval. The research was approved by the Ethical Board of the International Medical University, and the whole research process complied with the principles of the Declaration of Helsinki. Consent to participate. Written informed consent was obtained from the participants of the study.
Effects of PCCs and PSCs CM on lymphocyte populations. Figure 1 shows the percentage of lymphocytes after 7 days of CM treatment. All CM-treated groups had a lymphocyte percentage that was at least 50% lower than that of the untreated group. Notably, the 100% PANC10.05 CM-treated group had a lymphocyte percentage at least 2 times higher than the other treated groups. These results suggest that the secreted proteins in the CM were able to induce lymphocyte suppression, with stronger suppression observed in the SW1990-treated group than in the PANC10.05-treated group.
The DEPs in PCC lines. In order to determine the potential proteins responsible for the different suppressive properties of PANC10.05 and SW1990, the CM containing the secreted proteins was analyzed using LC-MS/MS. Figure 2a shows a Venn diagram representing the number of secreted proteins detected in each PCC line CM. Furthermore, a volcano plot serving as a visual tool for overall protein expression was generated using the log2 FC score and −log10 padj (Fig. 2b). In total, 13 DEPs were found based on the cut-off criteria (padj < 0.05 and FC ≥ 2), of which 6 proteins were upregulated in PANC10.05 and 7 proteins were upregulated in SW1990 (Table 1). Furthermore, GO enrichment analysis was performed as shown in Fig. 2c-e. The DEPs were significantly enriched in biological processes including cellular response to nerve growth factor stimulus, response to nerve growth factor, and positive regulation of neuron apoptotic process (Fig. 2c). For molecular function, the DEPs were related to extracellular matrix structural constituent, metalloaminopeptidase activity, and aminopeptidase activity (Fig. 2d). For cellular components, the enriched GO terms were associated with collagen-containing extracellular matrix, basement membrane, and secretory granule lumen (Fig. 2e).
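As a minimal sketch of how such a cut-off is applied (not the authors' exact pipeline), the DEPs and volcano-plot coordinates can be derived from a protein quantification table; the file and column names are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical table: one row per secreted protein, with mean intensities
# in each CM and an adjusted p-value from the differential test.
df = pd.read_csv("cm_protein_quant.csv")  # columns: protein, panc1005, sw1990, padj

df["log2_fc"] = np.log2(df["sw1990"] / df["panc1005"])  # FC of SW1990 over PANC10.05
df["neg_log10_padj"] = -np.log10(df["padj"])            # y-axis of the volcano plot

# padj < 0.05 and FC >= 2; |FC| >= 2 is equivalent to |log2FC| >= 1
deps = df[(df["padj"] < 0.05) & (df["log2_fc"].abs() >= 1)]
print(f"{(deps['log2_fc'] > 0).sum()} DEPs up in SW1990, "
      f"{(deps['log2_fc'] < 0).sum()} up in PANC10.05")
```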
Effects of CM on isolated T lymphocytes.
As mentioned in section "Effects of PCCs and PSCs CM on lymphocyte populations", lymphocyte suppression was observed when PBMCs were treated with CM. Hence, we isolated the lymphocyte subtypes and treated them with CM to investigate whether the CM induces lymphocyte suppression via the direct pathway. As shown in Fig. 3a, Th1 treated with 10% monoculture CM had a significantly lower viability (at least 20% lower) than untreated cells. For the co-cultures, the Th1 viability of the PANC10.05/PSC CM-treated group was not significantly different from untreated, but the SW1990/PSC CM-treated group had a viability about 30% higher than untreated. As the concentration of CM increased, the viability of all monoculture groups increased significantly, while both co-culture groups remained at the same level as at 10% CM. For Th2 treated with CM, both co-culture-treated groups had a viability at least 30% higher than the monocultures and 100% higher than untreated (Fig. 3b). At 20% CM, all groups except the PANC10.05 monoculture had reached a similar level of Th2 viability. As the concentration of CM increased to 30%, all groups, including PANC10.05, reached a similar viability. In addition, the ratio of Th1 to Th2 (Th1:Th2) was determined to investigate the importance of T helper cell balance in the TME of PDAC. As shown in Fig. 4a, the Th1:Th2 ratios of all groups were smaller than 1, indicating a higher proportion of Th2 than Th1.
For Treg, the viabilities of all monoculture CM-treated groups were significantly lower than those of the untreated and co-culture-treated groups at 10% CM (Fig. 3c). At 30% CM, all groups had a viability that was not significantly different from the untreated group, except for the PSC monoculture, which had a viability about 20% higher.
As for CD8+ T cells, the viability of all treated groups increased significantly at a CM concentration as low as 10% (Fig. 3d). Compared to untreated cells, the viabilities of all treated groups were at least 70% higher, with the highest viability in the PANC10.05/PSC CM-treated group, which was about 200% higher. Furthermore, the viability of the PANC10.05 monoculture-treated group remained the lowest at all concentrations. However, when co-cultured with PSCs, PANC10.05/PSC had a viability about 70% higher than its monoculture.
Lastly, Fig. 3e shows the viability of B cells, in which the viability of the PANC10.05 monoculture-treated group remained the lowest at all concentrations. However, in the presence of PSCs, the PANC10.05/PSC co-culture-treated group achieved the highest viability, about 900% higher than its monoculture and over 2000% higher than untreated at 30% CM. For SW1990, the viability of its monoculture was at least 500% higher than that of PANC10.05, and no significant difference was observed between its mono- and co-culture-treated groups. In short, suppression was not observed in any of the lymphocyte subtypes, suggesting that the secreted proteins of PCCs and/or PSCs did not have a direct suppressive effect on the lymphocytes.
Effects of CM-induced MDSCs on isolated lymphocytes. Th1 treated with MDSCs induced by PANC10.05 monoculture CM had a viability about 10% higher than the SW1990 and PSC monoculture-treated groups. For the co-cultures, Th1 treated with PANC10.05/PSC CM-induced MDSCs had a viability about 20% higher than Th1 treated with SW1990/PSC CM-induced MDSCs. The uninduced MDSCs did not show significant suppression towards Th1, and the viability was not significantly different from untreated. For Th2, groups treated with SW1990 and PSC CM-induced MDSCs had a viability 20% lower than untreated (Fig. 5b). However, groups treated with PANC10.05 and both co-culture CM-induced MDSCs did not show any significant difference in viability. As with the CM-treated groups, the Th1:Th2 ratio of the CM-induced MDSC-treated groups was determined. According to Fig. 4b, all monoculture-induced MDSCs resulted in a ratio slightly lower than 1, indicating similar proportions of Th1 and Th2. For the co-culture-treated groups, the Th1:Th2 ratios were at least 40% lower than those of the monoculture-treated groups. For Tregs, the PANC10.05-, PSC-, and PANC10.05/PSC-induced MDSC-treated groups had a significantly lower viability (about 20% lower) than untreated, whereas the remaining groups were not significantly different from untreated (Fig. 5c).
Compared to the untreated control, the viability of CD8+ T cells increased significantly (by at least 300%) upon treatment with PANC10.05- and SW1990/PSC-induced MDSCs, with the largest increase in the uninduced MDSC-treated group, which was about 900% higher (Fig. 5d). For B cells, all groups treated with CM-induced MDSCs had a significantly higher viability than the untreated group (over 1000% higher), while the uninduced MDSC group was 150% higher than the untreated group (Fig. 5e). Taken together, CM could induce MDSCs that were suppressive towards the subtypes of CD4+ T cells, but not towards CD8+ T cells and B cells. However, unlike the uninduced MDSCs, the CM-induced MDSCs were able to restrain the further proliferation of CD8+ T cells and B cells.
Discussion
In the past decades, studies have been conducted to investigate immunosuppression in pancreatic cancer, with the hope of developing an effective therapy that inhibits immunosuppression and improves patient outcomes. In this study, we examined the mechanisms by which PCC and PSC CM exert lymphocyte suppression in vitro.
In phase I, suppression of the total lymphocyte population was observed in both PCC and PSC CM-treated groups, as well as in their co-cultures (Fig. 6). Without direct cell-cell contact, the proteins secreted by PCCs and PSCs can induce lymphocyte suppression. As shown by the significantly higher lymphocyte percentage, the lymphocyte-suppressive effect exerted by the secreted proteins of the primary tumor-derived PCCs proved to be weaker than that of the secreted proteins of the metastatic tumor-derived PCCs and PSCs. However, the suppressive effect can be enhanced in the presence of PSC secreted proteins. Hence, we deduced that because PANC10.05 is established from a primary tumor, it requires interaction with PSCs at the early stage of carcinogenesis to suppress the antitumor immune response. As the tumor progresses, the grade II SW1990 cells established from a metastatic tumor can suppress the immune system independently. Based on the proteomic analysis of CM, we hypothesize that the strong suppressive properties of SW1990 were conferred by the upregulated transglutaminase 2 (TGM2). The expression of TGM2 is upregulated in several types of cancer and is associated with most of the highly aggressive forms of cancer 37 . TGM2 confers a strong protective role on cancer cells against apoptotic stresses and thereby promotes cancer cell survival 38 . Besides, TGM2 catalyzes protein crosslinking and is involved in multiple signaling pathways, including the NF-κB signaling pathway, PI3K/Akt survival pathway, and TGF-β signaling pathway 37,39-41 . The immunosuppressive role of TGF-β in PDAC has been widely reported: it inhibits the antitumor immunity of effector T cells and induces immunosuppressive cell types, such as T regulatory cells (Tregs), T helper 2 cells (Th2), or tumor-associated macrophages (TAMs) 24,[42][43][44] . Hence, we suggest that upregulated TGM2 is one of the key players conferring the stronger suppressive properties of SW1990.
In phase I, we observed significant suppression of the total lymphocyte population induced by the secreted proteins in CM. As the secreted proteins of PCCs and/or PSCs are potent in inducing MDSC differentiation, we hypothesized that MDSCs could be the key player behind the lymphocyte suppression observed in phase I 26,30 . Hence, we investigated the mechanism of CM-mediated lymphocyte suppression via two different pathways in phase II (Fig. 6). In the direct pathway, different lymphocyte subtypes were isolated and treated with CM directly. Three concentrations of CM were used to simulate different stages of PDAC. The lowest CM concentration (10%) simulates the early stage of PDAC, in which the low number of PCCs and PSCs limits the amount of secreted proteins available in the TME. The 20% and 30% CM simulate more advanced stages of PDAC, in which more PCCs and PSCs enrich the TME with cytokines and other secreted proteins. In the indirect pathway, lymphocyte subtypes were isolated and treated with MDSCs induced by the CM of PCCs and/or PSCs.
The viability of T helper 1 cells (Th1) increased significantly in all groups after treatment with CM, except for the 10% monoculture CM-treated groups. The precursors of Th1, naïve CD4+ T cells, polarize into different lineages and play a pivotal role in the activation and maintenance of effector cells. Th1 is mediated by pro-inflammatory cytokines, and it promotes the antitumor cellular immune response by activating CD8+ cytotoxic T cells, secreting IFN-γ that has a direct cytotoxic effect on PCCs, and inducing the humoral immune response through CD40 ligand signaling 45,46 . According to our results, we postulated that in the early stage of PDAC (at low CM concentration), monoculture secreted proteins can suppress the viability of Th1, thus reducing the antitumor immune response. However, their co-culture secreted proteins are immunogenic and trigger the proliferation of Th1 even at low CM concentration, showing that the interplay between PCCs and PSCs is important as it promotes the antitumor immune response. According to the literature, tumors that regularly provoke adaptive immune responses against tumor antigens, such as CD8+ T cell-mediated responses, are known as immunogenic, although the majority of these antigens are also self-antigens 47,48 . Immunogenic tumors have significant numbers of infiltrating immune cells and an upregulated immune network, which can be either immunosuppressive or non-suppressive 23 . Hence, secreted proteins that increase lymphocyte viability were considered immunogenic, as they can trigger the activation and proliferation of lymphocyte subtypes, regardless of their suppressive nature. Of note, we also found that there is a maximum efficiency for each CM to activate Th1 proliferation: once the threshold is reached, further increases in CM concentration do not trigger further expansion of Th1. In the presence of PSCs, SW1990 cells were found to be more immunogenic than PANC10.05 cells, which is expected as metastatic, well-differentiated PCCs express more tumor-specific antigens than primary tumor-derived PCCs.
Th2 (T helper 2 cells) are known to be pro-tumorigenic as they release cytokines that promote the expansion of other immunosuppressive cells, such as TAMs 25,45 . Besides, as the antitumor immune response depends mainly on cell-mediated immunity, Th2 cells that promote humoral immunity will reduce the efficiency of the antitumor immune response and promote chronic inflammation 49 . According to our results, we postulated that the interplay between PCCs and PSCs is important for immunogenicity, especially at the early stage of PDAC, regardless of their cell of origin. Our data also show that PANC10.05 cells are less immunogenic than SW1990 cells and PSCs, regardless of whether the immune response is pro-tumorigenic (Th2) or anti-tumorigenic (Th1).
In PDAC, the differentiation of T helper cells is skewed from Th1 to Th2, thus limiting the antitumor cell-mediated immune response 24,50,51 . However, rather than the absolute number of either Th1 or Th2, the balance between Th1 and Th2 in the TME is more relevant to clinical outcomes, as a high Th1:Th2 ratio correlates with prolonged survival 49,52 . According to our data, the proportion of Th2 was higher than Th1 in all CM-treated groups. Namely, the balance of the immune response is tilted towards pro-tumoral humoral immunity rather than antitumor cell-mediated immunity. Other than affecting the balance of the immune response, Th2 can also promote cancer cell growth, activate cancer-associated fibroblasts that reduce the infiltration of immune cells, and induce the differentiation of TAMs, which further enhances cancer progression 45,53 . Tregs (T regulatory cells) act as immune mediators in healthy individuals, preventing autoimmune diseases by suppressing the immune response. In tumor immunology, Tregs have been reported to be one of the immunosuppressive cell types that suppress the antitumor immune response 54,55 . According to our results, a trend similar to Th1 was also observed in Tregs, where the monoculture secreted proteins suppressed Treg viability in the early stage of PDAC. This shows that without the cell-cell interactions between PCCs and PSCs, both pro-tumorigenic (Treg) and anti-tumorigenic (Th1) immune responses would be suppressed in early PDAC. However, Treg viability was generally unaffected by CM treatment at higher concentrations, which suggests that the direct induction of Tregs by PCC and PSC secreted proteins is unlikely to be the primary mechanism responsible for immunosuppression in advanced PDAC.
Cytotoxic (CD8+) T cells are the main effector T cells responsible for tumor-specific cell-mediated immunity. This function is carried out by (i) producing IFN-γ, which can induce the differentiation of effector cytotoxic T cells and is also responsible for the induction of antigen-specific cytotoxic T cells, leading to the expansion of memory cells that are effective during cancer recurrence, and (ii) producing cytotoxic granule components, such as granzymes and perforin 43,46,[56][57][58] . According to our data, the trend is consistent with the observations in Th1, Th2, and Tregs, with the primary tumor-derived PCC secreted proteins being the least immunogenic for both anti-tumorigenic and pro-tumorigenic immune responses, although the immunogenicity can be greatly enhanced in the presence of PSCs. Notably, the increase in viability in all groups was much larger than the effect size observed in Th1 and Th2, an indication that the proteins secreted by PCCs and PSCs are more immunogenic towards CD8+ T cells than towards T helper cells.
The role of B cells in tumor immunology has remained unclear, as they play contradictory roles. B cells promote the antitumor immune response by acting as antigen-presenting cells (APCs) that enhance the expansion of antigen-specific CD4+ and CD8+ T cells; on the other hand, they reduce the secretion of Th1 cytokines and impair the cytotoxic (CD8+) T cell response 59,60 . According to the results, the increase in viability was the greatest among all lymphocyte subtypes (at least 1800% higher than untreated). Hence, we hypothesized that (1) the secreted proteins of PCCs and PSCs have strong effects on effector lymphocyte proliferation, especially B cells; (2) the induction of B cell division may be the primary mechanism responsible for PDAC immunosuppression; and (3) the interaction with PSCs is necessary for the primary tumor-derived PCCs, but not the metastatic tumor-derived PCCs, to trigger pro-tumoral humoral immunity, so efforts targeting the interaction between PCCs and PSCs to reverse the immunosuppressive TME may be useful in primary tumors but futile in metastatic tumors. However, further study is required to validate this finding.
As the lymphocyte suppression observed in the flow cytometry analysis was not directly attributable to the secreted proteins of PCCs and PSCs, we deduced that it may be caused by the MDSCs differentiated from PBMCs upon exposure to CM. To test this hypothesis, the MDSCs induced by PCC and PSC CM were isolated and co-cultured with the various lymphocyte subtypes. Among the DEPs (Table 1), TGM2 and lipocalin 2 (LCN2) have been reported to be associated with the accumulation of MDSCs [61][62][63] . Of note, the role of these proteins in MDSC differentiation has not been fully established, and further studies are required for validation. According to the results, all groups of CM-induced MDSCs were suppressive against Th1, and both co-cultures displayed lower Th1 viability than the monocultures, suggesting that PCCs and PSCs work synergistically in Th1 suppression. Besides, in order to confirm that the proteins secreted by PCCs and/or PSCs are necessary to activate the suppressive MDSCs, we isolated uninduced MDSCs from PBMCs (without CM treatment) and assessed their ability to exert lymphocyte suppression. The results show that the uninduced MDSCs did not affect Th1 viability, as the viability of Th1 treated with uninduced MDSCs was not significantly different from untreated Th1. These data suggest that without CM induction, the uninduced MDSCs do not possess the ability to suppress Th1. For Th2, the SW1990 and PSC CM-induced MDSCs resulted in a lower Th2 viability, while the uninduced MDSCs resulted in a higher viability; notably, the effect size was only about 20%. Hence, we hypothesized that MDSCs do not play a major role in the viability of Th2. The Th1:Th2 ratio was also calculated for the CM-induced MDSC-treated groups. Both co-culture-treated groups displayed a lower Th1:Th2 ratio than the monoculture-treated groups. This shows that PCCs and PSCs work synergistically in differentiating MDSCs that reduce the Th1:Th2 ratio, thereby promoting a pro-tumoral humoral immune response. Besides, although not statistically significant, the SW1990 and PSC co-culture-induced MDSCs resulted in a slightly lower Th1:Th2 ratio than the PANC10.05 and PSC co-culture-induced MDSCs, which might reflect a stronger pro-tumoral immune response in metastatic PDAC.
According to the results, PANC10.05, PSC, and their co-culture CM-induced MDSCs were suppressive towards Tregs. As Tregs can suppress effective antitumor immune responses and thereby promote tumor development and progression 64 , we deduced that the suppression of Tregs by PANC10.05 cells and PSCs contributes to the better clinical outcomes in early PDAC. In the advanced stage of PDAC, by contrast, SW1990 cells induce MDSCs that are not suppressive against Tregs, which may ultimately lead to cancer progression as Tregs suppress other antitumor immune responses. However, it is noteworthy that the observed effect size was small (20%).
For CD8+ T cells, instead of suppressed cell viability, the CD8+ T cell viability of all treated groups increased by at least 80%. Notably, the increase in viability of CD8+ T cells treated with uninduced MDSCs was much larger than that of the other lymphocyte subtypes treated with uninduced MDSCs. This shows not only that MDSCs without CM induction were not suppressive, but also that they strongly promoted the proliferation of CD8+ T cells. However, CM-induced MDSCs resulted in a smaller increase in CD8+ T cell viability than uninduced MDSCs. To our knowledge, this is the first report of the ability of MDSCs to induce the proliferation of CD8+ T cells, showing that MDSCs do not only play suppressive roles. The circulating MDSCs (immature and undifferentiated) from the PBMCs of healthy individuals could induce CD8+ T cell proliferation. However, when the MDSCs differentiated and matured in the vicinity of PCCs and PSCs, their ability to induce CD8+ T cell proliferation was reduced. We deduce that if the incubation period were longer or a higher concentration of CM were used, we would see the suppressive nature of MDSCs reported in other studies. Furthermore, the functionality of the proliferated CD8+ T cells, such as their ability to release cytotoxic molecules, remains unclear.
As for the viability of B cells after treatment with CM-induced MDSCs, the increase in viability was even larger than that observed in CD8+ T cells. Some studies have shown that MDSCs can increase the proliferation of B cells, which then produce antibodies that inactivate T cell responses [65][66][67] . As the humoral immune response is not an effective anti-cancer immune response, we hypothesized that the large increase in B cell viability could contribute to cancer progression by tilting the immune response towards humoral rather than cell-mediated immunity. This observation is in line with the results observed in Th1, Th2, and the Th1:Th2 ratio (Figs. 4b, 5a,b).
In phase I, it was found that when PBMCs were treated with CM, the total lymphocytes of all groups were greatly suppressed, and we observed a different intensity of suppression between the PCC lines derived from different stages of tumor. This led to our further investigation of the direct and indirect effects of CM on each lymphocyte subtype, and of what contributes to the milder suppressive effect exerted by the primary tumor-derived PCCs. To better visualize the hypotheses that we made based on the results from phases I and II, the complex interplays between lymphocyte subtypes and CM-induced MDSCs are shown in Fig. 7. The primary tumor-derived PCCs had a weaker total lymphocyte-suppressive effect than the metastatic tumor-derived PCCs due to (1) weaker Th1 suppression, (2) higher CD8+ T cell expansion, and (3) stronger Treg suppression 44,55,68 . Eventually, the combined effects resulted in a weaker lymphocyte-suppressive effect, leading to a better anti-cancer response and better prognosis in the early stage of PDAC. On the contrary, the metastatic tumor-derived PCC-induced MDSCs exhibit strong total lymphocyte suppression due to (1) stronger Th1 suppression, (2) stronger Th2 suppression, and (3) no Treg suppression. The combined effects will then lead to suppressed antitumor immune responses with poorer prognosis in the advanced stage of PDAC. In accordance with published work, Trovato et al. reported that MDSCs isolated from patients with different stages of pancreatic cancer possessed different degrees of immunosuppression, which correlated not with the MDSC subtype but with genomic and transcriptomic profiles 69 . This report is in line with our results that the MDSCs induced by the CM of SW1990 cells had stronger pro-tumoral characteristics. Lastly, despite the PSC-induced MDSCs being strongly suppressive against Th2, which promote a pro-tumoral immune response, strong suppression of Th1 was observed when PSCs were co-cultured with PCCs. This is an indication that the co-culture of PCCs and PSCs is important, as it enhances the suppression of the antitumor immune response, regardless of whether it is in the early or advanced stage of PDAC.
Conclusion
A limitation of this study is that the lymphocyte subtypes were isolated and treated with CM and CM-induced MDSCs separately. Hence, the interactions between immune cells could not be observed, such as the suppression exerted by Tregs on antitumor immune cells, and the overview of cell-cell interactions in Fig. 7 is deduced from our results and a literature review. Nonetheless, a direct co-culture of lymphocyte subtypes with CM or CM-induced MDSCs would clarify the relationships between immune cells, facilitating the discovery of the underlying mechanisms. Furthermore, the bioactive secreted proteins in the CM should be identified, as they may serve as potential targets for pancreatic cancer immunotherapy. In conclusion, CM did not have direct suppressive effects against any of the lymphocyte subtypes. However, the MDSCs induced by CM of different cancer stages exhibited different degrees of lymphocyte suppression. Besides, the co-cultures of PCCs and PSCs showed significant differences in their suppressive effects compared with their monocultures. Hence, co-cultures should be included in future studies to better mimic the TME of PDAC.
Data availability
The datasets used and/or analyzed during the current study are not publicly available due to individual privacy concerns but are available from the corresponding author on reasonable request. | 2022-07-21T06:16:20.995Z | 2022-07-19T00:00:00.000 | {
"year": 2022,
"sha1": "46352241b87d374e015c4bac791b910958c47ce3",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-16671-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bfa844e94766da5fe7ebb24cb942efae09e2be19",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53038834 | pes2o/s2orc | v3-fos-license | LncRNA BLAT1 is Upregulated in Basal-like Breast Cancer through Epigenetic Modifications
Long-noncoding RNAs (lncRNAs) have been shown to participate in oncogenesis across a variety of cancers and may represent novel therapeutic targets. However, little is known about the role of lncRNAs in basal-like breast cancer (BLBC), the aggressive form of breast cancer with no molecularly defined therapeutic target. To examine whether altered lncRNA expression contributes to the aggressive phenotype characteristic of BLBC, we performed a comparative analysis of BLBC versus non-BLBC using microarray profiling and RNA sequencing of primary breast cancer. We identified RP11-19E11.1 as a significantly up-regulated lncRNA in BLBC tumors and named it Basal-Like breast cancer Associated Transcript 1 (BLAT1). Analysis of pan-cancer datasets showed the highest expression of BLAT1 in BLBC tumors compared to all other cancers. Depletion of BLAT1 in breast cancer cells led to significantly increased apoptosis, partly because of accumulation of DNA damage. Mechanistically, BLAT1 expression is regulated at the epigenetic level via DNA methylation at CpG islands in the promoter. Concordantly, patients harboring tumors with BLAT1 hypomethylation showed decreased overall survival. Our results suggest that increased expression of BLAT1 via CpG site hypomethylation may contribute to the aggressive phenotype of BLBC, raising a possibility of new biomarkers for prognosis of aggressive BLBC tumors.
Our ever-growing understanding of the human genome has revolutionized advances in cancer biology. The human transcriptome is large, including both protein-coding mRNAs and noncoding RNAs 1 . Long noncoding (lnc) RNAs, ranging from 200 nucleotides to 100 kilobases, are pervasively transcribed throughout the genome and participate in a wide array of cellular processes, particularly through cis-or trans-regulation of gene expression at enhancers, chromatin remodeling, and post-transcriptional regulation of mRNA processing [2][3][4] .
An expanding body of evidence points to lncRNAs as mediators of tumorigenesis in multiple types of cancer, and lncRNAs may represent a new class of targets in cancer therapy 5 . The lncRNA MALAT1 (metastasis-associated lung adenocarcinoma transcript 1) is highly expressed in non-small-cell lung cancer cell lines and contributes to tumor invasion and metastasis 6 . Inhibition of MALAT1 in the MMTV-PyMT mouse model of breast cancer resulted in highly differentiated primary tumors and a nearly 80% reduction in lung metastasis 7 . The lncRNA PVT1 (plasmacytoma variant translocation 1) is expressed from the PVT1 gene located adjacent to MYC on human chromosome 8q24 and is coamplified with MYC in 98% of cancers 8,9 . PVT1 lncRNA regulated MYC protein and increased cell proliferation and tumorigenicity in cells with MYC amplifications 9 .
To gain a better understanding of the contribution of lncRNAs to human cancer, especially the different molecular subtypes of breast cancer, we examined differentially expressed lncRNAs in breast cancer. Triple-negative breast cancer (TNBC) tumors, lacking estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) amplification, make up 15%-20% of all breast cancer cases 10 . TNBC more often affects younger women, women of African descent, and women with BRCA1 mutations [10][11][12][13] . TNBC is challenging to treat because of its heterogeneity and paucity of defined molecular targets 10 . Though patients with TNBC have a higher response rate to neoadjuvant chemotherapy than patients with receptor-positive breast cancer, those who do not achieve pathologic complete response tend to relapse and develop distant metastatic disease. Additionally, triple-negative tumors often present at higher grades at diagnosis and display aggressive clinical behavior 10,11 . As a result, TNBC is associated with poor prognosis, recurrence, and shorter survival 10,11,14 . TNBC, clinically defined by tumor receptor status based on immunohistochemistry and fluorescence in situ hybridization, can be further divided into molecular subgroups by gene expression signature. The majority of triple-negative tumors fall under the basal-like breast cancer (BLBC) molecular subtype; about 75% of TNBCs are classified as basal-like based on gene expression profiling, while the other 25% cluster with other mRNA subtypes (luminal A, luminal B, HER2-enriched, or normal breast-like) [10][11][12][13]15,16 . Likewise, approximately 80% of BLBCs are negative for ER, PR, and HER2 15 . Similar to TNBC, BLBC is a heterogeneous disease leading to a wide range of clinical outcomes; patients who develop a complete response to chemotherapy have excellent outcomes, whereas the remaining patients with non-responsive tumors have the worst prognosis of all subgroups 11 .
Basal-like tumors have a high frequency of mutations in TP53, RB1, BRCA1, and PIK3CA, as well as MYC amplification 15 . Accordingly, BLBC cells often display a highly invasive, proliferative, and dysregulated cell-cycling phenotype 10 . Of note, 20% of basal-like tumors have a germline and/or somatic BRCA1 or BRCA2 variant, implying a significant hereditary component to BLBC 15 . Currently, there is a lack of selective agents to target basal-like tumors, leaving only chemotherapeutic options 16 . Unfortunately, this group of cancers often develops resistance to chemotherapy, leading to recurrence and metastatic disease 17 . Thus, it is imperative that we better understand the unique biological drivers of BLBC in order to identify new therapeutic targets.
Here we report on the BLBC-specific lncRNA BLAT1/RP11-19E11.1, which is involved in the regulation of cell proliferation and cell death. Mechanistically, this lncRNA is regulated epigenetically through CpG site methylation. We observed decreased promoter methylation and increased BLAT1 expression in a large cohort of patients with BLBC from TCGA, as well as in BLBC cell lines. There was a trend of decreased 10-year overall survival among patients with BLAT1 promoter hypomethylation. These results suggest that altered promoter methylation and expression of BLAT1 may represent a biomarker for BLBC with possible prognostic implications.
Results
LncRNAs are specifically expressed in basal-like breast cancer tumors. We applied two approaches to identify lncRNAs differentially expressed in breast cancer. First, we performed a human lncRNA array using thirty breast tissues consisting of non-malignant breast tissues (n = 11) and breast tumors (n = 19). Because BLBC has a higher prevalence among African American (AA) women 18,19 , we oversampled AA women in the study (Supplementary Table 1). Unsupervised hierarchical clustering of lncRNA expression showed a different lncRNA expression signature of BLBC tumors compared to non-BLBC tumors or normal breast tissues (Fig. 1A). Among the top twenty lncRNAs specifically expressed in BLBC tumors (Supplementary Table 2), RP11-19E11.1 represents a significantly up-regulated lncRNA in BLBC tumors, compared to normal breast tissues and tumors of other subtypes (p = 0.004) (Fig. 1B). Based on its high expression and association with BLBC tumors, we named it Basal-Like breast cancer Associated Transcript 1 (BLAT1).
For a validation set, we conducted rRNA-depletion-based RNA sequencing (Ribo-Zero RNA-seq) on fifty breast tumors from diverse patients, including 66% African Americans (Supplementary Table 1). The Ribo-Zero method was chosen to allow the analysis of both coding and non-coding transcripts. We confirmed a signature of differentially expressed lncRNAs in BLBC tumors compared to non-BLBC tumors (Fig. 1C). A significant up-regulation of BLAT1 was again observed in BLBC tumors compared to other tumor subtypes (p < 0.0001) (Fig. 1D). Based on the two data sets, we concluded that BLAT1 is specifically expressed in BLBC tumors.
Analysis of BLAT1 expression in pan-cancers. We next analyzed BLAT1 expression in 33 types of human cancer (n = 9,811) using previously reported lncRNA profiles in pan-cancer samples of TCGA 20 and the NCI Genomic Data Commons. BLAT1 is expressed in many types of human tumors, but with the highest expression levels in BLBC tumors (BRCA-Basal), compared not only to non-BLBC tumors (BRCA-non Basal) but also to all other tumors (Fig. 2A). The data confirmed increased expression of BLAT1 in BLBC tumors, validating BLAT1 as a novel biomarker for BLBC across all human cancers. BLAT1 expression is upregulated in BLBC cell lines. We examined BLAT1 expression in a total of twenty breast cancer cell lines divided by molecular subtype (Fig. 2B). Expression of BLAT1 in breast cancer cells was analyzed by qRT-PCR relative to the expression in non-malignant breast cells (184A1 and HMEC). Basal A cell lines (n = 5) showed higher expression of BLAT1 in general, compared to non-basal cancer cell lines (n = 13). HCC-1569 and MDA-MB-468 cells showed a 13,491 ± 496 and 1,325 ± 195 fold increase compared to 184A1 cells, respectively. We therefore chose these two cell lines for further functional characterization of BLAT1 in vitro.
Characterization of BLAT1 identified through in silico analysis. BLAT1 (RP11-19E11.1) is located at 2q14.2 adjacent to EN1 (Engrailed 1), a transcription factor known to be exclusively overexpressed in BLBC and to contribute to survival pathways and chemotherapy resistance 16 . BLAT1 is surrounded by a rich epigenetic landscape, as shown by ChIP-seq data from ENCODE, including strong H3K4me3 and H3K9ac marks in NT2-D1 cell lines (Fig. 2C). It also contains CpG islands with 104 CpG dinucleotides in the promoter region, suggesting that epigenetic changes specific to BLBC may play a functional role in BLAT1 expression.
Knockdown of BLAT1 expression increased apoptosis of BLBC cell lines. Because we observed
cell death and a decrease in cell proliferation after BLAT1 knockdown, we assayed cleavage products of caspase-3 and -7 in MDA-MB-468 and HCC-1569 cells after ASO treatment. BLAT1 knockdown induced a significant increase in apoptosis in both cell lines. Caspase 3/7 activity levels were 2.70 ± 0.11 and 2.72 ± 0.013 fold higher in MDA-MB-468 cells transfected with ASO 1a and 1b, respectively, compared to cells transfected with the control ASO (Fig. 3D). Likewise, HCC-1569 cells demonstrated caspase 3/7 luminescence that was 3.89 ± 0.26 and 3.06 ± 0.18 fold higher when transfected with ASO 1a and 1b, respectively, than caspase 3/7 levels of HCC-1569 cells transfected with the control ASO (Fig. 3F). To confirm the caspase 3/7 results, we additionally quantified apoptosis with flow cytometry of annexin V-Alex488 and PI stained ASO-transfected MDA-MB-468 cells. BLAT1 knockdown resulted in a higher proportion of annexin V-positive cells compared to MDA-MB-468 cells that were transfected with the control ASO (37.1% versus 9.6%) (Fig. 3E). Flow cytometry of HCC-1569 cells also confirmed a higher proportion of annexin V-positive cells compared to HCC-1569 cells transfected with the control ASO (21.8% versus 8.1%) (Fig. 3G).
Depletion of BLAT1 increased the DNA damage response.
To determine the mechanism by which BLAT1 knockdown drove apoptosis and cell death, we first examined changes in mitochondrial enzyme activity in MDA-MB-468 cells. This is partly because the adjacent gene, EN1, is known to contribute to survival pathways by regulating mitochondrial activity in neurons 21 as well as in breast cancer 17 . However, when we measured mitochondrial OXPHOS Complex I enzyme activity in MDA-MB-468 cells, we did not find any difference in activity between the control and BLAT1 ASO-treated cells (Fig. 3H). We next examined aberrations in the DNA damage pathway using γ-H2AX, a marker of DNA double-strand breaks 22 .
Hypomethylation of the BLAT1 and EN1 promoters in Basal-like tumors. We tested if BLAT1
expression is regulated by CpG site methylation by analyzing methylation levels of three CpG dinucleotides (cg18250846, cg19957905, and cg20599967) using the TCGA HumanMethylation450 Array data (Fig. 4A). These results are expressed as beta values (0 to 1), with increasing values from hypomethylation to hypermethylation. Out of 838 patient samples with DNA methylation profiles, we selected 587 samples for the analysis, as previously described 22 . One-way ANOVA and Tukey's multiple comparison tests at the three CpG sites showed significantly lower methylation in the CpG islands of the BLAT1 promoter in BLBC tumors compared to normal breast tissues and breast tumors of other subtypes (Fig. 4B). These results indicate that promoter methylation of BLAT1 may be an epigenetic modification underlying the expression differences seen between luminal and basal-like subtypes of breast cancer.
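As an illustration of this comparison (not the authors' exact code), per-probe beta values can be tested across subtypes with a one-way ANOVA before the Tukey post-hoc step; the sample table and its columns are hypothetical:

```python
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical table: one row per sample, with a 'subtype' label and
# beta values (0-1) for the three BLAT1 promoter probes.
df = pd.read_csv("tcga_450k_blat1.csv")

for probe in ["cg18250846", "cg19957905", "cg20599967"]:
    groups = [g[probe].dropna().values for _, g in df.groupby("subtype")]
    f_stat, p = f_oneway(*groups)  # one-way ANOVA across subtypes
    print(f"{probe}: F = {f_stat:.2f}, p = {p:.2e}")
# Tukey's multiple comparison (e.g. statsmodels pairwise_tukeyhsd) would then
# identify which subtype pairs differ, as reported in Fig. 4B.
```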
When we analyzed the promoter methylation of the adjacent gene, EN1, we also found significantly lower methylation at seven CpG sites of the EN1 promoter in BLBC tumors compared to normal breast tissues and breast tumors of other subtypes (Fig. 5A,B). Concordantly, EN1 expression is significantly higher in basal-like tumors compared to tumors of other subtypes, both in TCGA (Fig. 5C) and in our RNA-seq datasets (Fig. 5D). When we compared EN1 expression among pan-cancer samples, we confirmed the highest expression of EN1 in BLBC. Correlation between BLAT1 and EN1 methylation. Because both BLAT1 and EN1 are hypomethylated and highly expressed in BLBC tumors, we investigated the relationship between the two genes using TCGA methylation and RNA-seq datasets (Supplementary Fig. 1C,D). We found that methylation of the BLAT1 CpG site cg18250846 is significantly correlated with the methylation status at three sites in the EN1 promoter (cg20248516, cg22030072, and cg17167253) in breast cancer (p < 0.0001) (Supplementary Fig. 1C). Similarly, BLAT1 expression was significantly correlated with EN1 expression (Supplementary Fig. 1D). The results suggest a concordant regulation of the expression of both genes by DNA methylation.
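A sketch of the pairwise correlation underlying this claim is shown below; the correlation statistic used by the authors is not stated, so Spearman's rank correlation is assumed here, and the merged beta-value table is hypothetical:

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("tcga_450k_blat1_en1.csv")  # hypothetical merged beta-value table

for probe in ["cg20248516", "cg22030072", "cg17167253"]:  # EN1 promoter probes
    paired = df[["cg18250846", probe]].dropna()           # BLAT1 probe vs EN1 probe
    rho, p = spearmanr(paired["cg18250846"], paired[probe])
    print(f"cg18250846 vs {probe}: rho = {rho:.2f}, p = {p:.2e}")
```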
Bisulfite sequencing reveals hypomethylation of the BLAT1 promoter in BLBC cell lines. To further investigate the results of the TCGA analysis described above, we performed bisulfite sequencing of three CpG sites (cg20599967, cg19957905, and cg18250846) within the promoter region of BLAT1 (Fig. 6). A total of six cell lines were treated with bisulfite, including three basal-like cell lines (MDA-MB-468, HCC-1569, and UACC-3199) and three non-basal-like cell lines (T47D, MDA-MB-175VII, and MDA-MB-231). Ten or twenty clones from each cell line were sequenced, and the average percentage methylation was calculated. In general, the basal-like cell lines showed a lower percentage of methylation at the three sites studied compared to the non-basal-like cell lines (Fig. 6A,B). HCC-1569, a basal-like cell line with the highest expression of BLAT1, showed 0% methylation at all three sites. MDA-MB-468 basal-like cells exhibited 21%, 13%, and 29% methylation at cg20599967, cg19957905, and cg18250846, respectively, whereas T47D non-basal-like cells showed 100%, 56%, and 67% methylation, respectively. These data support our hypothesis that the increased expression of BLAT1 in BLBC may be due to reduced methylation at the promoter. Treatment with the DNA methyltransferase inhibitor 5-Aza-2′-deoxycytidine increased BLAT1 expression compared to DMSO-treated cells (Fig. 6C). This concomitant increase in BLAT1 expression in 5-Aza-2′-deoxycytidine-treated cells further suggests that BLAT1 expression may be regulated through CpG dinucleotide methylation.
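A minimal sketch of how the average percentage methylation could be tallied from clone-level bisulfite calls; the data layout is hypothetical:

```python
import pandas as pd

# Hypothetical clone calls: one row per sequenced clone per cell line;
# 1 = methylated (unconverted C), 0 = unmethylated, at each CpG site.
clones = pd.read_csv("bisulfite_clone_calls.csv")
sites = ["cg20599967", "cg19957905", "cg18250846"]

pct = clones.groupby("cell_line")[sites].mean() * 100  # average over 10-20 clones
print(pct.round(0))  # e.g. HCC-1569 ~0% at all sites; T47D 100/56/67%
```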
CpG site methylation of BLAT1 is associated with clinical outcomes. Because BLAT1 expression
plays an important role in regulating apoptosis and is epigenetically regulated, we tested the association of patient outcomes with methylation levels of the BLAT1 promoter using TCGA methylation array datasets (Fig. 7). Patients with lower levels of BLAT1 promoter methylation, which is related to higher expression of BLAT1 in BLBC tumors, showed poorer overall survival (p = 0.034) compared to those with higher methylation levels (Fig. 7A). A similar pattern was also observed for the promoter methylation of the adjacent gene, EN1 (Fig. 7B), indicating the epigenetic significance of this locus in association with clinical outcomes.
Furthermore, when we analyzed the association of patient outcomes with BLAT1 expression levels using TCGA RNA-seq datasets, we found a significant association of higher expression of BLAT1 lncRNA with worse survival in the TCGA patient cohort (p = 0.028) (Fig. 7C). Altogether, our results indicate the potential of BLAT1 methylation and expression as a biomarker for BLBC and breast cancer prognosis.
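The survival comparison described above is most naturally done with Kaplan-Meier curves and a log-rank test; the paper does not name the method, so the sketch below is an assumed reconstruction using the lifelines package, with a hypothetical clinical table and a median beta-value split:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_clinical_blat1.csv")  # hypothetical: time, event, cg18250846
lo = df["cg18250846"] <= df["cg18250846"].median()  # hypomethylated group

kmf = KaplanMeierFitter()
for label, mask in [("hypomethylated", lo), ("hypermethylated", ~lo)]:
    kmf.fit(df.loc[mask, "time"], df.loc[mask, "event"], label=label)
    kmf.plot_survival_function()  # overlays both curves on the current axes

res = logrank_test(df.loc[lo, "time"], df.loc[~lo, "time"],
                   df.loc[lo, "event"], df.loc[~lo, "event"])
print(f"log-rank p = {res.p_value:.3f}")
```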
Discussion
BLBC, the major molecular subtype of TNBC, is a clinically challenging disease to treat due to its aggressive nature, heterogeneity, and lack of targeted therapies to date. We identified differential expression of BLAT1 in BLBC using lncRNA array and RNA sequencing of human breast tumors. Knockdown studies revealed that BLAT1 is functionally active in BLBC cell lines, contributing to cell survival and the DNA damage response. To gain insight into the mechanism underlying the differential expression of BLAT1 in BLBC, we compared the methylation status of the BLAT1 promoter across the intrinsic subtypes. TCGA dataset analysis showed hypomethylation of CpG sites in the BLAT1 promoter region that was specific to BLBC tumors. Using bisulfite sequencing, we confirmed hypomethylation of the BLAT1 promoter in BLBC cell lines compared to non-BLBC cell lines. Inhibition of DNA methylation with 5-Aza-2′-deoxycytidine treatment increased BLAT1 expression in breast cancer cell lines. Finally, we found that hypomethylation of the BLAT1 promoter was associated with worse survival in the TCGA patient cohort. These observations suggest that BLAT1 is epigenetically regulated via DNA hypomethylation and that a hypomethylation signature in BLBC leads to high levels of BLAT1 expression, contributing to aggressive clinical features.
Breast cancer subtypes appear to be associated with DNA methylation-based signatures 23,24 . A large fraction of BLBC tumors is characterized by hypomethylation events occurring within the gene body, whereas luminal-B subtype tumors are characterized by CpG island hypermethylation events. A few selected methylation markers have shown associations with clinical parameters, suggesting that these methylation markers can provide valuable information on disease prognosis in breast cancer. Our study identified a specific hypomethylation at an lncRNA promoter in BLBC tumors and its association with worse survival, strengthening the use of methylation markers for disease prognosis, covering not only protein-coding regions but also non-coding genomic regions. Together, these markers may provide a systematic diagnostic and prognostic tool to detect and prevent breast cancer progression.
It is worth noting that this study is based on samples mostly from AA patients. African and AA women have the highest mortality from breast cancer of all racial/ethnic groups. However, there is a paucity of data on how the genomic alterations in tumors from women of diverse backgrounds interact with their physical and socio-cultural environment and how these interactions underlie the poor clinical outcomes observed in AA women. Based upon the epidemiological finding that AA women show a higher percentage of ER-negative tumors, triple-negative tumors, and BLBC, we asked whether we could identify novel molecular markers for aggressive BLBC specific to the AA population. We have assembled a large cohort of breast cancer cases from the diverse population of patients treated at The University of Chicago, with AA women from the South Side of Chicago making up a large proportion of these cases. We believe it is critical to continue to generate hypotheses about the biological determinants of the aggressive breast cancer that disproportionately affects women of African ancestry in future studies like this.
Our results are consistent with previous findings, in which RNA-seq performed on tumors from two cohorts of breast cancer patients revealed lncRNA clustering patterns that corresponded to intrinsic subtype 25 . Bradford et al. found six lncRNAs to be significantly over-expressed within the BLBC subtype compared to non-BLBC, including RP11-19E11.1, whose expression correlated with EN1 upregulation in BLBC 25 . Although our study mainly focuses on lncRNAs, we also recognized the potential of the neighboring protein-coding gene, EN1, as a biomarker for BLBC tumors. EN1 is a transcription factor shown to be overexpressed in BLBC and to contribute to survival pathways and chemotherapy resistance 14 . EN1 is hypomethylated and upregulated in BLBC (Fig. 5), with significant correlation to BLAT1 methylation and expression (Supplementary Fig. 1). Our data suggest a simultaneous regulation of BLAT1 and EN1 by DNA methylation in human tumors, leading to exceptionally high levels of their expression in BLBC tumors.
Interestingly, although the two genes are co-expressed in BLBC and involved in the cell survival pathway, their biological roles in the pathway seem dissimilar. EN1 regulates mitochondrial complex I activity, whereas BLAT1 knockdown increased γ-H2AX, a DNA double-strand break marker, with no change in mitochondrial complex I activity. Although the two adjacent genes are epigenetically co-regulated and specifically expressed in BLBC tumors, they might play distinct roles in the development of BLBC tumors, one in the DNA damage response and the other in the regulation of mitochondrial activity, which together perhaps contribute to the aggressive features of BLBC tumors.
Our report of the mechanistic regulation of a BLBC-specific lncRNA is a step forward in understanding this heterogeneous disease. While further in vivo studies are required before the biologic consequences of BLAT1 transcript upregulation can be extrapolated to human tumors, our in vitro functional analyses and the trend of decreased long-term overall survival in patients with tumors that highly expressed BLAT1 suggest that this lncRNA is biologically active and may contribute to the aggressive disease phenotype of BLBC. In the future, BLAT1 could be used as a biomarker and prognostic indicator for clinically aggressive BLBC. The functional significance of BLAT1 in BLBC cell lines makes it a potentially attractive target for therapy. Furthermore, BLAT1 is undoubtedly just one of many lncRNAs that remain to be characterized in the complex biologic milieu of BLBC. Future study in the field of lncRNAs is necessary to expand our understanding not only of BLBC but also of the drivers of malignancies across the board.
Materials and Methods
Sample Selection. All the studies included in the lncRNA array and RNA sequencing have been approved by the Institutional Review Boards of the University of Chicago hospitals. All participants in this study provided written informed consent to allow for use of their tissue samples for research. All the methods were carried out in accordance with the guidelines and regulations of the University of Chicago. Case selection was derived from a pool of breast cancer patients who had undergone surgery at the University of Chicago hospitals. We selected female patient cases (IDC/DCIS) with frozen tissue available. Because we were interested in pre-treatment gene expression, we excluded patients who had received neoadjuvant chemotherapy.
RNA Extraction, Sequencing, and Expression Quantification. Areas of malignant tissue were identified through light microscopy using representative top slides derived from 5 μm sections of frozen tumor samples. These areas were removed using a scalpel blade, and tissues were homogenized with a TissueLyzer LT (Qiagen, Valencia, CA). RNA was extracted using the Qiagen AllPrep DNA/RNA/Protein Mini Kit protocol (Qiagen, Valencia, CA). Quality control was performed with the Agilent 4200 TapeStation system (Agilent Technologies, Santa Clara, CA). RNAs with an RNA integrity number greater than 6 were selected. RNA samples were subjected to the human lncRNA microarray V3 (Arraystar Inc, Rockville, MD), which includes 30,600 non-coding genes and 26,100 coding genes. For RNA sequencing, cDNA libraries were constructed using the Illumina TruSeq Stranded Total RNA with Ribo-Zero Human kit (Illumina, San Diego, CA). RNA sequencing using 100-bp paired-end reads was performed on the Illumina HiSeq 4000 at a depth of 80 million reads per sample. Adapter sequences were removed by Trimmomatic, alignment was performed using the Spliced Transcripts Alignment to a Reference (STAR) software, and expression quantification was achieved using HTSeq. Molecular subtypes were determined by PAM50 markers, as previously described 26 . Cell lines and culture. Cells were tested negative for mycoplasma contamination and validated for species and unique DNA profile using short tandem repeat analysis by the provider or by the authors. All cell lines were cultured in RPMI Medium 1640 (Life Technologies, Carlsbad, CA) supplemented with 10% fetal bovine serum, 1% Antibiotic-Antimycotic containing penicillin, streptomycin, and Fungizone (Invitrogen, Carlsbad, CA), and 1% HEPES at 37 °C in an atmosphere containing 5% CO2.
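To make the Trimmomatic → STAR → HTSeq chain described above concrete, a skeleton of the three steps is sketched below via subprocess calls; paths, index locations, and several options (e.g. the adapter file and strandedness setting) are placeholders, since the authors' exact parameters are not given:

```python
import subprocess

def run(cmd):
    """Run one pipeline step, raising on failure."""
    subprocess.run(cmd, check=True)

# 1. Adapter removal with Trimmomatic (paired-end, 100-bp reads).
run(["java", "-jar", "trimmomatic.jar", "PE",
     "R1.fastq.gz", "R2.fastq.gz",
     "R1.trim.fq.gz", "R1.unpaired.fq.gz",
     "R2.trim.fq.gz", "R2.unpaired.fq.gz",
     "ILLUMINACLIP:adapters.fa:2:30:10"])

# 2. Spliced alignment with STAR against a prebuilt genome index.
run(["STAR", "--runThreadN", "8",
     "--genomeDir", "star_index/",
     "--readFilesIn", "R1.trim.fq.gz", "R2.trim.fq.gz",
     "--readFilesCommand", "zcat",
     "--outSAMtype", "BAM", "SortedByCoordinate"])

# 3. Gene-level counting with htseq-count (TruSeq Stranded libraries).
with open("counts.txt", "w") as out:
    subprocess.run(["htseq-count", "-f", "bam", "-r", "pos", "-s", "reverse",
                    "Aligned.sortedByCoord.out.bam", "genes.gtf"],
                   stdout=out, check=True)
```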
RNA isolation and qRT-PCR from cell lines. Total RNA was isolated from cells using RNeasy Mini Kit
(Qiagen, Valencia, CA). Reverse transcription of lncRNAs and mRNAs was performed using the Superscript III First-Strand Synthesis kit (Invitrogen, Carlsbad, CA) with random primers. qRT-PCR was carried out in the 7900HT Fast Real-Time PCR System (Applied Biosystems, Carlsbad, CA) using TaqMan Gene Expression Assays and gene-specific TaqMan primers (Life Technologies, Carlsbad, CA). Relative quantity of expression was calculated with the ΔΔCt method using 18S rRNA as an internal control. Samples were analyzed in quadruplicate.
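The ΔΔCt calculation itself reduces to a few lines; a sketch with hypothetical Ct values, using 18S rRNA as the internal control and a non-malignant line as the calibrator, per the text:

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_ref_sample          # normalize to 18S
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values for BLAT1 vs 18S in a BLBC line and in 184A1 cells.
print(ddct_fold_change(24.0, 10.0, 31.0, 10.5))  # ~90-fold up-regulation
```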
Mitochondrial Complex I activity assays and Western blot analysis. The mitochondrial activity assay was performed using the Complex I Enzyme Activity Microplate Assay kit (ab109721) from Abcam (Cambridge, UK), according to the manufacturer's protocol. Briefly, two days after transfection of ASOs, MDA-MB-468 cells were washed and extracted in detergent solutions. Various amounts of protein extract (25, 50, 100, or 200 µg) were incubated with NADH and dye. The changes in absorbance were measured at OD 450 nm for 30 minutes at 20-second intervals. Western blotting was performed using a standard protocol. Anti-Phospho-Histone H2A.X (Ser139) antibodies (#9718) were purchased from Cell Signaling Technology (Danvers, MA). | 2018-10-25T14:53:42.756Z | 2018-10-22T00:00:00.000 | {
"year": 2018,
"sha1": "d660e2c10b514081b61eccc77049c88230e29a34",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-33629-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d660e2c10b514081b61eccc77049c88230e29a34",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
155658252 | pes2o/s2orc | v3-fos-license | Omani Students Involvement in Evaluating Their Pharmacy Program
Our study addressed the fact that language can be a barrier for non-English-speaking students in their learning process. This was clearly reflected in the results of the study, as students were less satisfied with their online courses than with didactic lecture courses. Unlike in online courses, in didactic lecture courses the instructor usually explains difficult English words and terminology, and this makes Arab students prefer regular courses to online courses. This attitude is mainly attributed to these students' ability to better align thinking, understanding, and speaking when using their mother tongue.
Introduction
Pharmacists' professional roles and responsibilities have evolved historically from a focus on medication compounding and dispensing to extended pharmaceutical care services [1].
Nowadays their roles vary in different parts of the world. Examples include community service, industry, research, academia, quality control, and clinical service. This concept took a turn in 1990 with the introduction of the term "pharmaceutical care" by the researchers Hepler and Strand [2]. In addition, it is widely believed that pharmacists can make great contributions to the primary provision of health care by ensuring the safe and effective use of medications [1]. As a result, universities and colleges worldwide have responded by re-engineering their education systems in an attempt to improve the quality of their graduates to best meet the needs of society [3]. There is now a global trend toward improving teaching methods in pharmacy colleges to produce better-qualified graduates [4]. This has equally been given much attention by our faculty and administration at the School of Pharmacy, College of Pharmacy and Nursing, University of Nizwa.
Effectiveness in learning, like all other aspects of human behavior, is highly related to the learner's satisfaction [5]. Most institutions of higher education have carried out a variety of research projects, with increased attention being placed upon the phenomenon of student satisfaction [6]. The issue of student satisfaction in higher institutions of learning has gained popularity in recent years, and research findings have established that student ratings can be a reliable and valid indicator of effective teaching [5,7,8]. Despite this, faculty debate exists regarding the validity and reliability of student evaluations [9,10].
Nevertheless, the opinions and feedback of pharmacy students, as the recipients of the program, are important parameters for assessing the quality and efficacy of our Pharmacy Program. Hence, the objective of this study was to assess the levels of Omani students' satisfaction with the existing Pharmacy Program at the University of Nizwa.
Methods
The study included Pharmacy students who had attended at least one online course (N=96). Demographic data of the participants were the student's age, gender, years spent in the School of Pharmacy, and the student's status, either regular student (enrolled in the Diploma (D Pharm)/Bachelor (B Pharm) program) or bridging student (upgrading from Diploma to Bachelor) (Table 1).
Survey instrument
The survey instrument was a self-administered questionnaire developed in consultation with the literature, as well as with anecdotal information acquired from faculty members and students. In addition to participants' demographics, the questionnaire covered six criteria for the assessment of the Pharmacy Program: study plan, instructor, methods of teaching, practicum/training, online courses and, finally, a general question added to assess students' level of satisfaction with the overall Pharmacy Program. Each of these assessment criteria comprised four statements, except the study plan (three statements) and the overall Pharmacy Program (one statement).
We conducted a pilot study including 14 students (8 from class 4 and 6 from class 5) who represented all student statuses and had attended at least one online course. Thereafter, changes were made to improve the assessment statements. The project was reviewed and approved by the University of Nizwa research and ethics committee.
Data analysis
The collected demographic data of the participants and the six items of the Pharmacy Program were assessed using a four-point modified Likert tool: strongly satisfied (1), satisfied (2), unsatisfied (3) and strongly unsatisfied (4). Analyses were conducted using STATA® v13 (2013) [11] with descriptive and inferential statistics such as means and standard deviations.
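For readers wishing to reproduce this kind of summary, below is a minimal sketch in Python/pandas (the authors used STATA v13) of computing the means, standard deviations and percentage of satisfied responses (codes 1-2) for four-point Likert items; the column names and response values are hypothetical placeholders, not the study data.

```python
import pandas as pd

# Hypothetical four-point Likert responses
# (1 = strongly satisfied ... 4 = strongly unsatisfied).
responses = pd.DataFrame({
    "study_plan_q1": [1, 2, 2, 3, 2, 4, 2, 1],
    "study_plan_q2": [2, 2, 3, 3, 2, 2, 1, 2],
})

# Descriptive statistics per statement, as reported in the Results (M±SD).
summary = responses.agg(["mean", "std"]).T.rename(columns={"mean": "M", "std": "SD"})
# Percentage of respondents choosing a "satisfied" code (1 or 2).
summary["pct_satisfied"] = (responses.le(2).mean() * 100).round(1)
print(summary.round(2))
```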
Results
Data analysis revealed that the number of female participants was greater than the number of males, with females representing 85.4%. Ages ranged from 20 to 29, and the majority of participants (55.2%) were aged 20-23. The number of B Pharm students was greater than the number of D Pharm students, representing 82.3%. Fifth- and fourth-year students constituted 79.2% and 20.8% of the participants, respectively. Most students were regular students (87.5%); the rest were bridging students, who had joined the University to upgrade their status from D Pharm to B Pharm (Table 2). Table 3 lists the 20 statements that elicited the students' level of satisfaction. These statements represented the six aspects/items of the Pharmacy Program: study plan, instructors, methods of teaching, practicum courses/training, online courses and, finally, the overall Pharmacy Program at the University of Nizwa. The responses to these statements constitute the dependent variable of the study. Figure 1 illustrates the percentage of students' satisfaction with each assessment item.
Table 3 shows the attainment of satisfaction levels, with numbers, percentages, means and standard deviations, followed by the overall satisfaction level for each evaluation statement. The level of satisfaction with the Pharmacy Program study plan was 68.7%. Satisfaction was positive towards the arrangement of prerequisites (M±SD = 2.3±0.79) and the ease of following the study plan (M±SD = 2.3±0.74).
Students were satisfied with instructor teaching and assessment to a similar degree (68.7%). Students appeared to be most satisfied with the instructors' teaching material, which enhances their knowledge and skills (M±SD = 2.2±0.72).
Regarding the methods of teaching, the overall satisfaction was 67.7%. Lectures were presented in an understandable manner (M±SD = 2.3±0.82), and there was full integration between lectures and laboratory sessions (M±SD = 2.3±0.82).
Table 3: Attainment of satisfaction levels with numbers, percentages, means and standard deviations, followed by the overall satisfaction level for each evaluation statement (N=96). * Mean ± standard deviation (M±SD) represents satisfaction levels 1-4; 1 = strongly agree, 2 = agree, 3 = disagree, 4 = strongly disagree.
With practicum and training courses, the overall satisfaction level was 51.0%. The lack of synchronization between theory and practice during practicum courses was the most dissatisfying element (M±SD = 2.5±0.83), followed by insufficient arrangement of course logistics such as transportation and/or accommodation facilities (M±SD = 2.4±0.79). Study results showed clear dissatisfaction with the online courses; the reported average satisfaction level was 38.5%. Results revealed that online courses were not as informative as regular courses (M±SD = 2.9±0.86), online courses do not cover all contents of the subject (M±SD = 2.9±0.89), there is not sufficient time to interact with the instructor (M±SD = 2.5±0.95), and the online course discussion does not motivate students to read more (M±SD = 2.8±0.86).
Finally, students reported their satisfaction level with the overall Pharmacy Program at the University of Nizwa as 67.7% (M±SD = 2.3±0.69) (Table 3).
Discussion
The purpose of this study was to evaluate students' levels of satisfaction with the Pharmacy Program at the University of Nizwa. In this study, we assessed the learner-perceived compliance of the existing Pharmacy Program with learners' desires and demands. While promoting quality in pharmacy programs, higher education institutions consider student satisfaction one of the major principles of higher education: the higher the service quality, the more satisfied the student [12]. Accordingly, satisfaction is based on customer expectations and perceptions of service quality. Consequently, institutions have been paying more attention to meeting the expectations and needs of their students [9,13]. Therefore, this study assessed the satisfaction levels of students with a number of elements contributing to the existing Pharmacy Program at the University of Nizwa.
The students' level of satisfaction with the Pharmacy Program study plan was 68.7%. Students showed a high level of satisfaction with the way the study plan guides them to register for each semester (M±SD = 2.3±0.78) and with the clearly stated prerequisites and co-requisites for each course (M±SD = 2.3±0.79). This result is in line with another study assessing students' academic satisfaction conducted at King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, where the satisfaction level with the curriculum study plan was 84% [14]. Arbaugh highlighted the convenience and flexibility of an educational program's study plan as an important contributor to student satisfaction [15].
Our students were satisfied with the instructors to a similar degree (68.7%), and in this aspect they were most satisfied with the instructors' interaction style, which enhances students' knowledge and skills (M±SD = 2.2±0.72). A nine-year study by Tessema et al. [5] at a Midwestern United States university examined the factors affecting college students' satisfaction with their major curriculum. The findings of that study showed that quality of instruction has a statistically significant impact on student satisfaction. Some authors believe that both the quality and quantity of interaction are crucial for student satisfaction [16]. The majority of researchers believe that student ratings are a useful means of evaluating teaching [5,7,8]. However, some educators and researchers continue to believe that student ratings of teachers are poor tools for assessing teaching effectiveness [9,10].
Regarding the methods of teaching, the overall satisfaction was 67.7%. Sixty-three percent of the students agreed that there was full integration between lectures and laboratory sessions (M±SD = 2.3±0.82). Students (68.8%) also agreed that lectures were presented in an understandable manner. Although most of our students are Arabic speakers, the level of satisfaction with the statement "Learning material is presented in an understandable manner" was positive. This could be attributed to the considerable effort made by the instructors to address communication problems among the students, as 62.8% of the students agreed that they could clarify unclear issues during the lecture (M±SD = 2.3±0.82). As the teaching language at our University is English, language appeared to be a troublesome factor influencing communication between students and tutors.
Language barriers and cultural differences affect many of our students, who feel embarrassed when speaking English in front of others. This problem is especially common in group work [17].
The overall satisfaction level with practicum and training courses was 51.0%. The lack of synchronization between theory and practice during practicum courses was the most dissatisfying element (M±SD = 2.5±0.83), followed by insufficient arrangement of course logistics such as transportation and/or accommodation facilities (M±SD = 2.4±0.79).
Although the students were satisfied with the training period and its usefulness, they were not quite satisfied with the correspondence between what is studied and what is practiced. In a similar study conducted at the University of Jordan, the satisfaction level of students with their practicum experience was 49%. Students' concerns emphasized issues such as the connection between university courses and practicum requirements, field sites and supervision [18].
One notable finding in our study was the students' satisfaction level with the online courses, which was 38.5%. Students' responses reflected that online courses were not as informative as regular courses (M±SD = 2.9±0.86); online courses do not cover all content of the subject (M±SD = 2.9±0.89); there is not sufficient time to interact with the instructor (M±SD = 2.5±0.95); and online course discussion does not motivate students to read more (M±SD = 2.8±0.86).
A meta-analysis of studies comparing online education with traditional methods showed that students find online education as satisfactory as traditional classroom instruction [19]. Another study, by Sikora, showed that 70% of students enrolled in undergraduate courses reported being more satisfied with their online course experiences than with their traditional classroom experiences [20].
In our study, the students' satisfaction with online learning was generally negative (Table 1). Unlike in online courses, in didactic lectures the instructor usually explains difficult English words and terminology. This, among other factors, makes students prefer regular courses to online courses. All our students have to take the International English Language Testing System (IELTS) or Test of English as a Foreign Language (TOEFL) before being admitted to the University.
Nevertheless, language could still be a barrier for non-English-speaking students when they start a course. Some researchers argue that differences between language systems may make it difficult for non-English-speaking students to understand English, particularly when they transfer their learning abilities from their first language to a second language [21]. The last statement in this study assessed the students' satisfaction level with the overall Pharmacy Program at the University of Nizwa. Results revealed a satisfaction level of 67.7% (M±SD = 2.3±0.69). This result is modest compared with a similar study conducted at the Medway School of Pharmacy in Chatham, Kent, UK, where students' satisfaction with their pharmacy program was 97%, the highest of any other higher education institution in the UK [22].
Figure 1: Percentage of students' satisfaction with each assessment item.
"year": 2018,
"sha1": "94d9612accae0cc76c8900f509ef51093afa77ac",
"oa_license": "CCBY",
"oa_url": "https://www.graphyonline.com/archives/IJCPP/2018/IJCPP-136/article.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4573e73d5e0ffcb74f4453a798cafda491c3758",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Low-Intensity Physical Exercise Improves Pain Catastrophizing and Other Psychological and Physical Aspects in Women with Fibromyalgia: A Randomized Controlled Trial
Fibromyalgia (FM) is a chronic syndrome characterized by widespread pain and other physical and psychological features. In this study, we aimed to analyze the effect of a low-intensity physical exercise (PE) program, combining endurance training and coordination, on psychological aspects (i.e., pain catastrophizing, anxiety, depression, stress), pain perception (i.e., pain acceptance, pressure pain threshold (PPT)), quality of life, and physical conditioning (i.e., self-perceived functional capacity, endurance and functional capacity, power and velocity) in women with FM. For this purpose, a randomized controlled trial was carried out. Thirty-two women with FM were randomly allocated to a PE group (PEG, n = 16), performing an eight-week low-intensity PE program, and a control group (CG, n = 16). Pain catastrophizing, anxiety, depression, stress, pain acceptance, PPT, quality of life, self-perceived functional capacity, endurance and functional capacity, power, and velocity were assessed before and after the intervention. We observed a significant improvement in all studied variables in the PEG after the intervention (p < 0.05). In contrast, the CG showed no improvement in any variable and further displayed poorer values for PPT (p < 0.05). In conclusion, a low-intensity combined PE program, including endurance training and coordination, improves psychological variables, pain perception, quality of life, and physical conditioning in women with FM.
Introduction
Fibromyalgia (FM) is a chronic condition characterized by widespread pain associated with other physical symptoms, such as fatigue or decreased physical capacity, and psychological alterations [1].
One of the psychological alterations that has been associated with FM is pain catastrophizing, a specific psychosocial construct of pain that includes cognitive and emotional processing, a sense of helplessness, pessimism, and rumination about pain-related symptoms [2]. Pain catastrophizing has been associated with pain severity and disability [3] and is considered a risk factor for pain chronification [4]. Furthermore, this construct of pain has been shown to decrease pain acceptance, which, in turn, may aggravate the symptomatology of FM [5]. Pain acceptance is lower in FM patients [6], which has been linked to a higher degree of disability [7] and a lower quality of life [8].
In addition to pain catastrophizing, other psychological alterations that can aggravate the symptomatology of FM are anxiety and depression. These alterations, together with high levels of stress, have been proposed as precipitating and/or perpetuating factors of this condition [9] and are inversely related to quality of life among FM patients [10]. In this regard, it has been suggested that the higher the level of pain catastrophizing, anxiety, and depression in FM individuals, the greater their sensitivity to non-painful stimuli and difficulty in coping with the painful process [11].
Interestingly, pain catastrophizing, has also been inversely related to muscular endurance [12]. This tendency has proven to have a negative impact on neuromuscular, cardiovascular, immune, and neuroendocrine systems [13]. In turn, such an impact causes an alteration of functional capacity [4], which can be assessed both objectively and subjectively. An objective decline in physical conditioning has a detrimental effect on the ability to perform activities of daily life, but also the subjectively altered perception of functional capacity can lead to actual physical inactivity and a progressive deconditioning [14]. Physical deconditioning may negatively impact the individual's quality of life [15] and his/her professional performance, which leads to absenteeism [16].
Since a direct relationship between health care costs and severity of FM symptoms has been documented [17], implementing an effective therapeutic approach remains a paramount challenge for the medical community. Current FM management is usually based on pharmacological treatment, which, despite being equally effective as non-pharmacological therapy, has greater side effects and lower acceptance by FM patients [18]. One of the most promising and cost-effective non-pharmacological approaches is physical exercise (PE). Thus, a number of protocols have been proposed, such as aerobic [19-23], resistance [19,22,24-28], flexibility [24,26,28], combined [20,29-34], or other modalities [23,35,36], which have achieved improvements mainly in quality of life, pain, fitness, and depression. Overall, it has been suggested that a protocol including endurance and coordination would be the treatment of choice [37], with progressive workloads adapted to the individual's condition to promote adherence [38].
In this regard, to the best of our knowledge, no previous study has analyzed the impact of a low-intensity exercise program, combining endurance training (i.e., aerobic and resistance exercises aimed at improving endurance) and coordination, and adapted to the symptomatology of patients (i.e., individualized and progressive), on pain catastrophizing and other psychological variables such as pain acceptance or self-perceived functional capacity in women with FM. Given the previously mentioned deleterious effects of negative cognitions on FM symptoms, we hypothesized that a low-intensity PE program would improve catastrophism in women with FM, which would result in an improvement in other related psychological and physical variables. Thus, the aim of this study was to determine the effects of a low-intensity PE program, combining endurance training and coordination, on pain catastrophism in women with FM. Furthermore, we aimed to assess the effects of the proposed protocol on other psychological aspects (i.e., anxiety, depression, and stress), pain perception (i.e., pain acceptance and pressure pain threshold), quality of life, and physical conditioning (i.e., self-perceived functional capacity, endurance and functional capacity, power, and velocity) in women with FM.
Participants
Thirty-two women diagnosed with FM were recruited from several Fibromyalgia Associations from February to May 2019 to participate in this study. Inclusion criteria for the participants were: (i) women between 30 and 70 years old, an age range in which FM becomes more prevalent [39]; (ii) a diagnosis according to the 2016 American College of Rheumatology criteria for FM [40]; and (iii) having received pharmacological treatment for more than three months with no clinical improvement. Exclusion criteria were: (i) pregnancy or breast-feeding, (ii) any known advanced-stage pathology associated with the locomotor system that contraindicates physical activity (arthritis, osteoarthritis, uric acid), (iii) epilepsy, (iv) intake of drugs that reduce the seizure threshold, (v) history of intense headaches, (vi) neurological disorder, (vii) peripheral neuropathy, (viii) known serious cardiovascular disease (i.e., endocranial hypertension, uncontrolled arterial hypertension, heart failure, cardiac pacemaker), (ix) pneumothorax, (x) neoplasia, (xi) surgery in the last four months, (xii) diagnosis of alcohol addiction, and (xiii) use of psychoactive drugs or narcotics. Moreover, patients should not have been enrolled in any PE program in the two months before the study began.
Study Design
A randomized controlled trial was performed (NCT03801109). The participants were randomly allocated to two different groups using the simple randomization method with the Random Allocation Software [41] by an external assistant who was blinded to the study objectives: physical exercise group (PEG) (n = 16) and control group (CG) (n = 16). To analyze the effect of the interventions, two assessments were performed: one at baseline (T0) and another following the intervention (T1). The physical therapist performing the assessments was unaware of the group the patients had been assigned to. To reduce bias, participants were instructed not to tell the assessor about the treatment they received.
All enrolled participants provided informed written consent prior to entering the study. All procedures were conducted in accordance with the principles of the World Medical Association's Declaration of Helsinki and the protocols were approved by the Ethical Committee of the Universitat de València.
Sample Size Calculation
Sample size was calculated by accounting for two study groups measured twice and with reference to a previous study conducted by Koele et al. [42] in which pain catastrophizing was measured. Accordingly, an effect size of d = 0.72 was expected. Furthermore, a type I error of 5% and a type II error of 20% were set. This calculation rendered 14 volunteers per group. Ultimately, 32 women were included to prevent loss of power derived from potential dropouts. G-Power ® version 3.1 was used for sample size estimation (Institute for Experimental Psychology, University of Düsseldorf, Düsseldorf, Germany).
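As an illustration, a rough cross-check of this calculation is sketched below in Python using statsmodels. It approximates the two-group pre/post design as a paired t-test on the within-subject differences, so it only approximates the reported G*Power repeated-measures result of 14 participants per group; the exact routine differs.

```python
# Approximate re-check of the sample-size calculation.
# Assumption: paired t-test approximation of the repeated-measures design;
# the authors used G*Power 3.1 with d = 0.72, alpha = .05, power = .80.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.72, alpha=0.05, power=0.80)
print(f"approx. n per group: {n:.1f}")  # ~17, in the vicinity of the reported 14
```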
Intervention Procedures
As reported, the participants were allocated to two groups (i.e., PEG and CG) whose interventions are explained below. During each session, potential discomfort or adverse effects, such as severe muscle pain (i.e., ≥7.5) [43] and/or excessive fatigue (i.e., ≥5) [44], were recorded using a 10-point Visual Analogue Scale and Borg Scale of Perceived Exertion, respectively.
Low-Intensity Physical Exercise
Participants of this group were enrolled in a low-intensity PE program combining endurance training (i.e., aerobic and low-load resistance exercises aimed at improving endurance) and coordination, supervised by a physical therapist with expertise in therapeutic exercise. All training sessions were carried out at the same time of day and in the same room. The administered protocol included 16 sessions, which were performed twice a week (60 min each) for eight weeks [29]. The sessions were divided into two stages with the first (i.e., sessions 1 to 4) being devoted to the participants' adjustment and familiarization with the exercise, and the second (i.e., sessions 5 to 16) aimed at personalized strength and coordination training. In this regard, training intensity was adjusted by controlling the individual's self-perceived exertion using the Borg CR-10 scale [45] as explained below.
Each session was divided into three parts: warm-up, training, and cool-down. (i) The warm-up consisted of walking at a slow pace and moving the main joint structures (neck, shoulders, elbows, wrists, hips, knees, and ankles) within the patient's range of motion. (ii) The training stage is explained below. (iii) The cool-down consisted of walking at a slow pace, overall trunk stretching, and breathing deeply while lying on the floor.
Training in the first stage (sessions 1 to 4) consisted of walking at a comfortable speed for 15 min, performing a 10-exercise circuit for 25 min, and cooling down for 20 min. Exercises were conducted using 1-kg dumbbells and weights at a velocity determined by a metronome set at 60 beats per minute. To ensure a weak or very weak perceived effort (i.e., 1-2 categories on the CR-10 Borg) [44,45], the perceived exertion was registered after each session and the work load was individually adjusted for the next session.
In the second stage (5th to 16th session), after a 10-min warm-up, the participants performed as many repetitions as possible in 1 min of each exercise of the 10-exercise circuit for 40 min, reporting, in this case, a perceived effort of 3-4 on the Borg scale, to ensure a moderate effort [44,45]. After this, they cooled down for 10 min. Table 1 shows the 10-exercise circuit for both stages 1 and 2. The work load varied depending on the participant, since they were allowed to adapt the exercise according to their self-perceived pain or exertion each day [1]. However, the number of repetitions always ranged between 15 and 25, in line with the recommendations for improving muscle endurance in the 2014 guide for the prescription of physical exercise of the American College of Sports Medicine [38]. The combined aerobic and resistance training exercises aimed to work on endurance and coordination. Aerobic exercises included walking and moving the main joint structures, as explained previously. Low-load resistance training was oriented to the strengthening of the upper and lower limbs using dumbbells/weights, with loads ranging between 0.5 and 2 kg for the upper limbs and between 1 and 3 kg for the lower limbs, based on the Borg scale scoring. A soft elastic band was also used for limb and trunk training and coordination exercises, as described in Table 1. Coordination exercises included standing calf raises, sitting down and standing up from a chair, stepping up and down, and throwing a ball into the air.

Table 1. 10-exercise circuit included in the physical exercise group protocol.
1. Preacher curl while standing, palms facing forward
2. Leg extension while seated by lifting a sandbell
3. Bilateral dumbbell front raise while standing
4. Standing hip abduction with a soft elastic band
5. Chest lateral pull-ups while standing
6. Dumbbell shoulder external and internal rotation while standing
7. Sitting down and standing up from a chair without using arms
8. Throwing a ball above the head and catching it
9. Standing calf raise
10. Low step-ups
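To make the individual load adjustment described above concrete, the sketch below encodes one possible rule in Python: if the reported Borg CR-10 rating leaves the stage's target band (1-2 in stage one, 3-4 in stage two), the next session's load moves by a fixed step within the stated 0.5-2 kg upper-limb range. The 0.5 kg step and the thresholding rule are illustrative assumptions, not the authors' stated procedure.

```python
def adjust_load(current_kg, borg_rating, target=(1, 2),
                step=0.5, lo=0.5, hi=2.0):
    """Suggest the next session's upper-limb dumbbell load in kg
    (assumed rule, for illustration only)."""
    if borg_rating > target[1]:      # exertion above the target band -> lighten
        current_kg -= step
    elif borg_rating < target[0]:    # exertion below the band -> add load
        current_kg += step
    return min(max(current_kg, lo), hi)  # clamp to the stated 0.5-2 kg range

print(adjust_load(1.0, borg_rating=4))                 # stage 1 band (1-2): -> 0.5
print(adjust_load(1.0, borg_rating=2, target=(3, 4)))  # stage 2 band (3-4): -> 1.5
```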
Control Group
The participants assigned to this group received no intervention and were asked to perform their daily routines, while both groups continued to take their usual medication. To ensure that no participant undertook intense physical activity and should, therefore, be excluded from the analysis, a logbook was used to record the type of physical activity undertaken (domestic or recreational) and the approximate number of hours per week. The time elapsed between the first assessment and reevaluation was eight weeks for both groups.
Assessments
As discussed above, assessments were conducted twice: once at baseline and again at week nine, following completion of the eight-week intervention. The following variables were assessed.
Pain Catastrophizing
Pain catastrophizing was measured with the validated Spanish version of the Pain Catastrophizing Scale (PCS) for people with FM. This is a self-administered scale consisting of 13 items, each scored from 0 ("not at all") to 4 ("all the time"). It comprises three dimensions: (i) rumination, (ii) magnification, and (iii) helplessness. A total score is yielded (ranging from 0 to 52), whereby higher scores represent greater pain catastrophizing. The reliability of the scale is excellent (ICC = 0.94) [46].
Anxiety
Anxiety was measured with the validated Spanish version of the Hospital Anxiety and Depression Scale (HADS), specifically its anxiety subscale. This subscale consists of seven items, each scored from 0 to 3. A total score of more than 10 points indicates anxiety, a score of 8-10 represents a borderline case, and a score of less than 8 points represents no significant anxiety [47]. It has shown excellent reliability (ICC = 0.85) [48].
Depression
Depression was evaluated with the validated Spanish version of the Beck Depression Inventory-Second Edition (BDI-II) [49], a widely used 21-item self-report inventory that has proven highly accurate for measuring the severity of depression in patients with chronic pain [50,51]. Each of the 21 items is scored from 0 to 3, giving a maximum total score of 63 points. A score of 0-13 points indicates minimal depression, 14-19 points mild depression, 20-28 points moderate depression, and 29 or more points severe depression [49]. It has shown good reliability (ICC between 0.73 and 0.86) [52].
Stress
The Perceived Stress Scale-10 (PSS-10), which was validated for the Spanish population and whose reliability has been proven to be excellent (ICC = 0.82), was used for the stress assessment. It is a self-report instrument with 10 items that evaluate the level of perceived stress during the last month with a 5-point response scale (0 = never, 1 = almost never, 2 = sometimes, 3 = fairly often, 4 = very often). Higher scores indicate a higher perceived stress [53].
Perception of Pain
The perception of pain was measured using two approaches, which include pain acceptance and pressure pain threshold.
Pain acceptance was evaluated by the Spanish adapted version of the Chronic Pain Acceptance Questionnaire for patients with FM (CPAQ-FM) [54], a 15-item self-administered inventory measuring the acceptance of pain. The items are rated on a 7-point scale from 0 (never true) to 6 (always true). Higher scores indicate higher levels of acceptance. This tool has shown good internal consistency and reliability (Cronbach's α: 0.78).
The pressure pain threshold (PPT) was assessed using an algometer (WAGNER Force Dial TM FDK 20/FDN 100 Series Push Pull Force Gage, Greenwich, CT, USA) at each of the 18 tender points used to diagnose FM [55]. First, the presence and location of the tender points were confirmed via palpation and pen-marked by an experienced physiotherapist. The pressure threshold was then measured by applying the algometer directly to the tender point, with the axis of the shaft maintained at 90° relative to the examining surface. The area of the algometer tip was 1 cm², and the pressure values were reported in kg/cm². The subjects were instructed to verbally report when pain or discomfort was first felt. The procedure used has excellent intra-observer reliability [56]. The average of the measured PPTs was used for subsequent analyses [24].
Quality of Life
Quality of life was assessed with the Spanish validated version of the Revised Fibromyalgia Impact Questionnaire (FIQR). This is a multidimensional self-administered questionnaire with 21 items divided into three domains: (i) physical function, (ii) overall impact, and (iii) severity of symptoms. Each item is evaluated on an 11-point numeric rating scale from 0 to 10, with 10 being the worst. The summed score for physical function (range 0 to 90) is divided by 3, the summed score for overall impact (range 0 to 20) is not modified, and the summed score for symptoms (range 0 to 100) is divided by 2. The total FIQR score is the sum of these three domain scores. It has excellent reliability (ICC = 0.82) [57].
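As an illustration, the scoring rule just described can be written out as a short function. A minimal Python sketch follows, in which the item ratings are hypothetical and only the domain weights (physical function divided by 3, symptoms divided by 2) come from the questionnaire description above.

```python
def fiqr_total(physical_function, overall_impact, symptoms):
    """Total FIQR score from the three domain item lists (each item rated 0-10).

    physical_function: 9 items (summed 0-90, divided by 3)
    overall_impact:    2 items (summed 0-20, unmodified)
    symptoms:          10 items (summed 0-100, divided by 2)
    """
    return sum(physical_function) / 3 + sum(overall_impact) + sum(symptoms) / 2

# Hypothetical ratings: 15.0 + 13 + 20.0 = 48.0 (possible range 0-100).
print(fiqr_total([5] * 9, [6, 7], [4] * 10))
```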
Physical Conditioning
We assessed both the subjective and the objective physical conditioning. To assess the subjective physical conditioning, we evaluated the self-perceived functional capacity. The objective physical conditioning was determined by evaluating endurance and functional capacity, power, and velocity, as described below.
1. Self-perceived functional capacity was assessed based on the "Physical Function" subscale of the FIQR (FIQR-PF). This subscale consists of nine items assessing the self-perceived ability to perform daily living activities (e.g., walk for 20 min, climb one flight of stairs). The maximal score is 30; higher scores point to a poorer perception of physical function. It has shown good reliability (ICC = 0.73) [57].
2. Endurance and functional capacity were assessed by the six-minute walk test (6MWT). Participants walked down a 15-m long hallway for a total of six minutes. Any contraindications were checked before the test started, and heart rate, oxygen level, and Borg rate of perceived fatigue were recorded in addition to the main variable, i.e., the distance walked. Patients were allowed to take as many standing rests as necessary, but the timer kept going. The instructions given to the patients were: "Walk to the turnaround point at each end. I am going to use this counter to keep track of the laps you complete. You may stand and rest, but you should walk as fast as you are able. Remember that the aim is to walk as far as possible, but do not run." This test has shown excellent reliability (ICC = 0.91) [58].
3. Power was evaluated by the five-repetition sit-to-stand test (5STST), which consists of sitting down and standing up from an armless chair (43 cm high) five times as quickly as possible. Participants, with arms crossed over their chest, were instructed to stand up completely and make firm contact when sitting. Timing began at the command "ready-steady-go" and stopped when they sat down after the fifth stand-up [59]. This test has shown excellent reliability in adult women (ICC = 0.92) [60].
4. Velocity was assessed by the Four-Meter Gait Speed Test (4mGST), which consists of walking a distance of 4 m at the usual pace. In addition to assessing walking speed, this test allows the risk of disability to be estimated for a given individual [61]. Both the test-retest and the inter-rater reliability have been shown to be excellent (ICC = 0.89-0.99 and ICC = 0.97, respectively) [62].
Statistics
All statistical analyses were performed with SPSS v.24 (IBM SPSS, Inc., Chicago, IL, USA). Standard statistical methods were used to obtain the mean and standard deviation (SD). Inferential analyses of the data were performed using two-way mixed multivariate analysis of variance (MANOVA) with an inter-subject factor called "group" having two categories (PEG and CG), and a within-subject factor called "treatment" having two categories (T0 and T1). Post-hoc analysis was conducted using the Bonferroni correction provided by the statistics package used, and the effect size was calculated using Cohen's d. We also compared age, weight, height, and level of pain between groups using a one-way ANOVA to ensure that the two groups were similar at baseline. The normality and homoscedasticity assumptions were checked by Shapiro-Wilk and Levene tests, respectively. Type I error was established as < 5% (p < 0.05).
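For readers wishing to reproduce this analysis outside SPSS, the following is a minimal sketch of a two-way mixed ANOVA for one outcome using the Python package pingouin, with hypothetical long-format data; the Bonferroni-corrected post-hoc comparisons and Cohen's d would be computed per variable in the same way. Column names and values are placeholders, not the study data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per time point.
df = pd.DataFrame({
    "participant": list(range(8)) * 2,
    "group": (["PEG"] * 4 + ["CG"] * 4) * 2,
    "time": ["T0"] * 8 + ["T1"] * 8,
    "pcs": [40, 38, 42, 39, 41, 40, 38, 43,    # baseline scores
            32, 30, 35, 31, 40, 41, 39, 42],   # post-intervention scores
})

# Two-way mixed ANOVA: "group" between subjects, "time" within subjects.
aov = pg.mixed_anova(data=df, dv="pcs", within="time",
                     between="group", subject="participant")
print(aov.round(3))

# Within-group effect size for the exercise group (paired Cohen's d).
peg = df[df["group"] == "PEG"]
d = pg.compute_effsize(peg.loc[peg["time"] == "T0", "pcs"],
                       peg.loc[peg["time"] == "T1", "pcs"],
                       paired=True, eftype="cohen")
print(f"PEG pre/post Cohen's d: {d:.2f}")
```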
Participants
Thirty-six subjects were assessed for eligibility. Two failed to meet inclusion criteria and two declined to participate. Therefore, 32 participants were included and then randomized (16 in PEG and 16 in CG) (Figure 1). The mean (SD) age for the participants was 53.06 (8.4) years for the PEG and 55.13 (7.35) years for the CG, weight, 70.35 (18.02) kg for the PEG, and 72.29 (13.94) kg for the CG, and height, 159.25 (6.2) cm for the PEG, and 160.38 (6.44) cm for the CG. There were no statistically significant differences in age, weight, height, and level of pain between groups (p > 0.05, data not shown). No incidents were reported at any point in time.
Intervention Effects
The significant differences and the effect sizes between the pre-treatment and post-treatment assessments (T0 and T1, respectively) for both groups and each variable are shown in Tables 2 and 3, as well as the differences between groups for each assessed variable. In the tables, data are expressed as mean (SD); d: Cohen's d effect size, reported only when the differences between time points were significant; *: p < 0.05.
As shown in Table 2, all the psychological constructs assessed (i.e., pain catastrophizing, anxiety, stress, and depression) significantly improved in the physical exercise group (PEG) after the intervention, with improvements of 7.31, 1.87, 2.43, and 7.32 points, respectively. Statistically significant improvements were also observed in the PEG for pain perception, both in pain acceptance, with an increase of 4.94 points, and in the average PPT, with a mean increase of 0.32 kg/cm². Lastly, the PEG also improved significantly by 9.98 points in quality of life. By contrast, the CG failed to improve in any of the analyzed variables and further exhibited a significantly poorer average PPT, with an average decrease of 0.25 kg/cm².
In terms of the effect of the interventions on physical conditioning, as noted in Table 3, participants belonging to the PEG experienced a statistically significant improvement after the intervention. They improved their self-perceived functional capacity, as indicated by an improvement of 3.14 points in the FIQR-PF mean score. They also improved their endurance and functional capacity, increasing the average distance walked in the 6MWT by 32 m. Furthermore, they improved their power and velocity, as reflected by reductions of 6.85 and 0.49 s in the 5STST and 4mGST times, respectively. Regarding the CG, no statistically significant differences were observed in any of the previously mentioned variables.
Discussion
This study shows that a low-impact PE protocol combining endurance training (i.e., aerobic and resistance training aimed at improving endurance) and coordination is effective for improving psychological features (i.e., pain catastrophizing, anxiety, depression, stress), pain perception (i.e., pain acceptance and pressure pain threshold), quality of life, and physical conditioning (i.e., self-perceived functional capacity, endurance and functional capacity, power, and velocity) in women with FM.
Pain catastrophizing refers to a set of exaggerated and ruminating negative cognitions and emotions during perceived or actual painful stimulation [2] and has been linked with adverse pain-related outcomes and FM-related disability [3]. PE has been posited as one of the most effective strategies to distract attention from pain [63] and reduce negative thoughts about pain, especially rumination [64]. In this regard, we observed a significant decrease in pain catastrophizing scores after the PE intervention. In line with these results, previous studies using PE alone or in combination with psychological/cognitive techniques reported beneficial effects on pain catastrophizing in people with FM or chronic pain. These include the study by Lazaridou et al. [35], in which a combined physical and psychological therapy (i.e., yoga) was used; that by Casey et al. [65], who applied PE combined with Acceptance and Commitment Therapy; and that by Smeets et al. [66], who combined aerobic exercise, mainly in water, with cognitive-behavioral treatment. These results suggest that psychological and/or physical techniques, either alone or in combination, may be beneficial for improving catastrophism in patients with chronic pain. However, the previously mentioned studies used standard PE programs without taking into account a potential aggravation of symptoms experienced by women with FM (i.e., fatigue), which has been posited as the main cause of low adherence to PE programs [38]. Our study reports that a customized low-impact PE program, adapted to the individual's self-perception of fatigue, is effective in improving pain catastrophizing. Conversely, no significant changes were observed in the CG.
This positive finding related to pain catastrophism was further confirmed by a significantly lower perceived pain, as indicated by higher pain acceptance and PPT values. Regarding pain acceptance, it has been associated with enhanced physical functioning in chronic pain patients. Likewise, the improved PPT may be due to a better physical conditioning [67,68], which, in turn, may lead to better pain acceptance [69]. Few authors have reported improvements in PPT after exercise programs [31,32] while using long-term interventions (i.e., 12-24 weeks), aquatic exercise, or psychological therapy. Therefore, their results are not entirely comparable. By contrast, CG subjects showed significantly poorer values for pain perception, as measured with an algometer, which may be due to the progressive physical deconditioning of these patients [67,68].
With regard to the other psychological variables analyzed (anxiety, depression, and stress), all of them significantly improved in the PEG. Improvements in anxiety may be due to the well-documented role of PE as a specific anxiety modulator [70]. In addition, anxiety has a direct relationship with pain acceptance [71], which, as discussed above, also improved in PEG. Some authors have documented the beneficial effects of PE on anxiety in people with FM [21,26,34]. The only study that analyzed the effect of a combined aerobic and resistance exercise protocol on anxiety reported a greater reduction than that obtained in our study (i.e., 41% compared to our 15%), which may be due to the well-known relaxing effects of warm water [34]. With regard to depression, we found positive results following the PE intervention with a similar [29,34] or even higher [20] reduction than that obtained in previous studies using combined aerobic and resistance PE protocols. This may be due to the release of neurotrophins triggered by PE, such as the brain-derived neurotrophic factor, as people with depression tend to display lower levels of this biomarker than their healthy counterparts, while PE induces its increase [72]. Lastly, the lowered stress levels observed in the current study suggests that PE could be a helpful approach to coping with stress, while also promoting stress resistance in women [64]. Previous studies have also concluded that moderate aerobic exercise [73] can reduce stress levels in people with FM, especially when working out in group settings, due to social interaction [74]. By contrast, we observed no improvements in the CG in any of the analyzed psychological variables. Overall, these results suggest that a combined low-intensity PE program, adapted to the individual's symptoms, is effective in relieving anxiety, depression, and stress in women with FM.
As noted above, quality of life is impaired in people with FM [15]. Our PE protocol induced improvements in all the analyzed psychological constructs as well as in pain perception, which may have contributed to improving quality of life [75]. Many studies have shown that PE improves quality of life in the FM population, either through aerobic [20,23], resistance [19,26,37], and flexibility [24,26] exercises, protocols combining aerobic and resistance training [20], and specific modalities such as Tai-Chi [23]. However, such authors failed to include coordination exercises, which have been shown to challenge the sensory, cognitive, and musculoskeletal systems, and, thus, improve quality of life in older adults [76]. Yet, it has never before been implemented in women with FM. Thus, our results suggest that our PE protocol may be a useful tool to improve quality of life in women with fibromyalgia. In this regard, it would be interesting to apply the proposed exercise protocol on an ongoing basis, as it has been shown that long-term physical exercise positively affects quality of life in people with FM [77].
All variables related to subjective (i.e., self-perceived functional capacity) and objective (i.e., endurance and functional capacity, power, and velocity) physical conditioning improved significantly in the PEG, but not in the CG. This is of importance since both subjective and objective physical function have been shown to be markedly impaired in women with FM, the former to a greater extent than the latter [14]. Our positive results on subjective physical conditioning are noteworthy, since people with fibromyalgia who feel that they are unable to perform daily physical activities may avoid performing such activities and participating in therapeutic PE programs, which, in turn, may lead to objective physical deconditioning [14]. We evaluated objective physical conditioning by means of the 6MWT, which is an inexpensive, relatively quick, safe, and well-tolerated technique for the prediction of VO2max [78] and may be considered an indirect measure of cardiorespiratory fitness or maximal aerobic power in this population. Furthermore, the 5STST was chosen because it requires not only lower limb strength and power but also good coordination and balance, and therefore covers several important components of physical function [59,60]. Lastly, we assessed the 4mGST, since low gait speed has been shown to be one of the main factors contributing to sarcopenia and, ultimately, to frailty [79]. Although the latter two variables have mainly been studied in older adults, they were used in the present study because women with FM have been shown to display early aging and lower physical abilities compared with their age-matched healthy counterparts, resembling healthy senior adults [80]. Our improvements in objective physical conditioning are in line with those reported by several authors following the implementation of different types of exercise, such as aerobic [22] or resistance exercises [22,25], or combined training (aerobic, resistance, flexibility, and patient education) [30].
Lastly, as pointed out before, lack of adherence seems to be typical in FM patients, which could be due to post-exercise soreness. The average adherence in reference studies was 85%, whereas adherence in our study was 100%. This may be due to the customized protocol we applied, which was duly tailored to each patient's symptoms. The authors of the present study strongly believe that therapies aimed at FM patients should encourage participation by focusing on protocols with individualized work-loads, rather than relying on standard protocols.
Limitations
The main limitation of the current study may be the small sample size. However, an a priori power analysis indicated that our sample size was sufficient. Future studies should confirm our findings in a larger population. However, therapeutic PE interventions should always be implemented in small groups in order to ensure proper performance of exercises, compliance with the protocol and, where necessary, an individualized correction of errors. Another limitation may be the fact that women were recruited from Fibromyalgia Associations, and, therefore, may present a different behavior than other FM patients. Regarding the protocol, a longer exercise program might have led to better results (i.e., differences between groups), and we did not perform any follow-up measurements to verify if the PE-induced benefits lasted in time. Lastly, since most FM patients are women, the current study was performed on women only, so this may bias the findings, which cannot be extrapolated to the general population.
Conclusions
The results obtained from this study show that a combined low-intensity PE program, including endurance training and coordination, improves pain catastrophizing in women with FM. Furthermore, the proposed protocol improves other psychological variables (i.e., anxiety, depression, and stress), perceived pain, quality of life, and physical conditioning in women with FM.
"year": 2020,
"sha1": "710ecefd24542f264976885b9053cef65c2f85d9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/10/3634/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c8a2368d155d66431a2a88bf2d05243c8625981",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A CULTURE OF WELLBEING: WHY WE MUST PUT POSITIVE MENTAL HEALTH AT THE HEART OF OUR SCHOOLS
For young people, a key priority when rethinking education is considering how education affects our mental health. It will come as no surprise that young people are facing a mental health epidemic, and the education system has become a driving factor in this. Recent policies have fuelled toxic cultures in our schools, which glorify burnout and stigmatise those who rest as scroungers, whilst long-standing, paradigmatic problems have persisted. Too many young people try to learn in these fear-driven cultures each day – and this was my reality at school. After growing significantly aggrieved, I took action to ensure my school implemented what is called a ‘culture of well-being’ – one wherein rest is held in equal regard to work. One wherein the positive well-being of all is actively promoted – for it is recognised that positive well-being is an essential prerequisite to learning. The culture of well-being has created positive change to the realities experienced by young people on the ground – as well as for the whole-school community. In this article, I will introduce the culture of well-being, explain how to implement it in practice, and amplify the plea of young people for education to work with, not against, their mental health.
INTRODUCTION
Young people are facing a mental health epidemic. A survey from NHS Digital (2017) found that 1 in 9 children had a probable mental health condition that year. The Good Childhood Report (2020) reviewed evidence from previous instalments, which demonstrated a mostly consistent decrease in children's happiness with life and school since 2009. The Jacobs Foundation's Children's Worlds Report (2020) compared children's subjective well-being and satisfaction with school across international boundaries and found that children's satisfaction with school is poor in England compared with other nations.
Young people assert that the education system - and, within it, the exam system and standardised assessments - are key factors underpinning these findings, according to the mental health journal States of Mind (2020). More recent, youth-facilitated research from the British Youth Council (2022) found that young people feel that the education system (again, specifically exams) was one of the most significant factors adversely affecting their general health and well-being.
It is worthwhile establishing, however, that it is not exclusively young people who are struggling in schools. Teachers, too, are presently under more pressure than they ever have been before, and the disillusionment this is generating amongst the education workforce is threatening to grind the system to a screeching halt. Education Support's Teacher Wellbeing Index (2020) shows that over half of teachers are considering leaving the profession due to the damage it is doing to their mental health.
There is significant concern growing amongst school leaders for the well-being of young people. It is evident that exams have always been a driving force behind poor mental health among young people, but recent reforms have exacerbated this problem. In 2015, the UK Government reformed the General Certificate of Secondary Education (GCSE) to make it more rigorous, increasing both the complexity of content and the difficulty of assessment.
Coursework was discarded in favour of exclusively examination-based assessment taken at the end of a two- or three-year course, as opposed to modular assessment throughout. This was in order "to end a culture of low expectations" (The Guardian, 2013).
However, the rigid, high-pressure approach pursued by these assessments has fuelled the mental health crisis among young people. The Association of School and College Leaders (ASCL, 2018) polled its membership on the effect of the new GCSEs on the mental health of their pupils. Of the 606 leaders surveyed, 546 (90%) said they had knowledge that the new qualifications had caused greater stress and anxiety than previous incarnations. Figure 1 summarises the specific effects of the new qualifications on young people's mental health, showing the percentage of those 546 school leaders who testified that the new qualifications were having each effect on their students. The ASCL also stated that the new GCSEs were having an adverse effect on staff well-being as well, since staff were on the frontline of helping young people to cope with the poor mental health caused by these new qualifications, increasing the emotional burden of the work they do. The qualifications also added to staff workload by demanding time to adjust to the new specifications and grading systems.
Yet these kinds of reforms are not exclusive to the GCSEs. Over the past 12 years, we've seen greater rigour injected into all stages of education. Similarly to the GCSEs, A-Levels are no longer modular, with everything now resting on a final examination series taken at the end of the second year.
The National Curriculum assessment (colloquially referred to as the Standard Attainment Tests, or SATs) has also been subject to similar reforms. Here, in a primary school setting, the greater demands placed upon the whole-school community have sparked even more concern and controversy than at later stages due to the younger age of the children affected and the inherent infringement of children's leisure time caused by the greater need for test revision and preparation. Research has suggested that SATs, too, are having an adverse effect on the well-being of young people and teaching staff. Bradbury (2019) surveyed 297 headteachers and interviewed 20 from schools across England. It revealed that 99% of the survey respondents agreed that "SATs put pressure on teachers", and 92% agreed that "SATs have a negative impact on teachers' well-being." Regarding the impact on children and young people, 83% of heads agreed with the statement "SATs have a negative impact on pupils' wellbeing".
High-pressure policies lead to high-pressure cultures on the ground at all stages of education. Gill & Gergen (2020) illustrate how children are being led to believe that success is only achievable through accepting the "tyranny of testing". All of this conveys to children and young people that high-stakes testing is the ultimate determinant of success in life, and that the failure to reach the desired standard at a particular point can have devastating consequences for life chances.
I will now take some time to share with you my own, subjective lived experience of the education system, which broadly reinforces the findings of all the literature cited hitherto. Although it admittedly diverts from the rather nomothetic nature of this introduction, it is important to consider such experience, as its qualitative nature conveys the humanity within a particular issue (in a way that is not always possible using quantitative data) and thus illustrates the real impact of a problem on the ground. It also highlights issues that may previously have been overlooked, as policy has been shaped almost exclusively by learned experience - that acquired through research, reading and second-hand information gathering. In my case, my lived experience of school was also, so to speak, the "procedure" which led me to my "findings and conclusions" - the culture of well-being - that will be explained later. The consideration of lived experience is crucial to formulating unique, effective policies and solutions. Otherwise, we risk following the same procedure repeatedly and expecting different results, which, as Einstein is believed to have claimed, is the very definition of insanity.
MY STORY
I plodded along through most of my education with an attitude of indifference. I was never one to enjoy school, yet I never questioned my circumstances. I simply did what was required of me by law.
However, when I was at Secondary School, there were several occurrences that upset the balance, significantly harming my mental health. A major factor that had profound consequences on my mental health concerned interpersonal conflict and a lack of friends. I had been quite reclusive hitherto, but when I reached Year 8 (my second year of secondary school), my friendship circle expanded, and I became part of a solid group of four boys. I also enjoyed success on the romantic front. I valued these relationships deeply but became paranoid that my friends did not value me in the same way I valued them. I worried that they were going out and doing all sorts of things outside of school without me and that this, therefore, reflected an unspoken fact that I was not as much a part of the group as they were.
This climaxed in an exchange of hurtful words and actions between myself and the other boys. The consequences of this saw me dismissed from our friendship group, and my fears had now manifested themselves in reality. I really was not part of this group anymore. Having been so emotionally attached to these relationships, the sudden loss of them caused my mental health to rapidly decline.
This incident was a pivotal moment, as it instigated three years of manipulation, mind games, and general animosity towards one another. Alliances fluctuated, but it would always conclude with me being isolated again whilst everyone else made amends. Unlike my peers, I did not have other friends to fall back on for support when things went wrong.
I often found myself on the receiving end of various forms of abuse from some of these people, namely death threats and sexual abuse. The very nature of our education system often traps young people in forced association with people who wish to do them harm (and, in many cases, are actively causing them harm), perpetuating the damage and dragging it out.
Around the same time, my younger brother was preparing to take his Year 6 SATs. Although only two years had elapsed since I had taken mine, the UK Government had reformed the SATs quite significantly in that time (as outlined in the introduction). In stark contrast to the kinds of questions I was expected to answer, 10- and 11-year-olds were suddenly expected to know the answers to questions such as "explain what the past progressive tense is", "differentiate between a subordinating conjunction and a coordinating conjunction" and "set out the definition of a modal verb." Even at my present age, I would not be capable of answering such questions, but it would have been especially difficult at age 11.
The greater rigour necessitated greater sacrifice in terms of revision & study time, denying these young people the innate liberty of play. In reality, this translated into children spending hours trapped inside, poring over an array of worksheets and practice papers, which, as previously noted, was a dramatic contrast with my own experience of the same series of exams two years prior - for which I did no preparation at all (I spent all my time outside of school playing) yet still secured good results.
My brother would suffer meltdowns whilst navigating through seemingly endless sheets of paper. I recall all too vividly the sounds of his tears, screams, and anger from those terrible weeks. They still echo in the back of my mind when I think, speak, and write about them now.
When the examinations formally commenced nationwide in May 2016, I quite clearly remember coming home from school to find that the tests had been nothing short of a disaster across the country. News reports were emerging of children breaking down in tears during the tests, and of teachers crying too, in their concern for the children (ITV News, 2016; Ward, 2016; Sculthorpe & Joseph, 2016; Rosen, 2016; Zatat, 2017; Gibbons, 2020).
In my sceptical teenage mind, I interpreted this as a deliberate, calculated action inflicted upon children by politicians - who consciously took the decision to do this, knowing full well what would happen. I concluded it was morally wrong and took a vow to take action in order to get justice for my brother and his peers.
But before I could meaningfully contemplate the educational revolution I had just sworn to achieve for my brother, I came to the realisation that these reforms were not exclusive to the SATs. The GCSE, which I was starting that September, had also been subject to similar reforms. During a particularly difficult class one day in June 2016, we endured a rant about how our exams, too, were now tougher, and that the significant efforts and sacrifices invested by previous cohorts would not suffice from us.
That was a significant moment for me. Immediately, my mind began to race as I feared that I, too, would suffer the same fate as my brother just had with his Year 6 SATs - possibly worse! Having seen what my brother went through, my mind produced distressing mental images of myself drowning under piles of paper, weeping. I realised that what I interpreted as a deliberate assault on young people's mental health was marching on. In my mind, my personal assessment of the situation was this - the Government had just attacked my brother and his cohort, and now they were coming after me. I felt that my fight against the system had just become a fight for self-preservation.
Going into Year 9, the alarmist culture I witnessed that day firmly embedded itself into my school. Significantly more apparent than it had been prior to the exam reforms, young people were being constantly bombarded with dark and dreary prognostications exaggerating the importance of GCSEs. We were told that, if we were unsuccessful in our GCSEs, we would be, and I quote, "on Tesco brand bread and beans…unable to provide for your families" and that our families and friends would abandon us, writing us off as "failures". We were told the only way to avoid this was to consistently sacrifice time in devotion to securing our GCSEs, leading me, at least, to feel like a failure if I dared to take time for myself. It was then prophesied that we would (and should - as it was made out as though there was no alternative to absolute glory) achieve extraordinary things, and bring to ourselves the laurels of fame. Ordinary accomplishments and occupations were devalued. The consequence of this was that I felt both my prospects narrow and insurmountable pressure being heaped onto me. This was ultimately maladaptive in the endeavour to pass my GCSEs as, whenever I tried to study, my mind would obsessively pore over the prophecies and (what I had been told was) the scale of the task ahead. This generated anxiety that prevented me from concentrating. My grades began to suffer as a result, ultimately meaning that these prophecies were becoming self-fulfilling.
But the effect of this culture on my grades was not my primary concern. The effect this had on my mental health grossly superseded that in importance. I began to exhibit symptoms of Obsessive Compulsive Disorder (OCD) and panic disorder. The OCD occasionally deprived me of the ability to walk straight. One night, whilst out in public, I ended up walking backwards because I had felt a compulsion to do so, weeping as I went, which was a horrendously demeaning experience. Another occurrence saw me put myself in a situation that was physically dangerous.
I experienced panic attacks for the first time ever, having absolutely no idea what was happening when I suddenly started shaking and feeling nauseous at half past 11 at night. My mother had to help me to the toilet as we thought I was going to be sick - I was literally paralysed by fear.
I knew full well that this had come about due to the stress caused by school. I initially displayed OCD symptoms during a particularly stressful series of end-of-year exams in Year 9, where I followed the school's advice regarding hard study time, only to end up with mediocre grades and even worse mental health, which eventually had physical consequences. The panic attacks started after we were given only two weeks' notice of the same end-of-year exam series in Year 10, which took place against the backdrop of a significant deterioration of the situation on the social front. Due to all of this, I came to interpret education as a threat - a threat to both my mental health and well-being. Due to the OCD, I interpreted it as a threat to my physical safety as well.
A CULTURE OF WELL-BEING: WHY WE MUST PUT POSITIVE MENTAL HEALTH AT THE HEART OF OUR SCHOOLS
Although I have now thankfully overcome many of these adversities - through a combination of individualised coping strategies (namely meditation) and activism to improve the situation more broadly - I am still haunted by the spectre of trauma. If someone were now to describe themselves to me as enduring a similar experience to what I had endured, I would most probably experience a trauma response that would take me straight back to the night of that first panic attack. It plagues me to this day. I have been left permanently scarred, never to be fully restored or repaired.
It is almost as if every day will now be a school day in the darkest depths of my subconscious.
Yet I realised something very important -that I was not alone in this struggle. I began to read some of the research cited in the introduction, as well as observing more news stories being printed about young people in similar situations to myself. Eventually, around the time of my entry to Year 10 (autumn 2017), I concluded beyond reasonable doubt that this was indeed a widespread and systemic failure, validating my view that action was necessary and spurring me on even more in a fight for justice. In December 2017, I stood for election as my form's representative to the school's Student Council -and won.
At every Student Council meeting I attended, without fail, I would raise the matter of the prophecies and the nature of communication surrounding exams in discourse between teachers and young people. Yet despite my persistence, every time I raised the issue, my concerns would be dismissed with claims of "Oh, we have to do that because otherwise, students would have no motivation to learn or study!" Not only does such a proposition overlook the fact that learning is innate and instinctive (Gray, 2013), but it also fails to recognise that applying such intense pressure is maladaptive for most pupils. Research has consistently shown that when people believe they are being observed and evaluated, their performance in any particular task declines, and those who already have some experience hold an unfair advantage over fresh learners. This effect is particularly acute in academic and intellectual undertakings. Aiello and Douthitt (2001) found that when people are observed or evaluated while learning a difficult skill or thinking creatively, their performance declines relative to when they are not.
Due to the school's resistance, Year 10 passed with nothing to show for my tenure on the Student Council. The school had countered me at every turn. When September 2018 arrived and I commenced Year 11 -the final year wherein I would sit my GCSEs -I stood for re-election to the Student Council as the problem was still very much alive. However, with little to show for my first term, I came second in the poll of my form group.
However, the Student Council had shifted to a model of having two representatives from each form group, meaning I did get to sit on the Student Council for a second year despite finishing the election in second place.
Again, I would raise the matter of culture without fail at each meeting until, on one sunny morning in January 2019, at a Student Council meeting, I raised the matter again, referencing specific examples of prognostications given to us by school staff. Due to a disastrous Ofsted report a month prior, it landed on the ears of the staff much differently this time. They shot back aghast, visible horror upon their faces that such words could have been spoken in their school.
This time, it was taken seriously. Following this meeting, teaching staff were advised to cease their use of the prophecies. From here, the school in general began to undertake a cultural shift away from the kind of culture I had endured and towards what I have come to refer to over the years as a culture of well-being. One wherein students are taught the value of rest and hold it in equal regard to work - ceasing to stigmatise those who rest as scroungers. One wherein all members of the school community remain mindful of the language they employ in communicating with each other during times of pressure. One wherein students are actively encouraged to put their mental health first and to always take adequate time to engage in activities that matter to them, for it is recognised that good mental well-being is a prerequisite to academic success.
As the culture developed, the school initiated better mental health training for all its staff. They increased the support offered to Year 11 students during exam season, facilitating mindfulness workshops and significantly altering the manner in which they communicated about the exams to young people. The contrast in the manner in which the school communicated to Year 11 students about the exams is evident when comparing how they handled the final Friday before the exams in 2017 and how they did it in 2019:
• In 2017, they put the pressure on by playing "The Final Countdown" by Europe over the school intercom just before everybody went home.
• On the same Friday in 2019, however, they delivered a laid-back assembly reminding Year 11s to take time for themselves over the weekend.
(Although I am keen to stress that this alone is too little, too late - healthy communication must be woven into discourse at all stages of education, not just when it comes to the final exams, and support should be offered to the more junior cohorts as well.)
They also established an annual Well-being Week which took place each October, where the final lesson of the week ended 15 minutes early and time was dedicated to resting. Pupils were also given chocolate bars during this time to add to the relaxed atmosphere. Such a thing as this would have been unthinkable under the old culture.
About three years later, the school received an Ofsted inspection. Its previous inspection, conducted in the December of my Year 11, approximately one month prior to the introduction of the culture of well-being, condemned it as inadequate in all areas. It was labelled as one of the worst schools in the whole country.
However, in its most recent inspection -its first since introducing the culture of well-being -its rating for "Personal Development" has risen to "Good", and on the first page of the report, in only the second paragraph, it is noted that pupils appreciate the support they receive from staff, especially in the field of mental health and well-being.
By introducing the culture of well-being, changing course and actively listening to pupils' voices, the school environment significantly improved, demonstrating the impact even a basic shift towards a culture of well-being can have in improving the fortunes of even the most struggling schools.
THE CULTURE OF WELL-BEING:
The definition I have come up with for the culture of well-being is as follows: "A school culture which holds positive mental & physical well-being in equal regard to academic rigour & success, a low-pressure environment where trust is placed in the innate ability of young people to learn and the school undertakes meaningful, effective and co-produced initiatives to improve the wellbeing of all members of the school community, and endeavours to ensure the sociocratic, inclusive and egalitarian governance of the school." Regarding operationalised actions, the culture of well-being is scalable in proportion to the level of ambition of those implementing it. A culture can be changed by altering the manner in which we speak, yet, as outlined in the introduction, cultures can also be impacted by policies and structures. The unhealthy culture in my school became significantly more acute once the reality of the reformed, rigorous GCSEs had set in. The tougher tests made for a tenser environment.
Especially in the context of mental health, it is also important to consider the concept of psychopolitical validity (Prilleltensky, 2003) as an explanation of why the culture of well-being must also consider systems-level change. The concept suggests that we should evaluate mental health interventions in terms of the extent to which they examine the role of systems and structures, because these too wield great influence on mental health. The more the culture of well-being challenges problems embedded in systems, the more psychopolitical validity it would have, implying a greater ability to improve mental health - the ultimate objective of the culture of well-being.
As I wrote my speech for the Rethinking Education conference, I conceptualised a tiers-of-change model for the culture of well-being, which shows how it can be something quite simple or a much larger systemic change:
[Figure: the tiers-of-change pyramid; segment size denotes the degree of change, the effort required, and the effectiveness of each tier.]
The increasing size of the segments on the graph visualises the degree of change each represents, the level of effort required to implement them, and also the effectiveness of each in achieving our overall objective of improving mental health and implementing a holistic culture of well-being.
I will now proceed to provide a more detailed summary of each tier, what it involves, and how it can be implemented:
Tier #1: individuals' actions and words: In its most basic form, a tier-one culture of well-being involves detoxifying communication between members of the school community. This means, instead of employing the kinds of prophecies I outlined earlier, being honest about how important exams really are, appreciating the sensitive nature of the subject, and not saying anything that will place too much pressure on students.
However, this has to be a two-way street. Teachers, too, are often on the receiving end of the toxic culture within schools. Although this is not fully the fault of the young people, there are things they can do to help implement a culture of well-being in their school. They should refrain from taking out their frustrations on teachers and work harder to restore damaged relationships with their peers, being proactive in conflict resolution by reaching out to friends they have fallen out with and inviting them to make amends, endeavouring to expand their capacity to forgive, so that nobody finds themselves as isolated as I did.
Tier #2: minor systems change: As we move up the pyramid, we begin to shift away from individual actions and towards systems-level reform.
Within tier two, this concerns internal school changes that mainly focus on simple provisions that the school can provide in and of itself, or can find another, local agency to provide, without major top-down reform. The school will amend its own policies to increase the provision of support, both proactive and reactive, for young people's well-being.
This may draw upon some of the learnings from my old school and involve things such as a Well-being Week with more free time. It could look like the provision of non-academic enrichment activities and the creation of time for this - perhaps allocating a few sessions in the week to non-academically rigorous activities - especially those which involve giving back to the community and doing genuine, meaningful good. These serve to give pupils a proper break from their studies, giving them some respite. They also ensure that the personal development enjoyed by young people in education is much more holistic and diverse, and therefore better prepares them for the non-academic world of work into which they will eventually progress.
Also within tier two, schools may make endeavours to provide solutions to bullying and interpersonal conflict. It is harder to propose policy solutions to address this issue, for some people will simply conduct themselves in an unpleasant manner regardless of what is occurring at a systems level. However, there are some actions that the school can take to improve this situation.
Detoxifying communication and nurturing a culture of kindness is one of the more proactive solutions -and this links nicely to tier one. Schools could also proactively offer and facilitate restorative justice mediation sessions in order to encourage healthy dialogue between disputing peers, ensuring this offer is well-publicised.
This would enshrine a means of conflict resolution within the school system. It is here we begin to see the distinction between tiers one and two. Tier one is about individuals altering their behaviour. Tier two is about systems change for the whole school.
Tier #3: significant internal governance reform: There may be some initial scepticism as to how significant internal governance reform links to the culture of well-being and supporting mental health. Tier three relates directly to Prilleltensky's work surrounding psycho-political therapy, as well as the logical assumption that, if given the power to shape their environment, members of the school community would not shape it into something that harms their wellbeing.
It also ensures members of the school community feel a sense of control over their circumstances, easing innate anxieties that arise when we are not in control of our circumstances. Allowing members of the whole school community to hold a significant stake in the governance of the school also helps people to feel as though they have a sense of purpose in life and they have a role to play in the school community. It also helps to demonstrate to the young people that they have skills, talents, and merits valued by society, which enhances their morale and sense of self-confidence, building their ability to acknowledge their assets.
Referring back to the concept of psycho-political therapy, significant internal governance reform can also be a reactive therapy for those young people who may have already been traumatised. If a young person, as I did, is able to see their strife and trauma being applied in a constructive manner to identify problems within the school and prevent other young people from suffering what they had suffered, then they may feel as though there was some purpose in their suffering, and it helps them to come to terms with what happened. This is how my story came to a satisfactory conclusion. My lived experience of the toxic culture, and the trauma I suffered within it, was used to inform positive change that prevented other young people from experiencing what I had at school, and this, in addition to the further work I have done on the issue since, has helped me to feel better about what happened.
At a higher level within tier three, this governance reform would draw inspiration from Daniel Greenberg's Sudbury Valley School and its sociocratic school meeting, where all members of the school community come together on one body to govern the school through compromise and universal consent where possible, as opposed to winner-takes-all voting - although standard democracy can be used to resolve stalemates if need be.
Tier #4: major systems change: From this point onwards, the changes required to implement each tier become near impossible for one individual school to implement at the local level, and instead require policy changes often at the behest of national government. Yet the link between national policies and the culture of well-being on the ground is prominent enough to warrant consideration being given to how policy changes at such a high level could advance the cause of wellbeing.
Some examples of policies that may support a tier four culture of well-being might be assessment reform, such as undoing the 2015 reforms that made the qualifications I sat more rigorous. From a teacher's perspective, it could look like reducing workloads and improving working conditions in the education sector. Anything which helps to minimise undue pressure.
Tier #5: paradigmatic shift: Tier five would see a fundamental and historic paradigm shift in how we educate young people: taking them out of an environment wherein competition with one another is encouraged and pupils and teachers alike are placed under strict deadlines and heavy workloads, and removing both the concept of a mandated curriculum and the practice of forcing young people to go through the experience of having to invest great amounts of time and energy in things they have no natural interest in.
Instead, this tier would involve emancipating young people and allowing them to follow their passions, trusting in their educative instincts, and unleashing the learning power of play.
In my view, it would constitute a widespread adoption of the Sudbury Valley model, making this type of education widely available to all - essentially, non-fee-paying Sudbury schools available over a large geographic range.
DISCUSSION
If all tiers were implemented fully, the culture of well-being would have a significant impact on addressing the youth mental health epidemic. It has already been established that education is a key variable influencing young people's mental health. The first three tiers of the culture of well-being would address the immediate factors within schools that arise because of the pressure from above, as well as putting both proactive and reactive support measures in place to mitigate the impact of the policies on people's mental health. The latter two would remove that pressure from above, addressing the problem at the source.
Of course, it would be wrong to claim that education is the sole driver of poor mental health among young people. The British Youth Council report, for one, highlights several other societal influences on young people's mental health, namely social media and discrimination. But the significant role played by the education system cannot be overlooked or denied, and although the culture of well-being is not the magic bullet that will end this crisis, it would certainly make a significant dent in it - if implemented properly and holistically over a large area.
For me personally, the research outlined in the introduction proves to me that many young people had (and are having) similar experiences of education to what I had. It is similar to the evidence which initially drove me to the conclusion in 2017 that what I was experiencing was a widespread problem and spurred me to take action.
A call to action is how I want this paper to be interpreted. It is all well and good having it written down, but this article will only have any meaning if those who are in a position to do so apply its recommendations in the real world.
Anyone who is part of a school community - young people, teachers, parents, etc. - has the power to immediately commence the implementation of tier one within their school by being more considerate with their language and how they interact with others. They can also take some steps toward tiers two and three.
Anyone and everyone can join in the campaigning and advocacy required for tiers four and five to be implemented. There are a variety of actions one can take to begin working towards these:
• Join a campaign group or start working with charities that are passionate about this issue. Sign up for their mailing lists to receive opportunities to take action.
• Speak to a local MP about these issues. (Of course, if significant numbers of MPs suddenly started hearing from constituents about one issue, that issue would be propelled to the top of the political agenda.)
• Young people can speak to their local Member of Youth Parliament or Youth Council about this issue to put it on their radar. They could also join a Youth Council.
• Seek out and participate in surveys and research that are looking into the issue to help build the evidence base.
Despite the scale of this issue and the challenge it presents, we can be reassured that it is within our influence to work towards addressing the mental health epidemic. There are a variety of options available to us, ranging from all-encompassing systems change to simple actions we take and words we say when interacting with one another.
May we experience a rekindled desire to be kind to each other, and to build a kinder, and therefore more functional, system that works for everyone.
BIOGRAPHY
Andrew Speight served as the Member of Youth Parliament for Blackpool between February 2019 & March 2022, and now serves as the representative for the North West on the Youth Parliament's Steering Group. He also works in a paid capacity to improve education for the whole school community, particularly with regards to the culture of well-being.
"year": 2023,
"sha1": "43908d77df3b2666c5053f7caacfff0e97d8962b",
"oa_license": "CCBYNC",
"oa_url": "http://www.ubplj.org/index.php/TBJE/article/download/2146/1754",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3341f4424d489a1f4c3d36b1c7064fdb75051727",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": []
} |
Multi-scale Attributed Node Embedding
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.
[Figure 1a (caption fragment): attributed nodes D and G have the same feature set and their nearest neighbours also exhibit equivalent sets of features, whereas features at higher order neighbourhoods differ.] Figure 1b shows that as the order of neighbourhoods considered (r) increases, the product of the adjacency matrix power and the feature matrix becomes less sparse. This suggests that an implicit decomposition method would be computationally beneficial.
Our key contributions are:
1. to introduce the first Skip-gram style embedding algorithms that consider attribute distributions over local neighborhoods, both pooled (AE) and multi-scale (MUSAE), and their counterparts that attribute distinct features to each node (AE-EGO and MUSAE-EGO);
2. to theoretically prove that their embeddings approximately factorize PMI matrices based on the product of an adjacency matrix power and a node-feature matrix;
3. to show that the popular network embedding methods DeepWalk (Perozzi et al., 2014) and Walklets (Perozzi et al., 2017) are special cases of our AE and MUSAE;
4. to show empirically that AE and MUSAE embeddings enable strong performance at regression, classification, and link prediction tasks on real-world networks (e.g. Wikipedia and Facebook), are computationally scalable, and enable transfer learning between networks.
We provide reference implementations of AE and MUSAE, together with the datasets used for evaluation at https://github.com/benedekrozemberczki/MUSAE.
RELATED WORK
Efficient unsupervised learning of node embeddings for large networks has seen unprecedented development in recent years. The current paradigm focuses on learning latent space representations of nodes such that those that share neighbors (Perozzi et al., 2014;Tang et al., 2015;Grover & Leskovec, 2016;Perozzi et al., 2017), structural roles (Ribeiro et al., 2017;Ahmed et al., 2018) or attributes are located close together in the embedding space. Our work falls under the last of these categories as our goal is to learn similar latent representations for nodes with similar sets of features in their neighborhoods, both on a pooled and multi-scale basis.
Neighborhood preserving node embedding procedures place nodes with common first, second and higher order neighbors within close proximity in the embedding space. Recent works in the neighborhood preserving node embedding literature were inspired by the Skip-gram model (Mikolov et al., 2013a;b), which generates word embeddings by implicitly factorizing a shifted pointwise mutual information (PMI) matrix (Levy & Goldberg, 2014) obtained from a text corpus. This procedure inspired DeepWalk (Perozzi et al., 2014), a method which generates truncated random walks over a graph to obtain a "corpus" from which the Skip-gram model generates neighborhood preserving node embeddings. In doing so, DeepWalk implicitly factorizes a PMI matrix, which can be shown, based on the underlying first-order Markov process, to correspond to the mean of a set of normalized adjacency matrix powers up to a given order (Qiu et al., 2018). Such pooling of matrices can be suboptimal since neighbors over increasing path lengths (or scales) are treated equally or according to fixed weightings (Mikolov et al., 2013a;Grover & Leskovec, 2016); whereas it has been found that an optimal weighting may be task or dataset specific (Abu-El-Haija et al., 2018). In contrast, multi-scale node embedding methods such as LINE (Tang et al., 2015), GraRep (Cao et al., 2015) and Walklets (Perozzi et al., 2017) separately learn lower-dimensional node embedding components from each adjacency matrix power and concatenate them to form the full node representation. Such un-pooled representations, comprising distinct but less information at each scale, are found to give higher performance in a number of downstream settings, without increasing the overall number of free parameters (Perozzi et al., 2017).
Attributed node embedding procedures refine ideas from neighborhood based node embeddings to also incorporate node attributes (equivalently, features or labels) (Yang et al., 2015;Liao et al., 2018;Huang et al., 2017;Yang et al., 2018;Yang & Yang, 2018). Similarities between both a node's neighborhood structure and features contribute to determining pairwise proximity in the node embedding space. These models follow quite different strategies to obtain such representations. The most elemental procedure, TADW (Yang et al., 2015), decomposes a convex combination of normalized adjacency matrix powers into a matrix product that includes the feature matrix. Several other models, such as SINE (Zhang et al., 2018) and ASNE (Liao et al., 2018), implicitly factorize a matrix formed by concatenating the feature and adjacency matrices. Other approaches such as TENE (Yang & Yang, 2018), formulate the attributed node embedding task as a joint non-negative matrix factorization problem in which node representations obtained from sub-tasks are used to regularize one another. AANE (Huang et al., 2017) uses a similar network structure based regularization approach, in which a node feature similarity matrix is decomposed using the alternating direction method of multipliers. The method most similar to our own is BANE (Yang et al., 2018), in which the product of a normalized adjacency matrix power and a feature matrix is explicitly factorized to obtain attributed node embeddings. Many other methods exist, but do not consider the attributes of higher order neighborhoods (Yang et al., 2015;Liao et al., 2018;Huang et al., 2017;Zhang et al., 2018;Yang & Yang, 2018).
The relationship between our pooled (AE) and multi-scale (MUSAE) attributed node embedding methods mirrors that between graph convolutional neural networks (GCNNs) and multi-scale GCNNs. Widely used graph convolutional layers, such as GCN (Kipf & Welling, 2017), aggregate attribute information from immediate neighborhoods only, whereas multi-scale GCNN variants draw on higher-order neighborhoods.
ATTRIBUTED EMBEDDING MODELS
We now define algorithms that learn node embeddings using the attributes of nearby nodes, allowing node and attribute embeddings to be learned jointly. The aim is to learn similar embeddings for nodes that occur in neighbourhoods of similar attributes, and similar embeddings for attributes that often occur in similar neighbourhoods of nodes. Let G = (V, L) be an undirected graph of interest, where V and L are the sets of vertices and edges (or links) respectively, and let F be the set of all possible node features (i.e. attributes). We define F_v ⊆ F as the subset of features belonging to each node v ∈ V. An embedding of nodes is a mapping g : V → R^d that assigns a d-dimensional representation g(v) (or simply g_v) to each node v and is fully described by a matrix G ∈ R^{|V|×d}. Similarly, an embedding of the features (to the same latent space) is a mapping h : F → R^d with embeddings denoted h(f) (or simply h_f), fully described by a matrix H ∈ R^{|F|×d}.
ATTRIBUTED EMBEDDING
The Attributed Embedding (AE) procedure is described by Algorithm 1. We sample n nodes w_1, from which to start attributed random walks on G, with probability proportional to their degree (Line 2). From each starting node, a node sequence of length l is sampled over G (Line 3), where sampling follows a first-order random walk. For a given window size t, we iterate over each of the first l - t nodes of the sequence, termed source nodes w_j (Line 4). For each source node, we consider the following t nodes as target nodes (Line 5). For each target node w_{j+r}, we add the tuple (w_j, f) to the corpus D for each target feature f ∈ F_{w_{j+r}} (Lines 6 and 7). We also consider features of the source node f ∈ F_{w_j}, adding each (w_{j+r}, f) tuple to D (Lines 9 and 10). Running Skip-gram on D with b negative samples (Line 15) generates the d-dimensional node and feature embeddings.

Algorithm 1: AE sampling and training procedure. Data: G = (V, L) - graph to be embedded; {F_v}_{v∈V} - set of node feature sets; n - number of sequence samples; l - length of sequences; t - context size; d - embedding dimension; b - number of negative samples. Result: node embedding g and feature embedding h. [Pseudocode summary: pick w_1 ∈ V according to P(w_1) ∼ deg(w_1)/vol(G); sample a walk w_1, ..., w_l; for each source node w_j, j ≤ l - t, and each offset r ≤ t, add the tuple (w_j, f) to the multiset D for every f ∈ F_{w_{j+r}} and the tuple (w_{j+r}, f) for every f ∈ F_{w_j}; finally run SGNS on D with b negative samples and d dimensions.]

Algorithm 2: MUSAE sampling and training procedure. Data: as in Algorithm 1. Result: node embeddings g^r and feature embeddings h^r for r = 1, ..., t. [Pseudocode summary: as Algorithm 1, except that tuples at offset r are added to a separate multiset D_r; for each r, run SGNS on D_r with b negative samples and d/t dimensions and output g^r_v, ∀v ∈ V, and h^r_f, ∀f ∈ F = ∪_{v∈V} F_v.]
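To make the sampling procedure concrete, the following minimal Python sketch builds the AE corpus D, or the MUSAE sub-corpora D_r, from first-order attributed random walks. It is an illustrative reading of the pseudocode above, not the reference implementation; the function and argument names (build_corpora, adj, features) are our own.

```python
import random
from collections import defaultdict

def build_corpora(adj, features, n=1000, l=40, t=3, multiscale=True, seed=0):
    """Sample node-feature co-occurrence corpora via first-order random walks.

    adj      : dict mapping each node to a list of its neighbours (undirected)
    features : dict mapping each node to an iterable of attribute ids (F_v)
    Returns {r: list of (node, feature) tuples} for r = 1..t when
    multiscale=True (MUSAE); with multiscale=False everything is pooled
    under the single key 0 (the AE corpus D).
    """
    rng = random.Random(seed)
    nodes = list(adj)
    degrees = [len(adj[v]) for v in nodes]        # P(w_1) ~ deg(w_1)/vol(G)
    corpora = defaultdict(list)
    for _ in range(n):
        walk = rng.choices(nodes, weights=degrees, k=1)
        for _ in range(l - 1):                    # first-order random walk
            walk.append(rng.choice(adj[walk[-1]]))
        for j in range(l - t):                    # source nodes w_j
            for r in range(1, t + 1):             # offsets to targets w_{j+r}
                key = r if multiscale else 0
                src, tgt = walk[j], walk[j + r]
                for f in features[tgt]:           # target features -> (w_j, f)
                    corpora[key].append((src, f))
                for f in features[src]:           # source features -> (w_{j+r}, f)
                    corpora[key].append((tgt, f))
    return dict(corpora)
```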
MULTI-SCALE ATTRIBUTED EMBEDDING
The AE method (Algorithm 1) pools feature sets of neighborhoods at different proximities. Inspired by the performance of (unattributed) multi-scale node embeddings, we adapt the AE algorithm to give multi-scale attributed node embeddings (MUSAE). The embedding component of a node v ∈ V for a specific proximity r ∈ {1, ..., t} is given by a mapping g^r : V → R^{d/t} (assuming t divides d). Similarly, the embedding component of feature f ∈ F at proximity r is given by a mapping h^r : F → R^{d/t}. Concatenating gives a d-dimensional embedding for each node and feature.
The Multi-Scale Attributed Embedding procedure is described by Algorithm 2. We again sample n starting nodes w_1 with probability proportional to node degree (Line 2) and, for each, sample a node sequence of length l over G (Line 3) according to either a first- or second-order random walk. For a given window size t, we iterate over the first l - t (source) nodes w_j of the sequence (Line 4) and, for each source node, we iterate through the t (target) nodes w_{j+r} that follow (Line 5). We again consider each target node feature f ∈ F_{w_{j+r}}, but now add the tuples (w_j, f) to a sub-corpus D_r^→ (Lines 6 and 7). We add the tuples (w_{j+r}, f) to another sub-corpus D_r^← for each source node feature f ∈ F_{w_j} (Lines 9 and 10). Running Skip-gram on each sub-corpus D_r = D_r^→ ∪ D_r^← with b negative samples (Line 16) outputs t (d/t)-dimensional node and feature embedding components, which are concatenated.
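Sketching the training step: one SGNS model can be fitted per proximity r and the resulting components concatenated. Encoding each (node, feature) tuple as a two-token sentence with window size 1 is a convenient trick for feeding explicit pairs to gensim's Skip-gram; this framing, and all names below, are our own assumptions rather than the paper's reference implementation.

```python
import numpy as np
from gensim.models import Word2Vec

def musae_embed(sub_corpora, nodes, d=96, b=5, seed=0):
    """Train SGNS per sub-corpus D_r and concatenate the node components.

    sub_corpora : {r: list of (node, feature) tuples}, r = 1..t
    nodes       : list of all node ids; assumes t divides d
    With window=1 and sg=1, each 2-token sentence contributes exactly the
    (node, feature) and (feature, node) Skip-gram pairs.
    """
    t = len(sub_corpora)
    dim = d // t
    parts = []
    for r in sorted(sub_corpora):
        sentences = [[f"n_{v}", f"f_{f}"] for v, f in sub_corpora[r]]
        model = Word2Vec(sentences, vector_size=dim, window=1, sg=1,
                         negative=b, min_count=1, seed=seed)
        part = np.zeros((len(nodes), dim))
        for i, v in enumerate(nodes):
            if f"n_{v}" in model.wv:          # a node may be unvisited
                part[i] = model.wv[f"n_{v}"]
        parts.append(part)
    return np.concatenate(parts, axis=1)      # |V| x d embedding matrix
```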
ATTRIBUTED EMBEDDING AS IMPLICIT MATRIX FACTORIZATION
Levy & Goldberg (2014) showed that the loss function of Skip-gram with negative sampling (SGNS) is minimized if the embedding matrices factorize a matrix of pointwise mutual information (PMI) of word co-occurrence statistics. Specifically, for a word dictionary V with |V| = n, SGNS (with b negative samples) outputs two embedding matrices W, C ∈ R^{d×n} such that, ∀w, c ∈ V:

w_w^T c_c ≈ log( #(w,c) |D| / (#(w) #(c)) ) - log b,

where #(w,c), #(w), #(c) denote counts of the word-context pair (w, c), word w and context c over a corpus D; and the word embeddings w_w, c_c ∈ R^d are the columns of W and C corresponding to w and c respectively.
Treating #(w)/|D|, #(c)/|D| and #(w,c)/|D| as empirical estimates of p(w), p(c) and p(w, c) respectively shows:

w_w^T c_c ≈ log( p(w,c) / (p(w) p(c)) ) - log b = PMI(w, c) - log b,

i.e. an approximate low-rank factorization of a shifted PMI matrix (low rank since typically d ≪ n).
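As a concrete illustration, the shifted PMI matrix that SGNS implicitly factorizes can be computed directly from co-occurrence counts. The numpy sketch below does this for node-feature pairs; the count matrix and the eps smoothing constant are illustrative assumptions on our part.

```python
import numpy as np

def shifted_pmi(pair_counts, b=5, eps=1e-12):
    """pair_counts: |V| x |F| matrix of #(w, f) co-occurrence counts.

    Returns log(#(w, f) |D| / (#(w) #(f))) - log b elementwise, i.e. the
    shifted PMI matrix; eps guards against log(0) for unseen pairs.
    """
    corpus_size = pair_counts.sum()                    # |D|
    w_counts = pair_counts.sum(axis=1, keepdims=True)  # #(w)
    f_counts = pair_counts.sum(axis=0, keepdims=True)  # #(f)
    pmi = np.log(pair_counts * corpus_size + eps) \
          - np.log(w_counts * f_counts + eps)
    return pmi - np.log(b)
```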
Qiu et al. (2018) extended this result to node embedding models that apply SGNS to a "corpus" generated from random walks over the graph. In the case of DeepWalk where random walks are first-order Markov, the joint probability distributions over nodes at different stages of a random walk can be expressed in closed form. A closed form then follows for the factorized PMI matrix. We show that AE and MUSAE implicitly perform analogous matrix factorizations.
Notation: A ∈ R^{n×n} denotes the adjacency matrix and D ∈ R^{n×n} the diagonal degree matrix of a graph G, i.e. D_{w,w} = deg(w) = Σ_v A_{w,v}. We denote the volume of G by c = Σ_{v,w} A_{v,w}. We define the binary attribute matrix F ∈ {0, 1}^{|V|×|F|} by F_{w,f} = 1_{f∈F_w}, ∀w ∈ V, f ∈ F. For ease of notation, we let P = D^{-1}A and E = diag(1^T D F), where 1 denotes a vector of ones and diag indicates a diagonal matrix.
Interpretation:
Assuming G is ergodic, p(w) = deg(w)/c, w ∈ V, is the stationary distribution over nodes, i.e. c^{-1}D = diag(p(w)); and c^{-1}A is the stationary joint distribution over consecutive nodes p(w_j, w_{j+1}). F_{w,f} can be considered a Bernoulli parameter describing the probability p(f|w) of observing feature f at node w, and so c^{-1}DF describes the stationary joint distribution p(f, w_j) over nodes and features. Accordingly, P is the matrix of conditional distributions p(w_{j+1}|w_j); and E is a diagonal matrix proportional to the probability p(f) of observing each feature at the stationary distribution (note that p(f) need not sum to 1, whereas p(w) necessarily must).
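For small graphs these quantities are cheap to materialize, which is useful for checking the claims numerically. A sketch with a toy graph and feature matrix of our own invention:

```python
import numpy as np

# Toy undirected graph on 4 nodes with 3 binary features (illustrative only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
F = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

D = np.diag(A.sum(axis=1))              # diagonal degree matrix
c = A.sum()                             # volume of G
P = np.linalg.inv(D) @ A                # p(w_{j+1} | w_j), rows sum to 1
E = np.diag(np.ones(len(A)) @ D @ F)    # E = diag(1^T D F)

pi = np.diag(D) / c                     # stationary p(w) = deg(w)/c
assert np.allclose(P.sum(axis=1), 1.0)  # rows are conditional distributions
assert np.isclose(pi.sum(), 1.0)        # p(w) sums to one, p(f) need not
```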
MULTI-SCALE CASE (MUSAE)
We know that the SGNS loss of MUSAE (Algorithm 2, Line 17) is minimized when, for each r, the learned embeddings satisfy (g^r_v)^T h^r_f ≈ log( #(v,f)_r |D_r| / (#(v)_r #(f)_r) ) - log b for all node-feature pairs (v, f). Our aim is to express this factorization in terms of known properties of the graph G and its features.
Lemma 1. The empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps (i) after, or (ii) before, node v ∈ V, as given by:

#(v,f)_r^→ / |D_r^→| → (1/c)(D P^r F)_{v,f} and #(v,f)_r^← / |D_r^←| → (1/c)(D P^r F)_{v,f} (in probability),

where the two expressions coincide because D P^r is symmetric for an undirected graph. Proof. See Appendix.
Lemma 2. Empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of joint probabilities of observing feature f ∈ F r steps either side of node v ∈ V, given by:

#(v,f)_r / |D_r| → (1/c)(D P^r F)_{v,f} (in probability).

Marginalizing gives unbiased estimates of the stationary probability distributions of nodes and features:

#(v)/|D| → deg(v)/c and #(f)/|D| → (1^T D F)_f / c = c^{-1} E_{f,f}.

Theorem 1. MUSAE embeddings approximately factorize the node-feature PMI matrix:

log(c P^r F E^{-1}) - log b, for r = 1, ..., t.

Proof. Substituting the estimates of Lemma 2 into the SGNS factorization gives (g^r_v)^T h^r_f ≈ log( (1/c)(D P^r F)_{v,f} · c/deg(v) · c/E_{f,f} ) - log b = (log(c P^r F E^{-1}) - log b)_{v,f}.
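Continuing the toy example above, the matrix of Theorem 1 can be materialized directly for a numerical sanity check; r and b below are illustrative choices, and zero co-occurrence entries map to -inf (pairs SGNS never observes).

```python
import numpy as np
from numpy.linalg import inv, matrix_power

def musae_target(A, F, r=2, b=5):
    """Compute log(c P^r F E^{-1}) - log b, the matrix of Theorem 1."""
    D = np.diag(A.sum(axis=1))
    c = A.sum()
    P = inv(D) @ A
    E = np.diag(np.ones(len(A)) @ D @ F)   # assumes every feature occurs
    M = c * matrix_power(P, r) @ F @ inv(E)
    with np.errstate(divide="ignore"):     # log(0) -> -inf for unseen pairs
        return np.log(M) - np.log(b)
```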
POOLED CASE (AE)
Lemma 3. The empirical statistics of node-feature pairs learned by the AE algorithm give unbiased estimates of mean joint probabilities over different path lengths, as follows:

#(v,f) / |D| → (1/t) Σ_{r=1}^t (1/c)(D P^r F)_{v,f} (in probability).

Proof. The AE corpus D is the union of the sub-corpora D_s, s ∈ {1, . . . , t}, and so |D_s| = t^{-1}|D|. Combining with Lemma 2, the result follows.
Theorem 2. AE embeddings approximately factorize the pooled node-feature PMI matrix:

log( (c/t) Σ_{r=1}^t P^r F E^{-1} ) - log b.

Proof. The proof is analogous to the proof of Theorem 1.
Remark 1. DeepWalk is a corner case of AE with F = I_{|V|}.
That is, DeepWalk is equivalent to AE if each node has a single unique feature. Thus E = diag(1^T D I) = D and, by Theorem 2, DeepWalk's embeddings factorize log( (c/t) Σ_{r=1}^t P^r D^{-1} ) - log b, as previously noted by Qiu et al. (2018).
Remark 2. Walklets is a corner case of MUSAE with F = I_{|V|}. Thus, for r = 1, . . . , t, the embeddings of Walklets factorize log(c P^r D^{-1}) - log b.
Remark 3. Appending an identity matrix I to the feature matrix F of AE and MUSAE (denoted [F; I]) adds a unique feature to each node. The resulting algorithms, named AE-EGO and MUSAE-EGO, learn embeddings that, respectively, approximately factorize the node-feature PMI matrices:

log( (c/t) Σ_{r=1}^t P^r [F; I] E'^{-1} ) - log b and log( c P^r [F; I] E'^{-1} ) - log b,

where E' = diag(1^T D [F; I]) is the analogue of E for the augmented feature matrix.
COMPLEXITY ANALYSIS
Under the assumption of a constant number of features per source node and first-order attributed random walk sampling, corpus generation has a runtime complexity of O(n l t x/y), where x = Σ_{v∈V} |F_v| is the total number of features across all nodes (including repetition) and y = |V| is the number of nodes. Using negative sampling, the optimization runtime of a single asynchronous gradient descent epoch on AE, and the joint optimization runtime of the MUSAE embeddings, is O(b d n l t x/y). If one instead performs p truncated walks from each source node, then n = p y, so the corpus generation complexity is O(p l t x) and the model optimization runtime is O(b d p l t x). Our runtime experiments in Section 5 corroborate the optimization runtime complexity discussed above.
Corpus generation has a memory complexity of O(n l t x/y); generating p truncated walks per node likewise requires O(p l t x) memory. Storing the parameters of an AE embedding has a memory complexity of O(y d), and MUSAE embeddings also use O(y d) memory.
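A back-of-the-envelope reading of the corpus-size term, with all numbers illustrative:

```python
# Corpus size is O(n l t x / y): n walks of length l, a window of t offsets,
# and on average x / y features per node, counted on both the source and the
# target side. The numbers below are illustrative only.
n, l, t = 10_000, 80, 3        # walks, walk length, window size
avg_feats = 8                  # x / y, average features per node
pairs = 2 * n * (l - t) * t * avg_feats
print(f"approximately {pairs:,} (node, feature) tuples")  # ~36.96 million
```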
EXPERIMENTAL EVALUATION
In order to evaluate the quality of the created representations, we test the embeddings on supervised downstream tasks such as node classification, transfer learning across networks, regression, and link prediction. Finally, we investigate how changes in the input size affect the runtime. To do so, we utilize social networks and web graphs that we collected from Facebook, GitHub, Twitch and Wikipedia. The data sources, collection procedures and the datasets themselves are described in detail in Appendix B. In addition, we tested our methods on citation networks widely used for model evaluation (Shchur et al., 2018). Across all experiments we use the same hyperparameter settings for our own model, competing unsupervised methods and graph neural networks - these are listed in Appendices C, E and F respectively.
NODE CLASSIFICATION
We evaluate node classification performance in two separate scenarios. In the first, we do k-shot learning by using the attributed embedding vectors with logistic regression to predict labels on the Facebook, Github and Twitch Portugal graphs. In the second, we test the predictive performance under a fixed-size train-test split to compare against various embedding methods and competitive neural network architectures.
K-SHOT LEARNING
In this experiment we take k randomly selected samples per class and use the attributed node embeddings to train a logistic regression model with l2 regularization, then predict the labels on the remaining vertices. We repeated the above procedure with seeded splits 100 times to obtain robust, comparable results (Shchur et al., 2018). From these we calculated the average of the micro-averaged F1 scores to compare our own methods with other unsupervised node embedding procedures. We varied k in order to show the efficiency of the methods - that is, the gains obtained as the training set size is increased. These results are plotted in the subplots of Figure 2 for the Facebook, Github and Twitch Portugal networks.
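A minimal sketch of this k-shot evaluation protocol with scikit-learn, where the embedding matrix X, the label vector y and the constants are placeholders of our own:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def k_shot_micro_f1(X, y, k, repeats=100, seed=0):
    """Average micro-F1 over seeded splits with k labelled nodes per class.

    Assumes every class has at least k examples. X is the |V| x d node
    embedding matrix and y the integer class labels.
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(repeats):
        train_idx = np.concatenate([
            rng.choice(np.flatnonzero(y == cls), size=k, replace=False)
            for cls in np.unique(y)])
        test_idx = np.setdiff1d(np.arange(len(y)), train_idx)
        clf = LogisticRegression(max_iter=1000)   # l2 penalty by default
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append(f1_score(y[test_idx], pred, average="micro"))
    return float(np.mean(scores))
```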
Based on these plots it is evident that MUSAE and AE embeddings gain little in micro F1 score from additional training points once k is larger than 12. This implies that our method is data efficient. Moreover, MUSAE-EGO and AE-EGO have a slight performance advantage, which means that including the nodes in the attributed random walks helps when only a small amount of labeled data is available in the downstream task.

Figure 2: Node classification k-shot learning performance as a function of training samples per class, evaluated by average micro F1 scores calculated from 100 seeded train-test splits.
FIXED RATIO TRAIN-TEST SPLITS
In this series of experiments we created 100 seeded train-test splits of nodes (80% train - 20% test) and calculated weighted, micro and macro averaged F1 scores on the test set to compare our methods to various embedding and graph neural network methods. Across procedures, the same random seeds were used to obtain the train-test splits, so the performances are directly comparable. We attach these results on the Facebook, Github and Twitch Portugal graphs as Table 5 of Appendix G. In each column, red denotes the best performing unsupervised embedding model and blue corresponds to the strongest supervised neural model. We also attach additional supporting results, using the same experimental setting with the unsupervised methods, on the Cora, Citeseer and Pubmed graphs as Table 6 of Appendix G.
In terms of micro F1 score, our strongest method outperforms the best competing unsupervised method by 1.01% on the Facebook network and 0.47% on the GitHub network. On the Twitch Portugal network the relative micro F1 advantage of ASNE over our best method is 1.02%. Supervised node embedding methods outperform ours and other unsupervised methods on every dataset for most metrics. In terms of micro F1, this relative advantage over our best performing model variant is largest on the Facebook network, at 4.67%, and only 0.11% on Twitch Portugal.
One can make four general observations based on our results: (i) multi-scale representations can help with classification tasks compared to pooled ones; (ii) the addition of the nodes to the feature sets in the ego-augmented models does not help performance when a large amount of labeled training data is available; (iii) based on the standard errors, supervised neural models do not necessarily have a significant advantage over unsupervised methods (see the results on the Github and Twitch datasets); (iv) attributed node embedding methods that only consider first-order neighbourhoods perform poorly.
TRANSFER LEARNING ON TWITCH SOCIAL NETWORKS
Neighbourhood based methods such as DeepWalk (Perozzi et al., 2014) are transductive, and the function used to create the embedding cannot map nodes that are not connected to the original graph to the latent space. However, vanilla MUSAE and AE are inductive and can easily map nodes to the embedding space if the attributes are shared across the source and target graphs. This also means that supervised models trained on the embedding of a source graph are transferable. Importantly, attributed embedding methods such as AANE or ASNE that explicitly use the graph structure are unable to perform this transfer.
Using the disjoint Twitch country-level social networks (inter-country edges are not present), we performed a transfer learning experiment. First, we learn an embedding function given the social network of one country, with the standard parameter settings. Second, we train a regularized logistic regression on the embedding to predict whether a Twitch user streams explicit content. Third, using the embedding function, we map the target graph to the embedding space. Fourth, we use the logistic model to predict the node labels on the target graph. We evaluate performance by the micro F1 score based on 10 experimental repetitions. These averages, with standard error bars, are plotted for the Twitch Germany, England and Spain datasets as target graphs in Figure 3. We add additional results with France, Portugal and Russia as the target country in Appendix H. These results support that MUSAE and AE create features that are transferable across disjoint graphs that share vertex features. Moreover, the transfer of the downstream model is also possible across datasets. There is no clear evidence that either MUSAE or AE gives better results on this specific problem. We also see some evidence that upstream and downstream models trained on graphs with more vertices transfer better.
REGRESSION ON WIKIPEDIA GRAPHS
We created embeddings of the Wikipedia webgraphs with all of our methods and the unsupervised baselines. Using an 80% train - 20% test split, we predict the log of average traffic for each page using an elastic net model. The hyperparameters of the downstream model are available in Appendix D. In Table 7 of Appendix I we report the average test R^2 and the standard error of the predictive performance over 100 seeded train-test splits. Our key observations are: (i) MUSAE outperforms all benchmark neighbourhood preserving and attributed node embedding methods, with the strongest MUSAE variant outperforming the best baseline by between 2.05% and 10.03% (test R^2); (ii) MUSAE significantly outperforms AE, by between 2.49% and 21.64% (test R^2); and (iii) using the vertices themselves as features (the ego-augmented model) can improve the performance of embeddings, but this appears to be a dataset-specific phenomenon.
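A sketch of this regression protocol with scikit-learn; X, avg_traffic and the hyperparameters are placeholders of our own rather than the settings of Appendix D.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def traffic_regression_r2(X, avg_traffic, seed=0):
    """Predict log average traffic from node embeddings; returns test R^2."""
    y = np.log(avg_traffic)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    model = ElasticNet(alpha=0.1, l1_ratio=0.5)  # illustrative hyperparameters
    model.fit(X_tr, y_tr)
    return r2_score(y_te, model.predict(X_te))
```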
LINK PREDICTION ON WEB GRAPHS AND SOCIAL NETWORKS
The final series of experiments on representation quality concerns link prediction. We carried out an attenuated graph embedding trial to predict the edges removed from a graph. First, we randomly removed 50% of edges while leaving the connectivity of the graph unchanged. Second, an embedding is created from the attenuated graph. Third, we calculate features for the removed edges and for the same number of randomly selected pairs of nodes (negative candidates), using binary operators to create d-dimensional edge features. We use the binary operators applied by Grover & Leskovec (2016): specifically, we calculated the average, the element-wise product, the element-wise l1 norm and the element-wise l2 norm of the vector pairs. Finally, we created 100 seeded 80% train - 20% test splits and used logistic regression to predict whether an edge exists.
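The four binary operators of Grover & Leskovec (2016) used here are simple element-wise maps over the two node embeddings of a candidate edge; a sketch (the operator names follow common usage, not code from the paper):

```python
import numpy as np

def edge_features(g_u, g_v, op="hadamard"):
    """Combine two d-dimensional node embeddings into one edge feature vector."""
    if op == "average":
        return (g_u + g_v) / 2.0
    if op == "hadamard":                  # element-wise product
        return g_u * g_v
    if op == "l1":                        # element-wise absolute difference
        return np.abs(g_u - g_v)
    if op == "l2":                        # element-wise squared difference
        return (g_u - g_v) ** 2
    raise ValueError(f"unknown operator: {op}")
```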
We compared against attributed and neighbourhood based embedding methods; average AUC scores are presented in Tables 8 and 9 of Appendix J. Our results show that Walklets (Perozzi et al., 2017), the multi-scale neighbourhood based embedding method, materially outperforms every other method on most of the datasets, and that attributed embedding methods generally do poorly in terms of AUC compared to neighbourhood based ones.
SCALABILITY
In order to show the efficiency of our algorithms, we run a series of experiments on synthetic graphs in which we are able to manipulate the input size. Specifically, we look at the effect of changing the number of vertices and the number of features per vertex. Our detailed experimental setup was as follows. Each point in Figure 4 is the mean runtime obtained from 100 experimental runs on Erdos-Renyi graphs. The base graph that we manipulated had 2^11 nodes, 2^3 edges per node, and the same number of unique features per node, uniformly selected from a feature set of size 2^11. Our experimental settings were the same as those described in Appendix C, except for the number of epochs: we performed only a single training epoch with asynchronous gradient descent on each graph. We tested the runtime with 1, 2 and 4 cores and included a dashed line as a linear runtime reference in each subfigure. We observe that doubling the average number of features per vertex doubles the runtime of AE and MUSAE. Moreover, the number of cores used during optimization does not decrease the runtime when the number of unique features per vertex is large compared to the cardinality of the feature set. When we look at changes in the vertex set size, we also see linear behaviour: doubling the input size simply doubles the optimization runtime. In addition, interpolating linearly from these results, a network with 1 million nodes, 8 edges per node and 8 unique features per node can be embedded with MUSAE on commodity hardware in less than 5 hours. This interpolation assumes the standard parameter settings proposed in Appendix C and 4 cores used for optimization.
DISCUSSION AND CONCLUSION
We investigated attributed node embedding and proposed efficient pooled (AE) and multi-scale (MUSAE) attributed node embedding algorithms with linear runtime. We proved that these algorithms implicitly factorize probability matrices of features appearing in the neighbourhood of nodes. Two widely used neighbourhood preserving node embedding methods (Perozzi et al., 2014; 2017) are in fact simplified cases of our models. On several datasets (Wikipedia, Facebook, Github, and citation networks) we found that the representations learned by our methods, in particular MUSAE, outperform neighbourhood based node embedding methods.
Our proposed embedding models are differentiated from other methods in that they encode feature information from higher order neighborhoods. The most similar previous model, BANE (Yang et al., 2018), encodes node attributes from higher order neighbourhoods, but it has non-linear runtime complexity and decomposes the product of the adjacency matrix power and the feature matrix explicitly.
ACKNOWLEDGMENTS
Benedek Rozemberczki and Carl Allen were supported by the Centre for Doctoral Training in Data Science, funded by EPSRC (grant EP/L016427/1) and the University of Edinburgh.

A PROOFS

Lemma 1. The empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps (i) after, or (ii) before, node v ∈ V, as given by: #(v,f)_r^→ / |D_r^→| → (1/c)(D P^r F)_{v,f} and #(v,f)_r^← / |D_r^←| → (1/c)(D P^r F)_{v,f} (in probability).

Proof. The proof is analogous to that given for Theorem 2.1 in Qiu et al. (2018). We show that the computed statistics correspond to sequences of random variables with finite expectation, bounded variance and covariances that tend to zero as the separation between variables within the sequence tends to infinity. The Weak Law of Large Numbers (S. N. Bernstein) then guarantees that the sample mean converges to the expectation of the random variable. We first consider the special case n = 1, i.e. a single sequence w_1, ..., w_l generated by a random walk (see Algorithm 1). For a particular node-feature pair (w, f), we let Y_i, i ∈ {1, ..., l - t}, be the indicator function for the event w_i = w and f ∈ F_{w_{i+r}}; the empirical statistic #(w,f)_r^→ / |D_r^→| is then the sample average of the Y_i. Each Y_i has finite expectation and bounded variance, and for j > i + r the expectation E[Y_i Y_j] can be written in terms of p(w_j = w | w_{i+r}). This allows us to compute the covariance Cov(Y_i, Y_j) = E[Y_i Y_j] - E[Y_i]E[Y_j], whose difference term tends to zero as j - i → ∞, since p(w_j = w | w_{i+r}) then tends to the stationary distribution p(w) = deg(w)/c, regardless of w_{i+r}.
Thus, applying the Weak Law of Large Numbers, the sample average converges in probability to the expected value. In both cases, the argument readily extends to the general setting where n > 1, with suitably defined indicator functions for each of the n random walks (see Qiu et al. (2018)).
Lemma 2. Empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps either side of node v ∈ V.
B DATASETS AND DESCRIPTIVE STATISTICS
Our method was evaluated on a variety of social networks and web page-page graphs that we collected from openly available API services. In Table 1 we describe the graphs with widely used statistics with respect to size, diameter, and level of clustering. We also include the average number of features per vertex and the unique feature count in the last columns. These datasets are available with the source code of MUSAE and AE at https://github.com/benedekrozemberczki/MUSAE.
B.1 FACEBOOK PAGE-PAGE DATASET
This webgraph is a page-page graph of verified Facebook sites. Nodes represent official Facebook pages, while the links are mutual likes between sites. Node features are extracted from the site descriptions that the page owners created to summarize the purpose of the site. This graph was collected through the Facebook Graph API in November 2017 and restricted to pages from 4 categories defined by Facebook: politicians, governmental organizations, television shows and companies. As one can see in Table 1, it is a highly clustered graph with a large diameter. The task related to this dataset is multi-class node classification for the 4 site categories.
B.2 GITHUB WEB AND MACHINE LEARNING DEVELOPERS DATASET
The largest graph used for evaluation is a social network of GitHub developers which we collected from the public API in June 2019. Nodes are developers who have starred at least 10 repositories and edges are mutual follower relationships between them. The vertex features are extracted based on the location, repositories starred, employer and e-mail address. The task related to the graph is binary node classification: one has to predict whether a GitHub user is a web or a machine learning developer. This target feature was derived from the job title of each user. As the descriptive statistics in Table 1 show, this is the largest and sparsest graph that we use for evaluation.
B.3 WIKIPEDIA DATASETS
The datasets that we use to perform node level regression are Wikipedia page-page networks collected on three specific topics: chameleons, crocodiles and squirrels. In these networks, nodes are articles from the English Wikipedia collected in December 2018, and edges are mutual links that exist between pairs of sites. Node features describe the presence of nouns appearing in the articles. For each node we also have the average monthly traffic between October 2017 and November 2018. In the regression tasks used for embedding evaluation, the logarithm of average traffic is the target variable. Table 1 shows that these networks are heterogeneous in terms of size, density, and clustering.
B.4 TWITCH DATASETS
These datasets, used for node classification and transfer learning, are Twitch user-user networks of gamers who stream in a certain language. Nodes are the users themselves and the links are mutual friendships between them. Vertex features are extracted based on the games played and liked, location and streaming habits. The datasets share the same set of node features, which makes transfer learning across networks possible. These social networks were collected in May 2018. The supervised task related to these networks is binary node classification: one has to predict whether a streamer uses explicit language.
C STANDARD HYPERPARAMETER SETTINGS OF OUR EMBEDDING MODELS
In the MUSAE and AE models we have a set of parameters that we use for model evaluation. Our parameter settings, listed in Table 2, are quite similar to the widely used general settings of random walk sampled implicit factorization machines (Perozzi et al., 2014; Grover & Leskovec, 2016; Ribeiro et al., 2017; Perozzi et al., 2017). Each of our models is augmented with a Doc2Vec (Mikolov et al., 2013a; b) embedding of node features; this is done in such a way that the overall dimension is still 128.
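To make the dimension bookkeeping concrete, here is a minimal sketch (our own illustration with placeholder arrays, not the paper's code) of concatenating a structural embedding with a feature-based embedding so the final representation stays 128-dimensional.

    import numpy as np

    n_nodes = 1000
    # Placeholder embeddings; in practice these would come from the
    # random-walk model and the node-feature embedding respectively.
    structural = np.random.rand(n_nodes, 64)
    feature_based = np.random.rand(n_nodes, 64)

    # Concatenate so that the overall dimension is still 128.
    final = np.concatenate([structural, feature_based], axis=1)
    assert final.shape == (n_nodes, 128)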
D HYPERPARAMETER SETTINGS OF THE DOWNSTREAM MODELS
The downstream tasks use logistic and elastic net regression from Scikit-learn (Pedregosa et al., 2011) for node level classification, regression and link prediction. For the evaluation of every embedding model we use the standard settings of the library except for the regularization and norm mixing parameters, which are described in Table 3. Our purpose was a fair evaluation compared to other unsupervised neighbourhood based and attributed node embedding procedures. Because of this, we tried to use hyperparameter settings that give similar expressive power to the competing methods with respect to target matrix approximation (Perozzi et al., 2014; Grover & Leskovec, 2016; Perozzi et al., 2017) and the number of dimensions.
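As a rough illustration of this downstream setup (our own sketch; the data, labels, and regularization values below are placeholders, not the settings of Table 3):

    import numpy as np
    from sklearn.linear_model import LogisticRegression, ElasticNet
    from sklearn.model_selection import train_test_split

    # Placeholder 128-dimensional node embeddings and targets.
    X = np.random.rand(500, 128)
    y_class = np.random.randint(0, 4, size=500)   # e.g. 4 page categories
    y_reg = np.random.rand(500)                   # e.g. log average traffic

    # Node classification with logistic regression.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_class, test_size=0.2)
    clf = LogisticRegression(C=1.0, max_iter=1000)  # C is a placeholder
    clf.fit(X_tr, y_tr)
    print("classification accuracy:", clf.score(X_te, y_te))

    # Node regression with elastic net (alpha and l1_ratio are placeholders).
    reg = ElasticNet(alpha=0.1, l1_ratio=0.5)
    reg.fit(X, y_reg)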
• DeepWalk (Perozzi et al., 2014): We used the hyperparameter settings described in Table 2. While the original DeepWalk model uses hierarchical softmax to speed up calculations, we used a negative sampling based implementation. This way DeepWalk can be seen as a special case of Node2Vec (Grover & Leskovec, 2016) in which the second-order random walks are equivalent to first-order walks.
• LINE (Tang et al., 2015): We created 64-dimensional embeddings based on first- and second-order proximity and concatenated these together for the downstream tasks. Other hyperparameters were taken from the original work.
• Node2Vec (Grover & Leskovec, 2016): Except for the in-out and return parameters that control the second-order random walk behaviour, we used the hyperparameter settings described in Table 2. These behaviour control parameters were tuned with grid search over the set {4, 2, 1, 0.5, 0.25}, using an 80%-20% train-validation split within the training set itself (see the sketch after this list).
• Walklets (Perozzi et al., 2017): We used the hyperparameters described in Table 2 except for window size. We set a window size of 4 with individual embedding sizes of 32. This way the overall number of dimensions of the representation remained the same.
• The attributed node embedding methods AANE, ASNE, BANE, TADW, and TENE all use the hyperparameters described in the respective papers except for the dimension. We parametrized these methods in such a way that each of the final embeddings used in the downstream tasks is 128 dimensional.
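The grid search over the Node2Vec return and in-out parameters mentioned above follows a generic pattern; the sketch below is our own self-contained illustration, where the scoring function standing in for "train an embedding and validate a downstream classifier" is a dummy placeholder.

    from itertools import product

    GRID = [4, 2, 1, 0.5, 0.25]

    def select_pq(score_fn):
        # Pick the (return p, in-out q) pair maximizing a validation score.
        # `score_fn(p, q)` is a caller-supplied function that would train a
        # Node2Vec embedding with the given parameters and return the
        # downstream score on a 20% validation split of the training set.
        return max(product(GRID, GRID), key=lambda pq: score_fn(*pq))

    # Dummy scoring function so the sketch runs as-is; real use would
    # train and validate an embedding here.
    print(select_pq(lambda p, q: -abs(p - 1) - abs(q - 0.5)))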
F HYPERPARAMETER SETTINGS OF COMPETING GRAPH NEURAL NETWORKS
Each model was optimized with the Adam optimizer (Kingma & Ba, 2015) with the standard moving average parameters, and the model implementations are sparsity-aware modifications based on PyTorch Geometric (Fey & Lenssen, 2019). We needed these modifications in order to accommodate the large number of vertex features (see the last column in Table 1). Except for the GAT model (Veličković et al., 2018), we used ReLU intermediate activation functions (Nair & Hinton, 2010) with a softmax unit in the final layer for classification. The hyperparameters used for the training and regularization of the neural models are listed in Table 4. Except for the APPNP model, each baseline uses information up to 2-hop neighbourhoods. The model specific settings, where we needed to deviate from the basic settings listed in Table 4, were as follows (a sketch of the basic two-layer setup follows this list):

• Classical GCN (Kipf & Welling, 2017): We used the standard parameter settings described in this section.

• Cluster GCN: Following Karypis & Kumar (1998), we clustered the graphs into disjoint clusters, and the number of clusters was the same as the number of node classes (e.g. in the case of the Facebook page-page network we created 4 clusters). For training we used the earlier described setup.
• APPNP (Klicpera et al., 2019): The top level feed-forward layer had 32 hidden neurons, the teleport probability was set to 0.2, and we used 20 steps for the approximate personalized PageRank calculation.
• SGCONV (Wu et al., 2019): We used the second power of the normalized adjacency matrix for training the classifier.
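For orientation only, here is a minimal sketch of the kind of two-layer GCN classifier with ReLU activations and a final softmax described above, written against PyTorch Geometric; the layer sizes in the commented usage lines are illustrative assumptions, not the exact settings of Table 4.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class TwoLayerGCN(torch.nn.Module):
        def __init__(self, in_dim, hidden_dim, n_classes):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, n_classes)

        def forward(self, x, edge_index):
            x = F.relu(self.conv1(x, edge_index))   # ReLU intermediate activation
            x = self.conv2(x, edge_index)
            return F.log_softmax(x, dim=1)          # softmax unit in the final layer

    # Illustrative usage (hidden size and class count are assumptions):
    # model = TwoLayerGCN(in_dim=num_features, hidden_dim=32, n_classes=4)
    # optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # standard Adam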
"year": 2019,
"sha1": "135334ea7fdef8eef0367e862797cac7dcd232a4",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/comnet/article-pdf/9/2/cnab014/40435146/cnab014.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "5d50adf52da1df9d72ebc273499ef74218f9f63c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
265595899 | pes2o/s2orc | v3-fos-license | Reproducible Research Practices and Barriers to Reproducible Research in Geography: Insights from a Survey
The number of reproduction and replication studies undertaken across the sciences continues to rise, but such studies have not yet become commonplace in geography. Existing attempts to reproduce geographic research suggest that many studies cannot be fully reproduced, or are simply missing components needed to attempt a reproduction. Despite this suggestive evidence, a systematic assessment of geographers’ perceptions of reproducibility and use of reproducible research practices remains absent from the literature, as does an identification of the factors that keep geographers from conducting reproduction studies. We address each of these needs by surveying active geographic researchers selected using probability sampling techniques from a rigorously constructed sampling frame. We identify a clear division in perceptions of reproducibility among geographic subfields. We also find varying levels of familiarity with reproducible research practices and a perceived lack of incentives to attempt and publish reproduction studies. Despite many barriers to reproducibility and divisions between subfields, we also find common foundations for examining and expanding reproducibility in the field. These include interest in publishing transparent and reproducible methods, and in reproducing other researchers’ studies for a variety of motivations including learning, assessing the internal validity of a study, or extending prior work.
Reproducible research publicly discloses the evidence used to support claims made in prior work, facilitates the independent verification of those claims, and enables the extension of that work by the broader research community (Schmidt 2009; Nosek, Spies, and Motyl 2012; Earp and Trafimow 2015). Following the National Academies of Science, Engineering, and Medicine (NASEM 2019), reproducibility refers to the ability to independently recreate the results of a study using the same materials, procedures, and conditions of analysis. Although reproducibility is not a guarantee of scientific or practical usefulness, it does provide a strong basis for the collective evaluation of ideas. Nonetheless, systematic reviews of published research papers reveal a lack of reproducibility (Collberg et al. 2014; Iqbal et al. 2016); furthermore, attempts to reproduce published research papers frequently fail (Chang and Li 2015; Raghupathi, Raghupathi, and Ren 2022). Previous studies consistently link the irreproducibility of research to inadequate recordkeeping, opaque reporting, the inaccessibility of research components, and a lack of incentives to share research details or to attempt reproduction studies (Ranstam et al. 2000; Anderson, Martinson, and De Vries 2007; NASEM 2019). Surveys of researchers find that few researchers are attempting to independently reproduce the work of others (Baker 2016; Boulbes et al. 2018). At the same time, a sizable portion of survey respondents report knowing of instances in which researchers engaged in questionable or biased research practices tied to the publication of irreproducible results (Fanelli 2009; Fraser et al. 2018).
Despite these concerns, the available literature currently provides insufficient evidence to conclusively evaluate the reproducibility of research generally, or disciplinary research specifically. This knowledge gap exists in part because few reproduction studies have been published in many fields of research, which limits the quantity of empirical evidence available to make judgments about reproducibility. Any judgments made based on the currently available set of reproduction studies are likely to be limited in scope because existing reproductions typically focus on re-creating the results of a small number of studies selected based on topical interest or researcher familiarity (Open Science Collaboration 2015; Camerer et al. 2016; Camerer et al. 2018). Another approach to assessing the reproducibility of a field is to draw samples of research papers and check the availability and completeness of the research components required for a reproduction. Assessments of this type have narrowly sampled from conference paper series, specific journals, or disciplinary repositories (Stodden et al. 2016; Byrne 2017; Gundersen and Kjensmo 2018; Stodden, Krafczyk, and Bhaskar 2018). Similarly, surveys of researchers asking participants about their use of reproducible research practices have commonly sampled authors from specific journals, members of professional associations, or conference attendees (Baker 2016; NASEM 2019). Surveys have also commonly failed to systematically report the methodological details (e.g., response rate) needed to assess and address potential bias in survey response. Furthermore, reproduction attempts, assessments, and surveys have all typically focused on evaluating the computational components of studies, such as data and code availability, rather than all aspects of research design and execution. In combination, the small number of reproduction studies, reliance on convenience samples of publications and survey participants, and a tendency to focus on computation constrain the scope and generalizability of reproducibility evaluations.
Reproducibility surveys and reproduction attempts in the geographic literature face these same challenges. The few available reproducibility surveys in geography have relied on convenience samples drawn from specialist conferences and have only focused on computationally intensive forms of geographic research (Ostermann and Granell 2017; Nüst et al. 2018; Konkol, Kray, and Pfeiffer 2019; Balz and Rocca 2020). The small number of published attempts to reproduce geographic research have similarly focused on the computational reproducibility of conference papers (Nüst et al. 2018; Ostermann et al. 2021; Nüst et al. 2023) or on specific topics such as COVID-19 (Paez 2021; Kedron, Bardin, et al. 2022; Holler et al. 2023; Kedron, Bardin, et al. 2023). More recent reproduction attempts by Kedron, Bardin, et al. (2023) show that the factors hindering the re-creation of results and the evaluation of claims likely extend beyond computation into the conceptualization and design of geographic research.
Geographers continue to debate the role of reproduction studies in the discipline (Brunsdon 2016; Singleton, Spielman, and Brunsdon 2016; Goodchild et al. 2021; Kedron et al. 2021; Sui and Kedron 2021; Wainwright 2021; Kedron and Holler 2022), examine the reproducibility of individual studies (Ostermann et al. 2021; Nüst et al. 2023), and build the infrastructure needed to support reproducible research (Nüst and Hinz 2019; Yin et al. 2019; Wilson et al. 2021; Kedron, Bardin, et al. 2022). We have yet to systematically assess the use of reproducible research practices across the discipline's diverse research traditions, or identify the factors that have hindered geographers from adopting reproducible research practices and conducting reproduction studies. Without a systematic assessment of these issues, it is unclear which actions geographers should take if they wish to improve the reproducibility of work in the discipline.
To address this gap in our collective knowledge, we surveyed geographic researchers about their understanding of reproducibility, perception of reproducibility in their subfields, familiarity with and use of reproducible research practices, and barriers to reproducibility. To support generalization, we designed a sampling frame to capture researchers from across disciplinary subfields and methodological approaches, and drew survey participants from that frame using a probability sampling scheme. In the remainder of this article, we first present the design of our survey, sampling strategy, and analytical approach. We then present our results, focusing on researcher perceptions and use of reproducible research practices, and then analyzing researcher experiences of attempts to reproduce prior work. Finally, we discuss the implications of our survey results and the limitations of our work, and conclude by proposing where geography might go from here and how the discipline can contribute to reproducibility across the sciences.
Data and Methods
Complete documentation of the procedures, survey instrument, and other materials used in this study are available through the Survey of Reproducibility in Geographic Research project (Kedron, Holler, et al. 2023; see https://osf.io/5yeq8/) hosted by the Open Science Framework (OSF). The OSF project connects to a GitHub repository that hosts the anonymized data set and code used to create all results and supplemental materials along with a complete history of their development. All of the results presented in this article can be independently reproduced using the materials in that repository. The repository links to an interactive visualization of the survey results, which allows users to examine additional cross-tabulations and statistical summaries of the survey data. We encourage interested readers to critically evaluate and build on these materials. Before the start of data collection, we registered a preanalysis plan for the survey with OSF Registries (Kedron, Holler, and Bardin 2022; see https://osf.io/6zjcp). The survey was conducted under the approval and supervision of the Arizona State Institutional Review Board (STUDY00014232).
Sampling Frame
Our target population of interest is researchers who have recently published in the field of geography. We followed a four-step procedure to create a sampling frame for our survey that captures this diverse population of researchers and the approaches they use when studying geography.
First, beginning at the publication level, we identified journals indexed as either geography or physical geography by the Web of Science's Journal Citation Reports (Clarivate 2023) that also had a five-year impact factor greater than 1.5. From those journals, we created a database of all articles published between 2017 and 2021.
Second, we used the Arizona State University institutional subscription to Scopus (2023) to extract journal information (e.g., subject area), article information (e.g., citation counts), and author information (e.g., corresponding status) for each publication. Because our intention was to capture individuals actively publishing new geographic research, we retained publications indexed by Scopus as document type = "Article" and removed all other publication types (e.g., editorials) from our article database. We also removed articles with missing authorship information.
Third, we created a list of researchers and their published articles, focusing on corresponding authors for two reasons. First, corresponding authorship is one indicator of the level of involvement an individual had in a given work. Although imperfect, it was the best available indicator in the Scopus database, as across journals there is no commonly adopted policy for declarations of author work (e.g., CRediT statements). Second, Scopus maintains e-mail contact information for all corresponding authors, which gave us a means of contacting researchers in our sampling frame. Scopus also maintains a unique identifier for each author (author-id) across time, which allowed us to identify authors across publications.
Fourth, we determined uniqueness by grouping researchers by their author-id, and we determined the most recent contact information by selecting records associated with the most recent year of publication. For 383 researchers who had two or more distinct e-mail addresses in the latest year of publication, we removed noninstitutional personal e-mail addresses and then selected one of the remaining institutional e-mail addresses.
Applying these criteria yielded a sampling frame of 29,828 researchers. On average, these authors published 2.7 articles in geography journals meeting our criteria between 2017 and 2021. Roughly one third (33.0 percent) were most recently a corresponding author for an article published in a general geography journal. A similar proportion (32.0 percent) were most recently a corresponding author for an article published in an earth sciences journal, and smaller proportions published in the social sciences and cultural geography (20.0 percent and 16.0 percent, respectively).
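The four frame-construction steps lend themselves to a small data-wrangling script. The following sketch is our own reconstruction in pandas, with a hypothetical input file and column names standing in for the Scopus fields described in the text; it is not the authors' actual code, which is available in their OSF compendium.

    import pandas as pd

    # Hypothetical Scopus export: one row per (article, corresponding author).
    records = pd.read_csv("scopus_export.csv")  # assumed file and columns

    # Keep research articles only and drop rows missing authorship.
    articles = (records[records["document_type"] == "Article"]
                .dropna(subset=["author_id", "email"]))

    # One row per unique author, keeping contact information from the
    # most recent publication year.
    frame = (articles.sort_values("year")
                     .groupby("author_id", as_index=False)
                     .last())

    # Prefer institutional addresses: drop common personal-mail domains
    # (a simplification of the paper's manual resolution of 383 cases).
    frame = frame[~frame["email"].str.endswith(("gmail.com", "yahoo.com"), na=False)]

    print(len(frame), "researchers in the sampling frame")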
Survey Instrument
The survey first established eligibility based on age and geographic research activity in the past five years and asked researchers to report their primary subfield and methodology. We asked each participant to assess their familiarity with the term reproducibility and to provide their own definition. We then provided a definition based on NASEM (2019) to establish a common understanding of reproducibility for the remainder of the survey. Specifically, we defined reproducibility as "whether research results can be re-created by an independent researcher using the same materials, procedures, and conditions of analysis that were used in the original study." Remaining questions assessed familiarity with and use of reproducible research practices (twenty-two questions), perceptions of the reproducibility of geographic research (two questions), and beliefs about reproducibility with regard to its significance (seventeen questions) and barriers (thirteen questions). For researchers who reported attempting reproductions, we asked them to elaborate on their motivations and outcomes (nine questions).
We developed the survey questions following a review of prior reproducibility surveys (e.g., Fanelli 2009; Baker 2016; Konkol, Kray, and Pfeiffer 2019) and our own reading of recurring issues in the reproducibility literature. We pilot tested the survey instrument with nineteen graduate students and geography faculty with differing levels of experience, disciplinary subfields, and methodological background. After pilot testing, we removed these individuals from our sampling frame to ensure they would not be included in our final sample.
Data Collection
We used a digital form of the tailored design method (Dillman, Smyth, and Christian 2014) to survey geographic researchers between 17 May and 10 June 2022. A simple random sample of 2,000 researchers was drawn without replacement from our sampling frame, and those researchers were invited via e-mail to participate in the online survey. Researchers received their initial invitation on 17 May 2022. Two reminder e-mails were sent to researchers who had not yet completed the survey, on 26 May and 31 May 2022.
The online survey was administered through Qualtrics. Participation in the survey was entirely voluntary. Each researcher who opted to participate in the survey was provided with consent documentation approved by the institutional review board and linked to the Internet survey instrument. Participants were also given the option to provide an e-mail address for eligibility for one of three prizes of US$90, selected randomly after the data collection period. Participating researchers had the option to exit and reenter the survey and were also able to review and change their answers using a back button as they progressed through the survey. At the end of the data collection period, responses were checked for completeness and coded using the reporting standards of the American Association for Public Opinion Research (AAPOR 2023). Responses were downloaded from Qualtrics, anonymized, and stored in a public, deidentified database in the research compendium.
Analytical Approach
We conducted two statistical analyses of the survey responses. First, we analyzed researcher perspectives on reproducibility following three themes: (1) how geographic researchers define reproducibility, (2) familiarity and experience with reproducible research practices, and (3) perceived barriers to reproducibility. Second, we analyzed researchers' experiences reproducing prior studies, including their motivations and experience of successes and barriers. For both analyses, we produced and analyzed descriptive statistical summaries of participant responses to Likert scale questions designed to assess those themes and experiences. We also coded qualitative text responses to selected themes and created quantitative summaries of these themes for each participant. To examine variation among our participants, we cross-tabulated all statistical summaries by disciplinary subfield and methodological approach and compared response frequencies across these subgroups.
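A minimal sketch of the cross-tabulation step follows; the file and column names are hypothetical stand-ins, and the authors' actual analysis code is in their public compendium.

    import pandas as pd

    responses = pd.read_csv("survey_anonymized.csv")  # assumed file

    # Compare response frequencies across subgroups for one Likert item.
    table = pd.crosstab(responses["subfield"],
                        responses["familiarity_reproducibility"],
                        normalize="index")  # row proportions within each subfield
    print(table.round(2))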
Analyzing Researcher Perspectives on Reproducibility
For our first set of analyses, we examined the full set of survey responses. In addition to the examination of statistical summaries of individual Likert scale questions, we created four aggregate measures that summarize participant perceptions and experiences with our four main themes. Complete details about our coding scheme, procedure, and derived data are available in a version-controlled digital compendium that accompanies this publication. The computational code that creates statistical summaries of these variables is similarly available, which makes our entire analysis completely reproducible.
Defining Reproducibility. We coded participants' qualitative definitions of "reproducibility" (1) to assess the similarity between each of the provided definitions and the definition adopted by NASEM (2019), and (2) to determine what participants identified as the motivation for making work reproducible. First, we measured the similarity of each provided definition to the definition adopted by NASEM (2019). NASEM defines reproducible research as having four characteristics: same data, same procedure, same results, and same conditions. To make this comparison, the authors independently coded each respondent definition for the presence or absence of each of the four characteristics. These assessments were then compiled in a single spreadsheet, which was used to identify disagreements in the independent coding. Disagreements in the assignment of codes were resolved through discussion among the three authors. We created an aggregate measure of definition similarity for the final coded response for each participant by counting the presence of each NASEM definition characteristic, resulting in a measure with the domain [0, 4]. Definitions that received a score of zero did not share any characteristics with the definition provided by NASEM, whereas those that received a score of four included all of the characteristics identified by NASEM.
Second, we also coded each definition to one of four motivations for ensuring the reproducibility of a study: (1) to facilitate the assessment of prior work, (2) to assess experimental research, (3) to improve transparency and facilitate further extension of work, and (4) to improve the transparency and consistency of data collection. We derived this coding from common themes in the responses and our own reading of the reproducibility literature. As earlier, each definition was independently coded by each author before code assignments across authors were compared, with disagreements resolved through discussion.
Familiarity and Experience. We measured participant familiarity and experience with five reproducibility-enhancing research practices: (1) the adoption of open source software, (2) the use of research notebooks, (3) data sharing, (4) code and procedure sharing, and (5) research plan preregistration. We assessed familiarity by asking participants to identify whether they were "not at all," "very little," "somewhat," or "to a great extent" familiar with each of the five practices. Participants who identified as being familiar "somewhat" or "to a great extent" with a practice were coded as familiar with that practice. For each participant, we then created an aggregate measure of familiarity with reproducibility-enhancing research practices by counting the number of practices with which they were familiar. This procedure resulted in a familiarity measure with domain [0, 5], where zero indicates a lack of familiarity with any of the practices assessed and five indicates familiarity with all of the practices assessed.
We followed a similar procedure to construct an aggregate measure of participant experience using reproducibility-enhancing research practices. We assessed researcher experience with each practice by asking participants to identify whether they "never," "rarely," "some of the time," "most of the time," or "always" used that practice in their research. Participants who reported using a practice most of the time or always were coded as having experience with that practice. To create an aggregate measure of experience for each participant, we then counted the number of practices they regularly used. This procedure created an experience score with domain [0, 5], where zero indicates a lack of experience with any of the practices assessed and five indicates experience with all of the practices assessed.
Barriers. Finally, we constructed a measure of participant perceptions of the barriers that hinder reproducibility. We asked participants to identify how frequently they believe twelve different factors contributed to a lack of reproducibility in their subfield. Participants were asked whether they believed each practice "never," "rarely," "occasionally," or "frequently" contributed to a lack of reproducibility. Participants who responded that a factor occasionally or frequently hindered the reproducibility of research were coded as identifying that factor as a barrier. From those responses, we created an aggregate measure of perceived barriers for each participant by counting the number of factors they identified as barriers. This procedure resulted in a measure of barriers with domain [0, 12], where zero indicates a participant identified no barriers to reproducibility.
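The four aggregate measures share one recipe: map each Likert response to a binary indicator against a threshold, then count the indicators per participant. A schematic version of the familiarity and experience scores follows (our own sketch, with assumed file and column names):

    import pandas as pd

    df = pd.read_csv("survey_anonymized.csv")  # assumed file

    practices = ["open_source", "notebooks", "data_sharing",
                 "code_sharing", "preregistration"]  # assumed column stems

    # Familiarity: "somewhat" or "to a great extent" counts as familiar.
    familiar = df[[p + "_familiar" for p in practices]].isin(
        ["somewhat", "to a great extent"])
    df["familiarity_score"] = familiar.sum(axis=1)  # domain [0, 5]

    # Experience: "most of the time" or "always" counts as experienced.
    experienced = df[[p + "_use" for p in practices]].isin(
        ["most of the time", "always"])
    df["experience_score"] = experienced.sum(axis=1)  # domain [0, 5]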
Analyzing Researcher Reproduction Attempts
For our second set of analyses, we examined only the responses of researchers who reported attempting a reproduction in the past two years to understand what motivated reproduction attempts, how successful those attempts were, and what factors hindered success.
Motivations. To assess what motivated researchers to attempt reproductions, two of the authors independently coded qualitative text responses to the question, "What made you decide to attempt the reproduction(s)?" Each response was categorized as one of four types of motivation, which we derived from recurring themes in participants' responses and from our review of the reproducibility literature. The four motivation types were to (1) verify or check published research, (2) learn from published research for extension or teaching, (3) internally check their own research to verify their work or increase the transparency of their work, and (4) replicate a study with new data. After each response was coded independently by the two authors, we identified disagreements in motivation assignments across authors. Disagreements were resolved through a discussion that was moderated by the third author. After a review of the coded responses, we chose to use the first and second motivations as a filter to narrow our sample to participants who attempted reproductions that matched the definition of reproducibility presented by NASEM. We chose to remove participants reporting that they attempted to reproduce their own work because it is unlikely these respondents would encounter the same barriers as researchers attempting to reproduce the work of others, and because a core component of the epistemological function of reproducibility is that it acts as an independent check of prior claims. We chose to remove participants who reported replicating a study because the collection of new data changes the purpose and experience of re-creating a study.
Success and Barriers. After narrowing our sample to participants attempting independent reproductions of the work of others, we analyzed participant responses to a set of questions related to the experience of making those attempts. To analyze participant success, we created statistical summaries for a series of questions that asked researchers to identify whether they were able to partially or completely recreate some or all results of the target study. We similarly analyzed barriers to the reproduction of results by creating statistical summaries of participants' ability to access key study artifacts (e.g., data, procedural information, and code).
Results
A total of n = 218 of the authors we contacted completed the online survey with information sufficient for analysis. The contact rate for the survey was 13.9 percent and the cooperation rate was 78.7 percent, yielding an overall response rate of 10.9 percent. The refusal rate was 2.9 percent. Another forty authors started the survey but did not complete enough of the survey to be included in the analysis.
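As a quick consistency check of the reported rates (our own arithmetic, not from the paper), the overall response rate is approximately the product of the contact and cooperation rates:

    contact_rate = 0.139
    cooperation_rate = 0.787
    print(round(contact_rate * cooperation_rate, 3))  # ~0.109, i.e. 10.9 percent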
Respondents were predominantly male (65.1 percent) and between the ages of thirty-five and fifty-five (62.4 percent). The majority of respondents were academics, and they were balanced across career levels from graduate students to full professors, with no one career level comprising more than 30 percent of the sample. Respondents identified with each of the four major disciplinary subfields: physical geography (29.8 percent), geographic methods and GIScience (28.0 percent), nature and society (10.1 percent), and human geography (30.7 percent); and with the three major methodological approaches: quantitative (42.2 percent), mixed methods (39.0 percent), and qualitative (18.3 percent).
Table 1 summarizes how researchers define reproducibility, their familiarity with reproducible research practices and experience using them, and the factors they see as barriers to reproducibility in geography, presenting the mean and standard deviation of the summary measures we created for each of these four themes. Each row in the table captures one of those four themes. The columns of the table separate the statistical summaries associated with those themes by subfield and methodological approach. For example, the first entry in the overall column of the table indicates that respondents on average included 1.83 of the four components of the NASEM definition in their own definitions of reproducibility, with a standard deviation of 1.12 components. Moving down to the familiarity and experience rows of the same column, these entries indicate that respondents were on average familiar with 3.26 of the five reproducible research practices we surveyed but indicated having experience using only 1.44 of those same practices. The barriers entry from the same column indicates that respondents identified an average of 8.20 of the 12 factors we surveyed as hindering reproducibility in the discipline. In contrast, the entry summarizing qualitative researchers' perceptions of barriers to reproducibility indicates that these researchers identified an average of 5.97 of 12 barriers to reproducibility.
In aggregate, the data reveal consistent trends in definition, familiarity, experience, and barriers of reproducibility between the subdisciplines and methodological approaches. Respondents who self-identified as specializing in physical geography and geographic methods consistently reported greater familiarity with reproducibility than those working in nature and society and human geography. Similarly, respondents who identified as primarily using quantitative and mixed-methods approaches consistently reported greater familiarity with reproducibility than those using qualitative methods. The following subsections present detailed results for each of these topics, highlighting the principal sources of difference between subfields and methodological approaches.
Researcher Perspectives on Reproducibility
Reproducibility is on the minds of geographic researchers. Nearly all researchers reported being at least somewhat familiar with the term reproducibility (89.0 percent), with half reporting being very familiar with the term (53.6 percent). More pointedly, the majority of survey respondents reported thinking about the reproducibility of their own research (80.7 percent), discussing reproducibility with a colleague (70.6 percent), and questioning the reproducibility of published work (57.3 percent) in the past two years. More than half of the researchers we surveyed (52.8 percent) also reported considering reproducibility while peer reviewing a grant proposal or publication during the same time frame. Researchers, however, estimated that only 50.6 percent of the results published in the discipline were reproducible, albeit with a large standard deviation of 24.7 percent that suggests a great deal of uncertainty about the true value. Few respondents reported attempting to reproduce the work of other researchers (14.7 percent), with fewer still attempting to publish those reproduction studies (6.8 percent).
In total, 58.0 percent of respondents agreed with the statement, "Reproducibility is incompatible with the epistemologies within my subfield," 28.0 percent disagreed with the statement, and 13.0 percent indicated that they did not know. About half of the respondents specializing in human geography (49.3 percent) and nature and society (50.0 percent) indicated that reproducibility was incompatible with the epistemologies of their subfields. Respondents conducting primarily qualitative research were similarly skeptical of the epistemological role of reproducibility in their subfield. Seventy-five percent of qualitative researchers indicated that reproducibility was epistemologically incompatible with their subfield.
Definitions and Importance of Reproducibility. A total of 181 (83.0 percent) of our survey respondents provided an interpretable definition of reproducibility. Geographic researchers provided definitions of reproducibility that explicitly included an average of 1.83 of the four characteristics from the NASEM definition. The availability and use of the same research procedures (80.7 percent) and results (74.0 percent) were the characteristics of reproducibility most frequently identified by researchers. Less than half of respondents explicitly included use of the same data (38.1 percent) or the need to work in the same context (17.7 percent) in their definitions. The pattern of similarity to the NASEM definition and each of its components was consistent across subfields and methodological approaches, with a slightly greater emphasis on data and procedural availability among quantitative and geographic methods and GIScience researchers. The lower inclusion of data and context in definitions might be explained by researchers conceptualizing reproducibility as the formal NASEM definition of replicability, which emphasizes the testing of similar questions and procedures in new contexts with new data. For example, one respondent defined reproducibility as "the extent to which the research design can be replicated in different geographical contexts." We observed this alternative definition of reproducibility in 20.4 percent of respondents' definitions.
Researchers' definitions of reproducibility were primarily connected to two epistemic functions. Just over half of respondents (52.5 percent) defined reproducibility as a means of assessing prior work for errors or inconsistencies through comparison of original results to results from an attempted reproduction. These comparisons ranged from rigid bitwise quantitative interpretations, as in the "ability to regenerate exactly the results published based on the data and code provided by the authors," to more flexible interpretations in which "other researchers could use the same or similar methodology without great difficulty and, given similar data, arrive at comparable results." Responses also included definitions with a focus on experimental science, as in "an ability to produce consistent results when an experiment is repeated." Nearly all other researchers (40.9 percent) tied reproducibility to the need for transparency in research so that others could independently expand on prior studies. For example, a quantitative geographer stated, "The methods should provide sufficient information to be able to reproduce the results. In quantitative science this should, at minimum, provide all the equations and algorithms used for any calculation. In the interest of increasing transparency in science, the practice of sharing the code should be encouraged." For others, open science did not necessarily need to result in identical results, as reflected in this definition: "As a qualitative researcher doing in-depth case study research, my studies cannot be perfectly reproduced. But reproducibility sits in the openness about methods and data collection practices, as well as critical reflection about strengths and weaknesses of my research. When we write about those things in the methods section in our papers and theses, their reproducibility is increased."
The remainder of responses (6.6 percent) emphasized repeatable or reliable data observation over all other dimensions of reproducibility. For example: "As a historical geographer, working with qualitative research methods, I understand reproducibility more in terms of sources than of methods. I see reproducible research as being that which makes clear the origin and location of its data."
A physical geographer similarly emphasized data observations: "Data/observations of some variable can be recovered repeatedly by different observers/methods." Responses to related Likert questions from the full survey sample (n = 218) support the results from the subsample of 181 qualitative definitions analyzed previously. A majority of researchers identified reproducibility as important for validating (75.2 percent) and establishing the credibility (72.5 percent) of research. Respondents also saw reproducing studies as important to reducing the presence of persistent errors in the discipline (77.5 percent) and to increasing trust in research findings (78.5 percent). In parallel with the need for openness and transparency in science, most respondents agreed with the importance of reproducibility for research efficiency (63.3 percent), communication with academics (68.8 percent) and practitioners (64.7 percent), and training students (75.7 percent).
Despite wide recognition of reproducibility as epistemically important, respondents were cautious about drawing conclusions from a single study or reproduction attempt. Only half of the respondents (50.9 percent) agreed that when researchers do not share their data they have less trust in a study. A smaller percentage (41.7 percent) agreed that an inability to reproduce a result detracts from the validity of a study, and an even smaller minority agreed that such inability implies that the result is false (26.2 percent).
Qualitative researchers identified reproducibility as playing a much smaller epistemic role compared to the discipline as a whole. A small percentage of qualitative researchers agreed that reproducibility is important for validating research (25.0 percent) or establishing its credibility (20.0 percent). Qualitative researchers similarly placed less emphasis on reproducibility as a means of increasing the accessibility and extensibility of research. Few qualitative researchers had less trust in a study when researchers did not share their data (27.5 percent) or saw reproducibility as important for sharing research with academics (27.5 percent).
The data from our sample of researchers show that there is broad recognition of reproducibility and its importance in geography, with three caveats: conflation of reproducibility and replicability, different perspectives from researchers using qualitative methods, and caution about judging the trustworthiness or validity of published research based on the success or failure of an attempt to reproduce a study. In this context, are individual researchers aware of the research practices needed to enhance reproducibility, and have these practices already been adopted for use in research?
Familiarity and Experience with Reproducible Research Practices. Geographic researchers were familiar with an average of 3.26 different reproducible research practices, but only reported experience using an average of 1.44 of these practices in their own work. Table 2 presents researcher familiarity with and use of five different reproducible research practices. More than half of all researchers reported familiarity with data sharing (86.7 percent), open source software (85.3 percent), field and lab notebooks (67.0 percent), and code sharing (59.2 percent). A far smaller number of researchers reported using these "familiar" practices regularly in their own work, however. Less than half of the researchers surveyed reported sharing their data (44.5 percent), using open source software (38.1 percent), using field or lab notebooks to record their work (40.0 percent), or sharing their code (18.8 percent) most or all of the time. Only a small subset of researchers reported familiarity with the preregistration of research designs and protocols (27.5 percent) or regular use of this practice (2.7 percent).
Researcher familiarity with and use of reproducible research practices varied by disciplinary subfield and methodological approach. Researchers who identified as physical geographers or methodologists and GIScientists reported being familiar with one to two more reproducible research practices than human geographers and those focused on nature and society. Researcher practices similarly diverged by subfield, but no subset of researchers reported using on average more than two of these practices regularly in their work. Quantitative and mixed-methods researchers reported familiarity with and use of an average of two more reproducible research practices when compared to qualitative researchers.
Differences in researcher familiarity with and use of specific reproducible research practices across subfields and approaches were greatest for practices more typical of quantitative workflows. When compared to qualitative researchers, quantitative and mixed-methods researchers reported greater familiarity with all reproducible research practices. For example, just 12.5 percent of qualitative researchers reported familiarity with code sharing, whereas 81.5 percent of quantitative and 57.7 percent of mixed-methods researchers reported familiarity with the same practice. Even among quantitative and mixed-methods researchers, familiarity with reproducible research practices did not translate into regular use of those practices (Table 3).

Qualitative geographers might be the one group that deviates from this consistent pattern. These researchers identified fewer barriers to reproducibility on average, but with a greater variance that left us unable to distinguish this group from any other. To examine differences in the specific factors that researchers believe hinder reproducibility in the discipline, we divided the twelve factors into three groups: those related to the research environment, the availability of research artifacts, and study-specific characteristics.
Geographic researchers identified the incentive structure of the research environment as an important barrier to reproducibility. A majority of geographic researchers identified both the pressure to publish original research (71.5 percent) and insufficient oversight of the research process (71.1 percent) as barriers. A minority of qualitative researchers identified both factors as barriers, but a majority of researchers in all other approaches and subfields identified both factors as barriers to reproducibility. Physical, methods-focused, and quantitative researchers identified these factors as barriers in higher numbers. A minority of geographic researchers (28.4 percent) believe that the fabrication of data, the manipulation of research results, and similar forms of fraud are a cause of irreproducibility in the discipline. This percentage is consistent with concerning results from large surveys and meta-analyses of research on scientific fraud across other scientific disciplines (Fanelli 2009; Baker 2016).
Researchers identified the unavailability of research artifacts (e.g., data) as a second barrier to reproducibility, but the importance placed on different artifacts varied by subfield and methodological approach. A higher percentage of physical and methods-focused researchers identified all five of the artifacts we investigated as common barriers to reproducibility as compared to human and nature-society researchers. The largest differences between these groups existed in researchers' beliefs about how often the availability of research protocols and code and the use of restricted data or software affected reproducibility. A similar gap existed between qualitative researchers and mixed-methods or quantitative researchers with regard to identifying code availability or the use of restricted data or software as contributing to irreproducibility.

A majority of researchers identified the complexity and variability of a system (71.5 percent), researcher positionality (64.2 percent), and chance (62.3 percent) as study-specific factors limiting the reproducibility of geographic research. Minor variations in the emphasis placed on these factors exist across subfields and approaches. A higher percentage of nature-society and physical researchers emphasized the important role that spatial variation and the complexity of geographic processes can play when attempting to reproduce geographic research, but this factor was also recognized by researchers across subfields and approaches. A smaller percentage of physical geographers placed emphasis on the impact researcher positionality could have on reproducibility when compared to all other subfields. Positionality acknowledges that knowledge is embedded in power relations and that researchers' social and cultural positions affect their relations with research subjects and materials, thus necessitating declaration of that researcher position to evaluate findings (Pratt 2009; Qin 2016; Holmes 2020). Researchers declare and reflect on their positions to assess how their own identity and history might influence aspects of the research process, such as data collection and interpretation. Qualitative researchers were the group most likely to identify positionality as a barrier to reproducibility (80.0 percent). Differences between the computational environment (computer hardware and software) used to conduct an original study and a reproduction attempt were generally not seen as a factor contributing to a lack of reproducibility in the discipline. Of all subgroups, only a majority of methods-focused and quantitative researchers were concerned with computational environments, reflecting research practices used in their areas of research.
Attempted Reproductions
A total of 102 of the researchers who responded to our survey (46.8 percent) reported attempting a reproduction study during the past two years. Twenty-three of those researchers, however, were reproducing their own research results, and another thirteen were replicating prior studies in new locations. In the end, only thirty-two (14.7 percent) of all respondents reported attempting to reproduce a study originally conducted by another researcher during the past two years.
This subset of thirty-two participants formed the basis for our analysis of researcher practices and experiences when attempting reproductions of the work of others. Reproduction attempts were predominantly made by geographic researchers who self-identified with the physical geography (43.8 percent) or geographic methods and GIS (37.5 percent) subfields. Respondents attempting reproductions were also focused on quantitative (68.8 percent) and mixed-methods (41.2 percent) approaches. Only eight of the researchers who attempted reproductions reported submitting any of their findings for publication.
Most of the thirty-two researchers who attempted to reproduce a prior study reported at least some success in accessing data and procedures and in reproducing the prior study results. The majority of researchers (87.5 percent) were able to access some of the data used in the original study, but few researchers (12.5 percent) reported access to all of the original data. Researchers also reported the ability to access at least some information about the study procedures (68.8 percent) and computational environment (59.4 percent), but limited ability to access all procedural (9.4 percent) and computational environment information (12.5 percent).
Reproduction attempts might produce results for comparison to some or all of the results in a prior study. A reproduction could be identical, by finding the exact same results, or could be partial, by finding slightly different results that still support the same conclusions. Nearly all researchers reported at least partially reproducing some results (81.3 percent), but only seven (21.9 percent) reported being able to at least partially reproduce all results. Only three researchers (9.4 percent) were able to identically reproduce all results.
The reproduction attempt rate and success rates we observed are similar to analogous rates reported in other studies of the reproducibility of geographic research. For example, the Konkol, Kray, and Pfeiffer (2019) survey of participants from the European Geosciences Union General Assembly found that 7 percent of respondents reported often or always attempting to reproduce the results of other studies. The authors also found rates of reproduction success similar to those identified in our survey. Specifically, the authors found 24 percent of the survey respondents reported being able to often or always reproduce results and 38 percent reported being able to sometimes reproduce results.

Access to prior study data and procedural information appears to affect the ability to reproduce prior study results. When researchers had access to some of the data from the original study, they reported being able to at least partially reproduce all results in six of twenty-four instances. That success rate rose to three of four when researchers reported access to all data. Procedural information and code appear to matter as much as data. When researchers had access to some of the procedural information from the original study, they reported being able to at least partially reproduce all results in six of nineteen instances. That success rate rose to three of three when researchers reported access to all procedural information and all code.
The small number of reproduction study attempts reported in the survey results makes it difficult to draw broad conclusions. The results are internally consistent, however, and intuitively support the importance of available data and procedures for the reproducibility of geographic research.
Discussion
Our survey results indicate that geographic researchers are aware of reproducibility and reproducible research practices but have yet to incorporate many of those practices into their own work. We found that few researchers attempt to independently reproduce the work of others, or to publish the reproduction attempts they do undertake. In alignment with the broader reproducibility literature, geographic researchers identify the lack of methodological transparency and the unavailability of data and procedural information as key barriers to reproducibility in the discipline. These findings align with a small survey of conference participants conducted by Nüst et al. (2018), which found that geographic researchers understood the importance of reproducibility but identified data restrictions and a lack of time as key barriers to making their own work more reproducible. Our results also suggest the need to change the culture of research, publication, and promotion within the discipline. This new culture would recognize and reward both original research that is reproducible and attempts to conduct and publish reproduction studies. On the whole, some awareness of reproducible research practices and the infrastructure to attempt reproductions and publish reproducible work exist within the discipline, but geographers have yet to make either a regular part of disciplinary practice.
Our findings also suggest that geographic researchers do not share a single definition of reproducibility. Although researchers share beliefs about the epistemological functions of independent reproductions, they provide definitions that contain different requirements for similarity across studies in terms of data, procedures, results, and context. Moreover, a subset of researchers define reproducibility as what NASEM (2019) defined as replicability: the ability to obtain consistent results across studies designed to answer the same question, each of which has obtained its own data. The interchangeable use of reproducibility and replicability, or the outright reversal of definitions we observed in our sample, has also been documented across the sciences (Plesser 2017; Barba 2018). Given that geography has no established standard use of either term and that many geographic researchers are also trained in other disciplines, it is likely that researchers at least partially inform their definition of reproducibility using concepts prominent in their cognate fields.
The variation in terminology we observed is important for at least two reasons. First, variation in geographic researchers' understanding of reproducibility reflects the discipline's diverse traditions and ways of knowing. Acknowledging this diversity as a strength of the discipline, productive discussions about reproducibility should consider how reproducible research practice fits into different traditions and what common understanding exists across traditions. Second, if researchers lock into a protracted debate about terminology, the community might hinder a more productive discussion about the epistemological role independent reproductions and open science practices can or should play in the discipline. Our findings point to potentially productive pathways for such a discussion. For example, qualitative geographers are the subset of respondents that most frequently diverged from respondents using other approaches and working in other subfields. This subset of respondents had much less familiarity with and use of reproducible research practices and more frequently disagreed that reproducibility was compatible with their epistemological approach. These differences might also explain their lower rates of reporting barriers to reproducibility. Our qualitative respondents, however, did consistently value particular epistemic functions of reproducibility at higher rates than their disagreement with reproducibility on epistemological grounds suggests. This contradiction suggests that qualitative methodologists and reproducibility researchers have yet to meaningfully engage despite sharing some common values. One commonly held value that could serve as a platform for such an engagement is the shared belief in the importance of transparency and precise communication in research.
Qualitative researchers might also have much to contribute to the reproducibility literature owing to their unique perspective and approach to research. For example, qualitative researchers are more concerned than any other group that research positionality is a barrier to reproducibility, but other groups also recognize the impact researcher position and experience can have on research. Perhaps one way forward is to initiate a conversation that highlights how reproducibility is not an absolute standard or determinant of research quality, but instead a means of clarifying for others what was done in a study and why conclusions were drawn as they were. Even if a researcher believes positionality influences data collection and interpretation, using reproducible research practices to control all variables of research design except researcher position might help convey that position and its impact on study results. In other words, reproducibility could open up possibilities for new research questions about researcher positionality and its implications for the evaluation of qualitative and quantitative results. To our knowledge, there is little explicit discussion in the reproducibility literature about how researcher positionality can or should be recorded and conveyed to other researchers. Such work could also move the conversation about reproducibility away from a current focus on the exact re-creation of numerical results, and back to the practice's deeper function: independently assessing the claims of prior research.
Finally, although our results most directly inform quantitative and computational forms of research, we see a number of ways in which the practices examined in our survey can be used to improve the reproducibility of qualitative research in geography. Work in other disciplines can provide a foundation for these improvements. For example, Aguinis and Solarino (2019) developed twelve transparency criteria qualitative researchers studying management can use to catalog and share their data, methods, and overall approach. Roberts, Dowell, and Nie (2019) similarly introduced a methodology to guide reproducible codebook development for thematic analysis. The challenge will be adapting these approaches to geographic analysis. Take the case of using video and audio recordings to capture and share qualitative data. Combining Roberts, Dowell, and Nie's (2019) codebook methodology with version tracking software, a researcher could create a well-documented and detailed record of the interview coding process. When provided with the original recordings, that iterative record could be used to understand, re-create, and assess the final coding. When used to examine geographic phenomena, however, this approach might raise questions about the preservation of participant anonymity. For example, such recordings might require the redaction of not only participant information, but references made in conversations to particular places and times that could identify participants. These redactions both change the amount of information contained in the data, which could make it less useful to a study, and create the additional need to track redaction procedures. Reproducibility might help communicate the rigor of social science research practices and critical social science might help mediate competing values of reproducibility and protection of research subjects. This work remains limited and challenging, however, requiring creative solutions and improvements to public open science infrastructure.
Limitations
To our knowledge, our work is the first systematic attempt to survey a diverse set of geographic researchers about reproducibility. To draw a reliable and generalizable understanding of this issue, we developed a robust sampling frame representative of the diversity of active geographic researchers. Ideally, we would stratify this set of potential respondents into meaningful subgroups based on their knowledge of reproducibility, and then randomly draw participants from these subgroups. If our resulting sample was imbalanced, we would then use a poststratification procedure to balance the response.
We could not follow this approach for two reasons. First, meaningful stratification and poststratification require knowledge of what predicts differences in response. Given the currently limited understanding of reproducibility within geography, prior to our study we could only speculate about the researcher characteristics predictive of different levels of familiarity and experience with reproducible research (e.g., subfield, methodological approach, or career position). We did not have the knowledge needed to identify reliable predictors. In this respect, our survey lays an initial foundation for examining reproducibility in subsequent studies by providing the first discipline-wide measurement of predictive researcher characteristics. Second, meaningful stratification and poststratification require a population-wide census of key predictors of reproducibility. We are not aware of any census of geographic researchers that contains these data, and we believe that conducting such a census would be difficult given the diversity of the field and the fuzzy boundaries between the discipline's subfields. Given the limitations to stratifying or balancing a survey on reproducibility in geography, our study should be viewed as an exploratory analysis with random sampling and a transparent, reproducible methodology for sample frame construction.
Absent stratification, we have taken steps to reduce several forms of potential bias in our survey. We have worked to eliminate exclusion bias by including in our sampling frame all researchers publishing as corresponding authors in a wide range of geography journals over a five-year period. Although we cannot eliminate the possibility of self-selection bias from our survey, we attempted to quantify potential self-selection by calculating and comparing the completion rates across subfields and approaches. Completion rates for all subfields were between 84.0 percent and 87.0 percent, except slightly higher rates for geographic methods and GIS researchers (96.8 percent). Completion rates were 84.2 percent for mixed methods, 87.0 percent for qualitative methods, and 91.1 percent for quantitative methods. These values suggest that self-selection was not a significant issue. Finally, we attempted to mitigate the potential for questionnaire bias, which could be caused by partially basing our survey instrument on prior studies that overrepresent perspectives from the computational and experimental sciences. To address this concern, we incorporated into our survey questions from a parallel review of the reproducibility literature available within geography and a review of critiques of positivist science made by social scientists and human geographers. We also included space for text-based qualitative responses in each survey theme, and pilot tested our instrument with a diverse set of geographers.
In light of our finding that participants often provided definitions of reproducibility that only partially matched the NASEM-based definition used in our survey, we cannot be certain which definition respondents had in mind when answering survey questions. We attempted to preemptively address this concern by repeatedly providing the NASEM-based definition during the survey. Although we cannot directly assess which definition each participant used when responding to our questions, we have attempted to indirectly measure this issue and its potential effect on aggregate participant response. Specifically, we examined whether participants who provided reproducibility definitions that shared one or two characteristics with the NASEM definition answered survey questions differently than those that provided definitions that shared three or four characteristics. We found little difference in the responses of researchers in these two groups. For example, 83.8 percent of participants who provided definitions highly similar to the NASEM definition identified unavailable data as a barrier to reproduction, whereas 92.9 percent of participants with low similarity identified this same factor as a barrier. Participants in the two groups also provided similar estimates of the percentage of reproducible results published in the discipline: 39.4 percent for the low-similarity group compared to 40.3 percent for the high-similarity group. These levels of similarity were observed across all survey questions. As a final robustness check, we conducted a similar analysis by splitting our sample based on our identification of participant definitions that closely aligned with the NASEM definition of replication. We again found little difference in survey responses between these groups. These checks led us to conclude either that participants used our provided definition when answering questions or that differences between our provided definition and those of participants were unlikely to have affected their response to our specific set of survey questions.
Conclusion
In this study, we have provided the first systematic survey of the use of reproducible research practices across geography's diverse research traditions. Our results make clear that geographic researchers are aware of reproducible research practices but lack direct experience using those practices. Academic incentive systems and the inaccessibility of key components of prior research hinder reproducibility, and only a small percentage of researchers are attempting to independently reproduce past work.
Arising from the survey results, we see an opportunity for geographers to contribute to the interdisciplinary challenges and debates surrounding reproducibility. There has been a tendency to reduce reproducibility to a matter of sharing computational artifacts (e.g., data and code), and to codify artifact sharing as the narrowed goal of reproducibility through requirements for publishing, funding, or badging. Although these practices might allow independent researchers to more easily reproduce and evaluate some aspects of prior studies, they lose sight of the underlying epistemological functions of reproduction studies. Our results demonstrate that geographic researchers have a more varied understanding of reproducibility. Although the discipline does not agree on the importance of sharing of artifacts for computational reproducibility, there is alignment on the clear, precise, and open communication of research and the use of reproduction studies to evaluate or extend the claims of prior work. This shared understanding provides common ground for a discipline-wide debate about the role reproductions can or might play within different epistemologies and subfields, or in the presence of spatial heterogeneity and unique place-based characteristics.
Our work also creates a foundation for the further empirical investigation of reproducibility within geography and its many disciplinary traditions, and more broadly across the sciences. We have made all the materials used in the development and execution of this research openly available so that others can critique and extend our work. We urge other researchers to reanalyze our data, replicate our study, improve our sampling frame and survey instrument, and progressively create a deeper understanding of questions we only begin to address in this work. One immediate path would be to use our materials to survey geographic researchers about replicability, as our results show that some researchers appear to see a clearer role for replications over exact reproductions in their subfields, whereas others conflate reproducibility and replicability. Disentangling these concepts and connecting them with the epistemological debates presented here is particularly salient in the context of convergence research addressing the most urgent challenges facing humanity, including climate change, global inequality and poverty, global health, and political conflict.
Five years of reproducibility reviews of AGILE and GIScience conference papers conducted by Nüst et al. (2023) and Ostermann et al. (2021) consistently identified low levels of research reproducibility.
Table 1. Descriptive summary of researcher perceptions and experiences with reproducibility
Note. Each data cell contains the mean and the standard deviation in parentheses of aggregate measures of researcher definitions, familiarity, experience, and barriers. The domain of each measure is: Definition [0, 4]; Familiarity [0, 5]; Experience [0, 5]; Barriers [0, 12]. PH = physical geography; MT = GIScience and methods; NS = nature and society; HU = human geography; QN = quantitative; MX = mixed methods; QL = qualitative.
Table 2. Researcher familiarity with and use of reproducible research practices
Note. Cells report the percentage of respondents reporting being "somewhat" or "very" familiar with a reproducible research practice or using those practices "most of the time" or "always." PH = physical geography; MT = GIScience and methods; NS = nature and society; HU = human geography; QN = quantitative; MX = mixed methods; QL = qualitative.
Table 3. Barriers to reproducibility
Note. Cells report the percentage of respondents reporting each factor occasionally or frequently contributed to a lack of reproducibility in geographic research. PH = physical geography; MT = GIScience and methods; NS = nature and society; HU = human geography; QN = quantitative; MX = mixed methods; QL = qualitative. | 2023-12-04T16:38:19.451Z | 2023-12-01T00:00:00.000 | {
"year": 2024,
"sha1": "33addc99c928f4f9a2ba6fefea6ef0f90e005b9b",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/24694452.2023.2276115?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "aa298419040dfbb59285ee851ba9de485133b251",
"s2fieldsofstudy": [
"Geography"
],
"extfieldsofstudy": []
} |
13330608 | pes2o/s2orc | v3-fos-license | Performance Evaluation of DDS-Based Middleware over Wireless Channel for Reconfigurable Manufacturing Systems
Reconfigurable manufacturing systems (RMS) are rapidly becoming the choice of the production and manufacturing industry due to their quick adaptability to ever-changing market demands while maintaining the quality and cost of the products. Such systems are usually decentralized in their monitoring and control and consist of heterogeneous components. Therefore, the need arises for an interface that can mask the heterogeneity and provide smooth communication among these dissimilar components. Data Distribution Service (DDS) is a data-centric middleware standard based on the Real-Time Publish/Subscribe (RTPS) protocol that fulfills the job of such an interface in distributed systems. In this work, we present the idea of using DDS-based middleware over commonly used wireless channels like Bluetooth and Industrial WiFi to facilitate data communication in distributed control systems. A simulation model is developed to quantify various performance measures like latency, jitter, and throughput and to examine the suitability of the aforementioned wireless channels in distributed monitoring and control environments. The model explores various communication scenarios based upon a practical case study. The obtained results serve as an empirical proof of concept that DDS can ensure reliable and timely data communication in firm real-time distributed control systems using common wireless channels and offers extensive control over various aspects of data transmission through its rich set of QoS policies.
Introduction
Use of distributed control systems (DCS) has become quite ubiquitous in a variety of industries like oil refining, petrochemical, food processing, cement production, pharmaceutical, and so forth. Modern controllers have become powerful enough to collect data, make decisions, and issue commands on their own instead of routing the data to a master control unit as in centralized control systems. The most prominent advantages of DCS are their flexibility, agility, adaptability, and absence of a single point of failure. However, this distributed control paradigm raises its own challenges that need to be addressed before optimum benefits can be harvested.
One of these challenges is that DCS not only require I/O communication for every controller but also need horizontal (with other controllers on the same hierarchical level) and vertical (with other devices on different hierarchical levels) communication [1]. Another challenge is the heterogeneity [2] in various components used to constitute the DCS. These components, normally provided by different vendors, vary in their capabilities, data formats, mapping schemes, and I/O interfaces and, therefore, present a rather complex heterogeneous system to deal with.
To facilitate the industrial control communication in DCS and mask the heterogeneity amongst the subsystems, various middleware technologies have been proposed over the last couple of decades. Among these are Web Services, CORBA (Common Object Request Broker Architecture), Java RMI, OPC (OLE for Process Control), and so forth. These technologies can simplify the design significantly and integrate control devices despite their heterogeneity. However, these solutions lose their value in real-time environments because of their inability to adapt to certain characteristics of real-time process data [1] like periodic messages with data sampled values. Furthermore, they are nondeterministic and do not usually respect strict timelines.
Object Management Group (OMG) developed Data Distribution Service (DDS) as an open and platform-independent middleware standard that uses the Real-Time Publish/Subscribe (RTPS) protocol. DDS-based middleware deploys a many-to-many communication model and offers extensive control over a large spectrum of Quality of Service (QoS) policies [3]. Although DDS is a relatively new middleware specification, it is gaining substantial attention from researchers as a suitable solution in mission-critical industrial automation applications [4][5][6].
The wireless channel seems to be the natural choice as a communication medium in such reconfigurable environments because wired media will severely limit physical reconfiguration of the system components. Assuming that reconfigurable manufacturing systems normally span a smaller area (the assumption is generally true for small-scale enterprises), a short-range limited-bandwidth wireless channel like Bluetooth, Zigbee, or WiFi could be a suitable option. For this work, Bluetooth and Industrial WiFi have been selected as the communication channels, and we intend to measure the performance of DDS over these channels for the control data. The choice of these two channels is inspired by the fact that they are more ubiquitous and mature technologies than their contenders. They have relatively higher data rates than other commonly used RF-based technologies like Zigbee and are able to support UDP/IP and TCP/IP on the transport layer. This latter ability is particularly important when integrating them with the RTPS protocol of DDS-based middleware.
The rest of the paper is organized as follows. Section 2 briefly explains the fundamentals of DDS, and Section 3 outlines some recent related works to show the popularity of DDS in distributed industrial control applications and the relevance of this work to the published literature. In Section 4, the application of Industrial WiFi and Bluetooth in distributed control is discussed. A mathematical model of the RTPS paradigm is presented in Section 5, and Section 6 discusses a case study for experimentation along with the performance measures used to analyze the performance of DDS. In Section 7, we explain the experimental setup and the QoS values used to evaluate the proposed approach. The results are analysed in Section 8, and Section 9 concludes the discussion with a hint of potential future work.
Fundamentals of DDS
Middleware technology has been in use for the last four decades or so [8], at first explicitly as stand-alone components and later implicitly as built-in modules, in various roles and shapes. Most of the early models like CORBA [9], OPC [10], Java RMI [11], and Web Services [12] use either a client-server communication model or a message-passing communication paradigm and hence prove somewhat ineffective in real-time environments. They are also platform-dependent. To make middleware more acceptable in real-time and heterogeneous applications, OMG released Data Distribution Service [13] as a platform-independent real-time publish/subscribe middleware standard capable of implementing a broad set of mechanisms to define and manage QoS requirements. Based on these exhaustively elaborated specifications, many implementations have been developed that enable various programming languages to be combined with commonly used general-purpose operating systems. Some open-source implementations of DDS include OpenDDS and OpenSplice, whereas Connext is a proprietary implementation by Real-Time Innovations Inc.
DDS employs two levels of interfaces, which are as follows: (i) DCPS: Data-Centric Publish Subscribe is the lower level of the two interfaces and is responsible for the efficient delivery of the proper data to the concerned receiver.
(ii) DLRL: Data Local Reconstruction Layer is an optional higher-level interface, which allows integration of the middleware and the application layer.
DCPS ensures predictability and high performance of the middleware and the efficient use of system resources. DLRL automatically reconstructs the data based upon the updated values and gives the application the impression that the data are local. In this way, the middleware becomes able to transmit the data to all participating subscribers as well as update a local copy of the information. DDS offers a great deal of flexibility and scalability to distributed systems by effectively decoupling the publishers and subscribers from each other. The conceptual outline and the basic constructs used by DCPS in the information flow are shown in Figure 1. A brief description of each of them is given in the appendix.
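To make this flow concrete, the sketch below shows how the DCPS constructs of Figure 1 are typically created and used on the publishing side. It is a minimal, hypothetical example that assumes RTI Connext's classic C++ API and an rtiddsgen-generated type we call `SensorData`; the topic name and data fields are illustrative and not taken from any specific system.

```cpp
// Minimal DCPS publisher-side sketch (assumes RTI Connext classic C++ API
// and an rtiddsgen-generated type "SensorData"; names are illustrative).
#include "ndds/ndds_cpp.h"
#include "SensorDataSupport.h"  // hypothetical rtiddsgen output

int publish_once() {
    // 1. Join a DDS domain (domain 0 here).
    DDSDomainParticipant* participant =
        DDSTheParticipantFactory->create_participant(
            0, DDS_PARTICIPANT_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    if (participant == NULL) return -1;

    // 2. Register the data type and create a Topic for it.
    const char* type_name = SensorDataTypeSupport::get_type_name();
    SensorDataTypeSupport::register_type(participant, type_name);
    DDSTopic* topic = participant->create_topic(
        "MachineState", type_name, DDS_TOPIC_QOS_DEFAULT,
        NULL, DDS_STATUS_MASK_NONE);

    // 3. Create a Publisher and a typed DataWriter.
    DDSPublisher* publisher = participant->create_publisher(
        DDS_PUBLISHER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    DDSDataWriter* writer = publisher->create_datawriter(
        topic, DDS_DATAWRITER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    SensorDataDataWriter* typed_writer = SensorDataDataWriter::narrow(writer);

    // 4. Write one sample; matched DataReaders receive it through DCPS.
    SensorData sample;
    sample.machine_id = 1;        // hypothetical IDL fields
    sample.motor_speed = 1450.0;
    typed_writer->write(sample, DDS_HANDLE_NIL);

    participant->delete_contained_entities();
    DDSTheParticipantFactory->delete_participant(participant);
    return 0;
}
```

A subscriber mirrors the same steps, replacing the Publisher/DataWriter pair with a Subscriber and a typed DataReader; the middleware matches the two sides by Topic and QoS.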
Related Work
This section highlights some of the works involving middleware for real-time distributed environments and discusses the suitability of the DDS standard in industrial automation scenarios.
The importance of the middleware technology in distributed systems has been greatly emphasized in [8]. The authors have presented a brief and comprehensive history of the middleware and have narrated the evolution of the technology right since its inception in the early 1970s. It was regarded as an inevitable component of any large-scale distributed system. In their words, "trying to build a distributed application without middleware is like trying to write a simple application on a personal computer without the operating system." The authors foresee that the technology will become even more indispensable for distributed heterogeneous systems as the systems tend to grow more and more complex in the days to come.
A comprehensive study of trends and developments in the wireless manufacturing industry is conducted by Huang et al. in [14]. It has been established that wireless technology in reconfigurable manufacturing can facilitate collecting real-time plant data, improving inventory control and planning, and scheduling and executing production adaptively as a response to changing customer demands. Their survey has found the application of wireless manufacturing in a broad array of roles like part fabrication, product assembly, Just-in-Time (JIT) manufacturing, and reconfigurable production lines, among others.
Realizing that the next generation of real-time distributed systems will be highly complex high-performance environments consisting of a large number of nodes with heterogeneous characteristics, the authors in [15] proposed an approach to reduce the complexity by using iLand, which is an enhanced version of middleware for real-time configuration of service-oriented distributed systems [16]. It has been recognized that future real-time reconfigurable distributed systems are expected to offer data-intensive capabilities by means of assimilating the processing power of a large number of nodes. The systems are projected to have increased dynamic behaviour as a result of, for instance, recurrent reconfigurations. The approach presented in this work involves modelling the reconfiguration of the system as graphs containing all tentative solutions and finding a new valid solution from the complete graph every time the system undergoes reconfiguration. The results show that the solution provides a phenomenal decrease in the computation time of the reconfiguration process.
Zhang et al. [17] discussed the possibility of an RFID enabled real-time manufacturing with reconfigurable properties. They proposed a framework of reconfigurable information infrastructure for manufacturing companies. The goal is to enable the manufacturer to implement real-time and smooth dual-way connectivity between RFID enabled devices and the application system at enterprise and shop floor level. They modelled a production process as a workflow network with nodes representing the work and edges corresponding to the data and flow of control. The authors claim that their framework can allow wireless manufacturing data collection and reconfiguration in real time. However, the limited range of RFID technology may pose severe limitation on the span and topology of shop floor.
Although DDS is a very powerful and flexible technology, it may prove to be rather complex to fully comprehend particularly for novice users. This issue was spotted and dealt with in [4]. The authors floated the idea of a software component encapsulating the functionality of commonly used industrial automation controllers like PLCs, IPCs, and Robots which can then be used to create any automation application. The role of DDS in this case can simply be as a communication backbone. The paper shows how to map different traffic patterns using DDS entities taking full advantage of DDS QoS policies.
In [18], Al-Madani et al. studied the performance enhancement of limited-bandwidth wireless industrial control systems. They carried out experiments to study various performance measures of control data communication using DDS over Bluetooth and LAN. However, no experimental or software model was provided in this work, so it cannot be determined how the obtained results would be affected by spatial or topological variations in the physical system.
Large-scale mobile networks find their applications in a variety of areas like emergency response, logistics, transportation management, environmental monitoring, and so forth. These systems normally require real-time tracking of all the nodes and some means of interaction among them. The use of DDS-based middleware in such large-scale mobile networks has been studied by the authors in [19] in which they have customized middleware, termed as Scalable Data Distribution Layer (SDDL), derived from DDS specifications for online tracking and monitoring of large-scale mobile vehicle fleet spread over vast geographical area. SDDL uses two communication protocols: RTPS protocol for communication with SDDL core network over wired media and Mobile Reliable UDP for wireless communication between core network and mobile nodes. The results confirm that the proposed middleware supports mobile nodes handover and multicast and broadcast communication models in real time with acceptable round trip time delays.
Finally, the suitability of short-range limited-bandwidth communication channels, like Zigbee and Bluetooth, in industrial applications has been studied in [20]. The author discussed various merits and demerits of both technologies on different grounds. It is established that because of relatively greater data rate and faster active slave channel access, Bluetooth is better than Zigbee for machine-to-machine communication and for ad hoc connectivity between the fixed equipment and mobile devices.
Distributed Control over Wireless Channel
Ethernet is widely used in distributed industrial control as the communication channel due to its high bandwidth and minimal packet loss. However, wired media are not an option when considering RMS. To avoid unnecessary wiring and make the system tidier, Industrial WiFi can be used as a wireless substitute for Ethernet; however, it is prone to packet drops as traffic increases because it relies upon an access point to transmit the data between communicating nodes. Bluetooth, on the other hand, uses a mesh topology that eliminates the need for any such device and offers better reliability in terms of data delivery. Though it has a comparatively narrower bandwidth, it can still work well in the given area of application, knowing that data-rate requirements in industrial sensing and control applications are often low to intermediate [20]. The RTPS protocol is specifically designed to run in multicast mode over connectionless best-effort transports like UDP/IP; it can also run over connection-oriented reliable transports like TCP/IP. Therefore, integrating DDS with WiFi does not require any tethering or protocol interface between RTPS and WiFi: RTPS runs "plug and play" over WiFi. With Bluetooth, however, IP is required to run over Bluetooth so that RTPS can be integrated with the UDP/IP or TCP/IP transport protocol [21]. Figure 2 illustrates the layered architecture of DDS over Bluetooth. In the transport layer of DDS, UDP/IP provides the middleware with fine-grained control over data transmission and allows it to decide whether the transmission should be reliable or best effort, depending upon the application requirements and the underlying network type.
Above the UDP/IP layer is Zlib, which is a software library that performs data compression. It is an example of where RTI DDS developers can implement their very own data compression algorithms to suit their application-specific needs. The RTPS wire protocol uses a packet header size of 56 bytes, which includes timestamps and submessage headers [13]. This metadata, or transmission overhead, can be further reduced by the governing application, though this area has not been greatly explored yet, especially for low-bandwidth channels.
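To illustrate the kind of payload compression this layer performs, the snippet below uses zlib's standard `compress2`/`uncompress` C API on a small, made-up control-data string. It is a sketch of the general idea only and does not reflect how RTI's actual transport stack invokes the library.

```cpp
// Sketch of payload compression with zlib's standard C API. The payload
// string is invented for illustration.
#include <zlib.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    const char payload[] = "machine=M1;state=BUSY;rpm=1450;temp=71.5";
    uLong src_len = sizeof(payload);

    // compressBound() gives the documented worst-case compressed size.
    uLongf dst_len = compressBound(src_len);
    std::vector<Bytef> compressed(dst_len);

    int rc = compress2(compressed.data(), &dst_len,
                       reinterpret_cast<const Bytef*>(payload), src_len,
                       Z_BEST_SPEED);  // favour latency over ratio
    if (rc != Z_OK) return 1;
    std::printf("raw: %lu bytes, compressed: %lu bytes\n",
                static_cast<unsigned long>(src_len),
                static_cast<unsigned long>(dst_len));

    // Receiver side: decompress into a buffer of the original size.
    std::vector<Bytef> restored(src_len);
    uLongf restored_len = src_len;
    rc = uncompress(restored.data(), &restored_len, compressed.data(), dst_len);
    return (rc == Z_OK &&
            std::memcmp(restored.data(), payload, src_len) == 0) ? 0 : 1;
}
```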
The purpose of this work is to empirically show that DDS-based middleware can meet real-time, reliable, and efficient data transmission requirements in an RMS environment over the abovementioned wireless channels and sustain practical QoS support in the majority of applications. In the remaining part of this section, we present a test case setting corresponding to distributed real-time control where a judicious selection of QoS policies can lead to smooth, reliable, and on-time data transmission amongst the various components of the system.
Mathematical Model of RTPS Paradigm
In this section, we briefly discuss the mathematical model of simple publish/subscribe (PS) communication, which is at the heart of the DDS standard. Zhai et al. [22] have elaborated the interaction of the various entities in the PS model with one another using set theory. Suppose we have three actors participating in a PS system: the publisher, the subscriber, and the Information Repository. Publishers and subscribers were explained earlier; the Information Repository is responsible for defining acknowledgements. There are three types of objects: notifications, subscriptions, and acknowledgements. A publisher issues a notification about its publication, and a subscriber defines its requirements for subscription. If the notification about a certain publication matches any of the subscription requests, that publication is delivered to the requesting subscriber.
Let $\Sigma$ be a 6-tuple defined as $\Sigma = (P, S, R, N, Sub, A)$, where $P$ is a set of publishers, $S$ is a set of subscribers, $R$ is a set of Information Repositories, $N$ is a set of notifications, $Sub$ is a set of subscriptions, and $A$ is a set of acknowledgements. $\Sigma$ sketches the structure of the publish/subscribe system and marks the boundaries of the system's state space. The three entities present in the system interact with one another by performing certain actions. We define an event as an action that changes the system's state. Naturally, a PS system behaves as a discrete event system. Therefore, we let $E = \{e_1, e_2, e_3, \ldots, e_i, \ldots\}$ be a set of, possibly, infinite events that may occur. Every event $e_i$ occurs at a discrete point in time represented by $t(e_i)$. No two events can occur simultaneously; that is, if $t(e_i) = t(e_j)$, then $i = j$. Hence, we can only have an ordered sequence of events in which $e_i$ always precedes $e_j$ if and only if $t(e_i) < t(e_j)$, given that $i < j$.
In very primitive model of a PS system the following types of events can occur.
Publish. A notification is published by publisher.
Notify. Subscriber is notified about publication.
Subscribe. Subscriber activates a subscription.
Unsubscribe. Subscriber revokes an existing subscription.
Acknowledge. Information Repository issues acknowledgement.
Having laid down the bases for nomenclature, we can now mathematically define a publish/subscribe system as $\Pi = (\Sigma, E)$, where $\Sigma$ describes the structure of the system and $E$ determines its behaviour. PS system behaviour can now be modeled as an ordered sequence of events which results in changes of system state. The system state changes when, as an effect of a certain event, any individual publisher, subscriber, or Information Repository changes its state. For example, when a publisher $p$ issues a new notification, it changes the set of notifications associated with this publisher, $N(p)$, triggering a transition in $p$'s state. This change in $N(p)$ ultimately causes a change in $N(P)$, which is the superset of $N(p)$ and represents the set of all notifications issued by all publishers in the system. In the same way, the state of a subscriber $s$ changes as a result of changes in $N(P)$ and $Sub(s)$, and any change in $A(r)$ causes a change of state of the Information Repository $r$.

Figure 3: An automated reconfigurable manufacturing system (reproduced with permission from [7]).
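To make the event-ordering rules of this model concrete, the following sketch encodes them in a few lines of C++. The type and field names are our own illustrative choices, not notation from Zhai et al. [22].

```cpp
// Hypothetical encoding of the discrete-event PS model: every event has a
// unique occurrence time t(e), so sorting by time yields the total order
// e1, e2, ... described in the text.
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

enum class EventKind { Publish, Notify, Subscribe, Unsubscribe, Acknowledge };

struct Event {
    EventKind kind;
    std::string actor;  // publisher, subscriber, or Information Repository
    std::uint64_t t;    // discrete occurrence time t(e); unique per event
};

// Orders events so that e_i precedes e_j if and only if t(e_i) < t(e_j).
std::vector<Event> ordered_trace(std::vector<Event> events) {
    std::sort(events.begin(), events.end(),
              [](const Event& a, const Event& b) { return a.t < b.t; });
    return events;
}
```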
An Automated RMS
To evaluate the performance of DDS-based middleware in RMS, consider the automated manufacturing system shown in Figure 3. The system is adapted from [7] and consists of three machines represented by $M_1$, $M_2$, and $M_3$; it has five sensors ($S_1$, $S_2$, $S_3$, $S_{in}$, and $S_{out}$) and four actuators ($A_1$, $A_2$, $A_3$, and $A_D$) for blocking. The manufacturing system is constructed using PLCs. Two types of pallets, Type 1 and Type 2, are input to the system randomly. The route which these pallets may take is decided by a three-position stop, $D$, whose selection depends upon the routing information provided by the $S_1$ and $S_3$ sensors.
All three machines publish a set of values pertaining to their operation. These values include the state of the machine and various physical quantities like motor speed, pressure, temperature, and so forth. Machines can be instructed to physically reposition themselves using configuration commands sent by the control panel (not shown in the diagram). Besides issuing configuration commands, the control panel also monitors the machines' data by subscribing to their respective Topics.
Sensors 1, 2, and 3 look for the availability of their respective machines. If a machine is currently operating on some pallet, then these sensors publish a FALSE Boolean value to indicate that the machine is currently busy and no more input may be dispatched on this route. Otherwise, they transmit TRUE as soon as the machine is ready for the next input. The input sensor ($S_{in}$) keeps an eye on input availability to the system. The output sensor ($S_{out}$) monitors whether a finished product from either workstation has left the system; it helps avoid product collisions on the conveyor belt. Actuators 1, 2, and 3 are stop points and remain closed as long as the corresponding machines are working. They open to let the finished product pass and let new input get into the workstation. The dispatcher ($D$) is responsible for forwarding the input (if available) to either of the two routes or holding it until one of them is available.
The motors, actuators, and sensors can physically move across the workbench (shown as a grey strip), enabling the whole system to transform into a single-, double-, or triple-workstation system, thus facilitating configurability of the system in both a physical and a functional sense. As it is assumed that the whole setup covers a relatively small space (up to a few tens of meters), the use of a limited-bandwidth wireless channel seems proper in this scenario.
Performance Measures.
Before moving on to the experimentation, it is better to first discuss the performance measures that will determine whether Bluetooth and WiFi can fulfil real-time data communication requirements of the system.
(i) Latency: latency is the time a data packet takes to reach the receiver side. It includes the propagation delay plus the queuing delay at the receiver side. It can be calculated as in (2): $\text{Latency} = \text{RTT}/2$. In firm real-time systems, deadlines are relatively relaxed as compared to hard real-time systems. Although a packet arriving after the deadline may not be of any value, occasional longer delays or even packet loss does not cause the system to fail [23,24].
In this experimentation, every data packet is sent with a time stamp according to the publisher's clock. The publisher, upon receiving the acknowledgement for the packet, calculates the time difference between sending the packet and receiving the acknowledgement and marks the duration as the RTT. One-way latency is therefore taken as half of the RTT (a computational sketch of these measures follows this list).
(ii) Jitter: jitter is the variation in latency. Smaller values of jitter mean that the data will, most of the time, experience almost the same amount of delay during its voyage from sender to receiver. Mathematically, it can be represented as given in (3): $\text{Jitter} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(d_i - \bar{d})^2}$. Here, $N$ is the total number of delay samples and $\bar{d}$ is the mean value of the delay samples. Jitter is significant in determining the precision of the system. If a channel shows a sufficiently small average latency during a transmission session but exhibits large jitter values, it signifies that some packets take an unusually long time to reach the subscriber and any given update cannot be reliably sent within the average latency duration.
(iii) Throughput: throughput denotes the average rate of successful data transmission over a channel. This data rate does not take only the payload into consideration but also includes any protocol overhead. We used (4) to calculate the throughput: $\text{Throughput} = \frac{\text{Packets delivered} \times \text{Packet size (bits)}}{\text{Total transmission time}}$. Throughput is important in observing channel utilization. High throughput implies that bandwidth resources are properly utilized. However, increasing throughput beyond a certain point may cause congestion and subsequently result in packet loss. This in turn may cause longer delays. Therefore, monitoring throughput is pivotal in real-time mission-critical applications. Bluetooth offers several frame formats with varying header and payload sizes. The most common format has 126 bits of metadata and up to 2744 bits of payload. However, for a single time slot of 625 µs (there are 1600 frequency hops per second in Bluetooth), the frame carries only 240 bits as payload, while the frame metadata remains the same [25,26].
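The latency and jitter measures above can be computed directly from logged RTT samples. The helper below is our own illustration (not the authors' measurement harness): it halves each RTT as in (2) and takes the standard deviation of the resulting delays as in (3).

```cpp
// Computes mean one-way latency (RTT/2, per (2)) and jitter as the standard
// deviation of the delay samples (per (3)). Illustrative helper only.
#include <cmath>
#include <vector>

struct LatencyStats { double mean_latency_ms; double jitter_ms; };

LatencyStats compute_stats(const std::vector<double>& rtt_ms) {
    if (rtt_ms.empty()) return {0.0, 0.0};

    std::vector<double> delay;
    delay.reserve(rtt_ms.size());
    for (double rtt : rtt_ms) delay.push_back(rtt / 2.0);  // equation (2)

    double sum = 0.0;
    for (double d : delay) sum += d;
    const double mean = sum / delay.size();

    double sq = 0.0;
    for (double d : delay) sq += (d - mean) * (d - mean);
    const double jitter = std::sqrt(sq / delay.size());    // equation (3)

    return {mean, jitter};
}
```

As a worked example of the Bluetooth framing overhead just described: a single-slot frame carries 240 payload bits every 625 µs, giving a raw payload rate of 240 bits / 625 µs = 384 kbps, and with 126 bits of metadata per frame roughly 126/(126 + 240) ≈ 34 percent of the bits on air are overhead. The second snippet reproduces this arithmetic.

```cpp
// Worked example of Bluetooth single-slot framing overhead: 126 metadata
// bits plus 240 payload bits per 625 us slot (figures from the text above).
#include <cstdio>

int main() {
    const double slot_s = 625e-6;   // 1600 hops/s -> 625 us per slot
    const double meta_bits = 126.0;
    const double payload_bits = 240.0;

    const double payload_rate_bps = payload_bits / slot_s;        // 384 kbps
    const double overhead_ratio = meta_bits / (meta_bits + payload_bits);

    std::printf("payload rate: %.0f bps, overhead: %.1f%%\n",
                payload_rate_bps, overhead_ratio * 100.0);
    return 0;
}
```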
Experimental Setup
The simulation model corresponding to the scenario depicted in Figure 3 is shown in Figure 4. As can be seen from the figure, the model incorporates one-to-many and many-to-many communication requirements. Each rectangle and each square represent a domain participant (we are assuming that all the participants are in a single domain). These DDS entities correspond to various types of hardware devices like sensors, actuators, and motors, as shown in Figure 3. The QoS settings used in the experimentation are given in Table 1.
The rtiddsgen utility provided by RTI Connext 5.0.0 is used to generate C++ code. Visual Studio 2010 is used to build the code and Wireshark 1.2.3 is employed for traffic monitoring over wireless channels.
QoS Policies Used in Experimentation.
Below, we present a short description of the QoS policies used for the experimentation and their interdependence on one another.
(i) DURABILITY: the durability QoS decides if data should outlive their writing time, that is to say, whether or not data samples should be archived by the middleware service after they are written. The VOLATILE type does not save any sent data samples on behalf of Data Writers; however, the TRANSIENT type maintains a record of sent updates in memory, and the data are not tied to the lifecycle of the Data Writer. This means that the data will still be available even if the corresponding Data Writer goes offline. These archived samples may be delivered to late-joining subscribers who want to know what they missed.
(ii) LATENCY BUDGET: this QoS policy describes the maximum acceptable delay between the sending and receiving of the data. This is not something carved in stone, but rather just a guideline to the service. If an update fails to meet this acceptable delay, the service will not raise any flags or discard the packet.
(iii) LIVELINESS: it indicates the mechanism by which the middleware knows if any participating entity is active or has gone offline. Every Data Writer periodically signals its liveliness to all the Data Readers. The signalling period must not exceed the liveliness lease duration; otherwise, the Data Reader assumes that the Data Writer is no longer alive.
(iv) RELIABILITY: the RELIABILITY QoS policy specifies the level of reliability that a subscriber can offer or a publisher can request. It has two values, RELIABLE and BEST EFFORT. In RELIABLE mode, the Data Reader must acknowledge the receipt of each and every packet that arrives. The Data Writer does not discard any data value that has been transmitted but not yet acknowledged. This approach has a slightly negative effect on latency because the receiving entity must check the integrity and order of the received packet before acknowledging it. It also consumes some of the channel bandwidth for acknowledgements. On the other hand, if an occasional packet drop does not greatly affect the system and the application cares little about the order of the data received, it is better to use BEST EFFORT mode, which does not require sending or receiving acknowledgements.
(v) HISTORY: this QoS defines the behaviour of middleware service in case the value of the data changes before it is successfully transferred to the receiver. On sender side, it controls the number of samples that will be kept with Data Writer on behalf of Data Reader. On receiver side, it indicates the number of samples maintained by Data Reader until the subscriber application reads the data.
(vi) RESOURCE LIMITS: this policy indicates how many resources the middleware may consume to comply with the QoS requirements. The configuration of this QoS must be in accordance with the other QoS settings. For example, if RELIABILITY is set to RELIABLE, then Data Writers need some memory space to store the data samples that have been sent but not yet acknowledged. If we set max_samples_per_instance to, say, 1, then the Data Writer will not be able to cache enough unacknowledged packets to implement RELIABLE communication. Therefore, to implement the RELIABILITY QoS successfully, enough resources must be allocated using RESOURCE LIMITS (a configuration sketch follows this list).
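In code, these policies appear as plain fields on the entity QoS structures. The sketch below assumes RTI Connext's classic C++ API and reuses the `publisher` and `topic` from the earlier DCPS sketch; the numeric values are illustrative examples, not the settings of Table 1.

```cpp
// Illustrative DataWriter QoS setup (RTI Connext classic C++ API assumed);
// "publisher" and "topic" are the entities from the earlier DCPS sketch,
// and the numeric values are examples, not the paper's Table 1 settings.
DDS_DataWriterQos writer_qos;
publisher->get_default_datawriter_qos(writer_qos);

// RELIABILITY: every sample must be acknowledged by matched readers.
writer_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;

// DURABILITY: keep sent samples for late-joining subscribers.
writer_qos.durability.kind = DDS_TRANSIENT_LOCAL_DURABILITY_QOS;

// HISTORY: retain the last 10 samples per instance on the writer side.
writer_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
writer_qos.history.depth = 10;

// RESOURCE_LIMITS: enough cache for unacknowledged reliable samples.
writer_qos.resource_limits.max_samples_per_instance = 32;

// LIVELINESS: readers assume the writer is gone after a 1 s silence.
writer_qos.liveliness.lease_duration.sec = 1;
writer_qos.liveliness.lease_duration.nanosec = 0;

DDSDataWriter* writer = publisher->create_datawriter(
    topic, writer_qos, NULL, DDS_STATUS_MASK_NONE);
```

Note how RELIABILITY, HISTORY, and RESOURCE_LIMITS interact: a reliable writer with too small a sample cache cannot hold unacknowledged data long enough to retransmit it, which is exactly the interdependence described above.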
Results and Analysis
The simulation model mimics the behaviour of the communicating devices; it generates random data values periodically and publishes them. It also plays the role of the subscribing components. This simulation model runs on different computer machines connected via Bluetooth or WiFi. The generated values are transmitted over the physical channel, and actual measurements are taken and recorded for analysis.
Experiments for Latency and Jitter.
For the latency and jitter tests, we used a payload of 1024 bytes (which is the maximum payload size in the given scenario). First, a one-publisher and multiple-subscriber scenario is examined, and latency and jitter are calculated. In each run, 10,000 to 50,000 packets are sent; the tests are repeated up to 10 times, and the average is taken to make the results more precise. Table 2 shows the obtained results.
Based upon the collected data, jitter is calculated using (3). We can see that both latency and jitter increase linearly as the number of subscribers grows. Due to its higher bandwidth, WiFi latency is significantly lower than Bluetooth latency, despite the fact that WiFi packets from one node must go to an access point before being routed to the destination node. However, the values of average latency for Bluetooth, even for the worst case, are well within the acceptable range for firm real-time requirements (around 40 msec). Figures 5 and 6 show the graphs of latency and jitter corresponding to Table 2, respectively. We also observe that while WiFi jitter is far smaller than Bluetooth jitter for fewer subscribers, the gap between the two tends to shrink as more and more subscribers (and consequently network traffic) join in and packet drops increase. This is because WiFi performance at a certain instant depends greatly on the amount of traffic at that given point in time.
Latency and jitter are calculated for the many-to-many communication scenario as well. Tests are run to examine the effect of multiple publishers and multiple subscribers on latency and jitter. As expected, both performance measures have higher values when multiple participants try to transmit data over a single channel. These results are tabulated in Table 3, and Figure 7 shows the results in graphical format. Here again we can see that DDS ensures small enough latency and jitter over both communication channels to accommodate the firm real-time requirements of most RMS. However, WiFi outperforms Bluetooth in terms of latency and jitter in every scenario.
In these experiments, the maximum number of subscribers is limited to 8. This is for two reasons: firstly, our case study does not require more than 8 subscribers for any control data, and secondly, the trend in the performance measures can be easily deduced with this many participants. It is anticipated that with an increase in the number of subscribers, latency, jitter, and throughput will follow the same course as obtained in these experiments.
Experiments for Throughput.
Throughput depends upon the size of the data packets sent and the frequency of the transmission. Most of the data items used in the case study are smaller than 150 bytes; only configuration commands may extend up to 1 KB. We used this maximum packet size to calculate the throughput of the Bluetooth channel. This time, 100,000 to 160,000 data packets are sent from the publisher side and received on the subscriber side in each iteration. The total time for this communication is noted, and the throughput is calculated using (4). The experiments are conducted at least 10 times to get more precise results.
We notice from Table 4 and Figure 8 that the throughput for the given packet size is not greatly affected by the number of participants currently active; there is only a small and random difference in the throughput for various numbers of subscribers. Bluetooth provides extremely high throughput for a 24 Mbps channel (for Bluetooth 3.0 + HS). Though the throughput of WiFi is significantly lower, it provides better latency values for low data-rate communication.
Like many-to-many latency and jitter experiments, we also conducted tests to measure average throughput over Bluetooth and Industrial WiFi in many-to-many mode. Various configurations of publishers and subscribers are examined. Table 5 summarizes the results obtained for both channels. We can observe from Figure 9 that throughput is not significantly affected by the number of participants in many-to-many scenarios as well and once again Bluetooth surpasses WiFi in terms of higher average throughput. It should be noted, however, that tests on WiFi require a very controlled environment to avoid any unnecessary traffic and ensure that only DDS applications are using the channel. This is not the case with Bluetooth because of its point-to-point connection.
Conclusion and Future Work
This work is a first attempt to investigate the suitability of Bluetooth and Industrial WiFi in RMS applications taking full advantage of DDS-based middleware. It is established that most distributed industrial control systems involve the exchange of simple control parameters and system states and therefore require relatively low data rates. DDS-based middleware can mediate among heterogeneous devices in a typical small-area RMS by offering a data-centric communication paradigm for abstracting their peculiar data representations. The results show that DDS over these wireless channels fulfils the real-time data communication requirements of most limited-bandwidth small-area control systems. They offer high throughput for small data packets and low latency, which is suitable for firm real-time systems. These performance measures, in conjunction with the inherent security and reliability of these channels, make them a safe and reliable choice for most RMS applications. A literature survey reveals that not enough attention has been granted to wireless RMS applications and the role of DDS-based middleware. The results obtained in this work are encouraging and call for further focused research to study the proposed concept in greater depth. For future research, the use of DDS over Zigbee in a DCS environment can be an interesting area to investigate. Injecting RTPS data over a rather low-bandwidth Zigbee network can be challenging; research in this area can present a proof of concept of Zigbee's suitability in mission-critical real-time applications using DDS-based middleware. It is, however, expected that latency and jitter may increase significantly given the narrower bandwidth of the channel. Work in this area is currently underway.
"year": 2015,
"sha1": "8aabf0fbe70d601e7261e56cca807abf392b72c3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2015/863123",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5682ba5cc1eaeb850bcd4004accacfc884019aee",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
45845771 | pes2o/s2orc | v3-fos-license | Spontaneous sublingual hematoma due to warfarin: An emergency presenting to the dermatologist
Sir, A 47-year-old man presented to us with a painless red swelling of his tongue causing difficulty in speech and swallowing for 2 days. He had been on oral warfarin (4 mg/day) for atrial fibrillation for 6 months before this. His international normalized ratio (INR) was monitored weekly with a target of 2–3. There was no history of recent trauma or external bleeding. Physical examination revealed a tense, tender, red submucosal hematoma involving the floor of the mouth and ventral lingual surface bilaterally [Figures 1 and 2]. The tongue was pushed slightly upward and the patient could protrude his tongue only with difficulty. Vital parameters were normal and systemic examination was non-contributory.
Flexible endoscopic examination revealed that there was no extension of the swelling into the pharynx, laryngeal mobility was normal and the airway was not compromised. Laboratory tests showed a hemoglobin level of 12.3 g/dl with normal leukocyte (6000/mm³) and platelet (1.4 lakh/mm³, i.e., 140,000/mm³) counts. C-reactive protein and erythrocyte sedimentation rate were not raised. However, the INR at presentation was high (4.8).
Since there were no signs of impending airway compromise, he was managed conservatively with a single dose of vitamin K (5 mg intravenously) and 5 units of fresh frozen plasma.
Warfarin was discontinued, and the INR returned to normal within 48 h. The hematoma also decreased in size with improved tongue mobility within a couple of days. The patient was then put on dabigatran (150 mg orally, twice daily), a direct thrombin inhibitor which is less often associated with bleeding and does not require INR monitoring. The patient is still under our follow-up and has not had any further bleeding episodes or embolic manifestations.
Warfarin is frequently used for the prevention of embolic events. [1] Bleeding complications of warfarin have typically been described in the genitourinary and gastrointestinal tracts, the skin, the central nervous system, the nose (epistaxis), the penis (priapism) and the retroperitoneum. [1][2][3][4] Sublingual hematoma is a rare but potentially fatal complication of oral warfarin therapy. [2,3,5] There are reports of postoperative deaths following spontaneous sublingual hematomas from anticoagulation. [3,5] There are also case reports of airway obstruction from spontaneous sublingual hematomas secondary to oral anticoagulation. [3,5] It is imperative to differentiate this condition from infectious processes such as Ludwig's angina as they are managed differently. [2] Securing the airway should be the main concern, and prompt reversal of anticoagulation with close observation is required. [1][2][3][4] In the absence of airway compromise necessitating an artificial airway, medical therapy with reversal of the coagulopathy with vitamin K, fresh frozen plasma or factor concentrates remains the mainstay. [2] With an expanding hematoma, elevation of the tongue and floor of the mouth can cause airway obstruction. In these cases, laryngoscopic intubation is difficult. Early definitive airway stabilization should be the priority with rapid sequence intubation. If rapid sequence intubation fails, emergency cricothyroidotomy or tracheostomy may be performed for definitive airway stabilization in the emergency department. [6] Any patient on oral anticoagulation who presents with a sore throat or swelling of the tongue should be evaluated carefully because these symptoms may herald acute airway obstruction. Patients and their relatives should be educated about the side effects of these drugs. Since patients often initially seek a dermatological opinion for oral mucosal disorders, dermatologists may encounter sublingual hematomas in their practice with the increasing use of anticoagulation therapy. Dermatologists may therefore play a role in the prompt recognition of a sublingual hematoma by distinguishing it from other similar-looking conditions (Ludwig's angina, traumatic swelling, vascular malformation, hemorrhagic mucocele), and by timely appropriate referral, they might prevent airway compromise.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T06:11:49.038Z | 2016-07-01T00:00:00.000 | {
"year": 2016,
"sha1": "19b6f28f6885bf538b6c655551728e9007dda3c6",
"oa_license": null,
"oa_url": "https://doi.org/10.4103/0378-6323.181469",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "a6a154d83fb8de600c4c61f4aa32abccf7ecf83e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155789672 | pes2o/s2orc | v3-fos-license | Impact of the model of long‐term follow‐up care on adherence to guideline‐recommended surveillance among survivors of adolescent and young adult cancers
Abstract Purpose Adolescent and young adult cancer survivors require lifelong healthcare to address the late effects of therapy. We examined the impact of different provider models of long‐term follow‐up (LTFU) care on adherence to recommended surveillance. Methods We conducted a retrospective cohort study using administrative health databases in Ontario, Canada. Five‐year survivors were identified from IMPACT, a database of patients aged 15–20.9 years at diagnosis of six cancers between 1992 and 2010. We defined three models of LTFU care hierarchically: specialized survivor clinics (SCCs), general cancer clinics (GCCs), and family physicians (FPs). We assessed adherence to the Children's Oncology Group surveillance guidelines for cardiomyopathy and breast cancer. Multistate models assessed adherence transitions and the impact of LTFU attendance. Results A total of 1574 survivors were followed for a mean of 9.2 years (range 4.3–13.9 years) from index (5‐year survival). The highest level of LTFU attended in the first 2 years post‐index was a GCC (47%); only 16.7% attended an SCC. By the end of study, 72% no longer attended any of the models of care and only 2% still attended an SCC. Among 188 survivors requiring breast cancer surveillance, 6.9% were adherent to their first required surveillance testing. Attendance at an SCC in the previous year and higher cumulative FP or GCC visits increased the rate of subsequently becoming adherent. Among 857 survivors requiring cardiomyopathy surveillance, 11% were adherent at study entry. Each subsequent SCC visit led to an 11.3% (95% CI: 1.05–1.18) increase in the rate of becoming adherent. Conclusion LTFU attendance and surveillance adherence are sub‐optimal. SCC follow‐up is associated with greater adherence, but few survivors receive such care, and this proportion diminished over time. Interventions are needed to improve LTFU attendance and promote surveillance adherence.
| INTRODUCTION
Although cancer is the leading cause of disease-related death among adolescents and young adults (AYA) aged 15-29, 1 improvements in therapy and supportive care have resulted in over 80% of AYA diagnosed with cancer becoming long-term survivors. 2 Survivors are at an elevated risk for developing chronic physical and psychological morbidities (late effects) that can impact both the quality and duration of their life. 3,4 As a result, lifelong healthcare focused on each survivor's specific risks has been advocated. 5 Organizations such as the North American Children's Oncology Group (COG) have published surveillance guidelines for the late effects of cancer therapy 6 such as subsequent malignancies and cardiac dysfunction.
Several models have been proposed for the delivery of long-term follow-up care (LTFU) to survivors of AYA cancers. These vary by provider (e.g., oncologist, primary-care physician such as a family doctor or internist, nurse practitioner) and location (specialized survivor clinic vs. general cancer clinic vs. family physician office). 7 The optimal LTFU model has not yet been established. 8 While family physicians are more accessible to most survivors, they frequently lack specialized knowledge and comfort relevant to this population. 9 In Ontario, Canada, the Ministry of Health and Long-term Care funds a network of specialized multidisciplinary survivor clinics intended to provide lifelong risk-based care to cancer survivors diagnosed prior to age 18 years. However, access is conditional on having been treated at a pediatric institution. Since AYA diagnosed with cancer can receive therapy at a pediatric center, adult cancer center, or community hospital, these specialized survivor clinics are not accessible to all AYA survivors. Survivors in Ontario who do not attend a specialized clinic may receive LTFU care from an oncologist in a general cancer clinic in an adult cancer center or community hospital, from their FP, or have no LTFU care at all. Information on the impact of LTFU care models on adherence to surveillance in AYA survivors is minimal; some prior work in survivors of childhood cancers in Ontario has suggested that attendance at SCCs had a strong association with adherence to cardiomyopathy screening compared to no attendance. 10 To investigate LTFU attendance and its relationship with adherence to surveillance further, we linked provincial cancer registries to administrative health databases in Ontario to determine the models of LTFU care accessed by this population of AYA cancer survivors and to understand whether location of survivor care is related to adherence to recommended surveillance.
| METHODS
This retrospective cohort study was approved by the Research Ethics Boards at the Hospital for Sick Children and Sunnybrook Health Sciences Centre. The cohort was identified from the Initiative to Maximize Progress in Adolescent and Young Adult Cancer Therapy (IMPACT), an Ontario provincial AYA cohort 11 of patients aged 15-21 years at diagnosis of one of six prevalent AYA cancers (leukemia, Hodgkin lymphoma, non-Hodgkin lymphoma, testicular cancer, bone sarcoma, or soft-tissue sarcoma) between 1992 and 2010. Eligible survivors had survived at least 5 years from initial cancer diagnosis. Survivors identified in IMPACT were linked to provincial health administrative databases, described in Table S1 (Online Only), using a unique, encoded identifier. These databases are housed at ICES, a not-for-profit research institute that holds an array of Ontario's health-related data. Data sharing is not applicable to this paper as no datasets were generated or analyzed for the current study.
| Follow-up period
Survivors were followed from an index date, defined as 5 years from their primary cancer diagnosis. If survivors experienced a subsequent malignant neoplasm (SMN), relapse, or progression within 5 years of diagnosis, the index date was re-defined as 5 years after the most recent event. The occurrence of relapse or progression of disease or development of an SMN was captured by IMPACT. Survivors were observed from index until the earliest of the end of study (31 December 2016), death, or any cancer event occurring more than 5 years from the initial cancer diagnosis.
| Exposure
We classified LTFU care into three categories: (i) Specialized survivor clinics--one of five SCCs for adult survivors of childhood cancer; (ii) General cancer clinic--care at an adult cancer center or community hospital by an oncologist/hematologist; and (iii) Family Physician--visits to a primary care physician were considered as follow-up if they consisted of a full history and physical examination, as defined in previous studies, 12,13 using the Ontario Health Insurance Plan (OHIP) billing codes listed in Appendix S1 (online only). OHIP, a database of all physician billing claims in Ontario since 1991, was used to identify and track the utilization of physician services at each of the levels of LTFU care.
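As a sketch of how such a hierarchy can be applied per survivor and per period, the following Python fragment may help; the visit labels are illustrative only, and the real classification used clinic registries and OHIP claims rather than these strings.

def highest_level_of_care(visits):
    # Hierarchy from most to least specialized: SCC > GCC > FP; "none" otherwise.
    for level in ("SCC", "GCC", "FP"):
        if level in visits:
            return level
    return "none"

# Example: a survivor who saw a family physician and a general cancer clinic
# in the same 2-year window is classified at the GCC level.
print(highest_level_of_care({"FP", "GCC"}))  # -> GCC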
| Outcome measures
Adherence to surveillance among AYA survivors was defined according to the COG guidelines (Version 4.0). Survivors at risk for cardiomyopathy or breast cancer based on chemotherapy and radiation exposures 6 were previously captured and calculated in IMPACT. The criteria for defining risk, as well as the recommended frequency of surveillance testing, are presented in Table 1. OHIP billing codes (listed in Appendix S1, online only) identified the occurrences of breast imaging (mammogram, breast MRI, and ultrasound) and echocardiograms. Periods of follow-up were created based on each survivor's recommended surveillance according to the COG. As of 2003, when the COG follow-up guidelines were created, breast cancer surveillance was recommended annually, beginning at the later of age 25 or 8 years after therapy or diagnosis, for females who received chest radiation. Cardiac surveillance was recommended every 1, 2, or 5 years depending on the risk. Baseline adherence to breast cancer and cardiomyopathy surveillance was assessed in the 1 year preceding the date each survivor first required surveillance according to the COG guidelines. The relationship between each category of LTFU and adherence was analyzed using the following variables: LTFU attendance in the previous year, and cumulative attendance at each level of care over time.
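A minimal sketch of this adherence determination is given below; the dates and the one-test-per-window rule are assumptions for illustration, whereas the actual analysis identified tests from OHIP billing codes and derived the windows from the COG guidelines.

from datetime import date

def adherent_in_window(test_dates, start, end):
    # A survivor is adherent for a surveillance period if at least one
    # qualifying test (e.g., an echocardiogram) falls inside the window.
    return any(start <= d <= end for d in test_dates)

echos = [date(2010, 3, 2), date(2013, 6, 15)]  # hypothetical echocardiogram dates
print(adherent_in_window(echos, date(2010, 1, 1), date(2010, 12, 31)))  # True
print(adherent_in_window(echos, date(2011, 1, 1), date(2011, 12, 31)))  # False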
| Covariates
Baseline patient characteristics included sex and age at diagnosis. Treatment-related information included primary diagnosis, location of first cancer treatment (pediatric cancer center vs. regional cancer center vs. adult community hospital), receipt of chemotherapy, radiation or hematopoietic stem-cell transplant (each classified as "yes"/"no"), and occurrences of SMN, relapse, or progression of disease. Socioeconomic status (SES) was divided into quintiles of neighborhood deprivation using ON-Marg (a database that quantifies SES using neighborhood data on residential instability, material deprivation, dependency, and ethnic concentration 14 ), and rurality was categorized as urban or rural. Distances to the closest specialized survivor clinic and/or general cancer clinic were calculated as a straight-line distance from each survivor's residence. SES, rurality, and distance variables were captured for all survivors at index and updated annually to be incorporated into the regression models as time-varying covariates.
| Statistical analysis
All analyses were performed with SAS for Unix (Version 9.3). Continuous variables were reported using means and standard deviations, while dichotomous variables were reported as counts and percentages. Annual periods of follow-up were created for each individual, starting at the index date until the earliest of a censoring event or the end of the study, in order to assess a crude representation of LTFU care visits to each model. A multistate modeling framework was developed to examine adherence to surveillance over time. 15-17 Since survivor adherence status can vary widely over the course of follow-up, a multistate model was used to better reflect the natural back-and-forth transitions of survivors between adherence and non-adherence. We designated survivors as being at risk for breast cancer and/or cardiomyopathy using their cancer treatment data and the COG guidelines for screening, and analyzed each at-risk group separately. Consistent with prior work examining screening adherence, 18-20 the multistate model consisted of three states: State 1: adherent; State 2: non-adherent; and State 3: dead, relapse, or SMN (after index). States 1 and 2 were non-absorbing states, as transitions could be made back and forth between them. State 3 was an absorbing state, as no further transitions were possible after this point. We modeled the instantaneous rate/intensity of transition from one state to another and determined factors associated with each transition rate. 21 Under this 3-state model, univariable regression for each transition rate was first conducted for all predictor variables. Backward selection with a p value cut-off <0.1 was used to include variables in the multivariable regression for each transition rate, where LTFU care was retained as the main exposure during the backward selection process.
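The transition-rate idea behind the multistate model can be illustrated crudely as follows; the state histories are hypothetical, the code ignores covariates and censoring, and a real analysis would use dedicated multistate survival software rather than this count-based approximation.

from collections import Counter

# Annual adherence states per survivor: "A" adherent, "N" non-adherent,
# "D" the absorbing state (death, relapse, or SMN).
histories = [
    ["N", "N", "A", "A", "N"],
    ["A", "N", "N", "D"],
    ["N", "A", "A", "A"],
]

transitions = Counter()
time_at_risk = Counter()
for h in histories:
    for s_from, s_to in zip(h, h[1:]):
        time_at_risk[s_from] += 1  # one person-year at risk in the origin state
        if s_from != s_to:
            transitions[(s_from, s_to)] += 1

# Crude intensity of each transition = events / person-time in the origin state.
for (s_from, s_to), n in sorted(transitions.items()):
    print(f"{s_from} -> {s_to}: {n / time_at_risk[s_from]:.2f} per person-year")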
| RESULTS
The cohort consisted of 1574 AYA cancer survivors, 508 (32.3%) treated for their cancer at a pediatric center and 1066 (67.7%) treated at an adult cancer center or community hospital. Median follow-up time from index was 9.2 years (range 4.3-13.9 years). Approximately two thirds of the cohort was male (62.4%), with an equal distribution of patients diagnosed in each age group between 15 and 20.9 years of age. Baseline characteristics for all survivors are summarized in Table 2.
| Frequency of LTFU care attendance
To assess LTFU attendance hierarchically, the proportion of survivors accessing each level of care within each 2-year period since index was calculated. Among all survivors, irrespective of their initial cancer treatment location, the highest level of care within the first 2 years after index was a general cancer clinic for 47.3%, followed by a specialized survivor clinic (16.7%) and a FP (9.3%). The remaining 26.7% had no identified survivor care during this period (Figure 1). At the end of follow-up, a median of 8.7 years from index (range 4.3-13.9), the highest level of care within the preceding 2 years was a general cancer clinic in 6%, a specialized survivor clinic in 2%, and a FP in 20%. Seventy-two percent were receiving no follow-up care.
| Breast cancer surveillance
There were 188 women who required breast cancer screening according to the COG guidelines. At baseline (up to 1 year prior to the date each survivor first required surveillance), 13 (6.9%) survivors were adherent and 175 (93.1%) were non-adherent. Of those not adherent at baseline, 84/175 (48.0%) remained non-adherent throughout. None of the 13 survivors adherent at baseline remained adherent throughout follow-up. Multivariable predictors of changing adherence states are presented in Table 3.
| Cardiomyopathy surveillance
An analysis of cardiomyopathy surveillance was conducted for survivors requiring annual or biennial imaging. Eight hundred and fifty-seven survivors required such screening, of whom 94 (11.0%) were adherent at baseline. Over the course of the study, 15/94 (16.0%) of those adherent at baseline remained adherent throughout, while 598/763 (78.3%) of those who were non-adherent at baseline remained non-adherent. Table 4 presents the results of the multivariable analysis of transition state predictors for survivors at risk for cardiomyopathy requiring surveillance annually or biennially (Tables S6 and S7 display univariable results). Survivors who received anthracycline chemotherapy (with or without radiation to the chest) had a higher rate (RR 1.75, 95% CI 1.21-2.54, p = 0.003) of remaining adherent compared to survivors who received radiation to the chest but no anthracyclines. Moreover, survivors who had a specialized survivor clinic visit in the previous year (RR 1.60, 95% CI 1.02-2.52, p = 0.042) showed a greater rate of becoming adherent from a previously non-adherent state. Cumulative counts of prior specialized survivor clinic attendance had the largest impact on the rate of becoming adherent (RR 1.11, 95% CI 1.05-1.18), with cumulative visits to general cancer clinics (RR 1.03, 95% CI 1.00-1.05) and FPs (RR 1.07, 95% CI 1.01-1.13) having significant but smaller impacts.
| DISCUSSION
In this population-based study of 1574 survivors of AYA cancer, almost three-quarters had no regular source of LTFU care by the end of the study, an average of 9 years after entering survivorship. Moreover, a concerningly low proportion of survivors was adherent to guideline-recommended surveillance for late effects of cancer therapy. By the end of the study, only half of at-risk survivors were adherent to breast cancer surveillance. Sixteen percent of those who required annual echocardiography and 48% of those who required biennial echocardiography were adherent. These low adherence rates indicate that most survivors are not being monitored appropriately for late effects, decreasing their chances of earlier detection and improved outcomes. The most common location of follow-up care in the first 2 years after entering survivorship was a general cancer clinic, suggesting that most 5-year survivors who remain in active care are still initially engaged with the clinic where they received their cancer therapy. Survivor preference is a critical factor in determining where follow-up care is received. A recent Swiss study revealed that AYA survivors rated follow-up care from the medical oncologist who provided initial therapy higher than all other models of care, including attendance at a multidisciplinary survivor clinic, visits to a general practitioner, or follow-up by telephone/questionnaires. 22 Similarly, a US study demonstrated that the majority of AYA cancer survivors preferred to receive LTFU care from their primary oncologist, with whom they already had a close relationship. 23 However, GCCs usually care for a mix of on-treatment and off-treatment patients, may focus less on monitoring for late effects of therapy, and frequently do not have a multidisciplinary team with specific expertise in survivorship. These observations are supported by our data, which showed that even though survivors were more likely to attend GCCs, specialized clinic attendance led to greater adherence to breast and cardiac surveillance guidelines.
Among the survivors in our study, FP visits accounted for the highest proportion of LTFU care attendance over time. However, without being provided with appropriate information about a survivor's prior treatment, future risks, and recommended surveillance, FPs may not tailor their history, physical exam, and counseling to a survivor's prior cancer. A report from the North American Childhood Cancer Survivor Study (CCSS) revealed that only 17.8% of survivors who saw a FP reported receiving care that included advice on how to reduce their risk for late effects or the discussion/ordering of screening tests. 23 A survey of 1124 FPs across the United States and Canada revealed that only 33%, 27%, and 23% of respondents felt comfortable caring for survivors of Hodgkin lymphoma, ALL, or osteosarcoma, respectively. Furthermore, only 16% and 10% could correctly identify the appropriate guideline-recommended surveillance for survivors at risk for breast cancer and cardiomyopathy, respectively. 9 Many FPs express discomfort caring for AYA cancer survivors, 9 and as a result, appropriate referrals may not be made for surveillance according to the guideline recommendations.
Despite the location or provider of follow-up care, rates of surveillance were low for both breast cancer and cardiac screening. The CCSS recently reported on 8522 survivors and demonstrated poor adherence to surveillance among survivors at high risk for breast, skin, and colorectal cancer and cardiac disease. 24 In our study, the majority of survivors did not have a regular source of LTFU care; however, among survivors who attended one of the models of LTFU care, cardiomyopathy surveillance was less likely among survivors receiving care in a general cancer clinic or from a FP compared to care from a specialized clinic. Although adherence to surveillance recommendations was generally low irrespective of LTFU attendance, it is notable that cumulative general cancer clinic and FP visits were associated with a 3% and 7% greater probability per visit of becoming adherent, respectively, while cumulative specialized survivor clinic attendance resulted in an 11% greater probability per visit of becoming adherent. These findings are consistent with a prior study of adult survivors of childhood cancer in Ontario which showed that survivors who attended a specialized clinic had a 10.6 times greater rate of adherence to annual guideline-recommended screening for cardiomyopathy when compared to no attendance. 10 Unfortunately, although all types of follow-up care have a positive impact on surveillance, low attendance rates translate into the majority of survivors not being surveilled appropriately. In our study, only one third of the cohort was eligible to attend a specialized survivor clinic since these are restricted to AYA treated at a pediatric cancer center. Despite having access to a specialized survivor clinic, few survivors attended this model of survivor care. Prior research has shown that other modifiers of attendance include distance to a specialized clinic (as these are all located in large urban centers), age at initial cancer diagnosis, and location of initial cancer therapy. 25 Barriers to appropriate LTFU care, regardless of where such care is provided, include lack of knowledge, cost, wishing to move on with life, competing life responsibilities, lower education levels and lower perceived levels of social support. 26,27 A study conducted in 2016 revealed that AYA cancer survivors have a more positive perception of their health compared to healthy controls, 28 a potential contributing factor to not seeking regular health care. Future work should focus on patient education about risks and surveillance, as well as the empowerment to seek regular care as a survivor.
F I G U R E 1 Proportion of patients accessing long-term follow-up care per period of follow-up (highest level of care shown). LTFU = specialized survivor care, GCC = general cancer clinic, FP = family physician
There is no perfect LTFU model. Our results demonstrate that sustained attendance at specialized clinics or GCCs decreases over time. A deeper understanding of the elements of SCCs that positively drive surveillance, and application of these elements to other care models that survivors are more likely to attend, such as a FP, may be beneficial to improving LTFU care. Research has shown that despite gaps in knowledge, FPs are generally willing to care for the childhood cancer survivor population if given specific tools such as patient-specific letters, survivor-care plans, or working in collaboration with a cancer center. 9 Our findings should be interpreted in the context of several limitations. First, we focused on a cohort located in a single province in Canada, which may limit the generalizability of our results, since not all jurisdictions have the same hierarchy of models available. However, SCCs exist across Canada, as well as in many countries (although not all countries provide universal access to health care, and health insurance status is likely an important determinant of health care access 29 ). Second, our young AYA cohort was aged 15-21 years of age at diagnosis, while AYA has been variably defined in the literature to include adults up to the age of 39. However, the young age of our cohort encompasses many life transitions such as graduation, moving away from home, and transferring to adult care providers, suggesting that this is a particularly vulnerable group of AYA. Lastly, it was not possible to know exactly what occurred during each captured LTFU visit and whether this was representative of risk-based care. Particularly for visits to a general cancer clinic or FP, we could not be sure that the content of the visits focused on the survivor's prior cancer and their risk for late effects.
The results from this study, in conjunction with prior literature, add clarity to our understanding of LTFU care in AYA cancer survivors as well as the impact of LTFU care on adherence to guideline-recommended surveillance. The understanding that adherence to guideline recommended surveillance is sub-optimal and a majority of survivors have no regular source of follow-up care by the end of study, provides information on opportunities to improve the lifelong care of survivors of AYA cancer. | 2019-05-17T13:33:51.806Z | 2018-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "6f254d6e60f5308deb4a8bd14a93f963eec0c22b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.4058",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "353cd1a43fd783dcc5425d32bd98b83ede518354",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219785846 | pes2o/s2orc | v3-fos-license | Equations of working lines in packed columns with regard to longitudinal diffusion
The analysis of the differential equations of material balance with mass transfer, taking into account longitudinal diffusion in both phases in a packed absorber and a distillation column, is carried out. It is proposed, following the known algorithms for the calculation of absorption processes, to calculate the working lines separately for two cases: 1) the continuous phase moves in the mode with longitudinal diffusion and the dispersed phase in the ideal displacement mode; 2) the continuous phase moves in the ideal displacement mode and the dispersed phase in the mode with longitudinal diffusion. The ordinates of the two resulting working lines are then summed to obtain the overall working line, which, together with the equilibrium line, is used in the standard methodology to calculate the height and diameter of the column.
Introduction
Methods for calculating mass transfer processes (absorption, rectification, extraction) in packed columns taking into account longitudinal diffusion, when one of the phases, continuous or dispersed, moves in the mode of ideal displacement and the second with longitudinal diffusion, are known and used in the design of mass transfer packed columns [1-4].
In the textbook [1], derivations of the working-line equations of mass transfer processes are given taking into account longitudinal diffusion in the continuous phase, and its influence is accounted for by a correction parameter introduced into the mass transfer equation, which reduces the mass transfer rate. Derivations of the working-line equations are also given in [2], but specifically for the absorption process. The article [3] discusses the features of ion-exchange mass transfer processes taking into account longitudinal diffusion in the continuous phase, but the height of the packing or ion-exchanger layer is calculated through the number of transfer units without passing to the mass transfer coefficient. Papers [5-9] consider the features of rectification, absorption, ultrafiltration and ion-exchange mass transfer processes taking into account longitudinal diffusion in the continuous phase, but again the height of the packing or ion-exchanger layer is calculated through the number of transfer units without passing to the mass transfer coefficient.
Known methods for calculating plate (tray) distillation columns are based on a model of ideal vapour-phase displacement and perfect mixing in the boiling solution, and packed columns on ideal displacement in both phases. In a number of works on absorption, extraction, adsorption and ion exchange, longitudinal mixing is taken into account on the basis of the diffusion model of the continuous-phase flow structure; however, this calculation rests on a correction coefficient introduced into the formula relating the overall mass transfer coefficient to the individual phase mass transfer coefficients, or on the Péclet number of longitudinal diffusion introduced into the mass transfer equation, without taking into account the influence of longitudinal diffusion on the equations of the working lines. The problem addressed by this topic is that the influence of longitudinal diffusion on the equations of the working lines is not taken into account in standard calculations, which correspond to ideal displacement.
The development of new technical complexes based on the obtained experimental data will help to upgrade existing industrial designs, which will have higher productivity and energy efficiency. It will be possible to develop new promising solutions in heat and mass transfer processes.
From the middle of the last century, deviations from ideality in describing the flow structure of chemical reactors began to be taken into account by various other models: cellular models, and combined models connecting zones of ideal displacement and ideal mixing in series and in parallel. But the most accurate description of the real flow structure is given by the model with a diffusion flow structure, or back-mixing flow. Gradually, this model came to be used in heat and mass transfer processes (heat exchangers, absorbers, adsorbers and ion-exchange columns with fixed and moving sorbent beds, extractors, drum and fluidized-bed dryers, ultrafiltration and reverse-osmosis apparatuses, and others). However, in practice this model is almost never used in the description of rectification processes, where heat and mass transfer occur simultaneously.
The aim of the work is to simulate the working lines in mass transfer processes, taking into account longitudinal diffusion simultaneously in the continuous and dispersed phases, as well as to assess their influence on the technological parameters and geometric dimensions of the columns.
Methodology
The complexity arises when the motion of both phases (continuous and dispersed) is described by a one-parameter diffusion model with different regimes, characterized by the Péclet numbers Pe_c for the continuous phase and Pe_d for the dispersed phase. In this case, the structure of the dispersed phase flow is usually closer to ideal displacement, so Pe_d > Pe_c. A graphical interpretation of the material flows is shown in figure 1. The equation of the working line for the gas phase in the absorption process, taking into account longitudinal diffusion, is written as in [10] (left part of figure 1), with the Danckwerts boundary conditions [11].
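The working-line and boundary-condition equations referenced above did not survive in this copy of the text. For orientation only, a standard one-parameter axial dispersion model for a phase concentration C over the dimensionless column height Z, with Péclet number Pe and number of transfer units N, together with the Danckwerts boundary conditions, can be written as follows; this is a textbook statement and not necessarily the exact notation of [10, 11]:

(1/Pe)·d²C/dZ² − dC/dZ + N·(C* − C) = 0,
C(0+) − (1/Pe)·dC/dZ = C_in at Z = 0 (closed inlet),
dC/dZ = 0 at Z = 1 (closed outlet),

where C* is the concentration in equilibrium with the other phase. In the limit Pe → ∞ the model reduces to ideal displacement (plug flow), and as Pe → 0 it approaches ideal mixing.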
Results and discussions
From the graphs (figure 2) it can be seen that longitudinal diffusion in both phases, first, reduces the concentration of the extracted substance in the gas at the inlet from y0 to yn and increases this concentration at the inlet of the liquid from x0 to xn, which reduces the driving force at the inlet. Secondly, the concavity of both working lines 3 and 4 over the height of the packing further reduces the driving force, bringing the working line closer to the equilibrium line. The need to increase the height of the packing is then mainly due to the short phase-contact time in the volume of packing equivalent to a theoretical plate, when equilibrium is not reached and the so-called kinetic curve is built to the left of and above the equilibrium line, while the working line remains the same. Simultaneous consideration of the longitudinal diffusion of the gas and liquid phases leads to an even greater reduction of the local driving forces in the absorption process, compared with longitudinal diffusion in only one phase (the liquid absorbent or the gas). In calculations, this leads to an increase in the height of the packed column. In addition, a jump in the concentration at the inlet of the gas phase can lead to a value y0 less than the equilibrium concentration (point III on the graph falls below point IV, figure 2), which will require an increase in the absorbent flow rate and in the column diameter to move point III to the left.
An increase in the concentration at the inlet of the absorbent from xn to x0 can lead to a value greater than xk, the abscissa of the intersection of working line 5 with equilibrium line 1 (figure 2). This will require either a decrease in the concentration xn, i.e., a deeper purification of the absorbent during desorption, or an increase in pressure to bring the equilibrium line closer to the abscissa axis. It is clearly seen that the number of theoretical plates increases from 2.7 to 4.6, that is, by a factor of 1.7, which can be explained by a decrease in plate efficiency due to longitudinal diffusion: η = 2.7/4.6 = 0.587.
This leads to the need to increase the height of the packing in the column by 70% compared with the standard calculation, in which both phases move in the ideal displacement mode. Similar results can be obtained by calculating the number of transfer units by formula (8).
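As a worked restatement of the numbers already quoted in the text (no new data), the plate-efficiency arithmetic reads:

η = N_ideal / N_real = 2.7 / 4.6 ≈ 0.587,
N_real = N_ideal / η, hence H_real / H_ideal = 4.6 / 2.7 ≈ 1.70,

i.e., the packing height must be increased by about 70% relative to the ideal-displacement calculation, in agreement with the statement above.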
Conclusion
The developed mathematical model, taking into account longitudinal diffusion, describes the behaviour of the flow structure in absorption and rectification mass transfer columns more accurately. Working lines were constructed for mass transfer processes taking into account longitudinal diffusion simultaneously in the continuous and dispersed phases, and their influence on the technological parameters and geometric dimensions of the columns was evaluated.
Longitudinal diffusion reduces the local and average driving forces of mass transfer processes in both parts of the mass transfer column, due to a jump in concentrations at the inlet in the vapour and liquid phases, and also leads to a transition from linear to nonlinear working-line equations.
All of the above shows that a deviation from ideal displacement, in both the liquid and the gas phases, leads to an increase in the consumption of absorbent and reflux, and also to an increase in the height and diameter of the absorber and the distillation column.
"year": 2020,
"sha1": "b4ac675706cb0eb8458f2ea7f2b627603fac50e8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1553/1/012020",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e1348051c878783ee4b820a8d9f079f4ae13de86",
"s2fieldsofstudy": [
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
6264875 | pes2o/s2orc | v3-fos-license | Simulation and validation of the ruminal digestion of carbohydrates in cattle from kinetic parameters obtained by in vitro gas production technique
This study aimed to validate the estimates of the ruminal degradation of total carbohydrates (TC), ruminal and total digestion of fibrous carbohydrates (FC), and microbial nitrogen flow in the abomasum evaluated by the in vitro gas production technique (IVGP). Six ruminally and abomasally cannulated steers arranged in a double 3 × 3 Latin square were used to measure the described parameters, with indigestible neutral detergent fiber (iNDF) used as a marker. Total and fibrous carbohydrates degraded in the rumen were estimated from the digestion rates obtained for fibrous (FC) and non-fibrous carbohydrates (NFC) by the in vitro gas production technique, corrected for their respective ruminal and post-ruminal passage rates. The total digestion of FC was estimated as the sum of the ruminal and post-ruminal digestion of these compounds. The microbial nitrogen flow in the abomasum was estimated by calculating the microbial efficiency of the bacteria that ferment FC and NFC, using the microbial growth rate obtained from the ruminal digestion rate of the carbohydrate fractions in the IVGP. The use of the in vitro gas production technique allows accurate estimates of the ruminal digestion of total carbohydrates, the total and ruminal digestion of fibrous carbohydrates, and the microbial protein flow in the abomasum.
Introduction
Knowledge of the nutritive value of feeds is central to the process of diet formulation, as well as to the selection and improvement of plants intended for ruminant feeding. Determination of forage nutritive value requires studies that evaluate intake, digestibility and nutrient metabolism in animals. Thus, ruminant feed evaluation implies the conduction of many digestion experiments with animals cannulated in one or more sections of the gastrointestinal tract, in order to quantify digestion as well as to understand where it occurs. However, these studies are expensive, laborious and time-consuming, which justifies the development of laboratory methods that are simple, accurate and inexpensive for estimating nutritive value (Pell & Schofield, 1993).
Although there are a number of methods to estimate the digestibility and ruminal degradation of feeds, including biological methods and prediction equations, there is a persistent difficulty in predicting the digestion of NDF or fibrous carbohydrates, which is influenced by the chemical, physical and anatomical characteristics of the plant cell wall (Jung & Deetz, 1993; Wilson, 1994), besides being dramatically affected by variations in the passage rate of digesta through the gastrointestinal tract (Mertens, 1994).
Among the biological methods, in situ incubation has been routinely used for this purpose (Nocek, 1988); however, variations in particle size and bag porosity, as well as the necessity of surgically prepared animals, increase the expectation for the development of other techniques. In this respect, the in vitro gas production technique deserves attention, since it is not subject to the influences cited above and, additionally, is inexpensive, nondestructive, precise, fast and enables the estimation of the digestion rate of non-fibrous carbohydrates (Pell & Schofield, 1993).
The current systems used to estimate nutrient requirements and formulate diets for ruminants require kinetic parameters of nutrient degradation (protein and carbohydrates). However, for the practical use of estimates of nutrient digestion rates obtained by biological methods, the accuracy in predicting the event measured in the animal must be verified under different dietary conditions, through a process called validation (Mertens, 1976; Pell & Schofield, 1993). Therefore, the present study aimed to verify the accuracy of predictions of the ruminal digestion of fibrous and total carbohydrates, the total digestion of carbohydrates, and the flow of microbial nitrogen in the abomasum, using digestion rates obtained by the in vitro gas production technique.
Material and Methods
The study involved three stages: the first consisted of determining the ruminal and total digestion of nutrients in cattle fitted with rumen and abomasum cannulas, fed diets based on corn silage, elephant-grass silage or Tifton 85 grass hay (Cabral et al., 2004); the second was carried out to obtain estimates of the digestion rates of the carbohydrate fractions by the in vitro gas production technique; and the last involved the simulation of predicted values of the ruminal and total digestion of total and fibrous carbohydrates from the kinetic parameters obtained in stage two, along with the validation of these estimates by comparing them with those measured in animals (Cabral et al., 2006; 2008).
The study with cattle was carried out at the Laboratório Animal of the Departamento de Zootecnia of the Universidade Federal de Viçosa, Minas Gerais, for the evaluation of diets based on corn silage (Zea mays L.), elephant-grass silage (Pennisetum purpureum, Schum.) cv. Cameroon or Tifton 85 grass hay (Cynodon spp.), supplemented with 10% soybean meal. The Tifton 85 grass hay was additionally supplied with 0.5% of a urea:ammonium sulfate mixture (9:1) in order to maintain the same crude protein percentage in the diets (Table 1).
Six ruminally and abomasally cannulated crossbred cattle with an average initial body weight of 351 kg were distributed in a double 3 × 3 Latin square design, with periods lasting 16 days each: 10 d for adaptation to the diets and the final 6 d for feces and abomasal sampling. The apparent intestinal digestibility of nutrients was measured as the difference between the total and ruminal digestibility.
Animals were weighed at the beginning and end of each experimental period, housed in covered individual pens with a concrete floor, and fed once a day (7 h 30 min). Daily orts were quantified to adjust and evaluate intake, allowing approximately 5% of orts on a dry matter basis.
The sample collections of feces and abomasal digesta were taken every 26 hours, starting at 8 h on the 11th day and finishing at 18 h of the last day of each experimental period. Feces, abomasal and orts samples were dried in a forced-ventilation oven at 55 °C for 72 h. At the end of the process, a composite sample was prepared for each period and treatment for further chemical analysis.
Fecal excretion and the abomasal flow of dry matter were obtained using iNDF as a marker, determined after 144 h of in vitro incubation (Cochran et al., 1986), and utilized for the estimation of total and ruminal nutrient digestion, respectively.
The abomasal flow and fecal excretion of dry matter (g/animal/day) were estimated by the ratio between the amount of marker consumed (iNDF) and its concentration in the abomasal digesta and feces, in the same order. The ruminal and total apparent digestion were obtained by the difference between dry matter intake and the amounts flowing to the abomasum and present in the fecal excretion, respectively. The post-ruminal digestion of dry matter was estimated by deducting the ruminal digestion from the total apparent digestion.
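A minimal sketch of this marker calculation in Python follows; all numerical values are placeholders for illustration and are not data from the experiment.

def dm_flow(marker_intake_g, marker_conc_g_per_kg_dm):
    # Flow or excretion of dry matter (kg/day) = marker intake / marker concentration.
    return marker_intake_g / marker_conc_g_per_kg_dm

marker_intake = 480.0  # g of iNDF consumed per animal per day (placeholder)
abomasal_conc = 95.0   # g of iNDF per kg DM of abomasal digesta (placeholder)
fecal_conc = 210.0     # g of iNDF per kg DM of feces (placeholder)
dmi = 7.5              # dry matter intake, kg/day (placeholder)

abomasal_flow = dm_flow(marker_intake, abomasal_conc)  # kg DM/day reaching the abomasum
fecal_excretion = dm_flow(marker_intake, fecal_conc)   # kg DM/day excreted

ruminal_digestion = dmi - abomasal_flow   # apparent ruminal digestion of DM
total_digestion = dmi - fecal_excretion   # apparent total-tract digestion of DM
post_ruminal = total_digestion - ruminal_digestion
print(f"ruminal: {ruminal_digestion:.2f}, total: {total_digestion:.2f}, "
      f"post-ruminal: {post_ruminal:.2f} kg DM/day")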
The rates of digestion of fibrous and non-fibrous carbohydrates in each ingredient of the diets, including the roughages and soybean meal (Table 2), were obtained according to Cabral et al. (2004). The microbial nitrogen flow in the abomasum was estimated from the microbial efficiency calculated by the Pirt (1965) equation: 1/Y = M/μ + 1/Ymax, where Y = estimated microbial efficiency (g cells·g degraded carbohydrate⁻¹), M = maintenance requirement, μ = microbial growth rate, considered proportional to the carbohydrate degradation rate, and Ymax = theoretical maximum yield. The values of M and Ymax used in this study were those suggested by Russell et al. (1992), where M is 0.05 and 0.15 g carbohydrate/g cells per hour for microorganisms fermenting structural and non-structural carbohydrates, respectively, and Ymax is 0.4 g cells/g carbohydrate. Once obtained for each population, the microbial efficiency was multiplied by the previously estimated ruminal degradation, thereby obtaining the microbial dry matter flow in the abomasum, which, when multiplied by the average nitrogen percentage (18%) of the isolates obtained by Cabral et al. (2008), allowed the calculation of the microbial nitrogen flow in the abomasum.
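The Pirt calculation can be sketched numerically as follows; the M and Ymax values are those of Russell et al. (1992) quoted above, while the growth rates and degraded amounts are placeholders, not experimental data.

def microbial_efficiency(mu, M, Ymax=0.4):
    # Pirt (1965): 1/Y = M/mu + 1/Ymax; Y in g cells per g of degraded carbohydrate.
    return 1.0 / (M / mu + 1.0 / Ymax)

# Growth rate taken as proportional to the carbohydrate degradation rate (h^-1).
Y_fc = microbial_efficiency(mu=0.04, M=0.05)    # population fermenting fibrous CHO
Y_nfc = microbial_efficiency(mu=0.12, M=0.15)   # population fermenting non-fibrous CHO

degraded_fc, degraded_nfc = 1.2, 1.8            # kg of CHO degraded/day (placeholders)
microbial_dm = Y_fc * degraded_fc + Y_nfc * degraded_nfc  # kg of microbial DM/day
microbial_n = 1000 * microbial_dm * 0.18        # g N/day, using 18% N in the isolates
print(f"microbial N flow: {microbial_n:.0f} g/day")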
The validation of the predictions concerning total carbohydrate degradability, total and ruminal digestion of fibrous carbohydrates (kg/day), and microbial nitrogen flow in the abomasum (g/day) by the in vitro gas production technique was carried out regardless of treatment, by fitting a simple linear regression (full model) of predicted on observed values. Estimates of the regression parameters were tested under the hypotheses H0(a): β0 = 0 and H0(b): β1 = 1. When the null hypothesis was not rejected by the test, similarity between the predicted and observed values was considered to occur. Conversely, when the null hypothesis was rejected, a new regression equation was fitted without the intercept parameter (reduced model) and the global bias was estimated as B = (β − 1) × 100, where B = global bias estimate (%) and β = estimate of the slope of the equation fitted without the intercept (reduced model). For all statistical procedures, α = 0.05 was adopted, using the statistical package SAS (2001).
Thus, for each feed, the in vitro ruminal degradation rates of fibrous and non-fibrous carbohydrates were obtained. Integrated into the total diet provided to the animals, together with the respective passage rates of each fraction estimated for roughages and concentrates according to Cannas & Van Soest (2000), they allowed the ruminal degradation to be obtained.
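The validation regression described above can be sketched with statsmodels as follows; the observed and predicted vectors are placeholders, not the experimental data.

import numpy as np
import statsmodels.api as sm

observed = np.array([2.9, 1.8, 3.1, 2.0, 2.6, 3.0])   # measured in animals (placeholder)
predicted = np.array([3.2, 2.1, 3.5, 2.4, 3.0, 3.4])  # from IVGP kinetics (placeholder)

X = sm.add_constant(observed)          # full model: predicted = b0 + b1 * observed
fit = sm.OLS(predicted, X).fit()
print(fit.t_test("const = 0"))         # H0(a): intercept b0 = 0
print(fit.t_test("x1 = 1"))            # H0(b): slope b1 = 1

# If H0 is rejected, refit without the intercept (reduced model) and report
# the global bias B = (b - 1) * 100, in %.
reduced = sm.OLS(predicted, observed).fit()
print(f"global bias B = {(reduced.params[0] - 1.0) * 100.0:.1f}%")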
The predictions of the ruminal digestion of total carbohydrates (RDTC) and fibrous carbohydrates (RDFC) were obtained by the following equation: RDTC (kg/day) = ingested NFC × kd/(kd + kp) + ingested FC × kd/(kd + kp), where RDTC = ruminal degradability of total carbohydrates; NFC = non-fibrous carbohydrates; kd = ruminal degradation rate estimated for each fraction through the in vitro gas production technique; and kp = ruminal passage rate of each fraction.
The total digestion of fibrous carbohydrates was estimated as the sum of ruminal digestion (RDFC) and intestinal digestion, from the following equation: TDFC (kg/day) = RDFC + FCE × KDI/(KDI + kpi), where FCE = fibrous carbohydrates escaping ruminal digestion, KDI = rate of intestinal digestion of fibrous carbohydrates, considered as 90% of the ruminal digestion rate, and kpi = rate of intestinal passage of fibrous carbohydrates, taken as 0.125 h⁻¹, corresponding to the average observed by Detmann et al. (2001). Although Ulyatt et al. (1974), cited by Mertens & Ely (1979), suggested that the activity of microbial cells in the large intestine is equal to or greater than that observed in the rumen, partly attributed to the increased susceptibility of hemicellulose to digestion due to abomasal acidity, Bailey & MacRae (1970) and Hungate (1966) suggested that the enzymatic activity in this compartment is smaller than in the rumen. Thus, according to Mertens & Ely (1979), the digestion rate of fibrous carbohydrates in the large intestine was assumed to be 90% of that observed in the rumen, since the fiber that escapes the rumen would be more resistant to digestion.
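The two prediction equations above can be collected into a short sketch; the intakes and rates are placeholders, and kdi = 0.9 × kd with kpi = 0.125 h⁻¹ follows the assumptions stated in the text.

def rumen_digested(intake, kd, kp):
    # Amount digested in the rumen from competing digestion and passage rates.
    return intake * kd / (kd + kp)

nfc_intake, fc_intake = 3.0, 4.0   # kg/day (placeholders)
kd_nfc, kd_fc = 0.12, 0.04         # digestion rates from IVGP, h^-1 (placeholders)
kp_nfc, kp_fc = 0.10, 0.05         # ruminal passage rates, h^-1 (placeholders)

rdfc = rumen_digested(fc_intake, kd_fc, kp_fc)
rdtc = rumen_digested(nfc_intake, kd_nfc, kp_nfc) + rdfc

fce = fc_intake - rdfc             # fibrous carbohydrates escaping ruminal digestion
kdi, kpi = 0.9 * kd_fc, 0.125      # intestinal digestion and passage rates, h^-1
tdfc = rdfc + fce * kdi / (kdi + kpi)  # total-tract digestion of fibrous carbohydrates
print(f"RDTC = {rdtc:.2f}, RDFC = {rdfc:.2f}, TDFC = {tdfc:.2f} kg/day")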
Results and Discussion
The mean predicted values of the ruminal degradation of total carbohydrates, the total and ruminal digestion of fibrous carbohydrates, and the microbial nitrogen flow in the abomasum were similar to those obtained with the animals (Table 3). The estimated passage rates are within the limits found in the national literature (Detmann et al., 2001).
For ruminal degradation of fibrous carbohydrates, there were no differences (P>0.05) between predicted and observed values for both intercept β 0 (H 0 is accepted (a) ) and the slope (H 0 is accepted (b) : β 1 = 1) (Table 4), which indicates that predicted values were accurate with the observed values.In Figure 1-a, we can observe the high frequency of points along the equation line Y = X, which indicates the accuracy of estimates (Figure 1).
In contrast, Vieira et al. (2000) observed underestimation of fibrous carbohydrate digestion in cattle kept under grazing conditions, which can be attributed to flaws in obtaining estimates of degradation kinetics parameters from in vitro gravimetric methods for fibrous carbohydrates, as well as to errors in the estimation of passage rates. Thus, given that these authors obtained digesta passage rate values using chromium-mordanted fiber, the passage rate of the marked particles may be higher than that observed for the fiber of the unmarked feed, since extraction with neutral detergent can alter the physical structure of the cell wall and binding with chromium may increase particle density.
Table 4 - Estimates of the regression parameters between predicted and observed values for the analyzed variables
According to Mertens (1994), variations in the passage rate of rumen digesta have a high effect on the prediction of digestion of fibrous carbohydrates, which have slower digestion rate, and therefore are affected by little changes in the ruminal passage rate and their estimates.
For the ruminal degradation of total carbohydrates, the intercept differed from the parametric value of zero, which leads to rejection of the hypothesis H0(a): there is a constant bias of 0.37 kg/day, while the slope did not differ from one, accepting the hypothesis H0(b): β1 = 1. This means that there was no difference between the predicted and observed values for this variable.
A close relationship between predicted and observed values for total carbohydrates degraded in the rumen was observed (Figure 1-b), since most of the points are near the line Y = X, indicating a linear relationship between predicted and observed values. The distance of predicted values from the line Y = X can indicate the occurrence of under- or overestimation of the ruminal digestion of carbohydrates, which was not evident for this variable.
The fibrous carbohydrates digested in the total digestive tract were also accurately estimated from the digestion rates obtained using the in vitro gas production technique, when compared with the values measured in animals. Although there is a constant bias of 0.37 kg/day for this variable, because the intercept was statistically different from the parametric value of zero, the regression slope did not differ from the parametric value of 1 (Table 4). Considering that the estimates of fibrous carbohydrates digested in the rumen were accurate, it could be said that the estimates of fibrous carbohydrates digested in the intestine were too. Since the digestion of these compounds in the total digestive tract is the sum of ruminal and intestinal digestion, this indicates that the assumptions (Mertens & Ely, 1979) used in its calculation have biological meaning.
Assuming that the availability of digestible carbohydrates in the rumen is the main limiting factor for ruminal microbial growth and hence for the microbial protein flow to the abomasum, the estimated values for these compounds were used to predict the microbial nitrogen (Nmic) flow in the abomasum. The use of carbohydrates digested in the rumen resulted in accurate estimates of microbial N in the abomasum (Tables 3 and 4); there were no differences (P>0.05) between the intercept and the slope and the parametric values of 0 and 1, respectively, in contrast to that observed by Vieira et al. (2000). The intercept value of -14.17 g of microbial nitrogen is close to the -12 g found by Russell et al. (1992).
Although the passage rates were not measured directly in the animals in this experiment (estimated values were used), the results presented enable, at first, the use of the in vitro gas production technique to estimate ruminal kinetic parameters of carbohydrate degradation in feeds. Such estimates accurately predicted the digestion of total and fibrous carbohydrates. Moreover, these digestion rates allowed estimating the contribution of microbial protein in the intestines and, thus, supplemental escape protein may result in better responses to dietary adequacy.
In this study, three different diets with different values of ruminal carbohydrate degradation were investigated, ranging from 1.77 to 3.07 kg·animal⁻¹·day⁻¹ (Table 3) for the diets based on elephant-grass silage and corn silage, respectively. Interestingly, the use of the kinetic parameters from the in vitro gas production technique allowed accurate values to be obtained for the analyzed variables, showing the sensitivity of the system in predicting variations in the energy (carbohydrate) availability in the rumen. Detmann et al. (2005) evaluated the accuracy of estimates obtained using the in vitro gas production technique and observed overestimation of total carbohydrate degradability. However, considering that the abovementioned authors used grazing steers and that forage intake was estimated rather than measured, mistakes associated with the prediction of forage intake could have affected the estimate of total carbohydrates digested in the rumen.
Additionally, these authors obtained the estimates of digestion rates of the carbohydrate fractions from gas production curves over 120 hours, which can lead to a mistaken interpretation of the results. Because part of the gas measured in long incubation periods may reflect autolysis and recycling of microbial cell compounds rather than feed constituents, such as the carbohydrates, according to Cone & Van Gelder (1999), the use of data obtained from long incubation periods could lead to underestimates of the carbohydrate digestion rates, which would not reflect the real values for feeds and diets.
Although experiments with other dietary conditions are necessary, the validation of these estimates may greatly contribute to diet formulation, animal performance prediction and feeds evaluation, reducing thus, the cost and labor involved in animal experimentation.
Considering that carbohydrates are the major constituents of the feeds used in ruminant feeding, it could be stated that they are the main dietary compounds for which ruminal and total digestion must be accurately predicted, especially the fibrous carbohydrates. These compounds correspond to the largest proportion of the energy in tropical forages. However, because they are incompletely available in the gastrointestinal tract of herbivores, they are responsible for the lower utilization of the energy in diets based on these roughages. Due to the intrinsic nature of the cell wall of these forages, related to their chemical, physical and anatomical characteristics, the relationship between chemical composition and the availability of this fraction is variable, which hampers a better understanding of the factors that limit their digestion.
Considering also that carbohydrates represent the main primary energy source for microbial growth in the rumen (Russell et al., 1992), and that microbial protein is the primary source of amino acids for ruminants, it could be understood that the digestion rates obtained by the in vitro gas production technique enable accurate estimation of the microbial protein flow in the abomasum.
The NRC (2001) proposed estimating the total digestible nutrients (TDN) of a feed from its chemical composition and suggested that NDF digestibility can be estimated by in vitro incubation for 48 hours. However, variations in the digestion rate of this fraction, in the indigestible fiber proportion among feeds, and in digesta passage rates influenced by different dietary conditions and animal physiological status limit the use of these estimates.
The use of kinetic parameters could be more accurate for estimating total NDF digestion because it takes into account the inherent characteristics of the fiber (digestion rate) and the passage rate, which is the main factor affecting fiber availability in the gastrointestinal tract. This would allow a better estimation of the available energy of diets, since fibrous compounds are the main source of dietary energy for ruminants under tropical conditions, in addition to being responsible for variation in energy availability.
Conclusions
The use of estimates of digestion rates obtained by the in vitro gas production technique allows accurate estimation of the digestion of total and fibrous carbohydrates in the rumen, as well as of the microbial nitrogen flow to the abomasum.
Figure 1 - Relation between predicted and observed values and their respective ordinary residuals for the ruminal degradation of (a) fibrous (RDFC) and (b) total (RDTC) carbohydrates.
Table 1 -
Chemical composition of the experimental diets (% DM)
Table 2 -
Average values for the total carbohydrate fractions and their respective digestion rates estimated for the feeds used in the experimental diets
Table 3 -
Observed and estimated values for the ruminal and total digestion of total and fibrous carbohydrates (kg/animal/day), microbial nitrogen flow in the abomasum (g/day) and digesta passage rate (h⁻¹) for the experimental diets. * Significant difference (P<0.05) for H0(a): β0 = 0 and H0(b): β1 = 1
"year": 2011,
"sha1": "fe8d4222f452673bec536a76fe1104f85b9cbdbc",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbz/a/jbtd3dFrbVmM3FWjJnkSvQP/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fe8d4222f452673bec536a76fe1104f85b9cbdbc",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
251616881 | pes2o/s2orc | v3-fos-license | Implementing Mitigations for Improving Societal Acceptance of Urban Air Mobility
The continuous development of technical innovations provides the opportunity to create new economic markets and a wealth of new services. However, these innovations sometimes raise concerns, notably in terms of societal, safety, and environmental impacts. This is the case for services related to the operation of unmanned aerial vehicles (UAV), which are emerging rapidly. Unmanned aerial vehicles, also called drones, date back to the first third of the twentieth century in the aviation industry, when they were mostly used for military purposes. Nowadays, drones of various types and sizes are used for many purposes, such as precision agriculture, search and rescue missions, aerial photography, shipping and delivery, etc. Starting to operate in areas with low population density, drones are now looking for business in urban and suburban areas, in what is called urban air mobility (UAM). However, this rapid growth of the drone industry creates psychological fear of the unknown in some parts of society. Reducing this fear will play an important role in public acceptance of drone operations in urban areas. This paper presents the main concerns of society with regard to drone operations, as already captured in some public surveys, and proposes a list of mitigation measures to reduce these concerns. The proposed list is then analyzed, and its applicability to individual, urban, very large demonstration flights is explained, using the feedback from the CORUS-XUAM project. CORUS-XUAM will organize a set of very large drone flight demonstrations across seven European countries to investigate how to safely integrate drone operations into airspace with the support of the U-space.
Introduction
Drones are flying machines ranging from insect-sized flapping crafts to large airplanes the size of a commercial airline jet [1]. Their capabilities are also wide-ranging: some drones are capable of flying for only a few minutes, while others can fly for days at a time. The applications of drones are also diverse. While the initial applications of drones were mainly for military purposes, and later for recreational purposes, drones are used today in many civil applications and in public spaces. Some of the most common commercial applications and uses for drones include agriculture (crop spraying, crop monitoring, etc.), live streaming events, emergency response, search and rescue, firefighting, disaster zone mapping, mapping and surveying, and artificial intelligence applications [2][3][4][5]. More recently, the societal utility of drones has been further enhanced in the management of the global COVID-19 pandemic, with use cases such as aerial spraying of public areas to disinfect streets, the surveillance of public spaces, and monitoring local authorities during lockdowns and quarantine [6].
The 2016 European Drones Outlook Study [7] forecasts promising economic growth fostered by the emerging drone market. Unmanned aircraft will be part of everyday life in most economic sectors, as shown by the size of the bullets in Figure 1, but will have a greater impact on air travel, utilities, entertainment and media, logistics, and agriculture. Indeed, the number of drones flying in European airspace is expected to increase from a few thousand to several hundred thousand by 2050, most notably in government and commercial activities. The annual economic benefit could exceed EUR 10 billion by 2035 in Europe and create 100,000 new direct jobs to support drone-related operations. An example of this growth is illustrated by the agricultural sector, where the authors estimate that 150,000 drones will be operated by 2035. The same is true in the fields of utilities and security, where around 60,000 unmanned aircraft will be used to assist in natural disaster management or traffic control, among other tasks. However, despite the multiple operational services and the huge potential economic benefits of the drone industry, this relatively new technology will not really take off until the societal concerns associated with its widespread deployment are properly addressed.
As in the early days of aviation, safety will remain the main factor that will influence public acceptance of drones, especially as, unlike conventional commercial and general aviation, drones will often operate over moderately to densely populated areas and at lower altitudes. Visible to the naked eye, civil drone operations will raise questions about their nature and the risk they may represent for the populations and installations overflown. Noise pollution generated by drones will have to be contained to acceptable levels depending on the time of day and the frequency of operations. In addition, other societal and environmental impacts on the population, fauna, and flora will also have to be anticipated and mitigated.
Aware of these societal and environmental challenges, the CORUS-XUAM project has undertaken a review of surveys that cover the public acceptance of drones, and has initiated the identification of possible mitigation measures. The aim of this work is to address public concerns before the UAM business spreads across cities, and to have mitigation measures in place to facilitate a seamless acceptance of drones in our urban skies.
Public Surveys about Drones
The surveys reviewed are from various organizations (air traffic service providers, industry, research, universities, airspace security agencies) and countries (Australia, Germany, Brazil, USA, China, Korea, etc.), and were conducted between 2015 and 2021, 2018 being the inflection point in which drones were considered for the first time as new-entrant vehicles sharing urban transport.
In 2015, Clothier et al. [8] studied the risk perception and the public acceptance of drones in Australia. The objectives of this study were to investigate whether the public perceives the risks of drones differently to those of conventionally piloted aircraft, to provide guidance for setting safety requirements for drones, and to understand how the terminology used to describe the technology influences how the public perceives the risk. In this research, it was found that terminology had a minimal effect on public perceptions.
However, this may change as more information about the drone technology and risks and benefits of their usage becomes available to the public.
In 2016, the Office of Inspector General of the United States Postal Service published a report [9] on the public perception of drone delivery in the United States. This report refers to an online survey that was administered, in June 2016, to a sample of 18-75-year-old residents in all 50 states and the District of Columbia to understand the current state of public opinion on drone delivery for potential customers. The survey showed, among other things, that most Americans like the concept of drone delivery rather than dislike it, but that many have yet to make up their minds. Different groups have different levels of interest in drone delivery. Drone malfunctions were the main concern of the public, but other concerns included misuse, privacy, potential damage, and nuisance.
In 2017, Lidynia et al. [10] conducted a survey of 200 people, both laypersons and active users, living in Germany about their acceptance and perceived barriers for drones. The survey questions were about the general evaluation of civil drone technology, barriers, demography, and further user factors. The survey results show that user diversity strongly influences the acceptance of drones and perceived barriers. Active drone pilots were more concerned about the risk of possible accidents, while laypeople were more concerned about the violation of their privacy (the routes that drones should and should not be allowed to use).
In 2018, an online survey from NATS [11], the UK airspace service provider, showed that drone acceptance can be as low as 45% when drones are seen as a generic technology tool, but rises to 80% when they are used in emergency situations. A deep market study conducted by NASA [12] forecasts that, in the coming years, there will be numerous markets in which drones will have a stake.
As a novelty, additional operations, such as passenger transport by unmanned aircraft, or "air taxis", are expected to grow exponentially. Air taxi operations will reduce the travel times of part of the commuting traffic to city centers and contribute to decongesting ground transport by up to 25%. Urban air mobility (UAM) is emerging as the new concept for the future drone business. In the US, the concept will later be extended to also include manned electrical vehicles with vertical take-off and landing capabilities, known as eVTOLs, under the new term advanced air mobility (AAM). The paper shows that the acceptance level rises to 55% with the development of new safety technologies, the improvement of the air flow network, and the automation of flights.
In 2019, Airbus also conducted a survey [13] about the public perception of UAM. The Airbus survey covered four cities/countries around the world, Los Angeles, Mexico City, New Zealand, and Switzerland, and collected 1540 responses. Results revealed that 44.5% of respondents supported or strongly supported UAM and that 41.4% of respondents thought UAM was safe to very safe. This suggests that the initial perception of UAM is quite positive.
The same year, a meta-analysis by Legere of earlier US public surveys [14] and the DLR survey of 832 German citizens [15] showed acceptance levels of 60% and 49%, respectively. The meta-analysis focused on the different acceptance levels per mission, with public missions having higher acceptance than private/commercial uses. The German survey provides results about major public concerns. The most important ones were the misuse of drones for crime (91%) and the violation of privacy (86%). Both surveys refer to generic (small) drones involved in missions such as police surveillance or search and rescue.
In 2020, Tan et al. [16] surveyed the opinion of more than 1000 citizens from Singapore. Delivery drones and passenger vehicles were considered to have an average acceptance of 62%.
In 2021, an EASA survey [17] obtained the highest acceptance (83%) for the UAM composed of passenger electrical vehicles, not necessarily unmanned, cargo drones, and also surveillance drones. Special emphasis was given to the different types of passenger vehicles, and also to concerns related to the environment.
In addition, surveys [18][19][20] focused mainly on analyzing the demand for future UAM services. Questions were addressed to the public as potential customers. Kloss and Riedel surveyed almost 5000 people from Brazil, China, Germany, India, Poland, and the US. Acceptance was measured for different missions (six using eVTOLs and four using cargo drones), and they found that only 27.3% of the people declared themselves willing to try passenger drones, mostly for commuting, business trips, or travel to/from the airport. On the contrary, the willingness to use cargo drones, even when paying twice or more of today's cost, was 57.8%.
More positive were the responses from the Lundqvist survey. This survey was conducted on almost 500 people from five EU regions (in Holland, England, Spain, Croatia, and Poland). Respondents were mainly connected to drone operators or their business. The general positive attitude towards drones was up to approximately 70%. Specific questions about concerns included safety, environment, and privacy issues. Finally, the Park and Joo survey was conducted in South Korea on more than 1000 citizens plus 44 experts. The willingness to use UAM (both passenger and cargo) was 47%, and decreased as the automation of the vehicles increased.
In Figure 2, the surveys are visualized according to their main focus, such as public acceptance (blue) or market analysis (green). The number of surveys for each year from 2015 to 2021 can be seen on the vertical axis. The different types of drones (surveillance, cargo, and passenger) covered by each questionnaire are also indicated by a picture. A growing interest in passenger drones can be observed starting from 2018 to 2021. Conversely, the interest in surveillance drones has been decreasing. This may be a reason why privacy concerns have been decreasing over the years in these surveys, while noise and environmental concerns have increased.
As an overall metric, the levels of acceptance of drones and of urban air mobility are shown in Figure 3. Each bar represents one survey, and they are sorted by year to try to show any trend across time. Again, the color indicates the final focus of the survey: blue for public acceptance and green for market analysis.
As can be seen, public acceptance shows no clear trend over the years, but reached its six-year maximum of 83% in 2021 (EASA survey). However, the other surveys of the same year had very different results. The way questions are posed in these surveys partly explains these differences. In the EASA survey [17], with the highest acceptance value, the question was the "general attitude towards urban air mobility". In the Park and Joo survey [18], also conducted in 2021, but from a market analysis perspective, the question that obtained 47% of positive responses was about the "public's willingness to use UAM in its initial phase". This shows how difficult it is to compare survey results. Surveys, in general, have a first set of questions to classify the public according to their age, gender, and economic status, but also their knowledge about drones, so that the answers can be further studied by groups. Typically, females, elders, and less-educated people have a slightly lower acceptance of drones than the other groups. On the contrary, experts in the field are generally more concerned about safety than laypersons.
Most surveys are usually accompanied by a scenario of drone usage, and in the market study surveys, the scenarios include a forecast of the cost of the services. Many unknowns are yet to be unveiled: Will safety increase or decrease? Will the projected drone service costs/times be achievable? Will drones generate the expected economic growth? For the moment, only predictions can be provided when conducting surveys, whereas the survey results show clearly that the costs of drone services, as well as the time saved, have a high impact on responses. Drone operations related to health and welfare always have a high level of acceptance, while leisure, or business related to leisure, is always the least accepted type of drone operation.
In the most recent surveys, we found that quantitative data are obtained from the questionnaire responses, while qualitative information is obtained from a set of persons who are interviewed separately and whose responses are analyzed in more detail. Typically, this set of respondents, referred to as experts, is used to validate and interpret the responses to the questionnaire. However, expert answers usually point towards a positive attitude to drones, as confirmed during the first CORUS-XUAM stakeholder workshop. This workshop analyzed the most critical elements related to UAS/UAM operations along with possible solutions that could enable a sustainable and accepted expansion of drone operations in and around European cities. In particular, the fifth day of the workshop was dedicated to the analysis of the societal impact of drone operations and possible mitigation measures. The responses to the questions in Table 1 showed a high acceptance rate among the 66 workshop participants, as in the surveys analyzed.
Options (Multiple Choice):
- I am a potential passenger of a taxi-drone: 63%
- I am a potential client of a delivery-drone: 89%

Although public opinions vary with time/country, trends seem to show that between half and three-fourths of the public accepts the deployment of business-related drone operations.
In addition to acceptance, most surveys include questions about public concerns, but they do not use an equivalent set of concerns or the same terminology. To highlight this fact, we used word clouds to process the surveys addressing "public concerns" (see Figure 4). In word clouds, the most frequently used terms within a document are displayed in larger font size.
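The word-cloud analysis boils down to term-frequency counting. The sketch below (Python; the survey snippets and the term-to-concern mapping are invented placeholders, not the project's actual corpus) shows the underlying computation: tallying how often each concern-related term appears across survey texts, which is what determines the font size in a word cloud.

```python
# Minimal sketch of the term-frequency step behind a word cloud.
# The survey snippets below are invented placeholders.
import re
from collections import Counter

survey_texts = [
    "Main concerns: safety, privacy and noise near homes.",
    "Respondents mention safety risk, danger to animals, visual impact.",
    "Noise and privacy dominate; cost and liability also appear.",
]

# Group raw terms into the concern categories used in the paper.
category_of = {
    "risk": "safety", "danger": "safety", "safety": "safety",
    "animals": "environment", "visual": "environment", "waste": "environment",
    "noise": "noise", "privacy": "privacy",
    "cost": "economy", "liability": "economy",
}

counts = Counter()
for text in survey_texts:
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in category_of:
            counts[category_of[word]] += 1

# A larger count means a larger font in the word cloud.
for concern, n in counts.most_common():
    print(f"{concern}: {n}")
```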
As can be seen in these word clouds, public concerns related to drone operations are mostly focused on safety, environment, privacy, and noise. Terms such as animals, visual, and waste are classified as environmental concerns, while others, such as risk and danger, are considered safety concerns. In addition, we see terms related to the economy (i.e., cost and liability), or to other topics, such as regulation or ethics. In the CORUS-XUAM workshop, participants were asked to select their top three concerns, and the results are shown in Table 2. As most of them came from the aviation sector, it is not surprising to see that safety was selected as the major concern.
Options (Single Choice):
- Safety/(Cyber)Security: 59%
- Environmental impact (noise, emissions, visual, ...): 33%
- Privacy: 9%

It is worth mentioning some specific issues that yield "not in my backyard" responses. The location of vertiports is a good example. People are open to the concept, but would not be happy to have one near their home or office.
The CORUS-XUAM project compiled a full list of societal concerns. As the environmental area has many items, and noise and privacy are frequently mentioned as concerns, we treat them separately in the following sections. While analyzing each societal concern, we also hint at possible mitigation measures.
Materials and Methods
The procedure followed for defining the mitigation measures and analyzing them is summarized in Figure 5. First, the main societal concerns were extracted from the surveys. Aspects related to safety, the economy, the environment and noise were the result of this first step, as depicted in the figure. Next, the societal concerns were analyzed during several brainstorming sessions. For each concern, we determined possible actions that could help to minimize its negative perception. The result was a list of mitigation measures, in which each item is an individual action that can mitigate one or several concerns. Finally, the list of mitigation measures was analyzed to draw conclusions. As part of this process, this list was presented and discussed in the CORUS-XUAM workshop. The majority of the participants felt that it was a good start (details can be seen in Table 3) but it was still incomplete. During the debate, new potential actions were proposed and added to the existing ones.
Options (Single Choice) | Split:
- Is a good starting point: 85%
- Has important omissions: 11%
- Is exhaustive and complete: 4%

Once the list of mitigation actions was completed, the analysis was performed using the double classification process illustrated in Figure 6. With the workflow moving from the inside to the outside, we started by collecting public concerns, then proposed actions to mitigate those concerns, and finally applied two overlapping classifications: first assigning a category to each action and then a level that measures the effort required to implement it. In more detail, the analysis starts by categorizing each mitigation measure according to the scope in which it can be applied. We established four different scopes, or categories, as follows: • Regulation and policy. This category contains the mitigations that should be part of a regulation made by the authorities. Figure 7 shows some examples of mitigation for each category. Note that simply rewording a mitigation slightly can move it from one category to another. For instance, "setting up countermeasures to criminal/illegal use of drones" was categorized under "tools and technologies", but rephrasing it to "make mandatory the use of countermeasures ..." would have categorized it under "regulation and policy". In addition to the category, we assigned each mitigation a second classification in three levels, "easy", "medium", and "difficult", according to its ease of implementation in terms of resources and time. Figure 8 shows some mitigation examples for each of the three levels of ease of implementation. For example, the mitigation "creating an independent authority to investigate accidents/incidents/complaints related to drone operations" is considered difficult to implement at the moment because it requires a high level of agreement between stakeholders. In particular, this mitigation measure must involve regulatory bodies, which have to follow a lengthy period of legal procedures. In contrast, the mitigation "limit minimum altitude" is an operational action that is easy to implement.
Scoring of the Mitigations
Given the long list of mitigation measures, we needed a method to rank them from highest to lowest priority. The prioritization process uses a scoring value generated from a dynamic table. The dynamic table is created by reversing the rows and columns of the table used to generate the mitigation measures.
The process for scoring each individual mitigation is the result of adding up the applicability values of that mitigation measure in each and every concern.
Indeed, one mitigation that reduces visual impact may have a negative effect on the safety of the surrounding traffic, but at the same time may be neutral for natural life and for privacy. For this reason, we crossed each mitigation with each concern on the long list of concerns presented in Section 2 and set +1 for a positive impact, −1 for a negative impact, and 0 for a neutral one.
This is similar to the process used in the specific operations risk assessment (SORA) methodology [21]. The final sum of the values of a mitigation provides a numerical proxy of the impact of its applicability. The higher the number, the more positive the impact.
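As an illustration of this double classification and scoring, the sketch below (Python; the mitigation names, category and ease labels, and per-concern impact values are invented placeholders, not the project's actual table) crosses each mitigation with each concern using +1/-1/0 applicability values and ranks the mitigations by their summed score.

```python
# Minimal sketch of the mitigation scoring described above.
# Impact values: +1 positive, -1 negative, 0 neutral for each concern.
CONCERNS = ["safety", "noise", "privacy", "environment", "economy"]

# Hypothetical mitigation table: category, ease, and per-concern impacts.
mitigations = {
    "limit minimum altitude": {
        "category": "operational/ConOps",
        "ease": "easy",
        "impact": {"safety": +1, "noise": +1, "privacy": +1,
                   "environment": +1, "economy": 0},
    },
    "add visual markings to drones": {
        "category": "tools and technologies",
        "ease": "medium",
        "impact": {"safety": +1, "noise": 0, "privacy": -1,
                   "environment": 0, "economy": 0},
    },
}

def score(m):
    # Sum of applicability values across all concerns; higher is better.
    return sum(m["impact"].get(c, 0) for c in CONCERNS)

ranked = sorted(mitigations.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, m in ranked:
    print(f"{score(m):+d}  [{m['ease']:6s}] {m['category']:22s} {name}")
```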
CORUS-XUAM Mitigations Subset
As a final step for this work, we selected a subset of mitigation measures, mainly (but not only) from the operational/ConOps category, that are applicable to the very large (drone flight) demonstrations (VLDs) being prepared within the CORUS-XUAM project.
Very-large-scale demonstration (VLD) activities will be at the heart of CORUS-XUAM and will support the integrated operations of UAS/UAM and manned aircraft, with advanced forms of interaction through digital data exchange supported by integrated and advanced U-space services in urban, suburban, and intercity scenarios, as well as in and near ATM-controlled airspace and airports. The VLDs will focus on different types of missions, such as passenger transport, delivery, emergency response, and surveillance. The VLDs will use different U-space deployment architectures and state-of-the-art technologies. They will take into account the coordination between ATC and U-space, including interaction with ATCOs and pilots. The VLDs will combine eVTOL flights with other traffic, and operations in the CTRs of major airports. Vertiport procedures, separation, and data services will also be demonstrated [22].
The mitigation measures proposed to be tested during the VLDs are mainly those that can be implemented by the U-space service providers or any other partner involved. As the VLDs were in the planning phase at the time of writing this paper, each VLD responds differently to the proposed list of mitigation measures, depending on its mission and capacity.
Full Mitigation List: Categories, Ease of Implementation, and Top 10 Scored
The full list of social acceptance mitigation measures identified after the CORUS-XUAM brainstorming sessions [23] is presented in Appendix A. Figure 9 shows the percentage of categorization of the mitigation measures according to the scope in which they can be applied. The categories are explained in detail in Section 3. A categorization of the ease of implementation of each mitigation measure was established to analyze those that could be implemented and achieved with the current technologies and regulations. Figure 10 shows the percentages of the ease of implementation of the full list of mitigation measures. The aim was to identify and analyze the possible mitigation measures that could be implemented quickly. The list of the prioritized top 10 mitigation measures and the concerns they improve upon are presented in Table 4. Figure 11 shows the ease of implementation of the ones that mitigate a larger number of concerns; for example, mitigation "M1 - limit minimum altitude" is thought to address six different concerns and be easy to implement. It can be observed that more than 70% of the mitigation measures are found to be achievable in a short or medium timeframe, either because the necessary applied science exists today or because the required technologies are under development. However, 31% of the mitigation measures are still considered complex to implement, which means that there is still a long way to go in the research and development of new technologies and the regulations that make these mitigations possible.

- Work with eco-friendly drones (recycled parts). Concerns addressed: emissions impact, recycling, impact of climate change, economic viability.
- M9: Ensure that the cost of drone services is commensurate with the value of the activity. Concerns addressed: cost of services, competency, jobs, economic viability.
- M10: Developing a risk and safety culture in the drone industry. Concerns addressed: competency, jobs, economic viability, demand.

4.1.2. Partial Mitigation List Applicable to VLDs: Categories, Ease of Implementation, and Top 10 Scored

The mitigation measures were selected by considering their applicability to VLDs. This partial mitigation list applicable to VLDs is in Appendix B.
In Figure 12, the percentages of categorization of the mitigation measures are shown. As can be seen in this figure, the first category, "regulation and policy", accounts for almost 43% of the mitigation categories that are applicable to VLDs. However, the fourth category, "tools and technologies", accounts for only 9.5%. In Figure 13, the percentages of the ease of implementation for the partial mitigation list applicable to VLDs are shown. In this figure, it can be seen that 57% of these mitigation measures can be implemented quickly. Only 10% of the partial mitigation measures are considered difficult to implement. Figure 14 and Table 5 show that most of the top 10 scored mitigation measures that are applicable to VLDs are considered easy to implement. Only the mitigation measure "ensure that electronic devices on drones (cameras, sensors, etc.) cannot be used to infringe on privacy" is considered hard to implement in a short time.
- V1: Identify strategic location for vertiports.
- V3: Establish no-fly zones for drones.
- V4: Fly direct routes to avoid unnecessary path extension and minimize the time in the air.
- V6: Use different methods (such as advanced encryption standards or regular cyberattack tests) to improve the security of communications in the U-space system.
- V7: Public engagement activities about drone technology and operations.
- V8: Disseminate the environmental benefits of drones (quantification of emission savings).
- V9: General aviation pilots' engagement in activities about UAM.
- V10: Ensure that electronic devices on drones (cameras, sensors, etc.) cannot be used to infringe on privacy.
Discussion
The application of actions to mitigate risks is the basis of the SORA methodology [21]. For instance, to reduce the energy of a falling drone, a common mitigation is the addition of a parachute. While the parachute will, in general, improve safety, it may also introduce new risks and failures, such as an undesired deployment of the parachute. We have to understand that any well-intentioned action may indirectly introduce adverse effects as well.
In the case of the proposed social concern mitigation list, we found a number of contradictory effects.
For instance, we proposed a number of mitigations in relation to the flight trajectory and noise (limiting hover time, flying direct routes, and so on), but also using alternate paths, avoiding certain areas, and limiting speeds. Although they are helpful for reducing the noise on the ground, it is not possible to apply all of them at the same time. Trade-offs need to be elaborated to avoid long route deviations due to protected zones. Other route characteristics, such as altitude, time of day, and maximum capacity, play important roles in the abatement of noise. They should all be taken into account together when selecting the best mitigation strategy for drone operations.
Another example is the location of vertiports. For safety reasons, vertiports should be located in isolated areas, with few air and ground risks, but for economic reasons they should be close to transportation hubs (persons and/or freight). Moreover, the high traffic density of a vertiport can generate a nuisance for neighbors. Using a building roof could mitigate this nuisance, but at the cost of increasing the flight risk. A split of opinions, based on the workshop attendees' responses, is clearly shown in Table 6. It seems clear that more research is needed to further develop this and some of the other proposed mitigations. A number of mitigations have been classified as "tools and technologies". Research on clean energy sources, artificial intelligence, and new materials is key to reducing societal concerns. A drone with low-noise propellers may be inaudible at 10-15 m of height, thus strengthening the minimum-altitude noise mitigation. Especially relevant are the object avoidance technologies, currently based on near-infrared or ultrasound sensors, which only work at low speeds. Future developments can help to avoid unexpected encounters (e.g., with birds) at any speed.
Most societal concerns cannot be measured purely objectively. Human perception is highly subjective. A clear example is the experiment about noise reported in the EASA survey [17]. In a lab, a number of people were requested to order a list of sounds according to what they considered a nuisance. While all sounds were played at the same volume (80 decibels, which is louder than a vacuum cleaner), responses penalized unknown noise sources more than other, known ones. As the public becomes informed about and used to the characteristic noise of drones, this human factor will change. Moreover, according to [24], a VA-X4 taxi drone flying at 300 m produces a noise of 43 decibels, a loudness between that of a quiet urban night (40 dBA) and that of light urban traffic (50 dBA). A very interesting review of drone noise emissions and noise effects on humans can be found in [25]. Furthermore, the effect of drone noise on wildlife, especially birds, seems to be a growing societal concern [17], but scientific studies show that certain frequencies, such as the high-frequency noise of drones, are not audible to most birds [26].
A number of the proposed mitigations can be adopted in future regulations. However, the role of governments must go beyond the regulatory aspects. Actions are needed to disseminate the benefits of drones as environmentally friendly vehicles, with a capacity for the fast transport of people and goods, to be used in emergency situations, and as a motor of a new economic growth cycle. Simultaneously, initial support for the infrastructure to be deployed (i.e., U-space) is needed to foster the new era of transport using drones. The development of this infrastructure still requires decisions about U-space airspace organization to be made. This is also a hot topic for research, as there does not appear to be any consensus based on the expert responses in Table 7. Another aspect that governments and authorities should face is fairness in regard to access to airspace. Transparency is a tool of fairness as well as a strategy for mitigating citizen concerns about privacy, according to the responses to the workshop poll shown in Table 8.

Table 8. To what point do you agree with the following sentence: The ability of citizens to obtain information about drone flights in their vicinity would resolve privacy concerns.
Options (Multiple Choice):
- Strongly agree: 13%
- Agree: 30%
- Slightly agree: 35%
- Slightly disagree: 7%
- Disagree: 13%
- Strongly disagree: 2%

In exchange, drone operators shall carefully monitor safety levels to be fully compliant with the regulations. With this paper, we hope to provide them with ideas to help improve the social acceptance of drone operations (and thus increase business), especially in urban environments. The authors' aim is to convince drone operators to apply the most convenient mitigations to their operations, including dissemination actions and collaboration with researchers on new environmentally friendly technologies for drones.
Conclusions
Many governments believe that drone-related business can provide a competitive advantage for developing their country and are taking political and economic measures to foster the drone business and urban air mobility. Drones are expected to be widely adopted by citizens in urban areas once some issues are addressed and resolved. The most important ones are safety and societal issues.
Safety issues are largely anticipated in traditional aviation to reduce risks to airspace users and to people, assets, and facilities on the ground. Safety is achieved through thoughtful airspace design, robust and certified industrial processes, and the use of operational mitigation measures, all of them supported by international regulation. Societal issues, in contrast, are sometimes overlooked before deployment.
This paper proposes to address societal issues similarly to safety risks, by anticipating and reducing risks (public concerns) prior to deployment. Social acceptance can be facilitated by ensuring mitigation measures that prevent the negative impact of drones on citizens and on the environment. Public concerns have been identified, and actions that mitigate them shall be implemented well in advance of the widespread deployment of urban air mobility. The paper presents the main concerns of society with regard to drone operations, as already captured in some public surveys, and proposes a list of mitigation measures to reduce these concerns. The proposed list was then analyzed, and its applicability to individual, urban, very large demonstration flights was explained, using the framework of the CORUS-XUAM project. The proposed mitigation measures do not only concern drone operators but also regulators, educational bodies, other airspace stakeholders, infrastructure providers, technology and software developers, and research centers.
Future work includes the analysis of the application of mitigations in the six very large demonstrations of CORUS-XUAM to understand their impact and to fully consolidate the mitigation list proposed in this paper. It also includes new measures and scientific work to provide more detailed data for some mitigation measures, such as the perception of noise on the ground, which will help to suggest limits on altitude and speed. Future observations are needed to understand the interaction with birds. The analysis of images captured during flights will be useful to estimate threats to privacy. Additionally, further work is needed to develop a comprehensive list of mitigation measures, identify regulatory gaps, propose suitable infrastructure deployment, and influence pilot training in the future. Social concerns need to be anticipated and mitigated in advance if urban air mobility is to become an accepted part of a modern, efficient, environmentally friendly, and competitive future mobility.
Mitigation Action (Areas)
- General aviation pilots' engagement in activities about UAM (fairness, safety, economy)
- Public engagement activities about drone technology and operations (transparency)
- Disseminate the environmental benefits of drones and disseminate results (emission savings) (transparency, economy)
- Disseminate the mobility and economic benefits of drones (transparency, economy)

Table A3. List of mitigation actions applicable to drones that reduce the social concerns of demonstration flights.
Mitigation Action (Areas)
- Work with eco-friendly drones (recycled parts) (environment)
- Study of drone technologies to prevent encounters with birds (environment, economy)
- Measure noise at different altitudes/spots (noise)
- Ensure that electronic devices on drones (cameras, sensors, etc.) cannot be used to infringe on privacy (privacy)
- Limit type/positions of cameras (privacy)

Table A4. List of other mitigation actions applicable to drones that reduce the social concerns of demonstration flights.
Topic: Mitigation Action (Areas)
- Vertiport: Identify a strategic location for vertiports (safety, noise, economy)
- Vertiport: Design optimized arrival and departure operations (environment, economy, noise)
- Security: Use different methods (e.g., advanced encryption standards or regular cyberattack tests) to improve the security of communications in the U-space system ((cyber)security)
- Security: Have a U-space service capable of detecting any deviant behavior of a drone (safety, security)
- Security: Strictly limit the access of third parties to video recordings during and after a drone mission (privacy) | 2022-01-20T16:06:49.978Z | 2022-01-18T00:00:00.000 | {
"year": 2022,
"sha1": "26c3b451dc2b2c27b5334fd0ff7b77a86a04e234",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-446X/6/2/28/pdf?version=1642509036",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ec6cddd618076bd4045fcb6f568a8ebee9fee8bb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54193474 | pes2o/s2orc | v3-fos-license | Perceptual-motor contributors to the association between developmental coordination disorder and academic performance: North-West Child Health, Integrated with Learning and Development study
Developmental coordination disorder (DCD) is characterised by deficits in the acquisition and execution of motor skills that can be identified at a young age and which has a negative impact on academic achievement and everyday activities (American Psychiatric Association [APA], 2013). DCD can only be diagnosed in the absence of any other neurological or intellectual disabilities (APA 2013). DCD is reported in 5%-6% of 5- to 11-year-old children (APA 2013), with boys more likely than girls to have this condition (Asonitou et al. 2012; Lingam et al. 2009).
Introduction
Developmental coordination disorder (DCD) is characterised by deficits in the acquisition and execution of motor skills that can be identified at a young age and which has a negative impact on academic achievement and everyday activities (American Psychiatric Association [APA], 2013). DCD can only be diagnosed in the absence of any other neurological or intellectual disabilities (APA 2013). DCD is reported in 5%-6% of 5- to 11-year-old children (APA 2013), with boys more likely than girls to have this condition (Asonitou et al. 2012; Lingam et al. 2009).
Several researchers report a link between motor problems and academic skills (Alloway & Temple 2007; Schoemaker et al. 2001; Sortor & Kulp 2003; Westendorp et al. 2011). Some explanations for this occurrence are given from a neuropsychological perspective. Motor and cognitive functions are coupled, because they use the same brain structures (Diamond 2000; Westendorp et al. 2011). Furthermore, Diamond (2000:49) reports that the cerebellum is involved in both motor and cognitive functions, while the prefrontal cortex plays an important role in motor and cognitive functioning, also through the strong neural connections between these two brain areas. Dysfunction in any of these brain structures or neural pathways may therefore express itself in motor problems as well as in cognitive problems. A second argument is that motor and cognitive development follows the same timetable, with accelerated development between 5 and 10 years. If any dysfunction occurs in these brain structures, it could possibly lead to motor and cognitive problems. Furthermore, motor and cognitive functions share several common underlying processes, for example sequencing (Hartman et al. 2010; Westendorp et al. 2011), and monitoring and planning (Roebers & Kauer 2009; Westendorp et al. 2011), which might also account for the co-occurrence of these problems.

Background: Children with developmental coordination disorder (DCD) portray motor coordination and perceptual difficulties which can hamper daily activity and academic task execution.

Aim: This study examined the association between DCD and academic performance, and explored which perceptual and motor coordination skills had the largest contribution to academic performance.

Setting: Ten-year-old children (N = 221, 10.05 years ± 0.41 standard deviation) who formed part of the North-West Child Health, Integrated with Learning and Development (NW-CHILD) longitudinal study in South Africa were randomly selected to participate.
Academic problems in children with DCD are associated with visual integration skills (Bonifacci 2004), visual functioning (Coetzee & Pienaar 2011; Goldstand, Koslowe & Parush 2005), visual processing skills (Goldstand et al. 2005) and convergence skills (Morad et al. 2002). Pienaar, Barhorst and Twisk (2013:377) report that the mastering of mathematics in South African Grade 1 learners depends on visual-perceptual skills, where good visual-motor skills, visuospatial orientation and visual discrimination are required for the successful mastering of foundational mathematical concepts. Movement execution depends on the ability to detect and analyse visual information (vision) received from the eyes (Botha 2013). Visual perception and motor coordination are part of an umbrella term, namely visual-motor integration skills (Beery & Buktenica 1997), which is described as the ability to integrate visual-perceptual skills with fine motor coordination (Beery & Buktenica 1997; Cheatum & Hammond 2000). Motor coordination influences the effective co-operation of body parts to produce smooth body movement, while visual perception (Schoemaker et al. 2001) is an acquired process by which useful visual information is obtained through the effective conversion of images (Bezrukikh & Terebova 2009; Cheatum & Hammond 2000).
Children with DCD's visual-motor perception and spatial orientation are described as inhibited, and their ability to make quick motor adjustments is hence affected (APA 2013). Purcell et al. (2012:304) are of the opinion that the ability of children with DCD to process movement demands is influenced by poor visual-motor integration skills. Well-planned, coordinated and conscious movement can only take place when strength control is applied correctly, when motor planning occurs in fine detail and when continuous adjustments are made correctly by the neurological feedback system (Botha 2013). Clarification of the different subsystems, including sensory and perceptual systems, that contribute to general coordination, the development of motor coordination and motor coordination difficulties is therefore of importance when assessing DCD (Cermak & Larkin 2002). A variety of test components can be required to approach the general construct of coordination, including tasks measuring neurodevelopmental function, tasks classified on the basis of interaction with the environment, and fine and gross motor tasks (Cermak & Larkin 2002). Researchers (Dwyer, Baur & Hardy 2009; Riethmuller, Jones & Okely 2009) highlight in this regard that many body systems, including the sensory, musculoskeletal and neurological systems, are incorporated during a child's motor skill development and are therefore different but important underlying constructs of coordinated performance in children.
Gross motor skills are highlighted by various researchers as an important contributor to later cognitive achievement (Lopes et al. 2013; Piek et al. 2008; Son & Meisels 2006). Skills generally executed in the classroom, such as copying activities and skills that must be performed at speed, appear to be weaker in children with poor motor coordination because of slower inhibition of dominant responses (Michel et al. 2011). Proper muscle functioning also influences writing tasks such as quality of handwriting and writing speed (Malloy-Miller, Polatajko & Anstett 1995; Schwellnus et al. 2012). Perceptual-motor skills also have an influence on various scholastic tasks, where spatial orientation, for example, is important for a clear understanding of number lines (Gunderson et al. 2012) and plays an important role in general mathematical abilities (Van Lill 2011). Perceptual skills are reported to have the greatest impact in the earlier school years or, more importantly, contribute to the earlier stages of mathematics (Geary, Hamson & Hoard 2000).
Relationships between DCD and visual-motor integration skills (Bonifacci 2004; Cheng et al. 2014; Tsai, Wilson & Wu 2008), and between academic achievement and visual-motor integration skills, are also reported (Goldstand et al. 2005; Kulp 1999; Pieters et al. 2012; Van Hartingsveldt et al. 2014). The findings reported by Alloway and Temple (2007:483) indicate that children with DCD display deficiencies with regard to working memory, implying that these children's literacy and numeracy skills will be influenced negatively. Children with DCD have poor executive functioning abilities (Zhu, Tang & Shi 2012), which can negatively impact their motor development (Leonard et al. 2015; Schurink et al. 2012) and academic achievement (Thorell et al. 2013). Handwriting speed depends on the maturity of visual-motor integration as well as visual information processing and memory (Tseng & Chow 2000). The mastering of visual-motor abilities (Memisevic & Sinanovic 2013), visual working memory (Lepach, Pauls & Petermann 2015) and motor skills (Schurink et al. 2012:730) relies on effective executive functioning.
Thus, it appears from research findings that motor deficits in children with DCD, grounded in the different subsystems (sensory, perceptual and neurodevelopmental) that contribute to the construct of general coordination, would impact negatively on their academic achievement. Research by Carlson, Rowe and Curby (2013:527) also indicates that visual-spatial integration keeps developing and plays an important role in academic achievement. A lack of knowledge was identified from the literature, as no South African studies were found regarding this possible relationship, which necessitates further research in this regard, especially on South African children. The objective of this study was therefore to determine whether an association exists between visual perception, motor coordination and visual-motor integration skills and academic achievement in 10-year-old children with DCD, and which of these perceptual and motor coordination skills have the largest contribution to academic performance.
Method

Study design
This study was based on a cross-sectional cohort that was part of a stratified and randomised longitudinal study design (NW-Child Health, Integrated with Learning and Development), which covered a period of 6 years (2010-2016). The North-West Child Health, Integrated with Learning and Development (NW-CHILD) study included baseline measurements and two follow-up test opportunities during this period (2013-2016). In order to determine the sample for the baseline measurements in 2010, a list of schools in the North West Province was obtained from the Department of Basic Education (DoBE), after which stratification was done according to school regions and school types (quintiles 1-5). A list of schools in the North West Province, grouped into eight education districts that each represented 12-22 regions, each with a number of schools (minimum 12, maximum 47), was used. Four regions were then randomly selected with regard to population density and school status. Within each region, five schools were selected, where each of these schools represented a particular quintile (Quintile 1 - schools in poor economic areas, to Quintile 5 - schools in good economic areas). The DoBE in each province used a poverty classification to classify schools into different quintiles. This poverty classification was obtained from the National Census data, which included income, dependant ratios and levels of literacy (Pauw 2005). Quintile 1-3 schools represented children from low socio-economic environments, where Quintile 1 and 2 schools are exempt from paying school fees, while Quintile 4-5 schools represented learners from higher socio-economic schools (Pauw 2005).
Study population
Participants from five schools in one of the four school districts, who were part of the 2013 first follow-up measurements of the NW-CHILD study, formed part of the study. Boys and girls who were part of the 2010 study in their Grade 1 year were tested again during their Grade 4 year (in some cases in their Grade 3 year, because of retention). The group included 221 Grade 3 (n = 55) and Grade 4 participants (n = 166) who were part of the Zeerust district. The mean age for this group was 10.05 years, with a standard deviation (SD) of 0.41; 4.07% (n = 9) were from other ethnic groups. The Movement Assessment Battery for Children, second edition (MABC-2) was used as the measuring instrument to determine DCD, and all DSM-5 criteria were applied in identifying the DCD group. Children were identified with moderate to severe DCD when they fell below the 16th percentile on the MABC-2 test and if they experienced learning-related problems as indicated in the second DSM-5 criterion (APA 2013). If a child fell below the 16th percentile (MABC-2 test) and obtained less than 39% in two or more of the six compulsory academic learning areas, they were included in the DCD group. Children with serious neurological or intellectual disability were excluded using information obtained from the schools.
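A minimal sketch of this inclusion logic (Python; the function and field names are ours, with the thresholds taken from the criteria above) classifies a child into the DCD group when the MABC-2 percentile falls below the 16th and marks below 39% are obtained in at least two of the six compulsory learning areas.

```python
# Sketch of the DCD inclusion criteria described above (hypothetical data).
def dcd_group(mabc2_percentile: float, learning_area_marks: list[float]) -> str:
    """Return 'severe-DCD', 'moderate-DCD' or 'without-DCD'.

    Inclusion requires a MABC-2 percentile below the 16th AND marks
    below 39% in two or more of the six compulsory learning areas
    (second DSM-5 criterion: learning-related problems).
    """
    academic_problems = sum(mark < 39.0 for mark in learning_area_marks) >= 2
    if mabc2_percentile < 16.0 and academic_problems:
        return "severe-DCD" if mabc2_percentile <= 5.0 else "moderate-DCD"
    return "without-DCD"

# Example: 10th percentile with weak maths and natural sciences marks.
print(dcd_group(10.0, [55.0, 62.0, 35.0, 38.0, 50.0, 47.0]))  # moderate-DCD
```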
Data collection

Movement Assessment Battery for Children, second edition
The MABC-2 is a test battery that focuses on the identification of impaired motor function in children between the ages of 3 and 16 (Henderson, Sugden & Barnett 2007). This measuring instrument applies to three age groups, namely 3- to 6-year-olds, 7- to 10-year-olds and 11- to 16-year-olds. For the purposes of this study, only the tests applicable to the age band that includes 7- to 10-year-old children were used. Within each age group, eight sub-items are divided into the following three coordination subsections: manual dexterity, aiming-and-catching and balance. The manual dexterity subtest consisted of three sub-items, while there were two aiming-and-catching sub-items and three balance sub-items. Each activity was demonstrated by the test recorder prior to a trial effort and two formal test efforts. The second test effort was only executed if the participant failed the first attempt or if the activity was not executed within the specific time limit set for his or her age group on the record form. It took between 20 and 40 min to complete the test. Each sub-item's raw score was converted to an item standard score. These item standard scores were then added up to obtain an overall standard score and percentile for each subdivision. Finally, the total test score (the sum of all eight items' standard scores) was converted to an overall standard score and percentile. A higher standard score indicated a better overall performance. The overall test percentile was categorised according to different DCD statuses. Participants with percentiles at or below five, or between the 5th and 16th percentile, were grouped into the moderate to severe DCD group. Any value at or above the 16th percentile placed a participant in the typical or without-DCD category. The total test scores that classified participants into the different DCD categories were as follows: a test score of less than or equal to 67 indicated moderate to severe DCD, while a score of more than 67 placed the participant in the typical or without-DCD category. The MABC-2 is a valid test to reliably identify children with and without motor deficits. Reported validity values ranged between r = 0.84 (Tan, Parker & Larkin 2001) and r = 0.6 to r = 0.9 (Croce, Horvat & McCarthy 2001).
Beery-Buktenica Developmental Test of Visual-Motor Integration, 4th edition
The Beery-Buktenica Developmental Test of Visual-Motor Integration, 4th edition (VMI-4), is a measuring instrument that assesses visual-motor integration skills and consists of two additional subtests, namely visual perception and motor coordination (Beery & Buktenica 1997). The two additional tests focus on visual perception and motor coordination, where the latter has a particular focus on hand control. The purpose of the VMI-4 is the early identification of children with impaired visual-motor integration (VMI), as well as determining the extent to which an individual's visual and motor abilities can be integrated. The extent of the concurrence between visual perception and fine motor movement is included in this measuring instrument. Poor results in the VMI-4 can be ascribed to the inability to integrate visual-perceptual (VP) and motor skills and not necessarily to insufficient skills. The VMI-4 is a reliable test battery with validity values of r = 0.92, r = 0.91 and r = 0.89. The VMI test consisted of drawing three trial shapes and 24 increasingly complex geometric shapes. Each participant was expected to draw a geometric shape with a pencil (without using an eraser), and only one attempt per shape was allowed. This test had to be completed within the set time or was stopped after three consecutive mistakes. It took approximately 10-15 min to complete and could be executed individually or in a group. The visual-perceptual (VMI-VP) additional test required each participant to identify the correct corresponding shape for each item in a series of 27 geometric shapes. This test was executed individually and was stopped after 3 minutes or after three consecutive mistakes were made. The motor coordination (VMI-MC) additional test required the copying of a geometric shape, during which the participant had to draw it as correctly as possible while remaining within the given lines. This test could be executed in a group or individually, and the execution was stopped after 5 minutes had passed. The criteria for the allocation of points in the VMI-4 were as follows: a '0' was allocated for incorrect figures and a '1' for correct figures. The test was stopped when the time had elapsed or after three consecutive mistakes were made, except in the MC subtest, which was only stopped after the allocated time had elapsed. Data in this test were collected consecutively in three categories in the following order: VMI, visual perception and motor coordination. The raw scores were converted to standard scores and thereafter to percentile values. The standard scores were used to divide the participants into five different categories, ranging from very low (40-67), below average (68-82), average (83-117) and above average (118-132) to very high (133-160) (Beery & Buktenica 1997).
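The standard-score banding described above maps directly onto a small lookup. The sketch below (Python; a hypothetical helper written by us, not part of the VMI-4 materials) assigns the five Beery-Buktenica categories from a standard score.

```python
# Sketch of the VMI-4 standard-score categories (Beery & Buktenica 1997).
def vmi4_category(standard_score: int) -> str:
    bands = [
        (40, 67, "very low"),
        (68, 82, "below average"),
        (83, 117, "average"),
        (118, 132, "above average"),
        (133, 160, "very high"),
    ]
    for low, high, label in bands:
        if low <= standard_score <= high:
            return label
    raise ValueError("standard score outside the 40-160 range")

print(vmi4_category(79))   # below average (cf. the DCD group mean of 79.17)
print(vmi4_category(85))   # average
```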
Academic achievement
Academic progress reports of the June mid-year assessments, reflecting the grading code in each of the six required learning areas as well as the grade point average of the six learning areas for the participants in Grades 3 and 4, were obtained from each of the participating schools in 2013. The six compulsory learning areas are stipulated in the DoBE's South African Curriculum and Assessment Policy Statement (CAPS 2014) and include the following: mathematics, home language, second language, natural sciences (NS), social sciences (SC) and life orientation (LO). Language learning in Grade 4 included all the official languages in South Africa, namely Afrikaans, English, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, SiSwati, Tshivenda and Xitsonga. Home language referred to one or two languages offered at home-language level (for the applicable school or district), while second language referred to a language which is not a mother tongue, but which is used for certain communicative functions in a society or in the classroom (CAPS 2014). The Annual National Assessment (ANA), written in September of each year, is compiled at a national level and written by all South African learners in maths and language skills as a requirement of the DoBE (DE 2016); the national assessment guidelines required that learners be evaluated in the ANA on knowledge assimilated during the first three quarters of the school year (which runs from January to December). These ANA results of each participant were also made available by the Department of Basic Education (2014) of the North West Province for each participating school. Academic performance in each learning area was coded into categories according to the official grading codes in the CAPS, expressed as follows: a '7' is allocated for exceptional achievement (80%-100%); a '6' for meritorious achievement (70%-79%); a '5' for considerable achievement (60%-69%); a '4' for sufficient achievement (50%-59%); a '3' for average achievement (40%-49%); a '2' for basic achievement (30%-39%); and a '1' is allocated when the learning outcome was 'not achieved' (0%-29%). The June mid-year assessments were compiled and scored by the applicable teachers, whereas the ANA tests were compiled nationally by the DoBE but marked by the teachers, making them a more objective, representative and comparative assessment of learners' academic achievements.
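The CAPS grading codes above amount to a simple percentage-to-code mapping. A minimal sketch (Python; the function name is ours) follows.

```python
# Sketch of the official CAPS grading codes used for academic performance.
def caps_code(percentage: float) -> int:
    """Map a percentage mark to the 1-7 CAPS achievement code."""
    thresholds = [
        (80, 7),  # exceptional achievement (80%-100%)
        (70, 6),  # meritorious achievement (70%-79%)
        (60, 5),  # considerable achievement (60%-69%)
        (50, 4),  # sufficient achievement (50%-59%)
        (40, 3),  # average achievement (40%-49%)
        (30, 2),  # basic achievement (30%-39%)
    ]
    for cutoff, code in thresholds:
        if percentage >= cutoff:
            return code
    return 1      # not achieved (0%-29%)

print(caps_code(38.5))  # 2
```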
Data analysis
The Statistica for Windows 2015 package (StatSoft 2015) was used to analyse the data. The data were analysed descriptively by using means (M), SD and minimum and maximum values (StatSoft 2015). Bivariate Spearman rank order correlation coefficients (r) were used to analyse correlations, and the following guidelines were used to determine the strength of significant associations in practice: r ≈ 0.1 indicated a small practical significance, r ≈ 0.3 a moderate significance and r ≥ 0.5 a large practical significance (Cohen 1988). In addition, a stepwise regression analysis was performed for exploratory purposes to determine the predictor variables that showed the largest unique contribution to the total variance in academic performance. Academic achievement in the six learning areas, as well as the grade point average of these six learning areas, were used as the outcome variables in the regression analysis, while different constructs which influence general coordination, such as perceptual-motor abilities (VMI, visual perception and motor coordination) and motor coordination functioning (manual dexterity, aiming-and-catching and stability), were used as the predictor variables. Two regression analyses were performed: one on the performance in learning areas that were assessed nationally and another on the academic performance during the mid-year school assessments.
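As an illustration of this correlation step, the sketch below (Python with scipy, rather than the Statistica package used in the study; the data are synthetic placeholders) computes a Spearman rank order correlation and labels its practical significance using Cohen's thresholds.

```python
# Sketch of the Spearman correlation analysis with Cohen's effect-size labels.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
visual_perception = rng.normal(85, 10, size=221)             # hypothetical scores
maths = 0.4 * visual_perception + rng.normal(0, 12, size=221)

r, p = spearmanr(visual_perception, maths)

def practical_significance(r_value: float) -> str:
    r_abs = abs(r_value)
    if r_abs >= 0.5:
        return "large"
    if r_abs >= 0.3:
        return "moderate"
    if r_abs >= 0.1:
        return "small"
    return "negligible"

print(f"r = {r:.2f}, p = {p:.4f}, effect: {practical_significance(r)}")
```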
Ethical consideration
Ethical approval (based on the Helsinki guidelines) for the execution of the NW-CHILD study was obtained from the Ethics Committee of the North-West University (No. NW-00070-09-A1). Permission was also obtained from the Department of Basic Education of the North West Province, South Africa. Principals of the various identified schools also had to provide permission for the collection of data during school hours. The parents and/or legal guardians of all the Grade 3 and 4 participants had to provide permission for them to participate in the study. Data were collected by the senior researchers and postgraduate students with qualifications in Human Movement Sciences, specialising in Kinderkinetics. If a participant's mother tongue was not English or Afrikaans, trained interpreters were used to explain the test instructions to the participant.
Results
Descriptive and demographic information of the group can be found in Tables 1 and 2. DCD was identified in 47 (21.27%; 23 boys and 24 girls) of the 221 participants, of whom 14 (6.33%; seven boys and seven girls) were classified with severe-DCD (at or below the 5th percentile). One hundred and seventy-four participants (78.73%) fell above the 16th percentile, showing no signs of DCD; they serve as the reference group, referred to as the without-DCD group or typical children. Table 2 reports the number of participants per learning area as well as their academic achievements during the June mid-year school assessment and the September ANA.
Table 3 reports descriptive values of the group as well as significant differences between the DCD group (< 16th percentile) and the without-DCD group for each of the outcome variables, which included all the subsections of the VMI-4 and the MABC-2. The mean values in the DCD group were poorer in each of these subsections of both the VMI-4 and the MABC-2 compared to those in the without-DCD group. Statistically (p ≤ 0.05) and practically (medium to large, d ≥ 0.8) significant differences were found for all the VMI-4 and MABC-2 variables. The visual perception mean standard score (79.17) of the DCD group was also categorised as below average (68-82), compared to that of the without-DCD group (84.57), which fell in the average category (83-117) (Beery & Buktenica 1997); this has clinical relevance.
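For reference, the practical-significance measure used above (Cohen's d) can be computed with a pooled standard deviation, as in this illustrative Python sketch (toy values, not the study's data):

```python
# Illustrative Cohen's d with pooled SD; d >= 0.8 is the 'large effect'
# cut-off applied in the paragraph above.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) /
                        (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

print(cohens_d([84.6, 90.0, 88.0, 80.0], [79.2, 75.0, 78.0, 72.0]))
```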
Relationships were further analysed with Spearman rank order correlations (Table 4) between the three VMI-4 variables (VMI, visual perception and motor coordination), the three MABC-2 variables (manual dexterity, object manipulation and balance) and the academic achievements obtained in the mid-year school assessment and the national assessment (ANA) in the without-DCD group (n = 174). Visual perception showed the strongest correlations with all the learning areas during the mid-year school assessments and also correlated most strongly with the ANA results for Afrikaans, English and mathematics in this group. Motor coordination also showed correlations, but only of small strength with most of the learning areas, while a relationship of moderate practical significance was established with Afrikaans (r = 0.32). Visual-motor integration reflected the smallest number of correlations with the different academic learning areas, all of which indicated a small practical significance.
Table 5 displays a similar analysis for children with DCD (n = 47). Manual dexterity, visual perception and the total MABC score showed clear relationships with most of the learning areas, indicating medium to strong effect sizes. The strongest correlations were found between manual dexterity and most of the learning areas. The highest correlations in both the June school (r = 0.42) and ANA (r = 0.39) assessments were found between maths and manual dexterity.
As these correlations indicated that the outcome variables are all related to academic performance in children with and without DCD, a forward step-by-step regression analysis was performed to determine the unique percentage variance explained (ΔR²) by the best predictors of academic achievement in the group (Table 6). This analysis determined which of the three perceptual-motor constructs and the three motor coordination constructs made the biggest contribution to the variance in academic performance explained by these variables. The results obtained for the six compulsory learning areas and the grade point average of these six learning areas in the mid-year June school assessment, as well as for the national assessment (ANA), which assesses only mathematics, home language and first additional language, are both displayed in Table 6. Visual perception explained the largest unique contribution to the total variance of all six perceptual-motor and coordination predictor variables in both regression analyses (Table 6), with the exception of Afrikaans (June) and English (ANA), where motor coordination showed a higher contribution to the total variance. Visual perception had the highest unique contribution to the total variance in all the other learning areas as well as in the grade point average. A total percentage variance of 18.17% was explained by these predictor variables in the grade point average, of which 16.36% was contributed by visual perception, while motor coordination (0.42%) and manual dexterity (0.95%) made additional small contributions. The ANA results, which were based on standardised papers drawn up nationally, where teachers' subjective influence could hence not play a role, are also displayed in Table 6. The contribution of visual perception was even higher in two of the learning areas that were assessed: in mathematics, visual perception explained 22.04% of the total percentage variance of 23.11%, while in Afrikaans visual perception contributed 15.93% of the 23.65%. Manual dexterity and balance also proved to be contributors, although only very small percentages of the total percentage variance were explained by these predictor variables. It appeared from these results that visual perception, rather than visual-motor integration, served as a significant contributor to academic achievement.
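The forward step-by-step logic can be sketched as below; this is a hedged Python illustration of greedy forward selection reporting incremental R² (the study's Statistica routine and its entry criteria may differ):

```python
# Illustrative greedy forward selection reporting each predictor's
# incremental contribution (delta R^2); not the study's actual routine.
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_stepwise_delta_r2(X, y, names):
    selected, remaining = [], list(range(X.shape[1]))
    r2_prev, steps = 0.0, []
    while remaining:
        # Pick the predictor that most improves R^2 at this step.
        scores = [(LinearRegression().fit(X[:, selected + [j]], y)
                   .score(X[:, selected + [j]], y), j) for j in remaining]
        r2_new, best = max(scores)
        if r2_new <= r2_prev:  # no further improvement: stop
            break
        steps.append((names[best], r2_new - r2_prev))
        selected.append(best)
        remaining.remove(best)
        r2_prev = r2_new
    return steps  # e.g. [('visual perception', 0.16), ...]
```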
Discussion
This study examined the association between DCD, perceptual and motor coordination skills and academic achievement in 10-year-old children. The results firstly confirmed an overall relationship between visual perception and academic achievement in children, irrespective of whether they had coordination problems (DCD) (Table 4), while moderate to strong correlations were also established between DCD, manual dexterity, visual perception and academic achievement (Table 5). The study also verified that of all the different perceptual and coordination constructs that were measured, visual perception had the highest correlation with mathematical achievement in typical children, emphasising the importance of this relationship in practice (Table 4). Kulp et al. (2004:161) confirmed similar relationships between visual perception and mathematical skills, specifically in activities that involved visual memory. Pienaar et al. (2013:377) also reported a stronger relationship between basic mathematics, reading and writing literacy, VMI and visual perception abilities, compared to the relationship that they found between these basic literacy skills, motor proficiency and motor coordination skills in school beginners. Skills that are assessed in the Grade 3 and 4 national mathematics exam paper of South African children include, among others, calculation ability, knowledge of monetary and measurement units, figure patterns, factors, rounding off, fractions and identification of two- and three-dimensional shapes (ANA 2013). Various perceptual-motor skills serve as the underlying foundation for the successful execution of these skills. Perceptual-motor skills also underlie a variety of abilities like balance, laterality, spatial relationships, ocular motor control, cross-lateral integration and body awareness (Auxter, Pyfer & Huettig 2010). Cheatum and Hammond (2000:210) report that body awareness, laterality and direction play a significant role in the effective development of a body scheme, which furthermore overflows into the development of spatial orientation. Furthermore, Gunderson et al. (2012:1238) and Van Lill (2011:22) report that good spatial orientation contributes to a better understanding of the number line because of improved knowledge of the linear spatial representation of figures. Richardson, Hunt and Richardson (2014:754) support the above opinions by indicating that spatial orientation is a significant predictor of mathematical achievement and that visual-spatial function serves as an additional predictor. Pieters et al. (2012:503) highlight that fine motor activities like sorting and visual-perceptual activities are needed to form an adequate mental representation of numerical concepts. Problems with VMI, such as copying a figure, can be caused by problems with visual perception (for instance perceiving a circle), by problems with motor skills (such as drawing lines), or by problems integrating both (Sortor & Kulp 2003). Furthermore, the study of Pieters et al. (2012:499, 503) on 7- to 9-year-old children with mathematical learning problems confirmed a relationship between visual perception and procedural calculation, such as borrowing and transferring skills in maths. These findings confirm that visual perception and motor performance are closely linked (Gibson 1979), which, again, can influence performance in academic areas such as maths.
The results in Table 3 confirmed significantly poorer VMI, visual perception, motor coordination, manual dexterity, aiming-and-catching and balance skills in children with DCD, which is also confirmed by other studies. The findings of Tsai et al. (2008:662) suggest a general immaturity of brain networks that support complex visual-spatial processing in children with DCD. A South African study on children with DCD confirmed relationships between ocular muscle control and moderate to serious DCD in 6- to 7-year-old children (Coetzee & Pienaar 2011). Furthermore, Tsai et al. (2008:663) and Cheng et al. (2014:2177) reported poorer visual-perceptual skills, while Schoemaker et al. (2001:130) reported distinctly poorer visual closure and position-in-space perception skills in children with DCD compared to their typically developing peers. Our findings confirmed statistically as well as practically significant lower visual-perceptual skills in children with DCD, which partially agrees with the findings of Bonifacci (2004:164), who reported significant VMI differences (p < 0.014).
Our overall results are further confirmed by other studies that have reported that children with DCD display poorer academic abilities (Asonitou et al. 2012; Missiuna, Rivard & Pollock 2004). Visual-motor integration and visual perception skills are indicated as role players in the academic performance of children with DCD (Goldstand et al. 2005; Kulp 1999; Pieters et al. 2012; Van Hartingsveldt et al. 2014). Son and Meisels (2006:763) confirmed a relationship between visual-motor skills, gross motor skills and later reading, and especially mathematical achievement. Alloway and Temple (2007:483) state that learners with DCD have a poorer working memory, which impacts directly on performance in literacy and numeracy skills. The child with DCD might also struggle with alignment and spacing of columns and numbers in mathematical questions, which contributes to slower copying of figures or calculations from the board (Missiuna et al. 2004). Alloway and Temple (2007:483) reported that children with DCD performed poorly in literacy and numeracy assessments, while Sortor and Kulp (2003:761) found that motor coordination had a positive correlation with academic achievements. Morad et al. (2002:119) highlighted convergence skills, while Goldstand et al. (2005:383) add visual processing and visual functioning skills, as role players in their academic achievement. Tsai et al. (2008:663) report clinically significant deficits in different but interrelated visual-perceptual abilities in children with severe-DCD. The present study therefore confirms the above findings and confirms that visual perception correlates with DCD and academic achievement. A possible reason for this could be the close link between VMI and motor skills: problems with VMI, as seen in manual dexterity, could be affected by delays in visual perception and motor skills (Sortor & Kulp 2003).
Impaired manual function is also reported in children with DCD (Bieber et al. 2016) and is linked to poorer academic skills among them, which confirms our findings regarding poorer manual dexterity in the DCD group. Luo et al. (2007) report significant relationships between fine motor skills and mathematics, stating that this relationship can be used to predict mathematical performance over time. In addition, Morales et al. (2011:411) report that fine motor skills have a stronger relationship with academic achievement than gross motor skills and, together with age, serve as a significant predictor of academic performance. Carlson et al. (2013:527) are of the opinion that the correlation between fine motor skills and academic achievement can also be ascribed to visual-spatial integration and not necessarily to poor visual-motor coordination.
Furthermore, our results established that visual perception was the largest unique contributor, of all the different perceptual and coordination predictor variables that were analysed, in explaining academic achievement in children, irrespective of motor difficulties (Table 6). Shin, Park and Park (2009:622) report a close interplay between 9- and 13-year-old children's visual skills (especially accommodation) and their academic achievement, and state that binocular dysfunction has a negative effect on reading, mathematics, science and social science (Shin et al. 2009). However, motor coordination and hand control served to a lesser degree as contributors, especially in language-related learning areas (Table 6). Motor coordination and manual dexterity might reflect the fine muscle control abilities of the group, as well-functioning fine motor muscles and control can influence the quality of handwriting tasks and writing speed (Malloy-Miller et al. 1995; Schwellnus et al. 2012). Tseng and Chow (2000:87) also confirmed a significant relationship between VMI skills and writing skills. It was also interesting to note that motor coordination (especially hand skills as assessed in the VMI-4) played a significant and bigger role in the language learning areas in particular. It may be that some similarities in underlying processes are shared, as handwriting skills are considered to be an integral part of language-expressive skills.
This research had limitations that need to be acknowledged. The DCD group (n = 47), which was identified from the overall group, was too small to explore the differences and relationships under investigation as extensively as intended. This group also included participants with moderate- and severe-DCD, which could have influenced the strength of the associations that were established with academic performance. The participants also represent only one of nine regions in South Africa, limiting the generalisation of the results. Further studies, using a larger group that is also representative of other parts of South Africa, are therefore recommended to confirm the above results. A fuller understanding of the contribution of the individual functional constructs of visual perception to academic performance is also still needed, as the relationship between academic achievement and underlying visual-perceptual-motor factors was only superficially investigated in this study. However, the strong link that was established between visual-perceptual abilities and academic performance necessitates more in-depth research into the role of different but interrelated visual-perceptual abilities in the academic performance of children, especially those with DCD. The influence of factors such as poor socio-economic environments on this relationship should also be taken into account, as earlier studies indicated that they might account for or mediate such relationships. The development and testing of interventions for young children with DCD with these deficits are also recommended.
Conclusion and limitations
Inferior VMI, visual perception, motor coordination, manual dexterity, aiming-and-catching and balance were found in children with DCD, which showed statistical, practical and even clinical relevance. Visual perception made a significant contribution to the academic performance of Grade 4 learners, irrespective of whether children had coordination problems such as DCD. Visual perception also correlated highly with mathematical achievement in children with DCD as well as in children without DCD, and therefore seems to play a significant role in the child's ability to understand the perceptual concepts that are needed to execute mathematical tasks effectively at the age of 10 years. However, coordination problems as a predictor of academic performance in typical children did not increase this difficulty, while manual dexterity showed a higher association with academic performance in children with DCD not only compared to typical children but also in comparison with visual perception. It is assumed that if a young child's motor skills are automatised, they will need less energy to concentrate on performing these skills, especially fine motor tasks during the early school years, and can instead concentrate on the academic tasks at hand. Gross motor control and visual perception develop during early childhood, and both should be well developed by 10 years. As motor proficiency and visual perception are both developmental processes, exposure to relevant learning environments is thus necessary, and educators and policymakers should hence provide adequate resources and opportunities to improve the development of these skills. Intervention strategies should also be in place for those children who are at risk of academic failure based on these developmental deficits. Children should therefore be exposed to stimulating environments from a very young age, where age-appropriate development of perceptual-motor skills can establish a firm foundation for later academic skills and especially mathematical achievement. Activities that can contribute to the improvement of visual perception abilities and mathematical achievement may include obstacle courses (spatial orientation), different animal walks (motor planning) and activities that involve different colours, shapes and dimensions. Sufficient training and empowerment of teachers in the development of these skills are also essential. Our results highlight that all children, irrespective of whether they have motor coordination problems, need to participate in such development programmes at a young age. Visual-motor integration, visual perception, motor coordination, manual dexterity, aiming-and-catching and balance skills of children with motor difficulties such as DCD should also receive remedial attention from an early age.
This study increased our understanding of the strength of the relationship between perceptual and coordination factors and academic performance in children with DCD as well as in typically developing children, and of the importance of these factors in children's academic performance.
TABLE 2 :
Academic marks during the June mid-year assessment and the Annual National Assessment.
N, number of participants; M, mean values; SD, standard deviation; Min, minimum values; Max, maximum values; ANA, Annual National Assessment.
TABLE 3 :
Descriptive statistics and significance of differences in the Visual-Motor Integration-4 score and Movement Assessment Battery for Children, second edition scores in the developmental coordination disorder (DCD) and without-DCD groups.
TABLE 4 :
Spearman Rank order correlations between Visual-Motor Integration-4 and Movement Assessment Battery for Children, second edition variables and academic achievement in the June mid-year and the national (Annual National Assessment) assessments in children without developmental coordination disorder (n = 174).
TABLE 5 :
Spearman Rank order correlations between Visual-Motor Integration-4 and Movement Assessment Battery for Children, second edition variables and academic achievement in the June mid-year and national (Annual National Assessment) assessments in children with moderate-to severe developmental coordination disorder (< 16th percentile) (n = 47).
TABLE 6 :
Unique percentage variance explained contribution (ΔR²) of best predictors for each learning area as given by a forward step-by-step regression analysis for the mid-year June school assessment and the Annual National Assessment. Note: Bold values indicate the highest % variance for each learning area. Afr., Afrikaans; Eng., English; Tswana., Setswana; Math., Mathematics; LO, life orientation; NS, natural sciences; SS, social sciences; GPA, grade point average; VP, visual perception; MC, motor coordination; VMI, visual-motor integration. | 2018-11-27T02:06:22.807Z | 2018-09-20T00:00:00.000 | {
"year": 2018,
"sha1": "10a620f902cb38a9dcb35e66c5bba1c935634af1",
"oa_license": "CCBY",
"oa_url": "https://sajce.co.za/index.php/sajce/article/download/562/761",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "10a620f902cb38a9dcb35e66c5bba1c935634af1",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
249188962 | pes2o/s2orc | v3-fos-license | A Rare Case of Leptomeningeal Carcinomatosis Secondary to Metastatic Non-Small Cell Lung Carcinoma
Leptomeningeal carcinomatosis is a rare complication of metastatic systemic malignancy, with lung cancer being the most common cause. We present a case of a 75-year-old man with a past medical history of right non-small cell lung carcinoma and ischemic stroke who presented with a persistent headache and swallowing difficulties. On evaluation, the patient was initially diagnosed with a subacute infarct of the right posterior frontal lobe following magnetic resonance imaging (MRI). The patient’s headache and dysphagia worsened, increasing the possibility of brain metastasis. The patient underwent cerebrospinal fluid analysis including cytology and multiple MRI studies with no obvious explanation for the symptoms. The patient eventually developed multiple cranial nerve palsies, and a diagnosis of leptomeningeal carcinomatosis was made with neuroradiology consultation for the MRI.
Introduction
Leptomeningeal carcinomatosis (LMC) is an infrequent complication of malignancy that results from the dissemination of tumor cells into the subarachnoid space. Lung cancer is the most common systemic malignancy to metastasize in this way, followed by gastric cancer, breast cancer, malignant lymphoma, and malignant melanoma [1]. Around 10%-26% of patients with lung cancer ultimately develop leptomeningeal metastasis [2]. As a result of frequent multifocality, clinical signs and symptoms, which depend on the site of involvement, are often non-specific. However, typical findings include cranial nerve palsy, raised intracranial pressure, or meningeal irritation, leading to diplopia and facial weakness, changes in hearing, and headache, respectively [3]. Magnetic resonance imaging (MRI) of the brain and cerebrospinal fluid (CSF) analysis are the standard for diagnosis of the condition [4,5]. However, Straathof et al. [6] concluded that, in the absence of a gold standard test for diagnosis, CSF cytology had a sensitivity and specificity for diagnosing the condition of 75% and 100%, respectively, whereas with gadolinium MRI the sensitivity and specificity were 76% and 77%, respectively. The sensitivity of enhanced MRI was thus equivalent to that of CSF analysis, while the specificity of CSF examination was higher than that of enhanced MRI (100% versus 77%) [6]. In the present case, the patient developed leptomeningeal metastasis that was not demonstrated by a series of MRI studies and CSF cytology.
Case Presentation
A 75-year-old man with a past medical history of stage IIIA adenocarcinoma of the right lung (treated with chemotherapy/resection in 2009 and radiotherapy for local recurrence in 2019), adenocarcinoma of the rectum (treated with resection and colo-colonic anastomosis in 2009), and lacunar infarct of the left thalamus (March 2017) presented on April 13, 2020, with a persistent right-sided headache and difficulty swallowing for two to three months. The patient described the headache as a constant pressure-like sensation localizing to the right parietotemporal region that was somewhat relieved with ibuprofen. There was no history of falls or injuries, jaw claudication, or weakness in the shoulders or legs. He had undergone an extensive outpatient workup and had been seen by an ophthalmologist, an otolaryngologist, and a dentist without any identifiable cause except for suspected temporomandibular dysfunction that was unresponsive to treatment with nonsteroidal anti-inflammatory drugs and physical therapy. MRI of the head with and without contrast performed a week earlier for similar complaints had shown a small 4-mm rounded focus of bright signal on diffusion-weighted imaging with surrounding peripheral cortical enhancement, a probable small subacute right posterior frontal lobe cortical infarct (arterial, or venous in the setting of cortical vein thrombosis) rather than treated metastasis. The patient was started on aspirin and atorvastatin and referred to the neurology outpatient clinic with a plan for a repeat MRI of the head in four weeks. The patient also had difficulty swallowing both solids and liquids, with occasional coughing episodes, for a month. He had lost about 7 lbs in a month. The patient had multiple admissions for dysphagia with poor appetite and underwent extensive workup by gastroenterology, neurology, and otolaryngology without a mechanical explanation.
On physical examination, the patient did not have significant deficits of cranial nerves (CNs) I-XI, had no focal neurologic abnormalities, and had preserved strength and sensation in the bilateral face and upper and lower extremities. The patient was admitted with a diagnosis of subacute stroke (right posterior frontal lobe cortical infarct). The patient underwent computed tomography angiography of the head and neck, which did not show any hemodynamically significant stenosis. A transthoracic echocardiogram showed no obvious regional wall abnormalities with a left ventricular ejection fraction of 56%, evidence of a probable patent foramen ovale (PFO) with right-to-left shunting across the interatrial septum on agitated saline injection, mild mitral regurgitation, and tricuspid regurgitation. The cardiology team recommended managing the PFO conservatively. The stroke was determined to be cryptogenic. The cardiology team recommended a 30-day event monitor with consideration for a loop recorder, a repeat MRI with contrast in four weeks, aspirin and atorvastatin, and a follow-up with outpatient vascular neurology. The patient was evaluated by a speech and language pathologist, who suspected mild oropharyngeal dysphagia related to the subacute stroke and considered the patient to be at low to moderate risk for postprandial aspiration. The patient was discharged on April 15, 2020, on gabapentin for headaches and prednisolone for concern for giant cell arteritis, and was advised to follow up with ophthalmology as an outpatient. The patient was readmitted on April 24, 2020, due to difficulty in swallowing, regurgitation, and inability to swallow his medications. Otolaryngology performed a laryngoscopy, which did not reveal any pathology: there was no nasal/laryngeal/pharyngeal mass, vocal fold movement was normal, and there was mild inter-arytenoid edema; the dysphagia was considered possibly related to the right frontal cerebrovascular accident. Fluoroscopic swallowing function with video showed no evidence of cervical esophageal mass, web, or diverticulum. The patient tolerated pureed feeds and hence was discharged again on April 27, 2020.
The patient was readmitted on May 5, 2020, with complaints of worsening headache in the right frontoparietal region, occasional blurry vision, perioral numbness, and worsening dysphagia. On physical examination, the patient had right upper eyelid ptosis. The patient had a mild right-sided facial droop, flattening of the right nasolabial fold, and inability to close the right eye completely, which were suggestive of CNs III, V, and VII palsy. He had preserved strength and sensation in the bilateral upper and lower extremities. The basic metabolic profile and complete blood count with differential count were unremarkable. The thyroid-stimulating hormone (TSH) level was 1.51. Voltage-gated calcium channel antibody, anti-muscle-specific kinase (anti-MuSK) antibodies, LRP4 (low-density lipoprotein receptor protein) autoantibody, paraneoplastic panel, angiotensin-converting enzyme (ACE) levels, Lyme antibody, SSA(Ro)/SSB(La) autoantibodies, creatine kinase, creatine phosphokinase, anti-lupus antibody, lupus anticoagulant, anti-double-stranded DNA antibodies, anti-neutrophil cytoplasmic antibody (ANCA), anti-centromere antibody, rheumatoid factor, human immunodeficiency virus (HIV), and ganglioside antibody were negative. MRI of the head with and without contrast showed a diffusion-hyperintense right frontal lobe lesion and associated findings that were not significantly changed from April 9, 2020. The stability for nearly a month increased the odds that this represented metastasis. A magnetic resonance venogram performed for suspected venous sinus thrombosis was negative. MRI of the cervical spine was negative for cord signal abnormality, pathological enhancement, or osseous metastatic disease. The patient underwent a lumbar puncture, which was negative for cytology, cryptococcal antigen, meningitis/encephalitis panel by polymerase chain reaction (PCR), flow cytometry, Venereal Disease Research Laboratory (VDRL) test, enterovirus PCR, herpesvirus 6 PCR, varicella-zoster PCR, herpes simplex virus PCR, cytomegalovirus PCR, West Nile virus PCR, and acid-fast bacilli stain and culture. A temporal artery biopsy was negative for giant cell arteritis. The patient was started on carbamazepine for suspected trigeminal neuralgia. The patient's neurologic deficits continued to worsen, with decreased oral intake and malnutrition. The patient was discharged home on May 15, 2020, with home health care.
The patient was readmitted once more with failure to thrive secondary to persistent dysphagia on May 18, 2020. The patient had persistent CNs III, V, and VII palsy. A repeat MRI was performed for worsening dysphagia, persistent headache, and multiple CN palsies without a notable cause. MRI of the head with and without contrast showed a 12-mm enhancing focus superiorly in the right precentral cortex with associated FLAIR hyperintensity (Figure 1) and a tiny focus of high diffusion signal similar to that found on May 6, 2020; a 9-mm enhancing focus inferiorly in the right cerebellar hemisphere that was difficult to perceive on the previous MRI due to posterior fossa artifacts but in retrospect was probably present on both examinations; and a focal FLAIR hyperintensity superomedially in the left frontal lobe without enhancement that probably represented a small area of gliosis. Neuroradiology consultation for the MRI showed symmetric seventh/eighth nerve enhancement with some slightly nodular enhancement of the trigeminal nerve, findings suggestive of leptomeningeal metastasis (Figure 2). The patient was started on steroids, was planned for palliative whole-brain radiation as an outpatient, and was discharged home. The patient passed away shortly afterward at his home.
Discussion
LMC, or carcinomatous meningitis, is the infiltration of the leptomeninges by malignant cells; it is a devastating metastatic complication of solid tumors or hematological malignancies with high mortality and a dismal prognosis [7]. The overall incidence of LMC for all types of systemic cancer has been documented to be 3-8%, but in an autopsy series it was noted to be up to 20%, with lung malignancy being the most common underlying systemic malignancy [1,8].
LMC often presents with multifocal neurologic deficits because of infiltration of cranial and spinal nerve roots, direct invasion of the brain or spinal cord, obstructive hydrocephalus, or a combination of these factors. As a result, the patient may suffer from headache, nausea/vomiting, change in mental status, diplopia, facial numbness/palsy, hearing loss, loss of visual acuity, paresthesia, pain in the back or neck, weakness in the legs, and bowel/bladder dysfunction [9]. In our case, the patient presented with persistent headaches with progressive involvement of the facial nerve and trigeminal nerve, causing facial palsy and facial numbness, respectively. A detailed clinical history is critical to differentiate LMC from other diagnoses that can present with identical manifestations, such as infectious meningitis, metabolic and toxic encephalopathies, sarcoidosis, paraneoplastic syndromes, and chemoradiation side effects. A heightened clinical understanding of LMC allows for earlier detection and treatment, maintenance of the quality of life, and prolonged survival [9].
The diagnosis of LMC is accomplished with gadolinium-enhanced MRI and CSF cytology. MRI of the brain with whole-spine imaging, with contrast-enhanced T1- and T2-weighted sequences, is recommended if LMC is suspected, since metastatic malignancies can involve the entire central nervous system (CNS) [9]. However, the sensitivity of MRI has been reported to be only between 65% and 75% [10]. The gold standard for diagnosis is the presence of malignant cells in the CSF. However, false-negative rates of up to 50% have been reported in observational studies [11]. Hence, serial CSF sampling can improve sensitivity [12]. MRI findings are generally abnormal in only 75-90% of patients with cytology-positive CSF. Therefore, neither MRI nor CSF sampling is sensitive enough when used alone for the diagnosis of LMC. Ultimately, clinical findings together with MRI findings or serial CSF analysis should be used for the diagnosis of LMC [13].
LMC usually carries a prognosis of three to four months [14]. An evolving number of systemic anticancer therapies, especially molecularly targeted drugs and immunotherapies that cross the blood-brain barrier, need to be individualized based on patient characteristics. In patients with non-small cell lung carcinoma (NSCLC), systemic administration of genotype-directed targeted therapies can result in clinical benefits. Epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors such as erlotinib and osimertinib are preferred in EGFR-mutant NSCLC [15,16]. Anaplastic lymphoma kinase (ALK) fusion oncogene-positive NSCLC can be treated with ALK inhibitors such as lorlatinib, which has increased CNS penetration and intracranial activity [17]. Intrathecal chemotherapy has historically been a primary treatment for LMC in patients with solid tumors, although its efficacy is modest and its superiority over systemic treatment has not been well established in randomized studies [18]. Bulky symptomatic disease sites can be treated with radiation therapy, with whole-brain radiation for diffuse encephalitis or hydrocephalus. Persistent hydrocephalus can be treated with steroids or a ventriculoperitoneal shunt. Identifying patients with a very poor prognosis can help limit unnecessary or futile interventions and maximize supportive care and comfort [18,19].
Conclusions
LMC is a rare complication of NSCLC, and CSF sampling and MRI, the standard tests for diagnosing the disease, both have variable sensitivity. Patients with negative MRI and CSF cytology might need serial MRI studies with neuroradiology consultation and repeat CSF cytology if the suspicion of the disease remains high, as false-negative results are common even when both tests are performed together for the diagnosis of LMC.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-05-31T15:02:40.912Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "93bb26f325bf24e148ab2a5c1ff9b6d917f7e660",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/98574-a-rare-case-of-leptomeningeal-carcinomatosis-secondary-to-metastatic-non-small-cell-lung-carcinoma.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4916aaa97c8a5527427dcfae6e1f0394e876023a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207243142 | pes2o/s2orc | v3-fos-license | Transarterial Chemoembolization Plus Sorafenib: A Sequential Therapeutic Scheme for HCV-Related Intermediate-Stage Hepatocellular Carcinoma: A Randomized Clinical Trial
INTRODUCTION
Based on available treatments, prevention of hepatocellular carcinoma (HCC) progression remains the most important challenge to improve prognosis. Although a radical approach seems to ameliorate HCC outcome, recurrence rates are not affected [1]. In hepatitis C virus (HCV)-infected people, HCC is stringently dependent on cirrhosis development. Cirrhosis that follows chronic hepatitis is indeed a well-established precancerous state [2]. Hepatocytes undergo intense mitogenic stimulation, induced by elevated levels of growth factors and inflammatory cytokines [3]. Mutations and promotion lead to a higher risk for developing HCC [4].
No benefit has been demonstrated with the administration of systemic or regional chemotherapy, including oral acyclic retinoic acid [5], adoptive immunotherapy [6], and intra-arterial radioiodine injection [7]. Also, no effect has been reported using cytokine-induced killer cell immunotherapy [8]. Carmofur, a pyrimidine analog, was found to lead to a longer disease-free survival (DFS) interval, but it had no effect on overall survival (OS) [9]. No significant effect on DFS or OS outcomes was recorded with i.v. epirubicin and oral carmofur [10]. Similar results were achieved with oral tegafur, a 5-fluorouracil prodrug [11], whereas the OS duration appeared to be shorter in the group receiving uracil and tegafur [12]. Thus, the development of new and effective therapies for HCC is urgently needed.
Sorafenib, a multikinase inhibitor acting on vascular endothelial growth factor receptor (VEGFR), platelet-derived growth factor receptor (PDGFR), and Raf signaling, is able to inhibit tumor growth and neoangiogenesis [13]. The successful clinical application of sorafenib stems from the demonstration of a survival advantage following its systemic administration to patients with advanced HCC. In a multicenter, phase III, double-blind, placebo-controlled trial carried out on 602 patients with advanced HCC, the median survival time and the time to radiologic progression were nearly 3 months longer for patients treated with sorafenib than for those given placebo [14]. A further phase III trial confirmed these results in patients from the Asia-Pacific region [15]. Overall, the survival benefit of sorafenib was also seen in some patients who experienced failure with previous local HCC treatment.
Transarterial chemoembolization (TACE) has become the standard of care for patients with HCC not suitable for surgical or ablative treatments when metastases and advanced liver disease are lacking [16]. This procedure, however, is a potent stimulator of local angiogenic factors capable of promoting tumor regrowth, thus increasing the risk for metastasis and worsening outcome [17,18]. Whether or not sorafenib might be used to target upregulation of TACE-induced angiogenic factors and hence potentially enhance its efficacy remains a reasonable hypothesis [19].
Here, we describe the results of a prospective, placebo-controlled, randomized, double-blind clinical study that we conducted to evaluate whether or not TACE combined with sorafenib could significantly extend the TTP of HCV-infected patients with intermediate-stage HCC.
Patient Recruitment
The final protocol was approved by the Italian Medicines Agency (AIFA) on behalf of the National Health Service to support independent research contributing to the knowledge of drug efficacy, effectiveness, and safety and improving the appropriateness of drug use. The study (AIFA Register, FARM7SJ7X9) [20] was also approved by the local ethics committee of the Azienda Ospedaliero-Universitaria Policlinico of Bari (ID 1444/CE). Written informed consent was obtained from 80 adult patients with HCC without neoplastic occlusion of the portal vein or extrahepatic spread and potentially eligible for therapeutic procedures. Recruitment started in October, 2007 and was closed in January, 2011.
Patient eligibility was established on the basis of the following inclusion criteria: Barcelona Clinic Liver Cancer (BCLC) stage B HCC [1], anti-HCV and HCV RNA positivity, Child-Pugh class A cirrhosis, an Eastern Cooperative Oncology Group (ECOG) performance status score of 0-1, no prior targeted antiangiogenic therapy or at least 4 weeks since prior systemic chemotherapy, at least 4 weeks since prior antiviral therapy, no major renal impairment, no current infections requiring antibiotic therapy, not on anticoagulation or suffering from bleeding disorders, no unstable coronary artery disease or recent myocardial infarction, a platelet count ≥40,000/μL, a hemoglobin level ≥8 g/dL, total bilirubin ≤3 mg/dL, alanine aminotransferase ≤5× the upper limit of normal, a prothrombin time (PT) international normalized ratio ≤2.3, an absolute neutrophil count >1,500/mm³, the ability to understand the protocol and to agree with it and sign a written informed consent, and the absence of pregnancy.
Exclusion criteria included any concomitant cancer distinct from HCC, renal failure requiring hemo- or peritoneal dialysis, congestive heart failure, hepatitis B virus or HIV infection, drug or alcohol abuse (≥25 g/day), cardiac ventricular arrhythmia, thromboembolic event, hemorrhage or bleeding in the previous 4 weeks, and major surgery within 8 weeks prior to enrollment.
Study Coordination
The study was coordinated by the Liver Unit of the Department of Internal Medicine and Clinical Oncology, University of Bari Medical School, which handled the overall management, registration, database analysis, and quality assurance.
Data Safety Monitoring Board
An independent data safety monitoring board closely monitored the proper conduct of the study. The committee consisted of three independent physicians (one internist, one surgeon, and one oncologist) who decided on the final diagnostic classification of critical clinical events. Drug toxicities were assessed using National Cancer Institute (NCI) common toxicity criteria [21]. Toxicity was evaluated biweekly during the first month and monthly during the remaining treatment time. Drug-related adverse events of grade 3 or 4 were considered unacceptable and patients experiencing such events were withdrawn from the study.
The study was performed according to the principles for guidance of good clinical practice [22] and the current revision of the Declaration of Helsinki [23].
Study Design
AIFA FARM7SJ7X9 is a prospective, single-center, placebo-controlled, randomized, double-blind clinical study with two parallel groups receiving conventional TACE treatment plus sorafenib (Bayer HealthCare, Leverkusen, Germany) or TACE plus placebo. Patients were allocated as having intermediate-stage HCC (BCLC stage B) if they were found to have a single nodule ≥5 cm or a multifocal tumor with more than three HCC nodules. Eligible patients were assigned to receive oral treatment with sorafenib starting 30 days after TACE treatment. TACE was performed through the transfemoral route. A 5-Fr catheter was advanced to the superior mesenteric artery to confirm the patency of the portal vein trunk on postmesenteric portography. Common hepatic or celiac arteriography was performed to assess the number and location of lesions, tumor size, feeding artery, and presence of anatomic variations. A coaxial microcatheter (2.7 Fr or 3.0 Fr) was selectively inserted through a 5-Fr catheter into the feeding artery as close to the lesion as possible. In cases of multiple foci occupying the hepatic lobe, the right or left or both hepatic arteries were treated. Doxorubicin (30 mg) and mitomycin C (10 mg) with 10 mL of iodinated nonionic contrast media and 20 mL of iodinated oil (Lipiodol, Guerbet, Villepinte, France) were delivered to the cannulated feeding artery. Subsequently, the feeding artery was embolized using gelatin sponge pledgets in order to temporarily occlude the arterial supply and consequently ensure prolonged permanence of the injected drug mixture in the tumor-hosting region, thus enhancing tumor necrosis. This procedure also prevented the drugs from gaining access to the systemic circulation. Patients were given intra-arterial lidocaine (10 mg) between 10-mL aliquots of chemoembolization material to reduce pain.
All patients had a biopsy-proven histological diagnosis of cirrhosis. HCC was diagnosed using imaging techniques such as dynamic multiphasic spiral computed tomography (CT) scan and dynamic contrast-enhanced magnetic resonance imaging (MRI). In typical HCC, the signal intensity appears highly attenuated in the arterial phase and poorly attenuated or washed out in the delayed phase (~3 minutes after initiation of contrast injection) [24]. Extrahepatic metastatic lesions were looked for and vascular invasion of the portal vein was excluded.
Short- and long-term outcome measures included the post-TACE complication rate, treatment-related mortality rate, complete ablation rate, and tumor progression pattern. A complication was defined as any adverse event after TACE, excluding pain or a transient febrile reaction. Treatment-related mortality was defined as any death occurring within 30 days following a TACE procedure.
Tumor response was assessed by CT scan 30 days after TACE. Complete response (CR) was defined as the absence of contrast enhancement within the original tumor. Any contrastenhancing areas within the targeted tumor on a post-TACE CT scan indicated incomplete tumor ablation. These criteria were implemented and evaluated in terms of patterns of Lipiodol retention in the target lesions reflecting tumor necrosis [25]. Lipiodol uptake was considered compact if the oily contrast medium was distinctly visible and well scattered throughout the viable tumor [26].
Local progression was defined as tumor recurrence within or at the periphery of the original ablated lesion. Metachronous, multicentric intrahepatic recurrence was defined as any new tumor that occurred in Couinaud's segments different from the original tumor site. The term extrahepatic recurrence was applied to any recurrence outside the liver. All images from CT and MRI scans were independently assessed by two expert radiologists.
Sorafenib was administered upon the diagnosis of a tumor CR on CT scan evaluation 30 days after TACE treatment. As recently emphasized [27], in this sequential therapeutic scheme, sorafenib is used as an adjuvant therapy to prevent new HCC lesions after visible areas have been eradicated by the TACE procedure and to target tumor cells that escape local treatment. Patients were given sorafenib at a dosage of 400 mg twice daily and were monitored for the occurrence of adverse events. In cases of NCI grade 3 or 4 toxicity, treatment was discontinued and patients were withdrawn from the study. Dropout patients were permitted to receive reduced doses of sorafenib.
The TACE procedure was repeated at intervals of 4-6 weeks until complete necrosis of the tumor was detected. To avoid hepatic failure, patients who did not achieve a CR after four TACE procedures were excluded from the study. In no case did the HCC occlude the portal vein vessels or spread extrahepatically. Sorafenib administration was stopped following evidence of tumor progression.
Study Objectives
This study was designed to reveal the potential superiority of the TACE procedure plus sorafenib over TACE alone in patients with intermediate-stage HCC. The primary endpoint was the TTP, defined as the time from the date of randomization to the date of disease progression. In the absence of progression, TTP was censored at the date of the last clinical assessment. Further endpoints were the rate of adverse events and grade of toxicity.
Randomization
Patients were randomly assigned, on a 1:1 basis and in a blinded fashion, to the sorafenib (400 mg twice daily) arm or the placebo arm. The investigator received a set of sealed envelopes via the distributing local pharmacy. Envelopes containing information on the patient's trial medication were not opened throughout the study.
Schedule and Follow-Up
Pretreatment evaluation included demographic data, a medical history, a physical examination, an evaluation of comorbidities, and the use of concomitant medications. In addition to routine laboratory parameters, measurement of α-fetoprotein (α-FP), the HCV RNA level and HCV genotype, as well as a complete radiological study were carried out to meet the inclusion criteria. During treatment, patients were visited biweekly during the first month and monthly thereafter. Contrast-enhanced ultrasonography (CEUS) was adopted to detect intrahepatic HCC progression. In addition, each visit included a chest radiograph, laboratory measurements, physical examination, and performance status assessment. Sorafenib-related toxicities were monitored until tumor progression. CT- and MRI-based characterization was carried out when tumor progression was suspected using CEUS.
Statistical Analysis
All data are presented as the percentage of patients or mean with standard deviation. Categorical variables were compared using Fisher's exact test where appropriate, and continuous variables were compared using the U-test. TTP curves were obtained using the Kaplan-Meier method and differences between the groups were compared using the log-rank test. Prognostic factors after recurrence were assessed using Cox regression analysis.
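For illustration, this analysis pipeline could be reproduced with the Python lifelines package; the data frame below is toy data with hypothetical column names, not trial data:

```python
# Illustrative sketch of TTP analysis: Kaplan-Meier curves, a log-rank
# comparison between arms, and a Cox model (lifelines package assumed).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "ttp_months": [9.2, 4.9, 12.0, 3.1, 7.5, 6.0],  # toy durations
    "progressed": [1, 1, 0, 1, 1, 1],                # 0 = censored
    "sorafenib":  [1, 0, 1, 0, 1, 0],
})

km = KaplanMeierFitter()
for arm, grp in df.groupby("sorafenib"):
    km.fit(grp["ttp_months"], grp["progressed"], label=f"sorafenib={arm}")

a, b = df[df.sorafenib == 1], df[df.sorafenib == 0]
print(logrank_test(a.ttp_months, b.ttp_months,
                   a.progressed, b.progressed).p_value)

cph = CoxPHFitter().fit(df, duration_col="ttp_months", event_col="progressed")
print(cph.summary)  # hazard ratios for prognostic factors
```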
RESULTS
In total, 80 patients with HCC who met the inclusion criteria underwent the TACE procedure between October 2007 and January 2011. They were randomly assigned to either the sorafenib arm (n = 40) or the placebo group (n = 40) (Fig. 1). As summarized in Table 1, there were no significant differences in the baseline characteristics of the patients included in the two arms. The median patient age was >70 years in both arms and there was a male predominance. Liver cirrhosis was present in all patients, as established by histological diagnosis. All patients had chronic HCV infection and were viremic. HCV genotype 1 was largely prevalent. Well-preserved liver function was found in all patients. The mean serum levels of α-FP, the number (either solitary or multiple) of HCC nodules, and the mean tumor size were fairly comparable between the two groups. Overall, further TACE treatments were required to achieve complete tumor ablation in nine and eight patients belonging to the sorafenib and control groups, respectively.
Dropout Patients
During the study period, 18 patients (nine in the sorafenib group and nine in the control group) were prematurely withdrawn from the trial. In the sorafenib group, NCI grade 3 and 4 toxicities led to drug interruption: four patients experienced hand-foot skin reaction, three had adverse hematological events including severe anemia, neutropenia, and thrombocytopenia, and one had uncontrollable diarrhea. The remaining patient withdrew consent (Table 2). In the control group, nine patients dropped out for the following reasons: three patients denied their approval to proceed with the study, two complained of logistical problems, and the other four missed the appointments for the study. The main adverse event was post-TACE syndrome, which occurred in nine (22.5%) and 10 (25%) sorafenib-treated and control patients, respectively. None of them required specific treatment other than surveillance. No additional dropouts were recorded among the remaining 62 patients.
Tumor Progression
Intrahepatic tumor progression occurred in 21 (68%) patients in the sorafenib group and in all 31 patients (100%) in the placebo group. No extrahepatic spread was detected. The median TTP was significantly longer in the sorafenib group than in the control group (9.2 ± 5.8 months versus 4.9 ± 3.2 months; p < .001; hazard ratio, 2.5; 95% confidence interval, 1.66-7.56) (Fig. 2).
The proportion of patients who experienced intrahepatic tumor recurrence within 6 months of the TACE procedure was significantly higher in the control group than in the group of sorafenib-treated patients. Such early progression occurred in 22 (71%) control patients and in only seven (22%) patients in the study arm (p = .005). However, the proportion of HCC patients with local progression was not significantly different between the two groups, in that it occurred in 14 (45%) sorafenib-treated patients and 16 (52%) control patients (p = .3). Metachronous, multicentric tumor progression occurred in seven (22%) and 15 (48%) patients belonging to the sorafenib and control groups (p < .05), respectively.
Cox regression analysis stratified by disease-free patients and relapsers in the sorafenib-treated group indicated that age, HCV RNA serum level, HCV genotype, α-FP level, number of tumor nodules, mean tumor dimensions, and liver function parameters (including serum bilirubin, PT, and serum albumin) were not prognostic predictors of HCC recurrence (Table 3).
DISCUSSION
Our data indicate that, compared with placebo, conventional TACE followed by sorafenib administration led to a significantly longer median TTP in HCC patients. To substantiate the beneficial effect of sorafenib, given the confounding role of underlying cirrhosis, it was critical to select patients with wellpreserved liver function (Child-Pugh class A), with an ECOG performance status score of 0 -1, and without neoplastic invasion of intrahepatic vascular structures or extrahepatic spread.
The design of the present study included a double-blind randomization phase to minimize the likelihood of erroneous conclusions regarding the efficacy of sorafenib, and results were compared with the outcome in a homogeneous control group that met the same inclusion criteria. Early evidence of HCC progression is a critical point in the assessment of sorafenib efficacy. Indeed, short surveillance intervals and CEUS provided unequivocal advantages in that HCC nodules were detected when they were <20 mm in diameter (15.3 ± 2.1 mm).
Sorafenib is likely to delay disease progression in patients with resected or ablated HCC by inhibiting both tumor growth and neoangiogenesis as a result of blocking the molecular components of the Raf-MEK (mitogen-activated protein kinase/extracellular signal-regulated kinase [ERK] kinase)-ERK signaling pathway, VEGFR-1, VEGFR-2, VEGFR-3, and PDGFR-β [14,28], which are the key pathways in the pathogenesis of HCC [29]. We found that the rate of metachronous, multicentric progression of HCC was substantially lower in sorafenib-treated patients, whereas the proportion of local progression was comparable with that found in the control group. The clinical importance of these findings is further emphasized by the knowledge that HCC progression distant from the primary site appeared late in time. It is likely that metachronous, multicentric HCC nodules originate from de novo hepatocarcinogenesis and are effectively influenced by sorafenib treatment, which has a smaller effect on local HCC progression. Indeed, long-term follow-up studies in patients with HCV-related HCC showed a higher frequency of metachronous, multicentric progression than in patients with HCC arising from different etiologies [30]. This likely reflects the high carcinogenic risk of nontumoral areas in HCV-related HCC. It is known that the pathophysiology of hepatocarcinogenesis is tightly linked to the evolution of underlying cirrhosis, which accelerates cancer formation through different pathways, such as chromosomal instability and alterations in the microenvironment that stimulate cell proliferation in HCV-related damage [31]. In addition, HCV-related hepatocyte necrosis and chronic inflammation may be closely involved in the pathophysiology of HCC progression in chronically HCV-infected patients who have had complete HCC resection or ablation [32]. Hence, it was postulated that HCV elimination would help prevent HCC progression by clearing the carcinogenic field and eliminating the chances of novel tumorigenesis [33].

Figure 2. Among 62 patients, 31 received sorafenib and 31 received placebo. The median TTP was 9.2 months in the sorafenib group and 4.9 months in the placebo group.

Table 2. Adverse events, n (%).
Adverse event             Sorafenib (n = 40)           Placebo (n = 40)
                          All grades   Grade 3 or 4    All grades   Grade 3 or 4
Alopecia                  0            0               0            0
Anorexia                  3 (7.5)      1 (2.5)         4 (10)       2 (2.5)
Diarrhea                  4 (10)       1 (2.5)         3 (7.5)      0
Fatigue                   9 (22.5)     3 (7.5)         3 (7.5)      2 (2.5)
Hand-foot skin reaction   4 (10)       4 (10)          0            0
Hematological event       5 (13)       3 (7.5)         0            0
Hypertension              6 (15.3)     0               4 (10)       0
Nausea                    7 (17.5)     1 (2.5)         3 (7.5)      0
Rash/desquamation         8 (20)       4 (10)          1 (2.5)      0
The question of why certain molecularly defined HCC subgroups show a poor or no response to sorafenib is difficult to answer. No predominant or pathognomonic molecular mechanisms have been described to explain why a single targeted agent does not achieve clinical CR in HCC patients. Molecular alterations may differ depending on the etiology, activity, and duration of the underlying liver injury, thus influencing the response to therapy [34]. No independent predictor of HCC progression was detected among several clinical and laboratory parameters, including tumor burden, number of HCC nodules, and α-FP level. Indeed, these conventional indicators of HCC progression do not seem to be adequate predictors. Promising innovations, which delineate gene signatures pertaining to premalignant conditions [35], are coming from molecular biology. Serum or tissue-based molecular biomarkers are eagerly awaited to predict HCC progression.
Common side effects, including hand-foot syndrome, erythema, hyperbilirubinemia, hematological toxicity, and diarrhea, were noted in 22% of sorafenib-treated patients. Though recognized as common side effects of antikinase molecules, they are a leading problem in the clinical management of these patients. Dose reductions and pauses in the administration of sorafenib may prevent the attainment of therapeutic benefit in the adjuvant setting. Different treatment strategies should be recommended for this subgroup of patients, who deserve further consideration in terms of tailored doses and treatment schedules.
In conclusion, our data indicate that sorafenib administered as a sequential modality holds promise to become a useful adjuvant treatment to support the current TACE procedure for patients with HCV-related, intermediate-stage HCC. Obviously, our results need confirmation in a larger, well-designed trial. Additional administration modalities can also be envisaged and should be thoroughly examined. Recent reports indicate that improvements in the control of HCC growth and in the prevention of HCC progression can be achieved using concurrent sorafenib and transarterial therapy in patients receiving sorafenib 2-4 weeks before transarterial therapy [19], using continuous administration of sorafenib starting 7 days prior to TACE with doxorubicin [36], and using sorafenib combined with concurrent TACE with doxorubicin-eluting beads [37]. | 2018-01-26T19:45:36.747Z | 2012-03-01T00:00:00.000 | {
"year": 2012,
"sha1": "760d9f50d12fd064205c3d09afe1b50bf4f13d7f",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/oncolo/article-pdf/17/3/359/41872904/oncolo_17_3_359.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "08a9eb16b2f883dff769db66e1c3263896540adc",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251963575 | pes2o/s2orc | v3-fos-license | Wind Predictions in the Lower Stratosphere: State of the Art and Application of the COSMO Limited Area Model
In the last few decades there has been increasing interest in the commercial usage of the stratosphere, especially for Earth observation systems. Stratospheric platforms allow Earth monitoring at a regional scale with persistency toward a limited area. For this reason, accurate meteorological forecasts are needed in order to guarantee stationarity. The main aim of this work is to provide a review of wind prediction techniques in the stratosphere, achieved by the most popular global models, such as ECMWF IFS, NCEP GFS and ICON. Then, the capabilities of the COSMO limited area model to reproduce the wind speed in the stratosphere are evaluated considering a model configuration with very high resolution (about 1 km) over a domain located in Southern Italy, assuming the radio sounding data at Pratica di Mare airport as the reference. Vertical profiles were analyzed for selected days, highlighting good performance, though improvements can be achieved by adopting a fifth-order interpolation of the model data. Finally, monthly wind speed time series for selected heights were post-processed by means of the fast Fourier transform, revealing the existence of main frequencies and the presence of a scaling regime and a power law of the form f^−β over a broad range of time scales in the Fourier space. The spectral exponent β is close to the exact 5/3 Kolmogorov value for all the datasets.
Introduction
The Earth's atmosphere is conventionally subdivided into layers. Considering the nature of the change in temperature with height as the main sign of subdivision, the stratosphere is the second layer of the atmosphere and is positioned above the troposphere; its initial and final heights are related to temperature variations with altitude. In the past, the commercial usage of the stratosphere has been limited, but in the last few decades there has been increasing interest, especially for Earth observation systems, in order to fill the gap between the worlds of space (satellites, global scale) and aeronautics (aircraft, drones). Stratospheric platforms will play a relevant role in several domains connected with the environment, health and food. Various activities can be supported by the next generation of tools, able to collect data and images automatically and in real time, with better accuracy and continuity of observations. The basic idea of stratospheric platforms is to extend Earth monitoring at a regional scale with persistency toward limited areas. For this reason, accurate meteorological forecasts are needed in order to guarantee stationarity. The minimal wind conditions during a significant part of the year make the stratosphere an optimal region for high altitude airships. In fact, balloons can operate here for months, with the vertical motion obtained by varying the amount of air per unit volume and the horizontal motion associated with winds. In particular, the existence of opposite winds at different altitudes allows relative station-keeping; i.e., the balloon can be maintained at a distance of less than 50 km from its station.
In reference [1], Mahalov et al. showed that the stratospheric wind fields are characterized by sporadic high frequency fluctuations and long-lived energetic eddies with vertical scales of a few hundred meters. As a consequence, the thin clear air turbulence layers negatively impact the control, stability and performance of the newest generation of unmanned air vehicles. These layers cannot be resolved by the latest generation of mesoscale meteorological models, since weather is a complex phenomenon that includes hundreds of variables and aspects. The complexity of weather phenomena, along with the need for discretization and approximate techniques, implies that there is still a variety of processes that are not well resolved by the current operational models. An accurate representation of the stratosphere in numerical weather prediction (NWP) models is important not only to support balloon navigation, but also in order to enhance proper data assimilation [2], since observations in this area are relatively sparse. In fact, only a few direct measurements of stratospheric weather conditions are available, so the majority of data are taken from numerical models. Many weather data sources are currently in operation (e.g., ECMWF IFS, NOAA GFS, NCAR BCI and NASA), but they mainly focus on weather near the ground level, and only a few of them go into the stratosphere. Moreover, their resolution is generally not sufficient to support operations. In the last few years, several general circulation models (GCMs) have been upgraded in order to simulate the upper atmosphere too. For example, the HAMMONIA model [3] extends the hydrostatic spectral ECHAM4 model by including specific parameterizations. The Whole Atmosphere Community Climate Model (WACCM) [4] is an extension of the Community Climate Model (CAM) up to 160 km and is a hydrostatic finite volume model. The Japanese Atmospheric General Circulation Model for Upper Atmosphere Research (JAGUAR) [5] allows simulations of up to about 150 km vertically. In the period 1990-2008 [6], the horizontal resolution of global models increased by a factor of 10, while the vertical resolution improved by about a factor of 5. Given that the time step must be scaled with the horizontal resolution, the computational burden has increased by a factor of about 5000 in the considered period. It is evident that further resolution improvements can barely be achieved. For this reason, limited area models (LAMs) are used to obtain detailed information over a specific geographic area of interest; of course, they allow the usage of a much higher resolution compared with GCMs, so they are widely used to support civil aviation. This paper has a twofold objective: First, we provide a review of wind prediction techniques in the stratosphere, achieved by the most popular GCMs, highlighting limitations and perspectives of innovations. Then, since up to now LAMs have been poorly evaluated in the stratosphere, in this work the capabilities of the limited area model COSMO to reproduce the wind speed in the stratosphere have been assessed.
The Italian Aerospace Research Center (CIRA) is a member of the COSMO Consortium and has developed a specific model configuration characterized by a horizontal resolution of about 1 km, which runs daily over an area located in southern Italy, including the site of the military airport of Pratica di Mare, where radio soundings are performed twice a day, so that a large amount of observational data (up to a height of about 25 km) is available for comparison. Even with the limitations due to the consideration of a single location, the author believes that this study represents a step forward toward the development of a robust tool to support balloon navigation. A radiosonde is a battery-powered telemetry instrument carried into the atmosphere by a weather balloon in order to measure different atmospheric parameters, which are transmitted by radio to a ground receiver. Radio soundings are generally used to monitor conventional upper-air conditions, and they represent a powerful benchmark for NWP evaluations thanks to their high vertical resolution. This paper is organized as follows: Section 2 contains a description of the main features of the stratosphere. Section 3 contains a review of the main models and associated wind prediction techniques used for the stratosphere. In Section 4, the COSMO model and its application to a domain located in southern Italy are briefly described. In Section 5, the main results are presented. Conclusions are then reported in Section 6.
Physical Characterization of the Stratosphere
The wind velocity in the stratosphere can be estimated from temperature data collected by satellites. At these altitudes, winds are considered geostrophic; at the mid-latitudes, they are characterized by a westerly component in winter and an easterly component in summer. The highest velocity values are about 40-50 m/s at 50 km above the Earth's surface. The National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have developed a series of polar-orbiting observation satellites since 1978, providing global data to the NOAA weather forecasting system with a maximum delay of 6 h, which are suitable both for real time applications and for climate research programs.
The main features of the stratosphere have been described by using radiosondes and numerical models. In 1950, Brasefield [7] developed a balloon-borne radiosonde capable of measuring temperature, pressure and winds up to 45 km in altitude. The radiosonde flights were performed at Belmar (New Jersey), at latitude 40.2° N. From these measurements, it was found that below 18 km (60,000 ft), winds are predominantly westerly, and the maximum speed was recorded at about 12 km (40,000 ft). Between 18 and 36 km, they are easterly in summer and westerly in winter. The vertical temperature profile up to 36 km was obtained by averaging the values recorded by about 20 flights. It can be noted that the temperature decreases up to about 15 km, then a constant value (about −60 °C) is registered from 15 to 18 km; finally, the temperature rises again at a rate of about 1.5 °C per km, up to 36 km (−30 °C).
Reanalysis data, such as ECMWF ERA5 [8], are generally reliable because they are based on observations, but they can still differ from the real values, especially in the stratosphere, since they are built on data mainly from ground stations, from measurements with balloons at limited points or from commercial aircraft. In 2008, Modica et al. [9] acquired the National Center for Atmospheric Research NCAR/NCEP reanalysis data (2.5° spatial resolution, 6 h time resolution) of stratospheric winds over the period 1979-2003 at the pressure levels 100, 70, 50 and 30 hPa, for several locations in the USA. They observed that the wind time series for a point located in Colorado at 50 hPa (about 20 km) are characterized by a regular pattern: these series were post-processed in order to produce a power spectrum, which revealed three main peaks, corresponding to periods of 1 year, 90 days and 1 day. A marked similarity of the distribution of the frequency spectrum was observed with the k^−5/3 power law of Kolmogorov [10] for homogeneous isotropic turbulence. Power spectra at other locations showed similar characteristics, with a sharp peak near the annual period. Then, a wind series related to a location point in
Overview of Wind Prediction Techniques in the Stratosphere
As already mentioned, wind predictions in the stratosphere are generally achieved by global models, which cover the entire planet and are characterized by coarser resolution than limited area models. The main features of the three most popular GCMs (ECMWF IFS, NCEP GFS and ICON) are widely described in the literature, so here we will focus our attention only on their capabilities in weather forecasting in the stratosphere.
ECMWF IFS [11] is the global numerical prediction system developed and maintained at the leading European Centre for Medium-Range Weather Forecasts. It is used for daily operational forecasts. It uses a four-dimensional (4D) assimilation process, which allows the model to be constantly updated when new satellite data or other input data are available. The highest resolution configuration (9 km and 137 vertical levels) is run every 6 h (00Z and 12Z forecasts for 10 days; 06Z and 18Z forecasts for 90 h). The ensemble system with 51 members is run at 18 km resolution with 137 layers every 12 h. The 137 layers are positioned in such a way that 15 of them lie in the 15-20 km altitude range, which is of interest for stratospheric applications. The pressure at the top of the model is 0.01 hPa, corresponding to about 80 km of altitude.
It has been shown [2] that in the stratosphere, IFS suffers from temperature biases, with a cold bias in the lower part and a warm bias in the upper one. The biases are sensitive to the horizontal resolution, since IFS provides cooler values when the resolution is increased, exacerbating the cold bias and alleviating the warm one. The increasing cooling with the resolution is due to numerical errors that accumulate in the dynamical core if the vertical resolution is increased with the horizontal one. In fact, some small-scale waves are solved only in the horizontal direction, causing unrealistic oscillations in the temperature field in the vertical direction. In 2020, ECMWF implemented a substantial upgrade of IFS [12] (IFS Cycle 47r1) with changes in the model, in the data assimilation system and in the use of observations. In particular, the usage of the new data assimilation revealed that biases in the upper stratosphere (11-1.5 hPa) were significantly reduced. Regarding the bias associated with the high horizontal resolution, a possible solution would be to increase the vertical resolution too, but this approach is generally too expensive; for this reason, a cheaper alternative would be to increase the order of accuracy of the vertical interpolation in the semi-Lagrangian advection, specifically from third order (cubic) to fifth order (quintic). Specifically, a Lagrange polynomial of degree 5 is used to interpolate a field using six neighboring points. This interpolation is able to reduce the unphysical cooling in the stratosphere at high horizontal resolution.
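To make the quintic scheme concrete, the following minimal sketch (in Python; the level heights and wind values are illustrative, and this is not the IFS semi-Lagrangian code) evaluates the degree-5 Lagrange polynomial through six neighboring model levels:

```python
import numpy as np

def quintic_interp(z_nodes, f_nodes, z):
    """Degree-5 Lagrange polynomial through six neighboring (z, f) points,
    evaluated at height z."""
    z_nodes, f_nodes = np.asarray(z_nodes), np.asarray(f_nodes)
    assert z_nodes.size == f_nodes.size == 6
    result = 0.0
    for i in range(6):
        basis = 1.0
        for j in range(6):
            if j != i:
                basis *= (z - z_nodes[j]) / (z_nodes[i] - z_nodes[j])
        result += basis * f_nodes[i]
    return result

# Six model levels (km) around the target height and synthetic wind speeds.
z_levels = np.array([14.0, 15.0, 16.0, 17.0, 18.0, 19.0])
wind = 25.0 + 20.0 * np.sin(z_levels / 3.0)
print(quintic_interp(z_levels, wind, 16.4))   # interpolate at 16.4 km
```

In the semi-Lagrangian context, the six nodes would be the model levels surrounding the departure point of the trajectory; here they are simply prescribed.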
The National Centers for Environmental Prediction (NCEP) GFS is a global weather model operated by the American meteorological service. It is run at a horizontal resolution of 13 km four times a day, and produces forecasts for up to 16 days in advance. In 2021 the number of vertical layers increased from 64 to 127, and the model top was extended from the upper stratosphere (55 km) to the mesopause (80 km).
The performances of IFS and GFS with respect to the data directly observed in the stratosphere were measured by LOON LLC (in the following, LOON), an Alphabet Inc. subsidiary (https://x.company/projects/loon, accessed on 14 July 2022), resulting in better accuracy of IFS, especially in the first five forecast days, also due to the higher number of altitude levels of IFS in LOON's flight range (15-20 km). For real time operations, LOON has created a tool that merges the recent balloon observations with wind forecasts. This tool is based on Gaussian processes and gives greater weights to the data that are close to the object of study. This algorithm is able to reduce the error by about 75% for the first forecast hours, but of course the error tends to increase with time. In reference [13], Candido et al. demonstrated the possibility of improving the prediction of winds in the lower stratosphere using machine learning. Specifically, they employed analog-based methods (AnEn), which have been widely used for several years for the prediction of weather parameters. These methods are based on the idea of using past situations similar to the current one in order to estimate the future evolution of the parameter under study [14]. Even if theoretically a very long time series of historical data would be needed, they demonstrated that a forecast improvement can be achieved even with only two years of previous forecasts in reasonable computational time, which can be further reduced by training a deep neural network [15]. They used the IFS forecasts produced from July 2016 to June 2019 to train the system, and the period from July 2019 to December 2019 to validate the model. Their results showed that AnEn is characterized by a lower root mean square error than IFS when compared to the measurements from LOON stratospheric balloons, for both wind speed and direction.
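As a sketch of the analog idea under simplifying assumptions (a plain Euclidean distance between forecast predictors and an unweighted mean of the analogs' observations; operational AnEn implementations use more elaborate similarity metrics and predictor weighting), a minimal version could look as follows:

```python
import numpy as np

def analog_forecast(current_fc, past_fc, past_obs, k=20):
    """Analog-ensemble estimate: average the observations that verified the
    k past forecasts most similar (Euclidean distance) to the current one."""
    dist = np.linalg.norm(past_fc - current_fc, axis=1)
    analogs = np.argsort(dist)[:k]          # indices of the k closest analogs
    return float(past_obs[analogs].mean())

# Toy archive of two years of 6-hourly forecasts with two predictors (u, v).
rng = np.random.default_rng(0)
past_fc = rng.normal(size=(2920, 2))
past_obs = past_fc[:, 0] + 0.1 * rng.normal(size=2920)  # verifying wind component
print(analog_forecast(np.array([0.5, -0.2]), past_fc, past_obs))
```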
ICON (Icosahedral Nonhydrostatic Model) [16] was developed by the German Weather Service (DWD) and the Max Planck Institute for Meteorology (MPI-M) as a next generation numerical weather prediction (NWP) model system and is generally considered more accurate than the ECMWF model, thanks to its better resolution, even if only over Europe. The spatial discretization of the equations is performed using an icosahedral-triangular C grid. It provides the possibility of local refinement, allowing very high resolution, by using a grid-nesting option (both one-way and two-way nesting). ICON is operationally run at DWD at a resolution of about 13 km, and vertically the model is characterized by 90 levels, up to a height of 75 km. ICON is run every 6 h (00Z and 12Z forecasts for 180 h; 06Z and 18Z forecasts for 120 h). A nesting over Europe is run at a resolution of about 7 km with 60 levels up to a height of 22.5 km, with a coupled two-way interaction between the ICON-EU regional model and the global ICON.
Borchert et al. [17] proposed an extension of ICON to the upper atmosphere (UA-ICON), in order to understand its influences on the tropospheric weather and climate. The main motivation was to increase the accuracy of simulations when the model top is positioned at a height greater than 100 km. If the model top is positioned in the lower thermosphere, the dynamical core must be modified, and specific parameterization schemes are required. In fact, the basic version of ICON assumes the shallow-atmosphere approximation to be valid, meaning that the terms associated with the spherical curvature of the atmosphere and the variations of the gravitational field are neglected. This approximation introduces a systematic error that grows in time, leading to non-negligible biases. For this reason, it is necessary to remove this approximation and use a "deep-atmosphere" scheme, which can be supported by the computational power currently available. Moreover, when the top is so high, specific parameterization schemes are required in order to avoid meaningless results and numerical instabilities, due to the presence of physical phenomena that are negligible in the lower atmosphere but relevant in the upper part, such as the molecular diffusion of momentum and heat, and the broader spectrum of solar irradiance at higher frequencies.
Further, the presence of the ionosphere, which starts at about 60 km of altitude and is partly due to solar radiation, cannot be neglected.
In the already mentioned work [17], Borchert et al. performed climatological test cases with UA-ICON by adopting an R2B4 grid (horizontal resolution of about 160 km) and 120 vertical layers, up to an altitude of 150 km, with a time step of 4 min. They also performed two additional simulations with standard ICON (model top at 80 km), in order to investigate whether the differences are due to the vertical extension and/or to the different physics and dynamics: the first one (referred to as ICON) had deep-atmosphere dynamics and upper-atmosphere parameterizations disabled; the second one (referred to as ICON-UA) had both enabled. A third additional configuration (referred to as UAphys-ICON) was derived from UA-ICON simply by switching off the deep-atmosphere modification. Evaluation was conducted in terms of temperature and wind over the period 2002-2016 against satellite data provided by the SABER instrument on NASA's TIMED satellite and the URAP project [18]. Evaluation in terms of multiyear zonal mean temperature (contour maps in the latitude-altitude plane) showed good agreement in general. The four simulations were compared with one another in order to quantify the effects of the vertical extension up to the lower thermosphere and of the upper-atmosphere physics. Regarding the zonal wind, UA-ICON reproduces qualitatively well the structure of the wind in the part of the atmosphere observed by URAP. The comparison of UA-ICON simulations with the other model configurations revealed that the addition of upper-atmosphere physics and dynamics affects the stratospheric temperatures (with increases of up to 5 K), and the vertical extension has relevant effects down to about 60 km. The application of deep-atmosphere dynamics caused a significant decrease in temperature in the upper mesosphere-lower thermosphere.
Gravity waves are a mechanism for momentum transfer from the troposphere to the stratosphere. They arise when parcels of air are forced upward, e.g., by a tall mountain range, thereby moving from a dense atmospheric layer to a thinner one. Initially, the waves propagate without appreciable variation in the average speed, but when they reach rarefied air at higher altitudes, their amplitude grows and the nonlinear effects cause the waves to break, transferring momentum to the main stream. Gravity waves play an important role in atmospheric dynamics, but an accurate representation in GCMs is currently still challenging. The main reason is that a large fraction of gravity waves are at a scale that is below the spatial resolution of GCMs. Hindley et al. [19] found an intense hot-spot of stratospheric gravity wave activity over small mountainous islands in the Southern Ocean, but due to their small size, they are inaccurately simulated in GCMs, which results in a large underestimation of momentum. Using a high-resolution configuration (about 1.5 km) of the Met Office Unified Model, they found good agreement between simulated wintertime waves and coincident 3D satellite observations.
It is well known that NWP models are nonlinear dynamical systems in which the evolution depends on the initial conditions. The chaotic nature of the atmosphere and the involvement of nonlinear dynamics imply that small errors in estimating the initial state of the atmosphere grow rapidly with time. The current state (i.e., the analysis) of the atmosphere adopted as an initial condition is derived with a Bayesian inversion problem using observations, previous information from forecasts and related uncertainties as constraints. These calculations, involving global minimization, are performed in four dimensions to produce an analysis that is physically consistent in space and time and that can deal with large amounts of observational data, heterogeneously distributed in space and time (such as the large amount of satellite data used for Earth observation since the 1980s). In the last decade, the main components of the process have been substantially refined, for example, the increasing use of satellite radiance data (by combining the forecast model with computationally efficient radiative transfer models) and the more refined characterization of short-range forecasts. The computational affordability will continue to be a limitation, because a relevant proportion of the cost of producing a forecast is associated with data assimilation. The limited availability of new observations poses science challenges for NWP, since fundamental variables are still missing, especially in the stratosphere, because observations come from sensors on the ground, relatively sparse weather balloon measurements and commercial aircraft, which are very scarce at high altitudes. For example, wind data are primarily needed in the tropics, an area covering around 50% of the Earth, where the sparsity of observations is a serious obstacle to increased analysis accuracy. LOON's direct measurements of the winds are useful, but these data are not available for historical periods or in areas where the balloons have performed no recent measurements.
The LAM COSMO Model
As already mentioned, detailed information over a specific geographic area of interest can be achieved by using limited area models. COSMO [20] is a nonhydrostatic dynamic downscaling model for three-dimensional compressible flows developed by the European consortium COSMO (Consortium for Small-Scale Modeling). The atmosphere is treated as an ideal mixture of dry air, water vapor and liquid and solid water, subject to gravity and to the Coriolis forces [21]. Initial conditions are obtained by an intermittent analysis scheme in the assimilation cycle or by interpolation from a global driving model. Initial data typically include unbalanced information for the mass and wind field, which causes spurious high frequency oscillations during the first hours of the model integration. For this reason, initial data must be modified, for example, by using the time filtering approach proposed by Lynch and Huang (1992) [22], which applies a digital filter to remove the high frequencies.
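A minimal sketch of such a filter is given below: the weights implement an ideal low-pass response tapered by a Lanczos window, and the filtered initial state is the weighted average of model states saved over the initialization span. The cutoff period, time step and window length are illustrative, and this is not the exact COSMO or Lynch-Huang implementation.

```python
import numpy as np

def dfi_weights(n, cutoff_period, dt):
    """Weights of a low-pass digital filter spanning 2*n+1 model states:
    ideal low-pass impulse response tapered by a Lanczos window and
    normalized to unit sum."""
    theta_c = 2.0 * np.pi * dt / cutoff_period      # cutoff frequency in rad/step
    k = np.arange(-n, n + 1).astype(float)
    h = np.full_like(k, theta_c / np.pi)            # value of the k = 0 term
    nz = k != 0
    h[nz] = np.sin(theta_c * k[nz]) / (np.pi * k[nz])
    w = h * np.sinc(k / (n + 1))                    # Lanczos window sigma_k
    return w / w.sum()

# The filtered initial state is the weighted average of the saved states.
w = dfi_weights(n=12, cutoff_period=3 * 3600.0, dt=600.0)  # 3 h cutoff, 10 min steps
states = np.random.default_rng(1).normal(size=(25, 4))     # 25 states, 4 variables
filtered = (w[:, None] * states).sum(axis=0)
print(filtered)
```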
The Italian Aerospace Research Center (CIRA) has developed a convective-scale model configuration characterized by a horizontal resolution of about 1 km, which runs daily over an area located in southern Italy, including part of the Campania and Lazio regions (12.22°-14.55° E; 40.63°-41.88° N) (Figure 1). The computational domain comprises … × 138 grid points, and the number of vertical levels is 60. The time step is 10 s. Initial and boundary conditions are provided by the ECMWF IFS global model at a spatial resolution of 0.075°. The capabilities of the COSMO model in terms of reproducing the main atmospheric variables over this area have been widely tested, in particular against data provided by the CIRA weather instrumentation, and the results of the model evaluation are presented in [23]. It was shown that the model is able to accurately reproduce the 2 m temperature, though it has difficulties in localizing intense rain events in this area of complex orography. Wind values were compared with data provided by the wind profiler installed at CIRA (owned by ARPAC, the Environmental Protection Agency of Campania), revealing that they are generally well reproduced, especially between 4 and 6 km, suggesting great potential of the model to support wind forecasts. Temperature profiles provided by the radio sounding performed at Pratica di Mare airport were compared with the model values at the grid point closest to this location, highlighting a good reproduction of temperature and dew point profiles (maximum bias of 1 °C). In the present work, a detailed evaluation of the wind profiles provided by the model has been performed against radio sounding data at Pratica di Mare (Section 5.1), these being the only stratospheric observational values available over the domain considered. Specifically, radiosonde data archived at the University of Wyoming (https://weather.uwyo.edu/, accessed on 17 June 2022) have been used, as they represent a frequently adopted reference for model assessments [24]. Evaluation was conducted considering daily data over the period 1 May 2021-30 April 2022. For each day, data at 00 and 12 UTC were considered, according to sounding data availability.
Model Evaluation
In order to quantify model performance, standard indices for performance evaluation have been calculated: the mean bias (BIAS) and the root-mean-square error (RMSE), defined as

$$\mathrm{BIAS} = \frac{1}{N}\sum_{i=1}^{N}\left(S_i - O_i\right), \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(S_i - O_i\right)^2},$$

where $S_i$ and $O_i$ are, respectively, the simulated and observed values at the i-th level, and N is the total number of vertical levels considered. Further, the time correlation between simulated and observed values (CORR) and the ratio between model and observation standard deviations (STD_RATIO) were evaluated. All the indicators were obtained considering the COSMO values (for each level) interpolated to the closest heights where radio sounding data are available. Figure 3 shows the time series of the daily wind speed BIAS (left) and wind direction BIAS (right) over the period 1 May 2021-30 April 2022 for each day at 00 UTC and 12 UTC. On the horizontal axis, step 1 refers to 1 May 2021 h 00, step 2 refers to 1 May 2021 h 12 and so on, until step 730, which refers to 30 April 2022 h 12. These plots reveal good behavior of the model in reproducing the wind speed, the bias generally being between −1.5 and 1.5 m/s and never exceeding ±3 m/s. Wind directions are quite well reproduced too, but there are some days (e.g., 7 June 2021 h 00) on which the bias is excessively large. Table 1 shows the numerical values of the indicators considered for wind speed, obtained by averaging the daily values (at 00 and 12 UTC) over each month of the year considered. Daily maximum values of the BIAS for each month are also shown. The analysis of the table revealed that the model is able to simulate the average features of the wind profiles; in fact, the bias values are always lower than 0.3 m/s. Of course, compensation effects may take place, as revealed by the values of RMSE and daily maximum BIAS, which are in any case generally lower than 3 m/s. Model and observational values are well correlated: the numerical correlation is higher than 0.84, and STD_RATIO values are generally close to 1. Table 2 shows the numerical values of the same indicators for wind direction, obtained by averaging the daily values (at 00 and 12 UTC) over each month of the period considered.
Daily maximum values of the BIAS for each month are also reported. The model has quite good capabilities to simulate the wind direction, but it is evident that on some days in winter (but also in June) the model fails to simulate the direction. Winter biases could be connected with the polar vortex, which is a large cyclone able to produce intense winds. It forms in November, when the stratosphere over the North Pole starts to cool down. Typically, the polar vortex circulation is interrupted in March or April by a temperature increase in the stratosphere, called a sudden stratospheric warming (SSW) event. The stratospheric polar vortex variability has effects on the tropospheric circulation, and in particular on the winter weather, so it is clear that accurate predictions of extreme polar vortex states are important to improve forecasts in winter, including cold spells. Unfortunately, current dynamical models have some limitations, because they poorly capture low-frequency processes [25]. The analysis performed in the present work over the year running from May 2021 to April 2022 has highlighted that for most of the cold season the polar vortex was stronger than normal, and that the model's performance slightly worsens from November to January.
It is worth noting that part of the model error could be due to measurement uncertainties. In fact, under the push of horizontal winds, the radiosonde could move horizontally during the measurements, but data about the horizontal movements of the balloon are not available, so it is necessary to assume that the chosen grid point on the surface is always the same during the vertical measurements. In any case, the wind velocity evaluated by the model at an assigned grid point generally differs from that at the neighboring points (on the same vertical level) by no more than 3%, so this source of uncertainty is limited.
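For concreteness, the evaluation indices defined at the beginning of this section can be computed as in the following sketch, which linearly interpolates the model profile to the sounding heights (an assumption made here for simplicity; as stated above, the actual evaluation matches the model values to the closest observation heights). All names and the toy profiles are illustrative.

```python
import numpy as np

def profile_scores(model_z, model_ws, obs_z, obs_ws):
    """BIAS, RMSE, correlation and standard-deviation ratio of a model wind
    profile against a sounding, with the model linearly interpolated to the
    observation heights."""
    s = np.interp(obs_z, model_z, model_ws)   # model values at sounding heights
    o = np.asarray(obs_ws, dtype=float)
    bias = float(np.mean(s - o))
    rmse = float(np.sqrt(np.mean((s - o) ** 2)))
    corr = float(np.corrcoef(s, o)[0, 1])
    std_ratio = float(s.std() / o.std())
    return bias, rmse, corr, std_ratio

# Toy profiles: model levels every 400 m, sounding levels every 250 m.
model_z = np.arange(0.0, 23000.0, 400.0)
model_ws = 10 + 30 * np.exp(-((model_z - 12000.0) / 4000.0) ** 2)
obs_z = np.arange(0.0, 23000.0, 250.0)
obs_ws = 10 + 31 * np.exp(-((obs_z - 11800.0) / 4000.0) ** 2)
print(profile_scores(model_z, model_ws, obs_z, obs_ws))
```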
In order to have more detailed information about model performance, vertical profiles were analyzed for selected days, namely, 2 July 2021, 2 October 2021, 2 January 2022 and 2 April 2022, which were chosen as representatives of the four climatological seasons (JJA, SON, DJF and MAM), for monitoring the model's behavior under different weather situations. For each day, weather conditions in terms of ground temperature T and ground wind speed WS are provided. Figure 4 shows the vertical profiles provided by model data and radiosondes for the selected days. The heights considered range from 0 to 23 km, corresponding to the pressure range 1010-36 hPa. On 2 July 2021 h 00 (sunny day, T = 24 °C, WS = 3.4 m/s), radio sounding data show a regular growth of wind speed values (with inversions at 5 and 8 km) from the ground up to 12 km, where the value of 42 m/s is reached; the speed then decreases to 5 m/s at a height of 18 km. The model shows an excellent ability to reproduce the shape of the profile, and even the inversion at 8 km is captured. Moreover, wind values in the LOON zone are properly represented too. A similar profile can be observed at h 12, which is well reproduced by COSMO. On 2 October 2021 (thunderstorms, T = 21 °C, WS = 5.5 m/s) at h 00 and h 12, the general observational trend is well simulated by the model, especially up to 15 km. In the LOON zone, the wind speed is irregular, and the model is not able to capture the relative maximum and minimum values observed at specific heights, since it only reproduces the average decreasing trend. On 2 January 2022 (foggy day, T = 10 °C, WS = 2.7 m/s) at h 00 and h 12, the model well reproduces the inversions observed up to 15 km, but the maximum/minimum values in the LOON zone were not properly simulated. For 2 April 2022 (thunderstorms, T = 11 °C, WS = 8.9 m/s) at h 00, the observed profile up to 13.5 km was fairly reproduced, though larger values of wind speed above 38 m/s were underestimated. At h 12, the observed profile is highly irregular, but the model performs quite well in the LOON zone too, even if the low vertical resolution penalizes the accuracy. Table 3 shows the average values of performance indices for the selected days (h 00 and h 12). The mean biases averaged only on the points in the LOON zone (BIAS_LOON) were evaluated too, along with the corresponding biases after interpolation (BIAS_INTP; see below). Good agreement between model and observations was found in terms of BIAS, the values being less than 1 m/s. A high correlation is generally reported, and STD_RATIO values are satisfactory too. In the LOON zone, the model suffers from higher biases, which is particularly evident for 2 July 2021. In order to increase the model accuracy in this zone, following the approach adopted in [2] for temperature profiles, a fifth-order vertical interpolation of the wind speed model values in the LOON zone is proposed. Figure 5 shows the vertical profiles of wind speed in the LOON zone for the selected days provided by the radio sounding, the COSMO model and the fifth-order interpolation of the model data. As confirmed also by the numerical values reported in Table 3, it is evident that the proposed interpolation improves the representation of the vertical profile, with a bias reduction ranging between 10 and 25%.
Analysis of Time Series
Hourly time series of stratospheric wind speed provided by COSMO for selected months have been extracted at Pratica di Mare at heights of 15.9 km (level 10) and 19.3 km (level 5). Figure 6 (left column) shows the time series for the months October 2021, November 2021 and April 2022. The time series were processed by using an FFT to produce a power spectrum of wind at regularly spaced bins, in order to measure the amount of variability occurring in different frequency bands. The frequency resolution is 1/T₀ = 1.54 × 10^−3 h^−1 (given that T₀ is 648 h). Figure 6 (right column) shows the corresponding mean power spectra (log-log representation). In October 2021, a regular pattern in the time series can be observed, and three main peaks are visible in the power spectrum (f = 4.63 × 10^−3, 7.71 × 10^−3 and 1.23 × 10^−2 h^−1), corresponding to periods of 216, 129 and 81 h, respectively. In November 2021, three main peaks are visible too (f = 9.25 × 10^−3, 1.54 × 10^−2 and 1.85 × 10^−2 h^−1), corresponding to periods of 108, 65 and 54 h. Finally, in April 2022, only one peak is clearly visible (f = 2.7 × 10^−2 h^−1), corresponding to a period of 37 h. For all months, the spectra in the range of low frequencies tend to have a decreasing shape with superimposed noise. The distribution in the range of higher frequencies (>3 × 10^−2 h^−1) has been compared with the power law predicted by Kolmogorov for homogeneous isotropic turbulence (blue lines in the figures). According to the Kolmogorov theory [10], the turbulence velocity spectrum can be separated into frequency ranges, generally expressed as a turbulence source region (large scales), an inertial subrange (intermediate scales) and a dissipation region (small scales). The Kolmogorov law [26] predicts that if the energy dissipation rate of a turbulent fluid is constant, the spectral energy is controlled only by the scale of the turbulence, raised to the −5/3 power. From the figures, it is evident that at high frequencies the wind power spectrum displays a power law close to the Kolmogorov distribution. It is important to point out that the Fourier power spectrum is a second-order statistic, which provides information on medium-level fluctuations; its slope alone is therefore not sufficient to fully describe a scaling process, and additional investigations are needed to fully explore these mechanisms and develop a physical framework that can be incorporated into the study of turbulence.
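The post-processing chain described above can be sketched as follows on a synthetic 648 h series; the toy signal (a 216 h oscillation plus red noise) and the high-frequency fitting range are illustrative and do not reproduce the COSMO data.

```python
import numpy as np

def wind_spectrum(ws, dt_hours=1.0):
    """One-sided power spectrum of an hourly wind-speed series."""
    ws = np.asarray(ws, dtype=float) - np.mean(ws)   # remove the mean first
    power = np.abs(np.fft.rfft(ws)) ** 2
    freq = np.fft.rfftfreq(ws.size, d=dt_hours)      # cycles per hour
    return freq[1:], power[1:]                       # drop the zero frequency

# Synthetic 648 h series: a 216 h oscillation plus red noise.
rng = np.random.default_rng(2)
t = np.arange(648.0)
ws = 20 + 5 * np.sin(2 * np.pi * t / 216.0) + 0.3 * np.cumsum(rng.normal(size=648))
f, p = wind_spectrum(ws)

# Fit the log-log slope above 3e-2 h^-1 and compare with Kolmogorov's 5/3.
sel = f > 3e-2
beta = -np.polyfit(np.log(f[sel]), np.log(p[sel]), 1)[0]
print(f"spectral exponent beta = {beta:.2f} (Kolmogorov value: 5/3 = 1.67)")
```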
Conclusions
In the last few decades there has been renewed interest in the commercial usage of the stratosphere, since it makes possible Earth monitoring services that are less expensive and of higher resolution than satellite ones, covering areas four times larger than standard aircraft. With the horizontal motion of stratospheric balloons being associated with winds, the importance of accurate wind forecasts in this part of the atmosphere is evident. In this work, a review of wind prediction techniques for the stratosphere was provided, as achieved by the most popular global models, such as ECMWF IFS, NCEP GFS and ICON. Then, the ability of the COSMO limited area model to reproduce the wind speed in the stratosphere was evaluated considering a model configuration at 1 km resolution. In fact, it is well known that very high resolutions are challenging in weather models for short- and medium-term forecasts. Considering the computational costs required, it is necessary to evaluate the forecast quality and to understand the model's deficiencies.
Global simulations show that the simulated zonal-mean circulation in the troposphere is only moderately sensitive to the horizontal and vertical resolution adopted, whereas it appears that the simulation of the stratosphere can be much more sensitive to numerical resolution, even if quite fine grids are adopted. Changes with resolution are probably related to the skill of fine vertical resolution models in adequately representing the interaction of the mean flow with a wide spectrum of vertically propagating gravity waves.
The COSMO model was configured for the specific area of southern Italy considered; model performance was evaluated in terms of wind profiles against one year of daily radio sounding data collected at Pratica di Mare. Good agreement was found in terms of average bias and correlations, but compensation effects were present, since the model was not able to capture maximum and minimum wind speed values in the LOON flight zone. Some improvements can be achieved by employing a fifth-order vertical interpolation of the wind speed model values. The analysis of monthly time series revealed the presence of a scaling regime, or power-law correlation of the form f^−β, over a broad range of time scales in the Fourier space. The spectral exponent β is close to the exact 5/3 Kolmogorov value for all the datasets. Of course, further evaluations of the COSMO model are needed in other computational domains in Italy/Europe, where other radio sounding data are available for comparison.
Some of the challenging objectives to be achieved in the near future involve the development of subgrid-scale parameterizations of nonhomogeneous, anisotropic, non-Kolmogorov, shear-stratified stratospheric turbulence and of stratospheric aerosols, to be included in the next generation of mesoscale numerical weather prediction models for the lower stratosphere, enabling the forecasting of, e.g., nonlinear inertia-gravity waves that generate layers of clear air turbulence. In fact, gravity waves are recognized to influence the response to the variability of the large-scale stratospheric circulation, so their parameterization is a source of uncertainty in stratospheric predictions. Finally, since the COSMO consortium has started the migration from COSMO-LM to ICON-LAM as the future operational model, ICON-LAM is being calibrated at CIRA on the same geographical domain in southern Italy, and the model's performance in the stratosphere will be evaluated in a future work. | 2022-09-01T15:24:04.141Z | 2022-08-29T00:00:00.000 | {
"year": 2022,
"sha1": "576fb67a36c2729cb0d689a7ba6957714e7deb05",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2674-0494/1/3/20/pdf?version=1661771205",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d14a3434038f2c2b0a3dd3cd490fa5b9fcf92fd7",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": []
} |
14063221 | pes2o/s2orc | v3-fos-license | Two-dimensional AMR simulations of colliding flows
Colliding flows are a commonly used scenario for the formation of molecular clouds in numerical simulations. Due to the thermal instability of the warm neutral medium, turbulence is produced by cooling. We carry out a two-dimensional numerical study of such colliding flows in order to test whether statistical properties inferred from adaptive mesh refinement (AMR) simulations are robust with respect to the applied refinement criteria. We compare probability density functions of various quantities as well as the clump statistics and fractal dimension of the density fields in AMR simulations to a static-grid simulation. The static grid with 2048^2 cells matches the resolution of the most refined subgrids in the AMR simulations. The density statistics is reproduced fairly well by AMR. Refinement criteria based on the cooling time or the turbulence intensity appear to be superior to the standard technique of refinement by overdensity. Nevertheless, substantial differences in the flow structure become apparent. In general, it is difficult to separate numerical effects from genuine physical processes in AMR simulations.
Introduction
Computational fluid dynamics in astrophysics relies on numerical methods that are capable of covering a huge range of scales. Apart from smoothed particle hydrodynamics (Monaghan 1992), adaptive mesh refinement (AMR) has been applied to a variety of problems. This method was developed by Berger & Oliger (1984) and Berger & Colella (1989). Among the widely used, publicly available AMR codes for astrophysical fluid dynamics are FLASH (Fryxell et al. 2000), Enzo (O'Shea et al. 2004) and Ramses (Teyssier 2002). Although there are comparative studies of AMR vs. SPH (for example, O'Shea et al. 2005; Agertz et al. 2007; Commerçon et al. 2008), the reliability of AMR in comparison to non-adaptive methods has received only little attention so far.
Especially for turbulent flows, it is a non-trivial question whether the solutions obtained from AMR simulations agree with the correct solutions of the fluid dynamical equations at a given resolution level. For this reason, we systematically compare AMR and static-grid simulations for a particular test problem in this article. We chose a scenario that has been investigated in the context of molecular cloud formation, namely, the frontal collision of opposing flows of warm atomic hydrogen at supersonic speed (Heitsch et al. 2006; Vázquez-Semadeni et al. 2007; Hennebelle & Audit 2007a; Hennebelle et al. 2008; Walder & Folini 2000). Because of the cooling instability at densities ∼1 cm^−3 and temperatures of a few thousand Kelvin, the gas becomes highly turbulent at the collision interface. Since the instabilities develop on length scales much smaller than the integral scale, this problem is computationally extremely demanding. The two-dimensional resolution study of Hennebelle & Audit (2007a) showed that the properties of the turbulent multi-phase medium evolving in these simulations are highly resolution-dependent, and numerical convergence is seen only at resolutions well above 1000^2. In three-dimensional simulations, such high resolutions are infeasible if static grids are used. Consequently, Hennebelle et al. (2008) and Banerjee et al. (2008) applied refinement by fixed density thresholds and refinement by Jeans mass, respectively, in their three-dimensional high-resolution AMR simulations.
In this article, we consider two-dimensional colliding flows without self-gravity and magnetic fields for a systematic comparison of AMR simulations to a reference simulation on a static grid. We analyze both statistical properties and the morphology of the gas fragmentation due to the cooling instability. This work is organized as follows: In Section 2 the numerical methods are described and the setup of the simulations will be presented in detail. In Section 3, we compare the results from the different simulations. Section 4 concludes this paper with a summary of the main results and general remarks on AMR.
Numerical methods and simulation setup
The simulations presented in this article are accomplished using the open source code Enzo (Bryan & Norman 1997; O'Shea et al. 2004). The compressible Euler equations are solved by means of the staggered-grid, finite-difference method Zeus (Stone & Norman 1992a,b; Stone et al. 1992). We included the cooling function L defined by Audit & Hennebelle (2005) in these equations:

$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0,$$

$$\frac{\partial (\rho\,\mathbf{u})}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u}) + \nabla P = 0,$$

$$\frac{\partial (\rho e)}{\partial t} + \nabla\cdot\left[(\rho e + P)\,\mathbf{u}\right] = -\rho L.$$

The primitive variables are the mass density ρ, the velocity u and the specific total energy e of the fluid. The total energy per unit mass is given by

$$e = \frac{1}{2}|\mathbf{u}|^2 + \frac{P}{(\gamma - 1)\rho},$$

where γ is the adiabatic exponent, and the pressure P is related to the mass density ρ and the temperature T via the ideal gas law:

$$P = \frac{\rho\,k_{\mathrm{B}}\,T}{\mu}.$$

The constants k_B, µ and m_H denote the Boltzmann constant, the mean molecular weight and the mass of the hydrogen atom, respectively. The gas is assumed to be a perfect gas with γ = 5/3 and µ = 1.4 m_H. The cooling function of Audit & Hennebelle (2005) includes the cooling by fine-structure lines of CII and OI, the cooling by H (Lyα line) and the electron recombination onto positively charged grains. The heating is due to the photoelectric effect on small grains and polycyclic aromatic hydrocarbons (PAHs) caused by the far-ultraviolet galactic background radiation. For more information about this cooling function see Wolfire et al. (1995, 2003); Spitzer (1978); Bakes & Tielens (1994) and Habing (1968). The pressure-equilibrium curve resulting from the cooling function is plotted as the black curve in Figure 1. For the numerical solution of the fluid dynamical equations, we used the radiative cooling routine implemented in Enzo. For each hydrodynamical time step, the state variables are iterated over several subcycles, and the resulting total energy increment for the whole time step is added.
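As an illustration of such subcycling, the following sketch integrates a specific net cooling rate over one hydrodynamical step with explicit subcycles, each limited to a small fractional change of the internal energy. The closure functions and all numerical values are toy assumptions and do not correspond to the Enzo routine or to the Audit & Hennebelle cooling function.

```python
import numpy as np

kB, mH = 1.380649e-16, 1.6735575e-24          # Boltzmann constant, H mass (cgs)
mu = 1.4 * mH                                  # mean particle mass, as in the text

def T_of_e(e_int, rho):
    """Temperature of a monatomic ideal gas (gamma = 5/3) from the specific
    internal energy e_int = (3/2) kB T / mu."""
    return (2.0 / 3.0) * e_int * mu / kB

def net_cooling(rho, T):
    """Toy specific net cooling rate (erg g^-1 s^-1): n^2 Lambda(T) cooling
    balanced by n Gamma heating; the coefficients are purely illustrative."""
    n = rho / mu
    return (n * n * 1e-27 * np.sqrt(T) - n * 2e-26) / rho

def apply_cooling(e_int, rho, dt_hydro, max_frac=0.1):
    """Integrate de/dt = -L(rho, T) over one hydro step with explicit
    subcycles, each limited to a max_frac change of the internal energy;
    the accumulated energy increment is returned at the end."""
    t, e = 0.0, e_int
    while t < dt_hydro:
        rate = net_cooling(rho, T_of_e(e, rho))
        dt_sub = min(dt_hydro - t, max_frac * e / max(abs(rate), 1e-30))
        e -= rate * dt_sub
        t += dt_sub
    return e - e_int

print(apply_cooling(e_int=2e11, rho=2e-24, dt_hydro=3.15e10))
```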
For our numerical study, the two-dimensional setup of Audit & Hennebelle (2005) and Hennebelle & Audit (2007b) was adopted with small modifications. The initial gas content corresponds to the warm neutral medium (WNM) of the ISM, i.e., the temperature is T = 7100 K, the pressure is P = 7 × 10^−13 erg cm^−3 and the number density of neutral hydrogen is n = 0.71 cm^−3. From the left and the right boundaries, warm gas with identical thermodynamic properties flows into the computational domain, where the cosine-shaped inflow velocity profile is modulated with small perturbations, realised by randomly shifted phases. These phase shifts are kept constant for the different simulations, so that the initial conditions are exactly the same for all runs to ensure comparability. Following Hennebelle et al. (2008), the top and bottom boundary conditions are periodic. The physical dimensions of the computational domain are 20 × 20 pc. The two inflows of gas collide in the middle of the domain. The supersonic collision causes a steep rise in the gas density that triggers the thermal instability, and the gas undergoes a transition into the phase of the cold neutral medium (CNM) of the ISM. In this phase, the gas has temperatures in the range 30-100 K and number densities of 20-50 cm^−3 (Ferrière 2001). The thermal instability produces highly turbulent structures (see Figure 2) with Mach numbers up to 20. The challenge for AMR is to track these turbulent structures as accurately as possible.
A reference simulation was run with a static grid of 2048^2 cells. Then the same setup was evolved in AMR simulations with a root-grid resolution of 128^2 cells and 4 levels of refinement. The resolution between adjacent refinement levels increases by a factor of 2. Hence, the effective resolution at the highest level of refinement is 2048^2. In these simulations, we employed three different types of refinement criteria: 1. refinement by overdensity (OD), 2. refinement by cooling time (CT), 3. refinement by rate of compression and enstrophy (RCEN).
The first two criteria are widely used in astrophysical AMR simulations. For refinement by overdensity, the mass density must exceed the initial density on the root grid by a certain factor. This overdensity, in turn, defines the initial density for refinement at the first level of refinement and so on. We chose three different values for the overdensity factor, namely, twice the initial density (default OD), as well as three times (OD-3) and four times (OD-4) the initial density. For criterion CT, on the other hand, refinement is triggered for a grid cell if the cooling time τ_cool := P/[(γ − 1)ρ|L|] becomes less than the sound crossing time over the cell width. Refinement by the rate of compression and the enstrophy uses yet another technique. It was introduced by Schmidt et al. (2009) for the simulation of supersonic isothermal turbulence. The control variables for refinement are the enstrophy and the rate of compression. The enstrophy is given by one-half of the square of the vorticity, while the rate of compression is defined as the substantial time derivative of the negative divergence of the velocity. The expression used by Schmidt et al. (2009) to evaluate the rate of compression (see equation (12) in that paper) is easily generalized to the non-isothermal case, where the speed of sound is not a constant. To trigger refinement by RCEN, dynamic thresholds are calculated from statistical moments of the control variables: a grid cell is flagged for refinement if the local fluctuation of a control variable becomes greater than the maximum of the average and the standard deviation of the variable. On the root grid, averages and variances are computed globally, whereas averaging is constrained to individual grid patches at higher levels of refinement.
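A minimal sketch of this dynamic-threshold flagging for a single control variable on one grid patch might look as follows (the lognormal test field is illustrative, and this is not the actual Enzo implementation):

```python
import numpy as np

def rcen_flags(control):
    """Flag cells where the fluctuation of a control variable (enstrophy or
    rate of compression) exceeds max(mean, standard deviation), mimicking
    the dynamic thresholds described in the text (root-grid variant, with
    globally computed moments)."""
    mean, std = control.mean(), control.std()
    return (control - mean) > max(mean, std)

enstrophy = np.random.default_rng(3).lognormal(mean=0.0, sigma=1.5, size=(64, 64))
flags = rcen_flags(enstrophy)
print(f"{flags.mean():.1%} of cells flagged for refinement")
```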
For comparison of the simulation results, we calculated probability density functions (pdf) of several quantities. To analyze the gas fragmentation in each simulation, we adapted the clumpfind algorithm implemented by Padoan et al. (2007) to non-isothermal problems. The algorithm identifies the smallest dense regions that fulfill the Jeans criterion for gravitationally unstable gas. Since the clump samples found on the two-dimensional grids used in our simulations are insufficient for the calculation of clump mass spectra, only the total number and the mean size of the clumps are used for quantitative comparisons. In addition, we computed the fractal dimension of gas at densities higher than n = 20 cm^−3 (corresponding to the minimum density of gas in the cold phase) by means of the box-counting method.
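As an illustration, a minimal box-counting estimate of the fractal dimension, assuming a square grid whose edge length is a power of two (the random test mask is illustrative, and this is not the exact implementation used in the reference), could be written as follows:

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting dimension of a 2D boolean mask on a square grid whose
    edge length is a power of two: for each box size, count the boxes
    containing at least one flagged cell, then fit log(count) vs log(1/size)."""
    n = mask.shape[0]
    assert mask.shape == (n, n) and (n & (n - 1)) == 0, "square 2^k grid expected"
    sizes, counts = [], []
    size = n
    while size >= 1:
        nb = n // size
        boxes = mask.reshape(nb, size, nb, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(int(boxes.sum()))
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(np.array(counts)), 1)
    return slope

dense = np.random.default_rng(4).random((256, 256)) > 0.7   # toy dense-gas mask
print(box_counting_dimension(dense))
```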
Results
Due to the gradual accumulation of gas in the simulation domain, no strict statistical equilibrium is approached. For this reason, we evolved the flow until noticeable small-scale structure has developed and the separation of the gas into two phases has emerged. As shown in Figure 1, two distinct phases are found at time t = 5 Myrs. At this time, the central flow region is in a turbulent state (a contour plot of the mass density of the gas is shown in Figure 2). Thus, we carry out our analysis for t = 5 Myrs. While the main fraction of the gas is situated in the warm phase with temperatures between 5000 and 10000 K and low densities ∼1 cm^−3, the cold gas with temperatures between 30 K and 100 K and densities in the range 30-350 cm^−3 can be found close to the equilibrium curve.
The pdfs of the mass density and the temperature obtained from different AMR simulations are plotted in Figure 3. In principle, all refinement criteria reproduce the distributions found in the static-grid simulation quite well, although there is a trend toward slightly more cold gas at the cost of warm gas. The discrepancy is more pronounced for refinement by overdensity (OD) than for the other criteria, and it becomes worse for the higher density thresholds (OD-3 and OD-4; not shown in the figure).
Nevertheless, it appears that the thermodynamic properties of the gas are quite robust in AMR simulations.
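For reference, such distributions can be extracted directly from a single snapshot; a minimal Python sketch (the field name is a placeholder, and a volume-weighted histogram of the logarithmic quantity is assumed):

import numpy as np

def field_pdf(field, bins=100):
    """Volume-weighted pdf of log10(field) over all cells of a snapshot."""
    logf = np.log10(field).ravel()
    pdf, edges = np.histogram(logf, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, pdf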
The gravitationally unstable clumps of gas identified by the clumpfind algorithm in the static-grid simulation at time t = 5 Myr are depicted in Figure 4a. The corresponding results for the AMR runs are shown in Figures 4b-4d. Table 1 lists the total number and the mean size of the clumps for each simulation. Also listed are the fractal dimensions of the gas regions with number density n ≥ 20 cm⁻³, which are plotted in Figures 4e-4h.
For refinement by OD, the fragmentation of the CNM is severely underestimated. The number of clumps is roughly half of the number in the static-grid simulation, and the clumps are typically larger. The lower degree of cold gas fragmentation results in a smaller fractal dimension (also see Figure 4f). If the criteria OD-3 and OD-4 are applied, the number of clumps decreases further, while their average size increases. In the case of criterion OD-4, a slightly higher fractal dimension is obtained, because the cold phase tends to fill broad, area-filling regions. The cooling time criterion CT yields a number of dense clumps and an average clump size that compare well with the reference simulation (see Figure 4c), although the degree of fragmentation appears to be slightly overestimated. However, we found that this overestimation decreases with the further evolution of the colliding flows and, thus, appears to be transient. Refinement by RCEN also reproduces the number of clumps and the fractal dimension of dense gas very well. However, there are some anomalously big clumps, which contribute to an average clump size that is systematically too large. In the plot showing gas at density n ≥ 20 cm⁻³ (see Figure 4h), on the other hand, such anomalous structures are not visible. Although refinement by RCEN does not overproduce gas in the cold phase (as one can see from the excellent agreement of the density and temperature pdfs in Figures 3c and 3f), there appears to be a bias toward bigger clumps with this refinement method.

In contrast to the phase separation and gas fragmentation, the turbulent flow properties show striking deviations between the AMR and static-grid simulations. Generally, a lot of turbulent small-scale structure is missing in the AMR simulations. Even for the criterion RCEN, which is based on control variables related to turbulence, this is apparent from the contour plots of the squared vorticity modulus shown in Figure 5. Basically, the perturbations of the velocity field imposed at the inflow boundaries are quickly smoothed out in AMR simulations, so that turbulence is only produced by secondary (e.g., Kelvin-Helmholtz) instabilities at the collision interface in the central region of the computational domain. The reason is that all AMR criteria, including RCEN, select relatively large fluctuations, whereas smaller perturbations are suppressed. On a static grid, on the other hand, the perturbations are transported from the boundaries to the centre and actively contribute to the production of turbulence. Consequently, small eddies are present in almost the whole domain in this case. Accordingly, the probability distribution of vorticity is markedly different (see Figure 6). In contrast, Schmidt et al. (2009) found very close agreement of the vorticity pdfs in a static-grid and an AMR simulation with criterion RCEN for turbulence in a periodic box with large-scale forcing. Our results thus indicate that the merits of different refinement schemes are non-universal but rather depend on the properties of individual flow structures.
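The control variable underlying this comparison is straightforward to evaluate on a snapshot; a minimal sketch of the enstrophy (one-half the squared vorticity) on a 2D patch, assuming arrays indexed as [ix, iy], uniform spacing dx, and periodic wrap-around via np.roll (an assumption, not the boundary treatment of these runs):

import numpy as np

def enstrophy_2d(vx, vy, dx):
    """0.5 * omega_z**2 with omega_z = d(vy)/dx - d(vx)/dy, using
    second-order central differences."""
    dvy_dx = (np.roll(vy, -1, axis=0) - np.roll(vy, 1, axis=0)) / (2.0 * dx)
    dvx_dy = (np.roll(vx, -1, axis=1) - np.roll(vx, 1, axis=1)) / (2.0 * dx)
    return 0.5 * (dvy_dx - dvx_dy) ** 2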
Conclusions
We performed two-dimensional simulations of colliding flows of warm atomic hydrogen with a radiative cooling function as source term in the energy equation. The goal of our study was the systematic comparison of AMR simulations, in which different criteria for refinement were applied, to a reference simulation on a static grid. While the probability distributions of mass density and temperature are well reproduced in AMR simulations, regardless of the refinement technique, differences become apparent in the fragmentation properties of the cold gas phase. As indicators, we used the total number of clumps and their average size. The clumps were identified by a clumpfind algorithm. In addition, we calculated the fractal dimension of dense gas, assuming a number density threshold of 20 cm⁻³. Remarkably, the largest deviations from the clump statistics and fractal dimension extracted from the static-grid simulation were encountered for refinement by overdensity, which is a commonly used refinement criterion in astrophysical AMR simulations. The deviations increase with the chosen density threshold. In this regard, it is important to note that Hennebelle et al. (2008) applied a density-based refinement criterion in which the thresholds were chosen even higher than those considered in our study. Good agreement, on the other hand, was obtained if the cooling time or the enstrophy in combination with the rate of compression (the negative rate of change of the velocity divergence) was applied.
Substantial problems with AMR became apparent with regard to turbulent flow properties. Basically, none of our AMR runs was able to reproduce even remotely the small-scale structure of turbulence and the probability distributions of turbulent flow variables such as the vorticity modulus. This deficiency can be attributed to the selection effects introduced by adaptive techniques. The definition of thresholds for triggering refinement selects either strong local fluctuations (for example, large shear that gives rise to Kelvin-Helmholtz instabilities) or large-scale perturbations such as accumulations of mass that become Jeans-unstable in self-gravitating gas. In this respect, the test problem we investigated in this work is particularly tough, because turbulence stems from small-scale instabilities that are seeded by weak initial perturbations. The varying grid resolution in AMR simulations inevitably modulates the growth of these instabilities and, as a consequence, the production of turbulence is suppressed. This deficiency might be overcome by the application of a subgrid scale model, which transports turbulent energy contained in small eddies that are resolved on finer grids across coarser grid regions (see Maier et al. 2009).
The key argument for using AMR is the reduced computational cost for a given effective resolution. Indeed, Table 2 demonstrates that a substantial reduction of computation time is achieved with AMR, especially if refinement by overdensity is applied. AMR is thus essentially a trade-off, in which fast computation has to be weighed carefully against the question of whether the essential physics of the specific problem is captured. However, performing three-dimensional AMR simulations with very high effective resolution presents a dilemma: on the one hand, the reduction of computation time will definitely be even greater; on the other hand, there is the potential pitfall of inferring results that are properties of the numerics rather than the physics, since a comparison to a static-grid simulation is then neither feasible nor desirable. | 2009-07-07T15:52:29.000Z | 2009-07-07T00:00:00.000 | {
"year": 2009,
"sha1": "6b3eeb84771028f9c5b9753e5e35f10c3e0c82a1",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2009/41/aa12483-09.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "799f2b43514ee8b45baad6a47a0472892dcffe22",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
230599099 | pes2o/s2orc | v3-fos-license | Preparation of biological sustained-release nanocapsules and explore on algae-killing properties
Graphical abstract
Novel bio-based sustained-release nanocapsules were prepared by microcapsule technology. Comprehensive characterization and analysis were carried out to measure the sustained-release and algae-killing properties of the new nanocapsules.
Introduction
Torreya grandis is a popular dried fruit that is valuable as both a food and a medicine [1]. However, due to the increasingly extreme climate and decreasing resistance to diseases and pests, the frequency of pest outbreaks in T. grandis has increased year by year. Among these diseases, green algae cause the most harm to T. grandis. Currently, the most common treatment for green algae is lime sulfur mixtures, but this method has a low utilization rate and a short validity period and causes serious environmental pollution. Thus, new, environmentally friendly and safe ways must be found to control the green algae of T. grandis. In related work, the use of nanocapsules to control pesticide release has attracted extensive attention, as it can not only improve the utilization rate of pesticides but also reduce their instantaneous toxicity during application and prolong their validity period [2]. Studies on the prevention and control of T. grandis diseases have been reported previously, but there are few reports on sustained-release pesticide capsules that can kill T. grandis algae. Thus, developing such a method would find useful applications.
Sustained-release pesticide capsules are commonly prepared using interfacial polymerization [3] and in-situ polymerization [4]. Wang et al. [5] used hydrophobic polyacid chloride and hydrophilic polyamine in immiscible phases, in which rapid polymerization at the interface is used to prepare polyamide pH-sensitive microcapsules. Many studies have found that capsule size affects the properties of sustained-release capsules. For example, Zuo et al. [6] prepared nanocapsules using in-situ polymerization, with polypyrrole and glycerol as shell materials and ammonium persulfate as a core material. Because sustained-release pesticide nanocapsules show a unique nanosize effect, they have good biocompatibility, targeting, and sustained release [7]; it is therefore important to study sustained-release pesticide nanocapsules as new agents for T. grandis.
The walls of such nanocapsules influence their release properties. In recent years, these materials have typically been made of high-molecular-weight polymers [8][9][10]. Although such capsules have good toughness and long sustained release, they are difficult to degrade and can cause secondary pollution to the environment [11]. Thus, it is important to explore new, environmentally friendly materials, such as embedding pesticides in biodegradable carriers. For example, glycosyl polymers, such as chitosan [12], alginate [13,14], and starch [15], have attracted interest because of their good biocompatibility and biodegradability [16]. Such materials can be used as sustained-release capsule walls, overcoming the shortcomings of traditional materials. However, polysaccharides contain significant amounts of hydrophilic groups. These groups must be hydrophobically modified to form amphiphilic derived polysaccharides [17][18][19], which then self-assemble in solvent to form a core–shell micellar structure [20,21]. When loaded with hydrophobic pesticides, such a structure can control the release of the pesticides [22], addressing the low utilization rate, short validity period, and environmental pollution of chemical pesticides. Thus, preparing a new nanocapsule for killing algae will greatly benefit forestry and economics.
In this work, sodium carboxymethyl cellulose (CMC), sodium alginate (SA), and chitosan (CTS) were subjected to an acylation reaction with the photosensitive catalytic material iron octaaminophthalocyanine to generate a new biological matrix. Iron octaaminophthalocyanine contains phthalocyanine (PC), a synthetic derivative of the porphyrins that is non-polluting to the environment; it absorbs well in the visible region and can be used as a photosensitive catalyst [23]. Through a chemical grafting reaction, cinnamaldehyde and 2-aminobenzimidazole were combined to obtain an environmentally friendly algae killer. Using this algae-killing compound as the core, a bio-based sustained-release nanocapsule with dual photoactive and pharmacological activity was created.
Preparation of the bio-based photosensitive catalytic active molecular capsule wall
Pyromellitic dianhydride (2.18 g), urea (15.03 g), and ferric chloride hexahydrate (1.35 g) were mixed in a mortar; then 2.36 g of the catalyst ammonium molybdate was added, and the mixture was fully ground. The resulting mixture was heated at 180°C for 0.5 h. After the reactant melted, it was further heated at 230°C for 5 h. The black product was soaked in 6 mol·L⁻¹ hydrochloric acid for 12 h and then filtered. The filter cake was stirred in distilled water at 85°C for 35 min and then filtered again. This stirring and filtration was repeated until no solid precipitated from the filtrate. By drying the filtered residue, iron octaaminophthalocyanine (T) was obtained.
Sodium carboxymethyl cellulose (1.03 g) was dissolved in anhydrous ethanol (8.04 g). After 50 mL of 20% sodium hydroxide solution was added and the mixture stirred for 0.5 h, iron octaaminophthalocyanine and 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC) were added to the resulting solution, which was stirred at 40°C for 3.5 h, filtered hot, washed with 80% ethanol 5 times, and dried at 100°C for 4.5 h. In this way, the wall material (T-sodium carboxymethyl cellulose) was produced.
The capsule wall material based on alginic acid was prepared in the same way as T-sodium carboxymethyl cellulose, yielding a new type of photosensitive catalytic active molecular capsule wall material (T-sodium alginate).
Chitosan (4.99 g) was dissolved in 50 mL of 42% NaOH solution and stirred for 2 h in an ice bath to fully swell the chitosan. Isopropanol solution (25 mL) containing chloroacetic acid was added drop by drop, and the resulting solution was stirred for 4 h at 0–15°C. The pH was then adjusted to 7.0, and insoluble material was removed by centrifugation. An appropriate amount of ethanol was added to the supernatant to precipitate the product, which was allowed to stand for 2 h and then centrifuged; the product was washed with absolute ethyl alcohol 3 times and vacuum-dried to obtain O-carboxymethyl chitosan (O-CMCS). The obtained O-CMCS was dissolved in distilled water. Then, sodium hydroxide solution was added and the mixture stirred for 0.5 h; iron octaaminophthalocyanine and 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC) were then added, and the mixture was heated and stirred for 3.5 h, filtered while hot, washed with 80% ethanol, and dried at 100°C for 4.5 h, producing the photosensitive catalytic active molecular capsule wall material (T-carboxymethyl chitosan).
Preparation of an algicide with dual photoactive and pharmacological activity
The p-methoxy cinnamaldehyde was prepared as follows. THF solvent (20 mL), Ba(OH)₂ (3.6 g), vinyl acetate (1.03 g), and 4-methoxybenzaldehyde (1.36 g) were mixed in a three-necked round-bottom flask and refluxed for 10 h. The reaction mixture was then poured into ice water and filtered. The filtrate was extracted with chloroform, and the solvent was evaporated to give p-methoxy cinnamaldehyde.
The 2-aminobenzimidazole was prepared as follows. A mixture of o-phenylenediamine (1 mol) and 1 mol of hydrochloric acid (30%) was heated to 90°C. Then 1.2 mol of cyanamide (45%) was added at a rate of 7 drops per min. After 40 min, 1.3 mol of sodium hydroxide solution (35%) was added, and the reaction was continued for 40 min. The product was suction-filtered, washed, and vacuum-dried for 12 h to give pale brown 2-aminobenzimidazole.
p-Methoxy cinnamaldehyde (0.8 g) and 2-aminobenzimidazole (0.9 g) were dissolved in methanol (30 mL) and stirred for 1 h at 65°C. The mixed solution was freeze-dried at −40°C for 10 h to give the novel cinnamaldehyde algae-killing compound (N-2-
Characterization of bio-based photosensitive catalytic active molecular materials and algicide agents
The absorption peak of the algae-killing compound was measured with an ultraviolet spectrophotometer (UV-2550, Shimadzu, Japan); the scanning wavelength was 200–900 nm. The IR spectra of iron octaaminophthalocyanine (T), T-CMC/SA/CMCS, and the algae-killing compounds were determined with a Fourier transform infrared spectrometer (IRPrestige-21, Shimadzu, Japan) using the tableting method; the scanning range was 400–4000 cm⁻¹. The ¹H NMR and ¹³C NMR spectra of iron octaaminophthalocyanine (T) were determined with a nuclear magnetic resonance spectrometer (Agilent 600 M, Agilent, USA). The conditions for ¹H NMR spectroscopy were as follows: 2048 data points were acquired, scanning was done four times, the resonance frequency was 599.72 MHz, and adamantane was used as the reference for the ¹H chemical shift. The ¹³C NMR measurement conditions were as follows: 2048 data points were acquired, scanning was done 3000 times, the resonance frequency was 150.72 MHz, and tetramethylsilane (TMS) was used as the reference for the ¹³C chemical shift.
The particle size of the algae-killing nanocapsules was measured with a zeta-potential analyzer (DT-300, Quantachrome, USA) at a temperature of 25°C and a scattering angle of 90°. The morphology was observed with a high-resolution transmission electron microscope (JEM-2100, JEOL, Japan). The thermal decomposition temperature was measured with a differential scanning calorimeter (DSC Q2000, TA, USA); the temperature range was 20–200°C, heated at a rate of 5°C/min in a nitrogen atmosphere.
Establishment of standard curves
A high-performance liquid chromatograph (Agilent 1200, Agilent, USA) was used. The test conditions were as follows: a C18 reversed-phase chromatography column (Waters Company) was used with a mobile phase of methanol:water at a volume ratio of 3:2, a flow rate of 0.8 mL·min⁻¹, a detection wavelength of 319 nm, a column temperature of 20°C, and an injection volume of 10 µL.
Algicide (0.1 g) was dissolved in methanol–water solution (4:1, v/v) and diluted to 100 mL to obtain a mother liquor of the algicide in methanol–water (1 mg·mL⁻¹). The mother liquor was diluted to 0.06, 0.04, 0.02, 0.01, and 0.008 mg·mL⁻¹, shaken well, passed through a 0.22-µm nylon membrane, and measured by HPLC, from which a standard curve was drawn.
Encapsulation rate determination
Sustained-release nanocapsules (3 mL, at a concentration of 1 × 10⁴ mg·mL⁻¹) were centrifuged at 12,000 r·min⁻¹ for 1 h. The supernatant (0.1 mL) was diluted to 10 mL with a methanol–water mixture (4:1, v/v) and then passed through a 0.22-µm nylon membrane. The content of the algae-killing compound in the supernatant was determined by HPLC.
Test of the sustained-release performance of bio-based nanocapsules
Nanocapsules (0.5 g) were added to a dialysis bag and placed in a methanol–water mixture (600 mL, 1:1, v/v). At 0.5, 1, 2, 4, 7, 11, 16, 24, 36, and 48 h, a 1 mL sample of the mixture was taken and diluted to 25 mL with methanol–water mixture. The sample was passed through a 0.22-µm nylon membrane, and its concentration was analyzed by high-performance liquid chromatography (HPLC). Methanol–water mixture (1 mL, 1:1, v/v) was added to the original solution to keep the volume of the release medium constant.
Determination of the phytotoxicity of the new algicide to T. grandis green algae
The culturing was done according to the literature [24]. The green algae of T. grandis were cultured in SE liquid/solid medium. The culture methods included mixed culture, isolation and purification culture, and drug-containing culture. The experimental operations were as follows. Mixed culture of green algae: a proper amount of the T. grandis green algae mixture was cultured in a triangular flask at 23 ± 2°C with a light intensity of 6000 lx (120 µmol·m⁻²·s⁻¹), continuously lit for 24 h per day, over 2 weeks.
The green algae culture was separated and purified as follows. The mixed culture of green algae was inoculated onto an SE solid culture medium by plate streaking. After 1 week of continuous culturing, the green algae were inoculated into SE liquid medium for expansion culture.
The culture medium containing the drug was prepared as follows. First, an algicide solution was prepared in methanol at a concentration of 1000 mg·L⁻¹. Aliquots of this mother liquor (0.1, 0.2, 0.4, 0.6, and 1 mL) were then added to the green algae culture medium, giving a concentration gradient of 0.05, 0.1, 0.2, 0.3, and 0.5 mg·L⁻¹. At the same time, a control group was set up, and each treatment was repeated three times. The Petri dishes were sealed with preservative film and cultured in an incubator at 28°C. The color of the algae liquid was observed regularly during culturing by taking pictures.
Results and discussion
The synthesis of new bio-based materials and algae-killing compounds
The synthesis routes of iron octaaminophthalocyanine (T), T-CMC (R1), T-SA (R2), and T-CMCS (R3) are illustrated in Fig. S1. Iron octaaminophthalocyanine was prepared by solid-melt tetrapolymerization. The carboxyl groups on the CMC, SA, and O-CMCS molecular chains reacted with the amino groups of iron octaaminophthalocyanine in the presence of EDC/NHS as activating agents. Thus, iron octaaminophthalocyanine was grafted onto the molecular main chains of CMC, SA, and O-CMCS, producing T-CMC/SA/CMCS.
The synthesis route of the algae-killing compound is illustrated in Fig. S2. Cinnamaldehyde is often used in food spices because it kills bacteria and algae and does not cause pollution. The p-methoxy cinnamaldehyde was synthesized from p-methoxybenzaldehyde and vinyl acetate by a Prins addition reaction.
The synthesized algicidal compound can be decomposed into two monomeric substances under the action of the alkaline plant cell fluid, achieving the expected control. The algicidal principle is shown in Fig. S3. As shown in Fig. S4, under weakly acidic conditions the new biological matrix contains many negatively charged –COO⁻ groups. The change in conformation of the molecular chain in solution caused more –COO⁻ to be exposed on the outside of the microcapsule, while the hydrophobic end of iron octaaminophthalocyanine (T) formed the inner layer of the capsule wall, thus encapsulating the new algicide inside. The resulting nanocapsules were collected, and the yield of the product was calculated to be 72%.
Structure characterization of bio-based materials and algae-killing compounds
UV spectral analysis of cinnamaldehyde algae-killing compounds
Fig. 1 shows the UV spectral analysis of the cinnamaldehyde algae-killing compounds.
As shown in Fig. 1A, the maximum absorption wavelengths of 2-aminobenzimidazole and p-methoxy cinnamaldehyde were estimated by Woodward's rules, λ_max = λ_basal + Σᵢ nᵢλᵢ, where λ_max is the maximum absorption wavelength of the compound, λ_basal is the base value of the compound matrix, and nᵢλᵢ is the correction for the number and type of substituents. The maximum absorption wavelengths of 2-aminobenzimidazole and p-methoxy cinnamaldehyde calculated according to these rules both corresponded to the ones shown in the figure, indicating that 2-aminobenzimidazole and p-methoxy cinnamaldehyde were successfully synthesized. Fig. 1B shows the UV spectrum of the cinnamaldehyde algae-killing agent. Because this compound contains 9 conjugated double bonds, the maximum absorption wavelength was calculated according to the Fieser–Kuhn rule, λ_max = 114 + 5M + n(48.0 − 1.7n) − 16.5R_endo − 10R_exo (in nm), where n is the number of conjugated double bonds, M is the number of substituted alkyl groups and ring groups on the conjugated system, R_endo is the number of ring double bonds within the conjugated system, and R_exo is the number of double bonds exocyclic to rings in the conjugated system. The calculated λ_max was 319.3 nm, and the deviation from the value shown in the figure was within 10 nm. Thus, the cinnamaldehyde intelligent sustained-release algae-killing compound was successfully prepared.
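A small sketch of the Fieser–Kuhn estimate in Python follows (the standard form of the rule is used; the values of M, R_endo, and R_exo for this particular compound are not given in the text, so they appear here only as parameters):

def fieser_kuhn_lambda_max(n, M, R_endo, R_exo):
    """Standard Fieser-Kuhn estimate of lambda_max (nm) for a conjugated
    polyene; n, M, R_endo, R_exo are defined as in the text above."""
    return 114 + 5 * M + n * (48.0 - 1.7 * n) - 16.5 * R_endo - 10 * R_exo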
Infrared spectra of the bio-based molecular capsule wall materials
Fig. 2 shows the infrared spectral analysis of the bio-based molecular capsule wall materials.
As shown in Fig. 2, iron octaaminophthalocyanine (Fig. 2e) shows the 1,2,4,5-tetrasubstituted C–H out-of-plane vibration of the benzene ring near 855 cm⁻¹ and a medium-strong absorption peak of C=C and C=N near 1667 cm⁻¹. Absorption peaks appeared near 1142, 1161, and 1585 cm⁻¹, indicating the formation of the macrocyclic skeleton of phthalocyanine [25]. The C=O absorption peak of a ketone compound appeared near 1715 cm⁻¹, and the single-bond vibration of –NH₂ appeared near 750 cm⁻¹. It was tentatively inferred that iron octaaminophthalocyanine had been synthesized.
The broad peak of sodium carboxymethyl cellulose (CMC, Fig. 2a) near 3430 cm⁻¹ corresponds to the O–H stretching vibration, and the stretching vibration of the saturated C–H single bond occurred near 2909 cm⁻¹. The carboxymethyl –CH₂ bending vibration and the C=O stretching vibration appeared near 1595 cm⁻¹ and 1060 cm⁻¹, respectively. Sodium alginate (SA, Fig. 2b) showed the O–H stretching vibration peak near 3600–3000 cm⁻¹, and the C=O stretching vibration of the carboxyl group (–COOH) appeared near 1646 cm⁻¹. Chitosan (CTS, Fig. 2c) showed a broad, strong absorption peak near 3425 cm⁻¹, resulting from partial overlap of the characteristic absorption peaks of –OH and –NH₂; the in-plane bending vibration of –NH₂ appeared near 1650 cm⁻¹, and the vibration absorption peak of the alcoholic hydroxyl group appeared near 1100 cm⁻¹.
Compared with CTS (Fig. 2c), O-CMCS (Fig. 2c₁) showed, in addition to the vibration absorption peaks of chitosan, a superposition peak of the –COO⁻ asymmetric stretching vibration and the –NH₂ in-plane bending vibration near 1610 cm⁻¹. A symmetric stretching vibration peak of –COO⁻ appeared near 1421 cm⁻¹, and the alcohol hydroxyl absorption peak near 1100 cm⁻¹ weakened, indicating that O-CMCS with a carboxymethyl (–CH₂–O–COOH) structure had been successfully synthesized.
Infrared spectroscopic analysis of cinnamaldehyde algae-killing compounds
The infrared spectral analysis of the cinnamaldehyde algae-killing compounds is shown in Fig. 3. Fig. 3 shows that p-methoxy cinnamaldehyde (Fig. 3a) had multiple peaks at 2000–1667 cm⁻¹ corresponding to the benzene ring and peaks at 850–800 cm⁻¹ corresponding to para-disubstitution of the benzene ring. The peaks near 2700 cm⁻¹ and 2800 cm⁻¹ correspond to the C–H bond of the aldehyde structure, and the peak near 1700 cm⁻¹ corresponds to the C=O vibration absorption, indicating an aldehyde structure in p-methoxy cinnamaldehyde (Fig. 3a). A strong peak near 1100 cm⁻¹ corresponds to C–O–C, and a weak symmetric absorption peak occurred near 900 cm⁻¹. The peaks at 3000–2900 cm⁻¹ correspond to the C–H stretching vibration, indicating an O–CH₃ structure in p-methoxy cinnamaldehyde (Fig. 3a). 2-Aminobenzimidazole (Fig. 3b) showed a peak near 770–735 cm⁻¹ corresponding to the ortho-disubstituted structure of the benzene ring; the peak near 1660–1575 cm⁻¹ is attributed to –C=N, and the double peaks near 3400–3100 cm⁻¹ correspond to –NH₂, indicating the presence of the imidazole group. Compared with the infrared spectra of p-methoxy cinnamaldehyde (Fig. 3a) and 2-aminobenzimidazole (Fig. 3b), the infrared absorption of the cinnamaldehyde algae-killing compound (Fig. 3c) showed no peaks near 2800 cm⁻¹ and 2700 cm⁻¹, and the double peaks at 3400–3100 cm⁻¹ changed to a single peak, indicating that the cinnamaldehyde algicide was successfully synthesized.
Nuclear magnetic resonance spectroscopic analysis of iron octaaminophthalocyanine
The nuclear magnetic resonance spectra of iron octaaminophthalocyanine are shown in Fig. 4. As shown in Fig. 4, the peak at δ = 5.5–5.7 ppm in the ¹H NMR spectrum of iron octaaminophthalocyanine (T) is the chemical shift of the –NH₂ groups connected to the iron phthalocyanine framework. The peaks at δ = 7.9–8.1 ppm and δ = 8.3–8.5 ppm correspond to the two kinds of H on the benzene rings of iron octaaminophthalocyanine, and the peak area ratio was 4:1:1. Because of the electronegativity of the carbonyl group on the phthalocyanine skeleton and the intramolecular hydrogen bond, the H on the benzene ring moved downfield (from δ = 7.3 ppm). The chemical shift of the H at the c position was affected by the conjugation effect of the N=C double bond, and its electron cloud density was lower than that at the b position, so its peak appeared further downfield. Because there is no hydrogen on the carbons adjacent to these three kinds of hydrogen, no peak splitting was observed. These results show that iron octaaminophthalocyanine with an iron phthalocyanine skeleton was successfully synthesized [28,29].
The ¹³C NMR spectrum of iron octaaminophthalocyanine was also analyzed. The peak of C1 (the amide carbon atom) appeared at δ = 164 ppm, and the carbon-skeleton NMR peaks of C2, C3, and C4 appeared at δ = 134, 114, and 101 ppm, respectively. The chemical shifts of C2, C3, and C4 decreased because their skeletal shielding effects were enhanced. The C5 atom had the strongest skeletal shielding effect, so its deviation from the theoretical value was the greatest. The spectral peak of the phthalocyanine skeleton appeared at δ = 83 ppm. These results show that the phthalocyanine skeleton was successfully prepared.
Morphology and thermal stability of drug-loaded capsules
Fig. 5 shows the particle-size and thermal analyses of the drug-loaded nanocapsules, and Fig. 6 shows TEM images. As measured with the zeta-potential analyzer (Fig. 5a), the average particle size of the drug-loaded nanocapsules was 276 nm, larger than the blank particle size, and the polydispersity index (PDI) was 0.133, indicating that the particle size of the nanocapsules was relatively uniform. The thermal analysis diagram (Fig. 5b) shows that the decomposition temperature of the nanocapsules was above 40°C, indicating good heat resistance. The particle size of the nanocapsules observed by TEM (Fig. 6) was about 10–30 nm; the particle size measured with the zeta-potential analyzer was larger than that measured by transmission electron microscopy, mainly because partial aggregation of the nanocapsules in solution increased the measured particle size. As shown in Figs. 5 and 6, the expected drug-loaded sustained-release nanocapsules were obtained.
HPLC standard curve
The standard curve and HPLC chromatograms of the cinnamaldehyde algae-killing compound are shown in Fig. S5.
According to the HPLC chromatograms of the cinnamaldehyde algicide standard (Fig. S5A) and the cinnamaldehyde algae-killing nanocapsules (Fig. S5B), the retention time of the cinnamaldehyde algicide was 12 min. A linear regression of peak area against the mass concentration of the cinnamaldehyde algicide was performed, establishing the HPLC standard curve. A regression equation with a good linear relationship was obtained: y = 29461x − 58.997 (SD = 654.1902, R² = 0.9914).
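For illustration, a minimal Python sketch of how such a calibration line is fitted and then inverted to quantify an unknown sample is given below (the peak areas are synthesized from the reported regression line, not measured data from the paper):

import numpy as np

conc = np.array([0.008, 0.01, 0.02, 0.04, 0.06])   # standards, mg/mL
area = 29461.0 * conc - 58.997                     # synthetic peak areas
slope, intercept = np.polyfit(conc, area, 1)       # linear regression

def area_to_conc(peak_area):
    """Invert the calibration line to get concentration (mg/mL)."""
    return (peak_area - intercept) / slope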
Entrapment efficiency of biologically based nanocapsules
Using Eq. (1), the encapsulation efficiency η of the sustained-release nanocapsules was calculated as η = (M₁ − m₁)/M₁ × 100%, where M₁ is the total mass of the algae-killing compound (mg) and m₁ is the mass of the algae-killing compound in the supernatant (mg). According to Eq. (1), the encapsulation efficiency of the capsules was 48.77%, showing improved drug-carrying performance.
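A one-line Python sketch of this calculation, assuming the standard definition of encapsulation efficiency consistent with the variables defined above:

def encapsulation_efficiency(M1, m1):
    """eta = (M1 - m1) / M1 * 100, with M1 the total algicide mass (mg)
    and m1 the free algicide mass found in the supernatant (mg)."""
    return (M1 - m1) / M1 * 100.0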
Sustained-release properties and release kinetics of drug-loaded nanocapsules
The release kinetics of the sustained-release behavior of the drug-loaded nanocapsules was studied. The Peppas equation [30], Eq. (2), M_t/M_∞ = k·tⁿ, was used for fitting, giving the model parameters shown in Table 1. In this equation, M_t is the cumulative release at time t, M_∞ is the cumulative release at infinite time, k and n are model parameters, and t is the release time. The drug-release mechanism is classified by the characteristic release exponent n of the Peppas equation as follows.
When n ≤ 0.45, the drug-release mechanism is Fickian diffusion; when 0.45 < n < 0.89, it is non-Fickian diffusion; and when n ≥ 0.89, it is skeleton dissolution [31].
This model was suitable for data analysis at t ≤ 24 h. As shown in Table 1, with a correlation coefficient of r ≥ 0.9, the release of the nanocapsules agreed with the model equation, and n ≤ 0.45, so the drug release followed Fickian diffusion, i.e., mainly drug diffusion. The release mechanism was generally as follows: the nanocapsules formed in water were affected by changes in the environment, such as temperature and water flow. Water entered the nanocapsules while the algae-killing compound was released from them, which reduced the stability of the drug-loaded nanocapsules. Fig. 7 shows the cumulative drug-release rate of the drug-loaded nanocapsules and the dissolution rate of the algae-killing agent.
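A minimal sketch of fitting the Korsmeyer–Peppas model to early-time release data follows (the release fractions below are illustrative placeholders, not the measured values behind Table 1):

import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 7, 11, 16, 24])  # sampling times, h (t <= 24 h)
frac = np.array([0.29, 0.38, 0.46, 0.54, 0.61, 0.67, 0.72, 0.76])  # M_t/M_inf

def peppas(t, k, n):
    return k * t ** n

(k, n), _ = curve_fit(peppas, t, frac, p0=(0.3, 0.4))
# n <= 0.45 would indicate Fickian diffusion, as concluded in the text.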
As shown in Fig. 7(A), the nanocapsules released quickly during the initial phase, with the release of the algicide reaching 29–72%; the dissolution rate of the algae-killing agent from the capsules is shown in Fig. 7(B). This quick release had two main reasons. One is that during the initial stage of microcapsule release, there was a large concentration difference of the drug inside and outside the capsule wall, which aided the diffusion of the substance. The other is that some drug may have adhered to the surface layer of the microcapsule, and this drug was more likely to spread than the drug in the core [32]. After the quick release, the nanocapsules entered a slow-release phase lasting 40–60 h, indicating good sustained release; the release rate of the algae-killing compound became slower and slower at this stage, and finally release stopped. The final cumulative release of the drug was 83%. Compared with spraying algicide directly into the environment, the sustained-release capsules prepared in this study can help reduce the harm of a sudden release of pesticides and prolong the efficacy period.
Phytotoxicity of the cinnamaldehyde algicide
To determine the phytotoxicity of the algae-killing compound toward T. grandis green algae, the algae were cultured with different concentrations of the algae-killing compound, and the growth of the algae was observed over long-term culture. Pictures were taken at the same time every day. Fig. 8 compares the growth of the green algae on the first and ninth days. As shown, on the ninth day the 0.05 mg·mL⁻¹ treatment barely affected the growth of the green algae; when the concentration reached 0.2 mg·mL⁻¹, the color of the green algae changed, indicating that this concentration had an inhibitory effect; and when the concentration reached 0.5 mg·mL⁻¹, the color of the green algae was close to yellow-white, indicating that the algicide at this concentration had a killing effect on the green algae.
The algae-killing compounds in this experiment are insoluble in water but dissolve in the organic solvent toluene. After culturing for some time, the toluene volatilized, and the culture medium was left as a yellow suspension [33]. Because the algae-killing compounds are difficult to dissolve in water and produce a yellow precipitate, analyzing the growth of the green algae only by observing its color is not fully rigorous; considering indicators such as chlorophyll a content and superoxide dismutase activity would be more persuasive. Nevertheless, this experiment can still provide a theoretical basis for the treatment of T. grandis green algae.
The results show that when the concentration of the algae-killing compound was 0.2 mg·mL⁻¹, the growth of the green algae was inhibited, and when its concentration was 0.5 mg·mL⁻¹, the algae removal was obvious.
Conclusions
The bio-based molecular capsule wall was prepared using carboxymethyl cellulose, chitosan, and sodium alginate as raw materials and iron octaaminophthalocyanine as a modifier. The algae-killing compound was synthesized from p-methoxy cinnamaldehyde and 2-aminobenzimidazole. The formed nanocapsules had a particle size of 10–30 nm, were stable below 40°C, and showed good heat resistance. The encapsulation efficiency was 48.77%, and the final cumulative release rate at 60 h was 83%, indicating good drug loading and sustained release. This study also tested the killing of T. grandis algae: on the ninth day of the experiment, the algae-killing compounds at a concentration of 0.2 mg·mL⁻¹ inhibited the green algae, and at a concentration of 0.5 mg·mL⁻¹ they showed obvious inhibition and better control. Based on the structural analysis of the products and the qualitative and quantitative study of the nanocapsules, combined with the algae-killing experiments, the bio-based algae-killing nanocapsules were successfully prepared.
This study provides new encapsulating materials and algae-killing compounds for controlled-release pesticides. It also offers a new, environmentally friendly way to control T. grandis algae, which is important in both research and application.
Compliance with Ethics Requirements
This article does not contain any studies with human or animal subjects.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-12-17T09:11:09.367Z | 2020-12-15T00:00:00.000 | {
"year": 2020,
"sha1": "9d2e9e49e6f9e474c2c2d651b846ab72aef92cb0",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jare.2020.12.006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f0c2c7a68df8c77fc9b9d8794cf0a6509ae4e6d9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
118656334 | pes2o/s2orc | v3-fos-license | $\epsilon$-Expansion in the Gross-Neveu Model from Conformal Field Theory
We compute the anomalous dimensions of a class of operators of the form $(\bar\psi\psi)^p$ and $(\bar\psi\psi)^p\psi$ to leading order in $\epsilon$ in the Gross-Neveu model in $2+\epsilon$ dimensions. We use the techniques developed in arXiv: 1505.00963.
Introduction
In recent work [1] (see also [2]) the techniques of conformal field theory have been used for the computation of leading order anomalous dimensions of composite operators in interacting CFTs defined in terms of epsilon expansions about d = 3, 4 spacetime dimensions. The novelty of this technique lies in using conformal symmetry judiciously, without taking recourse to the perturbative methods and Feynman diagrams which have so far been used in such calculations.
The goal of this work is to compute the leading-order (in the epsilon expansion) anomalous dimensions of a class of composite operators in the Gross-Neveu model in 2 + ǫ dimensions. Our analysis involves two and three point functions and the OPE of relevant operators, and uses only conformal symmetry. We thus accomplish this without relying on Feynman diagrams and conventional perturbation theory techniques. The analysis follows closely the methods of [1], who first used the method to determine anomalous dimensions of similar operators in the O(N) vector model. This note is organised as follows. We provide the basic setup in section 2. In section 3 we use methods similar to [1] to compute the anomalous dimensions of the operators ψ and ψ̄ψ. The result of this section is in agreement with that available in the literature. After this simple illustration of the technique, we turn to the general case of higher composite operators. In the appendix we compute the required combinatorial coefficients in the free theory OPE using a recursive diagrammatic approach [2]. In section 4, the two and three point functions, as well as the OPE, of the interacting theory are used, and matching with the expected free theory results ultimately leads to a pair of recursion relations involving the leading order anomalous dimensions. The final result for the leading order anomalous dimensions is given in equation (4.45). In section 5, we compute the anomalous dimensions of scalars which are not singlets under U(Ñ). As far as we know, these have not been computed before in the literature, and the results of sections 4 and 5 are new.
The Gross-Neveu model
The Gross-Neveu model [3] is a renormalizable field theory in two dimensions. It is described by a U(Ñ)-symmetric action for Ñ massless self-interacting Dirac fermions {ψ_I, ψ̄_I}. We will consider the Gross-Neveu model in 2 + ǫ dimensions [4],

S = \int d^{2+\epsilon}x \left[ \bar\psi_I \slashed{\partial} \psi_I + \frac{1}{2}\, g\, \mu^{-\epsilon} \left( \bar\psi_I \psi_I \right)^2 \right], \qquad I = 1, \ldots, Ñ. (2.1)

Here g is the coupling constant, which is dimensionless in two dimensions. This theory has a weakly coupled UV fixed point given by the non-trivial zero of the beta function. Here Tr I is the trace of the identity in Dirac fermion space, and in two dimensions N = 2Ñ. The fixed point occurs at the coupling given in (2.3). The special case N = 2, for which the β function vanishes identically, corresponds to the Thirring model. In this paper we consider the case N > 2.
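For orientation, a commonly quoted form of the one-loop beta function and of its non-trivial zero is (a hedged reconstruction; the normalization of the coupling varies between references, so the precise coefficients in (2.2)-(2.3) may differ by convention-dependent factors):

\beta(g) = \epsilon\, g - \frac{N-2}{2\pi}\, g^2 + O(g^3), \qquad g_* = \frac{2\pi\epsilon}{N-2} + O(\epsilon^2).

For N > 2 this zero lies at weak coupling for small ǫ, while for N = 2 the one-loop term vanishes, consistent with the Thirring-model remark above.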
The dimensions of the fermion ψ_I, ∆₁, and of the composite scalar ψ̄_I ψ_I, ∆₂, are given by ∆₁ = (1 + ǫ)/2 + γ₁ and ∆₂ = 1 + ǫ + γ₂, where γ₁ and γ₂ are the anomalous dimensions. The anomalous dimensions of the fundamental fermions and of the composite scalar in the ǫ-expansion have been computed in perturbation theory using standard Feynman diagram techniques; the leading-order results are rederived in section 3 below. The purpose of this note is to derive these expressions, and similar ones for higher-dimensional composite operators, using conformal field theory techniques without doing Feynman diagram computations¹. For this we assume that the fixed point is a conformal fixed point. In two dimensions the fermion propagator takes the standard free-field form, and we normalise our fields accordingly. In order to simplify the notation in the analysis below, we use the normalised elementary field but denote it by the old variable; in this normalisation the two-point function is unit-normalised. In the free theory the fermions satisfy \slashed{\partial}ψ_I = 0 and ∂_μψ̄_I Γ^μ = 0, which are the shortening conditions for the multiplets {ψ_I}_free and {ψ̄_I}_free. In addition, all other bilinears of ψ_I and ψ̄_I are primary operators². At the interacting fixed point, {ψ_I}_fixed pt and {ψ̄_I}_fixed pt are no longer short multiplets: the primary operators of the free theory, ψ_I ψ̄_J ψ_J and ψ̄_I ψ̄_J ψ_J, become descendants of {ψ_I}_fixed pt and {ψ̄_I}_fixed pt, respectively. This phenomenon of multiplet recombination was observed in φ⁴-theory [1], where two conformal multiplets of the free theory join and become a single conformal multiplet at the interacting fixed point. As in [1], we assume that every operator O in the free theory has a counterpart V_O at the interacting fixed point; the operators V_O and their correlation functions in the interacting theory approach, respectively, O and their free correlation functions in the ǫ → 0 limit. In the Gross-Neveu model at the IR free point, various operators are constructed out of products of the elementary operators ψ and ψ̄. We will denote operators in the interacting theory as V_{2p}, V_{2p+1} and V̄_{2p+1}, such that in the limit ǫ → 0 (IR free point) they reduce to the corresponding free-theory operators. We also require that the multiplet recombination is achieved by the relation (2.12) below. ¹ See [6,7] for various aspects of the Gross-Neveu model in 2 + ǫ dimensions. ² In general, for non-integer dimensions, gamma matrices are infinite dimensional and there is an infinite number of antisymmetrized products. However, for the calculation of anomalous dimensions to leading order in ǫ for the class of operators (ψ̄ψ)ⁿ and ψ(ψ̄ψ)ⁿ, this complication will not play any role.
for some unknown function α ≡ α(ǫ), which will be determined below. As an equation of motion this follows from the Gross-Neveu lagrangian, but in the non-lagrangian approach we follow it is to be interpreted purely as an operator relation indicating that the operator V₃ is, in the interacting theory, a descendant of the primary operator V₁.

Let us illustrate, schematically, how multiplet recombination is used together with the OPE to determine the leading order anomalous dimensions (this is the method developed by [1] and used in later sections here). Suppose the interacting theory has an operator relation of the form ∂V_O = αV_{O'} (as explained above, the O's denote operators in the free theory whose counterparts in the interacting theory are the V_O's). We first find an OPE in the free theory of primary composite operators O_p, O_{p'} which contains, in the leading terms, O and O': (2.13). The dots denote descendant terms (derivatives acting on O), and other subleading terms contain other operators in the spectrum (we have suppressed various powers of x). The leading-order terms suffice for our purpose. ρ is a combinatorial coefficient, determined by Wick contractions in the free theory (see the appendix). Now in the interacting theory, the corresponding OPE would read: (2.14). Note that, using the operator relation, the second term on the right hand side above is proportional to V_{O'}. In the interacting theory this operator (V_{O'}) is a descendant, and this crucial fact, together with matching with the free theory in the ǫ → 0 limit, implies qα = ρ. The coefficient q will be determined later in terms of anomalous dimensions of the operators in the OPE and will be seen to be singular in the ǫ → 0 limit. The coefficient α is determined below (to the required leading order in ǫ), and the above relation will be seen (in section 4.3) to lead to a recursion relation for the leading order anomalous dimensions.
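Schematically, the two OPEs and the matching condition just described can be summarized as (a hedged reconstruction of (2.13)-(2.14) from the surrounding text, with powers of x and all subleading terms suppressed):

O_p(x)\, O_{p'}(0) \;\sim\; \cdots + O(0) + \cdots + \rho\, O'(0) + \cdots, \qquad (2.13)

V_{O_p}(x)\, V_{O_{p'}}(0) \;\sim\; \cdots + V_O(0) + q\, x^{\mu} \partial_{\mu} V_O(0) + \cdots. \qquad (2.14)

Substituting the operator relation \partial V_O = \alpha V_{O'} into the second term of (2.14) and demanding that (2.14) reduce to (2.13) as ǫ → 0 yields q\alpha = \rho.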
Let us illustrate, schematically, how multiplet recombination is used together with the OPE to determine the leading order anomalous dimensions (This is the method developed by [1] and used in later sections here). Suppose the interacting theory has an operator relation of the form ∂V O = αV O ′ (as explained above, O's denote operators in the free theory whose counterparts in the interacting theory are the V O 's ). We first find an OPE in the free theory of primary composite operators O p , O p ′ which contains, in the leading terms, O and O ′ : (2.13) The dots above denote descendant terms (derivatives acing on O) and other subleading terms contain other operators in the spectrum (we have supressed various powers of x). The leading order terms suffice for our purpose. ρ is a combinatorial coefficient, determined by Wick contractions in the free theory (see the appendix). Now in the interacting theory, the corresponding OPE would read: (2.14) Note that, using the operator relation, the second term on the right hand side above is proportonal to V O ′ . In the interacting theory, this operator (V O ′ ) is a descendant and this crucial fact, together with matching with the free theory in the ǫ → 0 limit, implies qα = ρ. The coefficient q will be determined later in terms of anomalous dimensions of the operators in the OPE and will be seen to be singular in the ǫ → 0 limit. The coefficient α is determined below (to the required leading order in ǫ) and the above relation will be seen (in section 4.3) to lead to a recursion relation for the leading order anomalous dimensions.
We turn now to the determination of α. We have Differentiating the above expression and contracting with Γ µ matrices, we get Now requiring that in the limit ǫ → 0, V I 3 (x)V J 3 (y) approaches the free theory correlation we get the expression for α 3 Anomalous dimension of ψ,ψ andψψ In this section we will compute the anomalous dimensions of the fundamental fermion and the composite scalar. The results of this section are in perfect agreement with the leading order anomalous dimension computed from Feynman diagram techniques. In the next section we will generalise this to higher dimensional operators and derive some new results for the anomalous dimensions. We consider the OPE between ψ andψψ in the free theory. 3 For this we do not need the full OPE except those terms which are sensitive to the multiplet recombination, We will compare the above expression for the free OPE with the OPE at the interacting UV fixed point. For this we need the three point function at the interacting fixed point. According to [8], we have 4 In the above f is a constant. From this we can compute the following OPE Here 4) It is important to note here that in the above OPE, we have kept only contributions coming from the conformal family of V I 1 which includes V 3 as its descendant. Although the multiplet recombination does not necessarily require a lagrangian description, here our knowledge of the shortening condition follows from the equation of motion (2.12). Even though we use different methods, we are dealing with perturbative fixed points which do have a lagrangian description. Thus our analysis is not entirely lagrangian independent 5 . A, B i , C i , .. are functions of conformal dimensions which we determine by considering x 1 → x 3 and expanding (3.2) in powers of x 13 , Comparing with (3.3) we can get all the coefficients. We list here the first few coefficients Now we consider the following free correlators in the limit |x 1 | << |x 2 |, Using the OPE (3.3), we have This will match with the free correlator if f → −1 in the limit ǫ → 0. Next we compare the correlation function with the insertion of the descendant operatorV I 3 , HereV K 3 is the descendant ofV K 1 defined in (2.12) and the derivative acts on the first insertion. It is very easy to see that the first two terms containing A, B 1 in the expansion of C on the right hand side go to zero as we take ǫ → 0, Now we see that the contribution to (3.9) will come from the term with B 2 . In fact using the expansion we get In the above we used the equation of motion for the primary field (2.12). Thus we see that it will go to the free correlator if f B 2 α ∼ O(1) in the limit ǫ → 0. Since f goes to constant and α goes to zero, B 2 must diverge. We also see from (3.6) that B 2 has a chance of blowing up. If we define . (3.13) Thus B 2 will blow up if γ 1 vanishes as at least O(ǫ 2 ) Now we write γ 1 ∼ y 1,2 ǫ 2 , γ 2 ∼ y 2,1 ǫ . (3.14) Then we get Using that f → −1, we get Also in the interacting theory, the conformal dimension ∆ 3 of the descendant V I 3 (x 1 ) is related to ∆ 1 of V I 1 (x 1 ) by We will show this by explicit computation in the next section. Now we are interested in finding the OPE between V I 3 and V 2 . This can be obtained from (3.3) by acting with a derivative and using (2.12).
In order to compare with the free correlator, we also need the corresponding OPE. Proceeding as before, we find that for |x₁| ≪ |x₂| the matching with the free correlator requires f̄B₂ → 1. Then, using (3.12) and (3.14), to leading order in ǫ we get y₂,₁ + y²₂,₁ = 4y₁,₂.
Therefore the anomalous dimensions are obtained as above; these results are in agreement with the results in [4,5].
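For completeness, the quadratic quoted above can be solved in closed form (a reconstruction consistent with y_{2,1} + y_{2,1}^2 = 4 y_{1,2} and with y_{1,2} = (N-1)/(4(N-2)^2) as used in section 4; the sign of the physical root is fixed by requiring \Delta_{\bar\psi\psi} \to 1 in the large-N limit, in agreement with [4,5]):

y_{2,1} = \frac{-1 \pm \sqrt{1 + \frac{4(N-1)}{(N-2)^2}}}{2} = \frac{1}{N-2} \quad \text{or} \quad -\frac{N-1}{N-2},

so that, to leading order, \gamma_1 = \frac{N-1}{4(N-2)^2}\,\epsilon^2 and \gamma_2 = -\frac{N-1}{N-2}\,\epsilon, i.e. \Delta_{\bar\psi\psi} = 1 + \epsilon + \gamma_2 = 1 - \frac{\epsilon}{N-2}.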
Anomalous dimensions of (ψ̄ψ)^p and (ψ̄ψ)^p ψ
In this section we will compute the leading order anomalous dimensions of a class of higher-dimensional composite operators in the interacting theory described by the UV fixed point of the Gross-Neveu model. In the free-theory limit (ǫ → 0) these operators are of the form (ψ̄ψ)^p and ψ(ψ̄ψ)^p with p > 1. Let us denote these operators in the interacting theory as V_{2p} and V_{2p+1}, such that in the limit ǫ → 0 (axiom)
The structure of the OPEs
We will need the following OPEs in the free theory, where I is a U(Ñ) index and f_{2p}, f_{2p+1} and ρ_{2p}, ρ_{2p+1} are combinatorial coefficients. Counting all possible Wick contractions gives their values; see the appendix for details of the calculation. Now let us consider the corresponding OPEs in the interacting theory. The most general structure of the OPE, in the first case, where the free theory limit is eq. (4.2), is
(4.6) The dots indicate other primary operators that can appear in the OPE. Here the differential operators C(x₁₂, ∂₂) and D(x₁₂, ∂₂) have the general form (4.9). For the OPE of two generic primary operators (one bosonic and the other fermionic) both of these structures can occur. However, we will now show that, for the V_{2p}(x₁)V^I_{2p+1}(0) OPE, only the first structure in eq. (4.6) is consistent with our axiomatic requirement that in the limit ǫ → 0 correlators of the interacting theory should match the corresponding correlators in the free theory.
For this, consider the relevant 3 pt. function. Using the OPE, eq. (4.6), we obtain the interacting-theory expression, while in the free theory the OPE, eq. (4.2), gives the corresponding free expression. Since in the ǫ → 0 limit we require the former to reduce to the latter, we clearly see that only the first structure in eq. (4.6) needs to be considered. In other words, all the coefficients appearing in D(x₁₂, ∂₂) can be set to zero in this case.
Next consider the OPE of V^I_{2p+1} and V_{2p+2}. Again, just on grounds of conformal symmetry, we can write down an expression similar to eq. (4.6). But once again it is easy to show, using the free theory OPE, eq. (4.3), that our axiom allows only the second structure of eq. (4.6) for the OPE of V^I_{2p+1} and V_{2p+2}. Note that the above distinction is important when both operators involved in the OPE are primary operators. When one of the operators is a descendant, the structure of the OPE simply follows by acting with derivatives on the OPE of the primary operators. For example, when p = 1 the OPE of V₂ and V^I₃ can be obtained from the OPE of V^I₁ and V₂ by differentiating the latter.
Determining the coefficients in the OPE
We will now obtain the expressions for the coefficients in eq. (4.9). The method for doing this is simple: the form of the 3 pt. function, which is fixed in the usual way by conformal invariance, is matched against the form obtained by taking the OPE of the first two operators within the 3 pt. function. We start with the 3 pt. function (4.16), whose form is determined by conformal invariance, which allows for both of the above structures⁶. Now, using the OPE (4.17), we get (4.18); in obtaining the second line of (4.18) we have used the auxiliary results listed below it. Expanding eq. (4.16) and comparing with eq. (4.18), we note that the tensor structure of the first term in eq. (4.22) has no match among the tensor structures appearing in eq. (4.18), so we can set g₁ = 0. Finally we have (4.24). Next we consider the following 3 pt. function; in this case, using the OPE, we get
(4.27) Therefore, in the limit |x₁| ≪ |x₃|, eq. (4.25) becomes (4.29). Comparing this equation with eq. (4.28), we obtain the remaining coefficients; here, arguments similar to those above would set g'₂ = 0. This again shows that in the 3 pt. function of two primary fermionic operators and a primary scalar operator one must in general keep both tensor structures. Which structure contributes in a specific case depends upon the particular primary operators under consideration. When one of the operators involved in the 3 pt. function is a descendant, the allowed structure is of course determined by the correlator of the primary operators.
Recursion relations for the leading order anomalous dimensions
In the ǫ → 0 limit, the OPEs of the interacting theory should go over to the free theory OPEs, eqs. (4.2) and (4.3), and the corresponding 3 pt. functions must match as well. This matching gives the relations used below. Writing γ_k(ǫ) = y_{k,1} ǫ + y_{k,2} ǫ² + ..., we obtain a pair of recursion relations. Using y₁,₂ = (N − 1)/(4(N − 2)²), this gives eq. (4.40); similarly we obtain eq. (4.44) for the other case. Solving the recursion relations, eqs. (4.40) and (4.44) (with σ = 1), we get our desired result, (4.45), and thus the scaling dimensions of these composite operators. Note, in particular, that the classically marginal operator (ψ̄ψ)² receives corrections to its conformal dimension only at O(ǫ²), since for p = 2 the second and third terms in the expression for ∆_{(ψ̄ψ)^p} cancel. This is analogous to the bosonic case treated in [1], where the classically marginal operator (φ·φ)² has the same property.
Other scalar primaries
In this section we will consider a scalar primary which is not a singlet under the symmetry group U(Ñ) and calculate its anomalous dimension. In the free theory we consider a scalar of the form O^{(LM)}. In order to calculate the OPE we need the relevant correlation functions in the free theory; expanding these for x ∼ 0 yields the corresponding OPE in the free theory. Now we proceed as before. We assume that there exists an operator V_{O^{(LM)}}(x) at the fixed point corresponding to the operator O^{(LM)}(x). Based on the symmetries, the 3-point function involving this scalar at the fixed point is fixed by conformal invariance. The OPE obtained in (3.3) should hold in the case of these scalar fields; all the corresponding coefficients are given in (3.6), except that ∆₂ is replaced by ∆_{(LM)}. Thus, in this case,

E(x_{13}, ∂_z) = A' + B'₁ x^μ_{13} ∂_μ + B'₂ \slashed{x}_{13} Γ^μ ∂_μ + C'₁ x^μ_{13} x^ν_{13} ∂_μ ∂_ν + C'₂ x^μ_{13} \slashed{x}_{13} Γ^ν ∂_μ ∂_ν + C'₃ x²_{13} Γ^μ Γ^ν ∂_μ ∂_ν + ...
We proceed as in the previous cases. We find that f̄ should approach −1 in the limit ǫ → 0. Furthermore, the 3-point function with the descendant takes the analogous form, and therefore the leading order anomalous dimension is given in (5.14). We could not find a check for this result in the literature; it would be interesting to compare this new result against a perturbative computation of the anomalous dimension.
Discussion
In this note we have computed, to first order in the epsilon expansion, the anomalous dimensions of a class of composite operators in the Gross-Neveu model. As emphasised earlier, we have done the computation without using the usual perturbative techniques: the primary input was conformal symmetry, which fixed for us the two and three point functions and the required OPEs. The main results, which, to our knowledge, have not been known before, are given in eq. (4.45). It is to be emphasised that the methods used here can fix only the leading order anomalous dimensions. It would be interesting to extend the computations to second order in ǫ. As discussed in [1] for the case of the O(N) bosonic vector model, two and three point functions would not suffice for the higher order computation, and one would require conformal bootstrap of the four point functions to extract further information. How conformal symmetry can be used together with OPE associativity to deduce anomalous dimensions at higher orders in ǫ remains an interesting open problem (see [10,11] for recent work in this direction).
To compute $f_{2p}$ we need to contract $p$ pairs of $\psi$'s and finally leave $\psi^I$. This gives the following three diagrams, in which the top row has $p$ blocks at $x$ and the bottom row has $p$ blocks together with one $\times$ at the origin.
"year": 2016,
"sha1": "6d08c67d03d47fe23bdac3f104b4effeded26771",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP03(2016)174.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "6d08c67d03d47fe23bdac3f104b4effeded26771",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
An electrophysiological method for quantifying neuropathic pain behaviors in rats: measurement of hindlimb withdrawal EMG magnitude
In behavioral methods for quantifying neuropathic pain, visual observation of limb-withdrawal reflexes to stimuli is not always clear-cut, so these methods are partly subjective. Our current data suggest that measurement of electrophysiological EMG magnitudes enables more reliable and objective assessment of nocifensive behaviors related to neuropathic pain.
Introduction
Neuropathic pain can be caused by peripheral injury and inflammation and is characterized by mechanical, cold, and heat hyperalgesia, or by allodynia with spontaneous pain [1]. Several independent mechanisms in the peripheral and central nervous systems are responsible for the specific sensory symptoms. Thus, a thorough analysis of sensory symptoms using mechanical and thermal stimuli may help to identify the underlying pathophysiological mechanisms that are mainly active in a particular neuropathic patient [2]. Animal models that mimic human peripheral neuropathic pain include partial sciatic nerve injury (PSI), chronic constriction injury (CCI), and spinal nerve ligation (SNL) in rats, all of which share the feature of partial nerve injury [3]. The abnormal sensory symptoms in these models are analyzed by visual observation of hindlimb withdrawal in response to graded mechanical, heat, or cold stimuli [4]. However, this visual observation method, which scores responses in an all-or-none manner, is not always clear-cut; interpretation may vary between investigators and so may be influenced by subjectivity.
Electromyography (EMG) is generally used to quantify muscle functions and has been frequently used to quantify sleep latency after peripheral nerve injury [5] and visceromotor responses to visceral mechanical stimuli [6]. This may be used as a new technique for quantifying hyperalgesia in neuropathic rats, which will provide more objective and reasonable assessments than visual observations. There are few reports showing the feasibility of using EMG magnitude for hindlimb-withdrawal responses in neuropathic pain rats. In this brief report, we describe a simple EMG method for quantifying hindlimb-withdrawal responses in rat neuropathic pain behaviors.
Animals
Adult male Sprague-Dawley rats (n = 24, 350-400 g) were used. They were housed one animal per cage with free access to water and food. The animals were kept under a 12-h light/dark cycle at 22°C. All procedures involving the use of animals conformed with the guidelines of the International Association for the Study of Pain and the National Institutes of Health and were approved by the Catholic University Institutional Animal Care and Use Committee.
Electrode implantation and neuropathic pain surgery
The rats were divided into normal (n = 11) and neuropathic (n = 11) groups. In the normal group, only the EMG electrodes were implanted, without nerve injury. In the neuropathic group, electrode implantation and neuropathy surgery were performed at the same time. The surgery for partial sciatic nerve injury (PSI)-induced neuropathy was carried out on the left hindlimb as described previously [7]. Briefly, under isoflurane anesthesia, a segment of the sciatic nerve was exposed at the mid-thigh level, and the tibial and sural nerves were cut, whereas the common peroneal nerve was left intact. During the operation, EMG electrodes were implanted. The EMG electrodes were prepared as follows: a stainless-steel needle (0.15 mm diameter, 6 mm length, shaft of an acupuncture needle) was inserted about 3 mm into a Teflon-coated multistrand wire (0.45 mm diameter) and fixed with acrylic adhesive (Fig. 1a). A pair of electrodes was implanted 10 mm apart into the biceps femoris muscle of the left hindlimb. The electrodes were fixed by suturing on to the muscle surface at the site of implantation using 6-0 Nylon to prevent them from being dislodged. A subcutaneous tunnel was made for exit of the electrode at the back of the animal's neck. The wound was closed (Fig. 1b). All implanted electrodes were well retained during the entire experimental period (about one month) without any side effects.
Electromyographic recording
Behavioral testing with EMG recording was performed 14 days after surgery for electrode implantation and/or neuropathy. The rat was placed on a metal mesh floor under a custom-made transparent plastic dome (8 × 8 × 18 cm). Before beginning the behavioral test, the EMG electrode was connected to a polygraph (model 79, Grass, USA). The EMG signals were amplified and filtered (3 Hz-3 kHz). The rats were acclimated in the experimental chamber for about 20-30 min. The EMGs were recorded during the hindlimb responses to the applied mechanical, thermal, or cold (acetone) stimuli. The mean EMG magnitude (in mV) of the hindlimb withdrawal was used for the numerical analysis (Fig. 1c).
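The mean-magnitude computation described above can be sketched in code. The following is a minimal Python sketch, assuming a digitized EMG trace; the sampling rate, trace length, and function names are illustrative assumptions, not from the paper. Only the 3 Hz-3 kHz band mirrors the recording setup.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mean_emg_magnitude(emg, fs=10000.0, band=(3.0, 3000.0)):
    """Band-pass filter a raw EMG trace and return its mean rectified
    magnitude (same units as the input, e.g. mV).

    fs (sampling rate, Hz) is an assumed value; band mirrors the
    3 Hz-3 kHz filter used in the recording setup.
    """
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, emg)    # zero-phase band-pass filter
    return np.mean(np.abs(filtered))  # mean rectified magnitude

# Hypothetical usage: a 1-s synthetic trace sampled at 10 kHz.
trace = np.random.default_rng(0).normal(0.0, 0.05, 10000)
print(f"mean EMG magnitude: {mean_emg_magnitude(trace):.4f} mV")
```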
Mechanical, thermal, and cold stimulation
For mechanical stimuli, the medial area of the hind paw was mechanically stimulated from below with an ascending series of von Frey filaments (bending forces 0.2, 0.4, 0.6, 0.8, 1.0, and 2.0 g). Each filament was used to stimulate the hind paw ten times, at a rate of once every 3-4 min. Thermal stimulation was performed using an ascending series of warm water at 30, 40, or 50°C. One milliliter of warm water from a water bath was gently sprayed on to the paw with a syringe connected to a 21-gauge needle. The time intervals between stimulations were at least 5 min. The water temperature in the water bath (Buchi Waterbath, Korea) was well controlled thermostatically within the preset range. For cold stimulation, acetone (99%, Sigma) was applied five times (once every 5 min) to the same area of the hind paw.

Fig. 1 Electrode implantation and neuropathic pain surgery. a EMG electrode. The electrode was constructed simply by inserting a stainless-steel needle (shaft of a stainless-steel acupuncture needle) into a Teflon-coated multistrand wire and then bonding it with acrylic adhesive. b Surgery for electrode implantation and neuropathy. For neuropathy, the tibial and sural nerves were cut, whereas the common peroneal nerve was left intact. During the neuropathy operation, one pair of electrodes was implanted into the biceps femoris muscle. c Representative EMGs. The neuropathic rat (right) showed high EMG magnitudes in response to von Frey stimulation with 0.6 g force (arrows), compared with the normal rat (left).
Statistical analysis
All values are expressed as mean ± SEM (standard error of the mean). Statistical significance was analyzed by one-way analysis of variance (ANOVA) or by use of paired t-tests. P values <0.05 were regarded as statistically significant.
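As a sketch of this analysis, assuming per-group arrays of mean EMG magnitudes (the group sizes and all data below are invented placeholders, not the paper's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical mean EMG magnitudes (mV) per rat for one stimulus.
normal = rng.normal(0.05, 0.02, 11)
neuropathic = rng.normal(0.25, 0.08, 11)

# Between-group comparison, as with the one-way ANOVA in the paper.
f_stat, p_anova = stats.f_oneway(normal, neuropathic)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
print("significant" if p_anova < 0.05 else "not significant")

# A paired t-test would apply to repeated measures within the same
# rats, e.g. before vs. after a drug injection (illustrative data).
before = rng.normal(0.25, 0.08, 3)
after = rng.normal(0.04, 0.02, 3)
t_stat, p_paired = stats.ttest_rel(before, after)
print(f"paired t-test: t = {t_stat:.2f}, P = {p_paired:.4f}")
```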
Results and discussion
In this study, we examined the feasibility of using EMG measurements of withdrawal responses under neuropathic conditions. To our knowledge, this is the first report of EMG testing of responses to mechanical, thermal, and cold stimuli after partial sciatic nerve injury.
A major feature of the neuropathic model was marked hypersensitivity in response to normally innocuous mechanical stimuli. The neuropathic rats showed abrupt limb withdrawals (interpreted as a pain response) to innocuous von Frey filaments (0.6-2.0 g force), whereas normal rats showed passive avoidance responses (e.g., slow movements in both lifting and lowering of the limb and slight paw repositioning; no pain response) to these filaments (Fig. 2a). When matched against visual observation, EMG magnitudes for abrupt hindlimb-withdrawal responses (pain responses) in neuropathic rats were over 0.1 mV, whereas passive responses to innocuous stimuli in normal rats produced EMG magnitudes of less than 0.1 mV. Taken together, these data suggest that EMG analysis can be used to differentiate hyperalgesic (pain) responses from passive avoidance responses to innocuous von Frey stimuli.
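As a minimal sketch, the 0.1 mV criterion described above could be applied programmatically; the threshold value comes from the observations reported here, while the function name and example values are illustrative.

```python
def classify_response(mean_emg_mv, threshold_mv=0.1):
    """Label a hindlimb response from its mean EMG magnitude (mV).

    Per the criterion above, abrupt withdrawals (pain responses)
    exceeded 0.1 mV, while passive avoidance responses stayed below it.
    """
    if mean_emg_mv > threshold_mv:
        return "pain (abrupt withdrawal)"
    return "passive avoidance"

print(classify_response(0.35))  # -> pain (abrupt withdrawal)
print(classify_response(0.04))  # -> passive avoidance
```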
In previous studies, the 50% paw-withdrawal thresholds for the most sensitive areas in this neuropathic model were less than 2 g [3]. Consistent with those results, the lowest bending force of the von Frey filaments that produced a significant increase in EMG magnitude in the neuropathic group was 0.6 g, whereas the same bending force did not cause any change in EMG magnitude in normal rats. These data suggest that the electrophysiological method using EMG magnitude can provide more accurate mechanical-threshold measurements than conventional visual observation in neuropathic pain rats.
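A sketch of how such a threshold could be read off from EMG data, assuming per-force group measurements; the test choice, group sizes, and data below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
forces = [0.2, 0.4, 0.6, 0.8, 1.0, 2.0]  # von Frey bending forces (g)

def emg_threshold(normal_by_force, neuro_by_force, alpha=0.05):
    """Return the lowest force at which neuropathic EMG magnitudes are
    significantly higher than normal ones (one-sided Welch t-test;
    an assumed analysis choice, not necessarily the paper's)."""
    for force in forces:
        t, p = stats.ttest_ind(neuro_by_force[force],
                               normal_by_force[force],
                               equal_var=False, alternative="greater")
        if p < alpha:
            return force
    return None

# Hypothetical data: neuropathic responses diverge from 0.6 g upward.
normal = {f: rng.normal(0.05, 0.02, 11) for f in forces}
neuro = {f: rng.normal(0.05 if f < 0.6 else 0.30, 0.05, 11)
         for f in forces}
print(f"EMG-based mechanical threshold: {emg_threshold(normal, neuro)} g")
```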
In addition, in our thermal stimulation experiments, neither neuropathic nor normal rats showed any hindlimb-withdrawal reflexes to the innocuous heat stimuli of 30 and 40°C; the EMG magnitudes were close to the baseline value (0) (Fig. 2b). However, when 50°C thermal stimulation was applied, the neuropathic rats displayed thermal hyperalgesia, manifested as exaggerated paw-withdrawal reflexes to stimulation, behaviors that differ from the simple lifting-avoidance seen in normal rats. This difference in withdrawal in response to 50°C stimulation was well expressed as a much higher EMG magnitude in the neuropathic group than in the normal group. The 50°C stimulus used to elicit limb-withdrawal reflexes in this study has been used in a previous neuropathy study [8]. Although the behavioral patterns of withdrawal responses to 50°C stimulation in normal and neuropathic rats were manifested as simple lifting-avoidance and exaggerated paw-withdrawal reflexes, respectively, results from visual observation in an all-or-none manner might vary among investigators, indicating a limitation of conventional visual observation methods. An electrophysiological method measuring EMG magnitudes may enable more reliable and objective assessment for quantifying thermal hyperalgesia. Acetone application to the hind paw is widely used to evaluate cold hyperalgesia in neuropathic rats [8]. In this study, neuropathic rats displayed large responses to acetone stimulation. Although this stimulus normally evokes no response, or at most a brief response, in normal rats, it was recorded as markedly high EMG magnitudes in neuropathic rats compared with normal rats (Fig. 2c), indicating that it can be used as an indicator of cold hyperalgesia.

Fig. 2 b The EMG magnitude of the hindlimb-withdrawal responses following thermal stimuli in normal and neuropathic rats. c The EMG magnitude of the hindlimb-withdrawal responses to acetone (cold stimulation). The gray line in each figure represents the minimum EMG magnitude value (0.1 mV) that showed abrupt hindlimb-withdrawal reflexes (painful response) to stimuli. Data are mean ± SEM. *P < 0.05 versus the normal group.
To further confirm whether the increased EMG values in response to stimuli in neuropathic rats were because of pain, we injected an analgesic, morphine (2 mg/kg), subcutaneously into neuropathic (n = 3) and normal (n = 3) rats and measured the EMG magnitudes. The EMG magnitudes evoked by mechanical (0.6 g force), heat (50°C), or cold (acetone) stimulation in neuropathic rats were reduced to the baseline values of normal rats (less than 0.01 mV) 20 min after injection (data not shown). This confirmed that the EMG magnitudes evoked by stimuli in neuropathic rats indicate a pain response.
In addition, EMG measurement of hindlimb withdrawal may help to differentiate supraspinal responses from the nociceptive withdrawal response. With visual observation, lifting of the hindlimb accompanied by supraspinal responses (body repositioning and locomotion) during and/or at the end of mechanical or thermal stimuli could not be differentiated from the nociceptive withdrawal reflex. In contrast, the EMG activity during a nociceptive hindlimb response showed a distinct sharp burst with high voltage (Fig. 1c), compared with the prolonged, low-level electrical activity during a supraspinal response. Despite these advantages, it should be noted that EMG recording has several disadvantages, for example the need to house each rat individually, extra equipment, cable connection before behavioral testing, and the risk of infection.
In conclusion, partial sciatic nerve injury-induced neuropathic rats displayed mechanical, thermal, and cold hyperalgesia, and all of these responses were well captured by the EMG measurements. This brief study showed that electrophysiological measurement of EMG magnitude may provide more reliable and objective assessment for quantifying nocifensive behaviors related to neuropathic pain.
"year": 2009,
"sha1": "66218c83e956cac788c7de2ac98ea3ef90982d3c",
"oa_license": null,
"oa_url": "https://doi.org/10.1007/s12576-009-0051-9",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "66218c83e956cac788c7de2ac98ea3ef90982d3c",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |